Unable to send newly updated events from filebeat to logstash - logstash

filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\Office\Work\processed_fileChaser_2020-05-22_12_56.log
  close_eof: true
  close_inactive: 10s
  scan_frequency: 15s
  tail_files: true

output.logstash:
  # The Logstash hosts
  hosts: ["XX.XXX.XXX.XX:5044"]
------------------------ logstash config -----------------------------------
input {
  beats {
    port => 5044
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => ["tags", "ecs", "log", "agent", "message", "input"]
  }
}
output {
  file {
    path => "C:/Softwares/logstash-7.5.2/output/dm_filechaser.json"
  }
}
Sample input:
{"fileName": "DM.YYYYMMDD.TXT", "country": "TW", "type": "TW ", "app_code": "DM", "upstream": "XXXX", "insertionTime": "2020-05-20T13:01:04+08:00"}
{"fileName": "TRANSACTION.YYYYMMDD.TXT", "country": "TW", "type": "TW", "app_code": "DM", "upstream": "XXXX", "insertionTime": "2020-05-22T13:01:04+08:00"} ```
Versions:
Logstash: 7.5.2
Filebeat: 7.5.2
Filebeat is unable to send new data and always re-sends the same data to Logstash.
Can you please help with how to send newly updated data from the log file to Logstash?
Filebeat logs:
2020-05-22T15:39:30.630+0530 ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 10.10.242.48:53400->10.10.242.48:5044: i/o timeout
2020-05-22T15:39:30.632+0530 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-05-22T15:39:32.605+0530 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-05-22T15:39:32.605+0530 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://10.10.242.48:5044))
2020-05-22T15:39:32.606+0530 INFO pipeline/output.go:105 Connection to backoff(async(tcp://10.10.242.48:5044)) established
2020-05-22T15:39:44.588+0530 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1203,"time":{"ms":32}},"total":{"ticks":1890,"time":{"ms":48},"value":1890},"user":{"ticks":687,"time":{"ms":16}}},"handles":{"open":280},"info":{"ephemeral_id":"970ad9cc-16a8-4bf1-85ed-4dd774f0df42","uptime":{"ms":61869}},"memstats":{"gc_next":11787664,"memory_alloc":8709864,"memory_total":16065440,"rss":2510848},"runtime":{"goroutines":26}},"filebeat":{"events":{"active":2,"added":2},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"batches":2,"failed":4,"total":4},"read":{"errors":1},"write":{"bytes":730}},"pipeline":{"clients":1,"events":{"active":2,"filtered":2,"retry":6,"total":2}}},"registrar":{"states":{"current":1}}}}}
2020-05-22T15:39:44.657+0530 INFO log/harvester.go:251 Harvester started for file: C:\Office\Work\processed_fileChaser_2020-05-22_12_56.log
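A note on the errors above: when Filebeat reports "Failed to publish events ... i/o timeout", the batch was never acknowledged by Logstash, so Filebeat retries it; that retry loop can look exactly like the same data being sent over and over. A minimal sketch of the beats input with a longer inactivity timeout, assuming the timeouts are the root cause here (client_inactivity_timeout is a real beats-input option, defaulting to 60 seconds):

input {
  beats {
    port => 5044
    # keep idle Beats connections open longer so slow batches can be
    # acknowledged instead of timing out and being retried
    client_inactivity_timeout => 300
  }
}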

Related

Tags in logstash

How to direct postfix logs to the postfix index?
In logstash config
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
  }
}
output {
  if "postfix" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "postfix-%{+YYYY.MM.dd}"
    }
  }
}
In filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/maillog*
  exclude_files: [".gz$"]
  tags: ["postfix"]

output.logstash:
  hosts: ["10.50.11.8:5044"]
In the Logstash log I see a lot of:
[WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"newrelicdata", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x4ceb504a>], :response=>{"index"=>{"_index"=>"newrelicdata", "_type"=>"_doc", "_id"=>"V7x2z20Bp3jq-MOqpNbt", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id 'V7x2z20Bp3jq-MOqpNbt'. Preview of field's value: '{name=mail.domain.com}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:521"}}}}}
Why is data from mail.domain.com trying to go somewhere other than the postfix index? And why is the data trying to get into all the indexes? Any help?
I think the logs contain a field named "host", so it's throwing an error like "failed to parse field [host] of type [text]". Try renaming it:
mutate {
  rename => { "host" => "server" }
  convert => { "server" => "string" }
}
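If the Beats-supplied host object is not needed downstream at all, a hedged alternative (my suggestion, not something the original answer proposed) is to drop the field before output:

filter {
  mutate {
    # remove the object-valued host field added by Beats so it cannot
    # clash with an existing text mapping for "host" in the index
    remove_field => ["host"]
  }
}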
I tried sending the logs with the filebeat.yml configuration below:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/varsha/ELK7.4/logs/*.log
  tags: ["postfix"]

output.logstash:
  hosts: ["localhost:5044"]
With the above configuration and the files you have shared with me, everything works fine and I got the expected result; please check it at https://pastebin.com/f6e0E52S

Filebeat fails to connect to logstash

I'm using two servers in the cloud. On one server (A) I installed Filebeat, and on the second server (B) I installed Logstash, Elasticsearch, and Kibana. I'm facing a problem sending logs from server A to Logstash on server B.
My filebeat configuration is
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/vinit/demo/*.log
  fields:
    log_type: apache
  fields_under_root: true

#output.elasticsearch:
  #hosts: ["localhost:9200"]
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

output.logstash:
  hosts: ["XXX.XX.X.XXX:5044"]
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"
In Filebeat, I have enabled the system and logstash modules.
Logstash configuration is
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "^%{IP:CLIENT_IP} (?:-|%{USER:IDEN}) (?:-|%{USER:AUTH}) \[%{HTTPDATE:CREATED_ON}\] \"(?:%{WORD:REQUEST_METHOD} (?:/|%{NOTSPACE:REQUEST})(?: HTT$
    add_field => {
      "LOG_TYPES" => "apache-log"
    }
    overwrite => [ "message" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "apache-info-log"
  }
  stdout { codec => rubydebug }
}
In Elasticsearch I set
network.host: localhost
The errors I'm getting are below:
2019-01-18T15:05:47.738Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-01-18T15:05:47.739Z INFO log/input.go:138 Configured paths: [/home/vinit/demo/*.log]
2019-01-18T15:05:47.739Z INFO input/input.go:114 Starting input of type: log; ID: 10340820847180584185
2019-01-18T15:05:47.740Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-plain*.log]
2019-01-18T15:05:47.740Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]
2019-01-18T15:05:47.742Z INFO log/harvester.go:254 Harvester started for file: /home/vinit/demo/info-log.log
2019-01-18T15:05:47.749Z INFO log/input.go:138 Configured paths: [/var/log/auth.log* /var/log/secure*]
2019-01-18T15:05:47.763Z INFO log/input.go:138 Configured paths: [/var/log/messages* /var/log/syslog*]
2019-01-18T15:05:47.763Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-01-18T15:05:47.763Z INFO cfgfile/reload.go:150 Config reloader started
2019-01-18T15:05:47.777Z INFO log/input.go:138 Configured paths: [/var/log/auth.log* /var/log/secure*]
2019-01-18T15:05:47.790Z INFO log/input.go:138 Configured paths: [/var/log/messages* /var/log/syslog*]
2019-01-18T15:05:47.790Z INFO input/input.go:114 Starting input of type: log; ID: 15514736912311113705
2019-01-18T15:05:47.790Z INFO input/input.go:114 Starting input of type: log; ID: 4004097261679848995
2019-01-18T15:05:47.791Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-plain*.log]
2019-01-18T15:05:47.791Z INFO log/input.go:138 Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]
2019-01-18T15:05:47.791Z INFO input/input.go:114 Starting input of type: log; ID: 2251543969305657601
2019-01-18T15:05:47.791Z INFO input/input.go:114 Starting input of type: log; ID: 9013300092125558684
2019-01-18T15:05:47.791Z INFO cfgfile/reload.go:205 Loading of config files completed.
2019-01-18T15:05:47.792Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20181223
2019-01-18T15:05:47.794Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20181223
2019-01-18T15:05:47.797Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20181230
2019-01-18T15:05:47.800Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20181230
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20190106
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure
2019-01-18T15:05:47.804Z INFO log/harvester.go:254 Harvester started for file: /var/log/secure-20190113
2019-01-18T15:05:47.816Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20190106
2019-01-18T15:05:47.817Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages
2019-01-18T15:05:47.818Z INFO log/harvester.go:254 Harvester started for file: /var/log/messages-20190113
2019-01-18T15:05:47.855Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://XXX.XX.X.XXX:5044))
2019-01-18T15:06:18.855Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://XXX.XX.X.XXX:5044)): dial tcp XXX.XX.X.XXX:5044: i/o timeout
2019-01-18T15:06:18.855Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://XXX.XX.X.XXX:5044)) with 1 reconnect attempt(s)
Does anyone have an idea how to resolve this and make it work properly?
A related question is Failed to connect to backoff(async(tcp://ip:5044)): dial tcp ip:5044: i/o timeout.
The answer there proposed allowing outgoing TCP connections on port 5044 directly in your cloud provider's settings page, since they may be blocked by default.
In addition to the existing comments by @Vinit Jordan, who whitelisted port 5044 on EC2 with these steps, I propose a possible solution for the general case.
Please check the default firewall on the Logstash server. You probably have the simple ufw firewall that was preconfigured during the initial Nginx setup. I ran into this problem right after installing ELK on machine B and Filebeat on machine A.
I just added a new rule for filebeat server firewall and the error disappeared:
sudo ufw allow from <IP_address_of_machine_A> to any port 5044
Then the Filebeat log on machine A showed me:
"message":"Connection to backoff(async(tcp://<IP_address_of_machine_B>:5044)) established"
It is probably also reasonable to add a more general rule for your trusted servers:
sudo ufw allow from <IP_ADDRESS>
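Before (or after) touching firewall rules, it may help to confirm where the connection actually fails; a quick sketch, assuming ss and nc are available on the machines:

# on machine B: confirm Logstash is really listening on 5044
sudo ss -tlnp | grep 5044

# on machine A: test raw TCP reachability of machine B
nc -zv <IP_address_of_machine_B> 5044

If ss shows a listener but nc times out, the problem sits between the hosts (host firewall or cloud security group), which matches the i/o timeout in the Filebeat log above.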

Parsing json using logstash (ELK stack)

I have created a simple JSON file like the one below:
[
  {
    "Name": "vishnu",
    "ID": 1
  },
  {
    "Name": "vishnu",
    "ID": 1
  }
]
I am holding these values in a file named simple.txt. Then I used Filebeat to listen to the file and send new updates to port 5043; on the other side I started the Logstash service, which listens on this port in order to parse the JSON and pass it to Elasticsearch.
Logstash is not processing the JSON values; it hangs in the middle.
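A likely reason, stated as an assumption: Filebeat ships one event per line, so a pretty-printed JSON array like the one above arrives as many fragments ("[", "{", "  "Name": "vishnu",", ...), none of which is valid JSON on its own, and the json filter fails on every event. Writing one complete object per line (newline-delimited JSON) avoids the problem entirely:

{"Name": "vishnu", "ID": 1}
{"Name": "vishnu", "ID": 1}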
logstash config:
input {
  beats {
    port => 5043
    host => "0.0.0.0"
    client_inactivity_timeout => 3600
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  stdout { codec => rubydebug }
}
filebeat config:
filebeat.prospectors:
- input_type: log
  paths:
    - filepath
output.logstash:
  hosts: ["localhost:5043"]
Logstash output
Sending Logstash's logs to D:/elasticdb/logstash-5.6.3/logstash-5.6.3/logs which is now configured via log4j2.properties
[2017-10-31T19:01:17,574][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"D:/elasticdb/logstash-5.6.3/logstash-5.6.3/modules/fb_apache/configuration"}
[2017-10-31T19:01:17,578][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"D:/elasticdb/logstash-5.6.3/logstash-5.6.3/modules/netflow/configuration"}
[2017-10-31T19:01:18,301][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-10-31T19:01:18,388][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5043"}
[2017-10-31T19:01:18,573][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-31T19:01:18,591][INFO ][org.logstash.beats.Server] Starting server on port: 5043
[2017-10-31T19:01:18,697][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Every time, I run Logstash using the command
logstash -f logstash.conf
and since no JSON is being processed, I stop the service by pressing Ctrl+C.
Please help me find the solution. Thanks in advance.
Finally I ended up with a config like this. It works for me.
input {
  file {
    codec => multiline {
      pattern => '^\{'
      negate => true
      what => previous
    }
    path => "D:\elasticdb\logstash-tutorial.log\Test.txt"
    start_position => "beginning"
    sincedb_path => "D:\elasticdb\logstash-tutorial.log\null"
    exclude => "*.gz"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["path", "@timestamp", "@version", "host", "message"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
    index => "logs"
    "document_type" => "json_from_logstash_attempt3"
  }
  stdout {}
}
JSON format:
{"name":"sachin","ID":"1","TS":1351146569}
{"name":"sachin","ID":"1","TS":1351146569}
{"name":"sachin","ID":"1","TS":1351146569}

Filebeat to Logstash ERR wsarecv, wsasend

I am using ELK stack version 5.1.2 and I have a problem sending logs from one worker (node) to the central server. I configured everything on localhost and it worked perfectly, but on the development environment it does not. On localhost I used SSL, but now I have turned it off. So my Filebeat conf file is:
filebeat.prospectors:
- input_type: log
  paths:
    - e:\logs\*.log
  document_type: xxx_log

output.logstash:
  hosts: ["xxxx:5043"]

logging.level: error
logging.to_syslog: true
logging.files:
  rotateeverybytes: 10485760 # = 10MB
Logstash configuration:
input {
  beats {
    port => "5043"
  }
}
filter {
  if [type] == "xxx_log" {
    multiline {
      pattern => "^TID"
      negate => true
      what => "previous"
    }
    grok {
      break_on_match => false
      match => [ "message", "TID: \[%{TIMESTAMP_ISO8601:timestamp}\] %{LOGLEVEL:level} \[%{JAVACLASS:java_class}\] \(%{GREEDYDATA:thread}\) - (?<log_message>(.|\r|\n)*)"]
    }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    user => "elastic"
    password => "changeme"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
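As an aside, for Beats pipelines the multiline joining is usually done in Filebeat itself rather than in a Logstash filter, so a single log event can never be split across batches; a sketch for the prospector above (same ^TID pattern, assumed unchanged):

filebeat.prospectors:
- input_type: log
  paths:
    - e:\logs\*.log
  document_type: xxx_log
  # lines that do not start with TID belong to the previous event
  multiline.pattern: '^TID'
  multiline.negate: true
  multiline.match: after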
OK, when I add a line to the log file, for example:
TID: [2017-01-19 13:37:18] INFO [App.java] (main) - Info test...
Filebeat starts to collect data; after a successful harvest I am getting:
ERR Failed to publish events caused by: write tcp yyyy:51992->xxxx:5043: wsasend: An existing connection was forcibly closed by the remote host.
There is nothing in the Logstash log.
The firewall is turned off. When I open telnet from the worker node on port 5043, the message reaches the central server, because Logstash says in its log file that I sent an invalid frame type (I sent only a POST to test whether port 5043 is open). So the port is open, but Elasticsearch stays empty. Sometimes, I do not know why, I get this error in the Filebeat log:
wsarecv: An existing connection was forcibly closed by the remote host.
This line produces the following Logstash log:
11:45:31.094 [nioEventLoopGroup-4-2] ERROR org.logstash.beats.BeatsHandler - Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 83
13:31:43.139 [nioEventLoopGroup-4-4] ERROR org.logstash.beats.BeatsHandler - Exception: An existing connection was forcibly closed by the remote host
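One hedged observation: InvalidFrameProtocolException means whatever is talking to port 5043 is not speaking the binary Beats protocol; telnet/POST tests trigger it by design, but it also appears when one side still uses TLS and the other does not. Since SSL was recently turned off, it may be worth confirming no ssl.* leftovers remain on the Filebeat side:

output.logstash:
  hosts: ["xxxx:5043"]
  # assumption: SSL is disabled on the Logstash beats input, so no
  # ssl.certificate_authorities / ssl.certificate / ssl.key settings may
  # remain here; a TLS handshake against a plain-text listener surfaces
  # as InvalidFrameProtocolException on the Logstash side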
Thank you for any advice.
Jaroslav

ELK not passing metadata from filebeat into logstash

Installed an ELK server via: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
It seems to work except for the Filebeat connection; Filebeat does not appear to be forwarding anything, or at least I can't find anything in the logs to indicate anything is happening.
My filebeat configuration is as follows:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
        - /var/log/messages
        - /var/log/secure
      encoding: utf-8
      input_type: log
      timeout: 30s
      idle_timeout: 30s
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["my_elk_fqdn:5044"]
    bulk_max_size: 1024
    compression_level: 3
    worker: 1
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  level: debug
The log file output I keep getting from filebeat is just not very helpful:
2016-07-14T17:32:21-04:00 DBG Start next scan
2016-07-14T17:32:31-04:00 DBG Start next scan
2016-07-14T17:32:41-04:00 DBG Start next scan
2016-07-14T17:32:46-04:00 DBG Flushing spooler because of timeout. Events flushed: 0
2016-07-14T17:32:51-04:00 DBG Start next scan
Is there anything wrong with my configuration file?
When I test on the ELK server to see if I am getting anything:
[root@my_elk_server ~]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
Oh and my logstash configuration for filebeats:
input {
beats {
port => 5044
ssl => true
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
UPDATE: It is not Filebeat. I'm somewhat relieved that messages are indeed being passed, but I still have an issue I can't track down:
I discovered it wasn't Filebeat that was causing the issue. It appears that the Logstash configuration that sends to Elasticsearch is not properly labeling the index (and the type) to make it searchable as shown in the question. Instead of putting filebeat in the index name, it gives a result like this:
"_index" : "%{[@metadata][beat]}-2016.07.14",
The elasticsearch output in the file turned out to be incorrect:
output {
  elasticsearch {
    hosts => "my_elk_fqdn:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Apparently this @metadata is not being passed through correctly. Has anyone been able to get the _index and _type fields to populate correctly?
This might be a bug in filebeat?
https://github.com/logstash-plugins/logstash-input-beats/issues/6
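To see whether @metadata actually arrives from Beats, a small debugging sketch (an addition of mine, not from the original thread): the rubydebug codec hides @metadata unless explicitly asked to show it.

output {
  # print every event including the normally hidden @metadata fields
  stdout { codec => rubydebug { metadata => true } }
}

If [@metadata][beat] shows up here but not in the index name, the problem is in the elasticsearch output; if it is missing here, it was never sent by Filebeat.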
