Logstash issue on starting (in Kubernetes pod) - logstash

I started a logstash service in a container, via a Kubernetes Pod,
and I voluntarily changed the config to see the behaviour.
Now, the config file of the service is no longer
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
but
output {
  elasticsearch {
    hosts => ["elasticsearch1:9200"]
  }
}
Then, when I start the container, the logs are:
[2023-01-13T14:26:49,474][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch1:9200/]}}
[2023-01-13T14:26:50,076][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch1:9200/"}
So it refers to elasticsearch1,
and then suddenly refers to elasticsearch:
[2023-01-13T14:27:01,405][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
Why?

In addition to any elasticsearch instances defined in plugins, logstash will try to connect to the elasticsearch instance defined by xpack.monitoring.elasticsearch.hosts in logstash.yml in order to check what licence you are running with.
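For reference, the license checker reads that setting from logstash.yml; a minimal sketch of the relevant lines (host name illustrative, using the legacy x-pack setting names):
xpack.monitoring.enabled: true
# must point at a reachable cluster, or the licensereader keeps retrying:
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]
So the two hosts in the logs come from two different places: the output plugin (elasticsearch1) and the monitoring/license check (elasticsearch).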

Related

How to transfer logs from different VMs (DigitalOcean) to a single Logstash using Filebeat

For the last two days I have been stuck on this task. The task: there are multiple Ubuntu machines running in the cloud (DigitalOcean), and I have to take the logs of those machines and ship them to Logstash, where the complete ELK stack is configured.
I have configured filebeat on one system and my filebeat.yml is like below:
filebeat.prospectors:
- type: log
  paths:
    - /var/log/nginx/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
output.logstash:
  hosts: ["206.189.129.234:5044"]
Logstash:
And my simple logstash.conf file is like below:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "206.189.129.234:9200"
    manage_template => false
    index => "nginx-%{+YYYY.MM.dd}"
  }
}
When I start logstash it runs successfully, but I am not able to see any index in elasticsearch. I have tried multiple ways but with no results; can anyone help me out with this?
Is there any particular process for the above scenario?
Thanks in advance....
First of all, can you ping the logstash host from the filebeat host, and the elasticsearch host from the logstash host?
Then can you check that the ports are open and listening?
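For example, two quick checks (commands are illustrative; adjust host and port to your setup):
# from the filebeat machine: is the logstash beats port reachable?
nc -vz 206.189.129.234 5044
# on the logstash machine: is anything listening on 5044?
ss -tlnp | grep 5044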
If everything works fine, try to put this in your filebeat.yml:
filebeat.prospectors:
- type: log
  paths:
    - /var/log/nginx/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
output.logstash:
  hosts: ["HOSTNAME:5044"]
and in your logstash pipeline:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    # or https, elastic needs to know which protocol you are using
    hosts => "http://206.189.129.234:9200"
    manage_template => false
    index => "nginx-%{+YYYY.MM.dd}"
  }
}
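If events are flowing, a quick way to confirm that the index was created is Elasticsearch's _cat API (URL taken from the output above; the exact name depends on the current date):
curl "http://206.189.129.234:9200/_cat/indices?v"
# look for an index named like nginx-YYYY.MM.dd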

logstash conf - log line starting with [ and ending with ] not shown in kibana

Recently I've been working with a filebeat -> logstash -> elasticsearch -> kibana setup.
I have 2 servers:
1. a server running circa 20 containers (each producing different logs) and filebeat
2. a server running logstash, elasticsearch and kibana
Circa 9 of my containers have logs starting with "[" and ending with "]", and all of those containers are not shown in kibana. I assume the brackets are the reason why my logstash config does not accept the message, and hence the log line is not shown in kibana?
When I restart a container it produces a log message, something like "Container started" - this log line does not start/end with [ or ], and it was shown in kibana correctly. So most likely the brackets are the issue.
Please could you help me set up my logstash conf file so it accepts/sends logs starting/ending with [ ] to kibana? Please share the exact config text if you can, as I am not very skilled with the syntax.
I am sending you my logstash config as it looks now:
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} \<%{IP:hostAddress}\> - - \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{DATA:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} %{NUMBER:bytes}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
  }
}
Here is an example of the log lines produced by one of the containers:
[2018-11-29 18:12:54,322: INFO/MainProcess] Connected to amqp://guest:**#rabbitmq:5672//
[2018-11-29 18:12:54,335: INFO/MainProcess] mingle: searching for neighbors
[2018-11-29 18:12:55,431: INFO/MainProcess] mingle: sync with 1 nodes
My config is now set up mainly for the nginx container; if you have time you can also help me create a config for the log above, but in this post I mainly want to know how to process the [ and ] in the logs so they are sent to kibana.
Thank you guys very much in advance, I hope I will find help here as I am pretty lost with this. Thank you
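For what it's worth, grok does not reject lines because of brackets: a non-matching line is only tagged with _grokparsefailure, and the nginx-style pattern above will simply never match the bracketed container logs. A hedged sketch of an extra grok pattern for the format shown above (field names illustrative, pattern untested):
filter {
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}: %{LOGLEVEL:level}/%{DATA:process}\] %{GREEDYDATA:msg}" }
  }
}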

Issue while sending file content from filebeat to logstash

I am new to ELK and I am trying to do some hands-on work using the ELK stack. I am performing the following on WINDOWS:
1. Installed Elasticsearch, confirmed with http://localhost:9200/
2. Installed Logstash, confirmed using http://localhost:9600/ after starting it with:
logstash -f logstash.config
The logstash.config file looks like this:
input {
  beats {
    port => "5043"
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
3. Installed Kibana, confirmed using http://localhost:5601
Now I want to use filebeat to pass a log file to logstash, which parses and forwards it to Elasticsearch for indexing; finally kibana displays it.
In order to do that, I made the following changes in filebeat.yml.
Change 1:
In Filebeat prospectors, I added:
paths:
  # - /var/log/*.log
  - D:\KibanaInput\vinod.log
Contents of vinod.log: Hello World from FileBeat.
Change 2:
In Outputs:
#output.logstash:
  # The Logstash hosts
  hosts: ["localhost:9600"]
When I run the command below,
filebeat -c filebeat.yml -e
I get the following error:
ERR Connecting error publishing events (retrying): Failed to parse JSON response: json: cannot unmarshal string into Go value of type struct { Number string }
Please let me know what mistake I am making.
You are on a good path.
Please confirm the following:
1. In your filebeat.yml, make sure the output.logstash: line is not commented out; that corresponds to your change number 2.
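In other words, something like this (note that the beats input in logstash.config listens on 5043, while 9600 is Logstash's own HTTP API, which would explain the JSON parse error above):
output.logstash:
  hosts: ["localhost:5043"]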
2. Make sure your messages are being grokked correctly. Add the following output to your logstash pipeline config file:
output {
  stdout { codec => json }
}
3. Start your logstash in debug mode.
4. If you are reading the same file with the same content, make sure you remove the registry file in filebeat ($filebeatHome/data/registry).
5. Read the log files again.
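As a sketch of steps 3 and 4 (paths illustrative; stop filebeat before deleting its registry, and use del instead of rm on Windows):
# start logstash with verbose logging
logstash -f logstash.config --log.level debug
# force filebeat to re-read files it has already shipped
rm $filebeatHome/data/registry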

Logstash - Collecting Fortigate Logs

I've got ELK pulling logs from all my Windows servers and it's running great. I'm working on getting my Fortigate logs in there and I'm having trouble. Here is what I've done so far:
On the Fortigate:
config log syslogd setting
  set status enable
  set server "ip of my logstash server"
  set port 5044
end
config log syslogd filter
  set severity warning
end
On the ELK server, under /etc/logstash/conf.d, I added a new file named "20-fortigate-filter.conf" with the following contents:
filter {
  kv {
    add_tag => ["fortigate"]
  }
}
Then I restarted the logstash and kibana services. But I'm not finding the logs anywhere in Kibana. What am I missing?
You need to specify "field_split" and "value_split" too.
Try this:
kv {
  add_tag => ["fortigate"]
  value_split => "="
  field_split => ","
}
Note: enable CSV in the Fortigate syslog settings.
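Putting it together, a hedged sketch of a complete pipeline (the input and output stanzas here are assumptions, since only the filter file was shown; Fortigate syslog is typically sent over UDP to the port configured on the firewall):
input {
  udp {
    port => 5044
    type => "fortigate"
  }
}
filter {
  kv {
    add_tag => ["fortigate"]
    value_split => "="
    field_split => ","
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}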

Elasticsearch logstash http input plugin config error

I've been looking all over the web for a configuration example of the logstash http input plugin, and tried to follow the ones I've found. I am still running into problems with the following configuration:
input {
  http {
    host => "127.0.0.1"
    post => "31311"
    tags => "wpedit"
  }
}
output {
  elasticsearch { hosts => "localhost:9400" }
}
When running service logstash restart, it responds with Configuration error. Not restarting. Re-run with configtest parameter for details.
So I ran a configuration test (/opt/logstash/bin/logstash --configtest) and it says everything is fine.
So, my question is: how can I find what's wrong with the configuration? Can you see anything obviously incorrect? I'm fairly new to the world of Elasticsearch, if you could not tell...
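One thing worth double-checking in the config above: post looks like a typo for port, and port takes a number. A corrected sketch, keeping everything else as you had it:
input {
  http {
    host => "127.0.0.1"
    port => 31311
    tags => "wpedit"
  }
}
output {
  elasticsearch { hosts => "localhost:9400" }
}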
