I have the kibana web app running, but it shows an index from last week.
How can I get a new index loaded for today's date? Does it only happen as a result of new log events arriving?
If so, then I must not be contacting the logstash server.
I followed this article: https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04
Is there any way to run diagnostics when running as a service?
Are there any command lines I can issue to the service?
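A few generic checks can be run from the shell while Logstash runs as a service (a sketch assuming the package layout from that tutorial; paths and the config-test flag vary by version):
# check the service and its log for startup errors
sudo service logstash status
sudo tail -n 50 /var/log/logstash/logstash.log
# validate the config without starting the pipeline
# (older releases use --configtest, newer ones -t / --config.test_and_exit)
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
# confirm the tcp input is listening, then push a test line at it
sudo netstat -tlnp | grep 3333
echo 'test line' | nc localhost 3333
# list the indices elasticsearch knows about
curl 'http://localhost:9200/_cat/indices?v'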
Here is my logstash config:
input {
  tcp {
    type => "apache"
    port => 3333
  }
}
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
output {
  elasticsearch {
    # embedded elasticsearch only exists on the old 1.x releases
    embedded => true
  }
}
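For reference, this output writes to a daily index named logstash-YYYY.MM.dd by default, so a new index for today only appears once an event actually arrives; an old index in Kibana usually means no new events are reaching Logstash. A minimal sketch making the default explicit (the index option is spelled out here only for illustration):
output {
  elasticsearch {
    embedded => true
    # the default daily pattern; a fresh index is created as soon as
    # an event with today's @timestamp is indexed
    index => "logstash-%{+YYYY.MM.dd}"
  }
}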
I am trying to configure Filebeat with Logstash. At the moment I have managed to successfully configure Filebeat with Logstash, but I am running into some issues when creating multiple conf files in Logstash.
So currently I have one beats input, which is something like:
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
    }
  }
}
And a file input Logstash config which is like:
input {
  file {
    path => "/var/log/foldername/number.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "numberlogtest"
  }
}
The grok filter is working, as I successfully managed to create two index patterns in Kibana and view the data correctly.
The problem is that when I run Logstash with both configs applied, it fetches the data from number.log multiple times, and the Logstash plain logs fill up with warnings, which consumes a lot of computing resources; CPU goes over 80% (this is an Oracle instance). If I remove the file config from Logstash, the system runs properly.
I managed to run Logstash with each one of these config files applied individually, but not both at once.
I already added an exclusion in the filebeat config:
exclude_files:
- /var/log/foldername/*.log
Logstash plain logs when running both config files:
[2023-02-15T12:42:41,077][WARN ][logstash.outputs.elasticsearch][main][39aca10fa204f31879ff2b20d5b917784a083f91c2eda205baefa6e05c748820] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"numberlogtest", :routing=>nil}, {"service"=>{"type"=>"system"}
"caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:607"}}}}}
Fixed by creating a single Logstash config with both inputs. Logstash concatenates every config file loaded into the same pipeline, so with two files the events from each input were reaching both elasticsearch outputs, which also likely explains the 400 mapping errors above. Routing with conditionals keeps them apart:
input {
  beats {
    port => 5044
  }
  file {
    path => "**path**"
    start_position => "beginning"
  }
}
filter {
  if [path] == "**path**" {
    grok {
      match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
    }
  }
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index1"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    if [path] == "**path**" {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index2"
      }
    } else {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index1"
      }
    }
  }
}
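If the two streams never need to share filters or outputs, multiple pipelines (Logstash 6+) are an alternative to conditionals, since each pipeline only sees its own files. A sketch, with the pipeline ids and file paths as assumptions:
# /etc/logstash/pipelines.yml
- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/beats.conf"
- pipeline.id: numberlog
  path.config: "/etc/logstash/conf.d/numberlog.conf"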
I'm running Logstash inside Kubernetes, and another pod is constantly generating random logs. Grafana is able to detect the Loki driver and labels, but the namespace, pod, and container names are not showing up in Grafana.
In Grafana only host, path, and type are listed; it is not picking up labels like container, pod, and namespace.
Logstash conf:
input {
  file {
    id => "varlog"
    # the file input needs a glob that matches files, not a bare directory
    path => "/var/log/containers/*.log"
    type => "var log"
    start_position => "beginning"
  }
}
filter {
  if [kubernetes] {
    mutate {
      add_field => {
        "container_name" => "%{[kubernetes][container][name]}"
        "namespace" => "%{[kubernetes][namespace]}"
        "pod" => "%{[kubernetes][pod][name]}"
      }
      replace => { "host" => "%{[kubernetes][node][name]}" }
    }
  }
  mutate {
    remove_field => ["tags"]
  }
}
output {
  stdout { codec => rubydebug }
  loki {
    url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
  }
}
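Worth noting: events read straight from disk by the file input never carry a [kubernetes] field, so the conditional above never matches. One sketch, assuming the standard <pod>_<namespace>_<container>-<id>.log naming under /var/log/containers, recovers the labels from the file path itself:
filter {
  dissect {
    # split the container log filename into its kubernetes parts;
    # "path" holds the source file name when ECS compatibility is off
    mapping => { "path" => "/var/log/containers/%{pod}_%{namespace}_%{container_info}.log" }
  }
}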
logstash.conf:
output {
  elasticsearch {
    hosts => ["172.31.29.xxx:9200"]
    index => "redis-%{+YYYYMMdd}"
  }
}
I used YYYYMMdd to define the index name. I want to change the index timezone; how can I do that?
Thanks.
You can use the Logstash "date" filter plugin to change the log event timezone.
Here is an example of a logstash.conf file I have put together to show you how and where the date filter goes:
input {
  file {
    ...
    type => "typeName"
  }
}
filter {
  if [type] == "typeName" {
    grok {
      match => { "message" => "^%{TIMESTAMP_ISO8601:event_timestamp}..." }
    }
  }
  date {
    match => [ "event_timestamp", "YYYY-MM-dd HH:mm:ss,SSS" ]
    timezone => "Etc/GMT"
    locale => "en"
  }
}
output {
  if [type] == "typeName" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "indexName"
    }
  }
}
You can look at the available timezone values in the date filter plugin's documentation.
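One caveat to add: the %{+YYYYMMdd} date math in an index name is always rendered from @timestamp in UTC, so the timezone option above affects how the source string is parsed, not how the index name is formatted:
output {
  elasticsearch {
    hosts => ["172.31.29.xxx:9200"]
    # @timestamp is formatted in UTC here, no matter what timezone
    # the date filter used while parsing
    index => "redis-%{+YYYYMMdd}"
  }
}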
I'm trying to compare two log files, HAProxy and Nginx, using the ELK stack, especially the response time. In Logstash I have created two separate conf files for HAProxy and Nginx. In HAProxy I get the response time in milliseconds, e.g. 2334, and in Nginx I get it in seconds, e.g. 1.23.
I want to convert the Nginx response time to milliseconds. I tried to convert it using the ruby filter, but I'm not getting proper results, and I also think it's conflicting with my current Elasticsearch index created for HAProxy.
Below are my two config files:
HAProxy Logstash conf:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{MONTH:month} %{MONTHDAY:date} %{TIME:time} %{WORD:[source]} %{WORD:[app]}\[%{DATA:[class]}\]: %{IPORHOST:[UE_IP]}:%{NUMBER:[UE_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Source_Port]} %{IPORHOST:[NATTED_IP]}:%{NUMBER:[NATTED_Destination_Port]} %{IPORHOST:[WAN_IP]}:%{NUMBER:[WAN_Port]} \[%{HAPROXYDATE:[accept_date]}\] %{NOTSPACE:[frontend_name]}~ %{NOTSPACE:[backend_name]} %{NOTSPACE:[ty_name]}/%{NUMBER:[response_time]:int} %{NUMBER:[http_status_code]} %{NUMBER:[response_bytes]:int} - - ---- %{NOTSPACE:[df]} %{NOTSPACE:[df]} %{DATA:[domain_name]} %{DATA:[cache_status]} %{DATA:[domain_name]} %{URIPATHPARAM:[content]} HTTP/%{NUMBER:[http_version]}" }
    add_tag => [ "response_time", "response_time" ]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  # stdout {
  #   codec => rubydebug
  # }
}
Nginx Logstash conf file:
input {
  beats {
    port => 5045
  }
}
filter {
  grok {
    match => { "message" => "%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:content} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} %{NUMBER:response_bytes:int} \"-\" \"%{GREEDYDATA:junk}\" %{NUMBER:response_time}" }
  }
  ruby {
    code => "event.set('response_time', event.get('response_time').to_i * 1000)"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
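The root of the wrong results is most likely the to_i cast: it truncates a string like "1.23" to 1 before the multiplication. Capturing the value as a float in grok and converting with to_f keeps the fraction, as in the corrected snippet below (escaping added to the pattern):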
grok {
  # [ ] and embedded quotes must be escaped inside the match string
  match => { "message" => "%{IPV4:clientip} - - \[%{HTTPDATE:requesttimestamp}\] \"%{WORD:httpmethod} /\" %{NUMBER:responsecode:int} %{NUMBER:responsesize:int} \"-\" \"-\" \"-\" \"%{NUMBER:responsetimems:float}\"" }
}
ruby {
  # to_f keeps the fraction; to_i would truncate 1.23 to 1 before multiplying
  code => "event.set('responsetimems', event.get('responsetimems').to_f * 1000)"
}
I'm new to the ELK stack. I need to parse my custom logs using grok and then analyze them.
[11/Oct/2018 09:51:47] INFO [serviceManager.management.commands.serviceManagerMonitor:serviceManagerMonitor.py:114] [2018-10-11 07:51:47.527997+00:00] SmMonitoring Module : Launching action info over the service sysstat is delayed
with this grok pattern:
\[(?<timestamp>%{MONTHDAY}\/%{MONTH}\/%{YEAR} %{TIME})\] %{LOGLEVEL:loglevel} \[%{GREEDYDATA:message1}\] \[%{GREEDYDATA:message2}\] %{GREEDYDATA:message3}
I tried a grok debugger and it matched.
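For reference, against the sample line above the pattern yields roughly these fields (a sketch of the debugger output, not from the original post):
timestamp => "11/Oct/2018 09:51:47"
loglevel  => "INFO"
message1  => "serviceManager.management.commands.serviceManagerMonitor:serviceManagerMonitor.py:114"
message2  => "2018-10-11 07:51:47.527997+00:00"
message3  => "SmMonitoring Module : Launching action info over the service sysstat is delayed"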
Here is the Logstash configuration for the input:
input {
  beats {
    port => 5044
  }
}
and here is the output configuration:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
and here is the whole filter:
filter {
  grok {
    match => { "message" => "\[(?<timestamp>%{MONTHDAY}\/%{MONTH}\/%{YEAR} %{TIME})\] %{LOGLEVEL:loglevel} \[%{GREEDYDATA:message1}\] \[%{GREEDYDATA:message2}\] %{GREEDYDATA:message3}" }
  }
  date {
    # the sample timestamp has a space before the time and no timezone,
    # so the pattern must match "11/Oct/2018 09:51:47"
    match => [ "timestamp" , "dd/MMM/yyyy HH:mm:ss" ]
  }
}
But I don't get any result in Elasticsearch.
Thank you for helping me.