Logstash is not picking up namespace, pod, container_name

Below is my Logstash configuration. Grafana is unable to understand the namespace, pod, and container_name sent by Logstash.
input {
  file {
    path => "/host/var/log/pods/**/*.log"
    type => "kubernetes"
    start_position => "beginning"
  }
}
filter {
  if [kubernetes] {
    mutate {
      add_field => {
        "container_name" => "%{[kubernetes][container][name]}"
        "namespace" => "%{[kubernetes][namespace]}"
        "pod" => "%{[kubernetes][pod][name]}"
      }
      replace => { "host" => "%{[kubernetes][node][name]}" }
    }
  }
  mutate {
    remove_field => ["tags"]
  }
}
output {
  stdout { codec => rubydebug }
  loki {
    url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
  }
}
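The file input by itself does not create a [kubernetes] field, so the conditional above never fires and the added fields stay empty. Since the kubelet writes pod logs under /var/log/pods/<namespace>_<pod>_<uid>/<container>/, one option, shown as a minimal sketch below, is to dissect those values out of the log path instead (this assumes the classic path field; on newer Logstash versions with ECS compatibility the file input stores the path under [log][file][path]):

filter {
  # Assumes the standard kubelet layout: /var/log/pods/<namespace>_<pod>_<uid>/<container>/<n>.log
  dissect {
    mapping => {
      "path" => "/host/var/log/pods/%{namespace}_%{pod}_%{pod_uid}/%{container_name}/%{?logfile}"
    }
  }
}

Whether these fields then show up as Loki labels still depends on how the loki output plugin is configured to map fields to labels, which is not shown here.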

Related

Some of the attributes are missing in the Logstash logs

input {
http {
port => 8080
codec => json
}
}
filter part:
map['username'] ||= event.get('username');
map['error'] ||= event.get('message');
map['filename'] ||= event.get('filename');
map['line'] ||= event.get('line');
output {
  stdout {
    codec => rubydebug
  }
  if [type] == "client" {
    elasticsearch {
      hosts => ["${LOGSTASH_OUTPUT_HOST}"]
      index => "%{[@metadata][target_index_client]}"
      user => "${LOGSTASH_OUTPUT_USER:}"
      password => "${LOGSTASH_OUTPUT_PASS:}"
      manage_template => false
    }
  } else if [type] == "server" {
    elasticsearch {
      hosts => ["${LOGSTASH_OUTPUT_HOST}"]
      index => "%{[@metadata][target_index_server]}"
      user => "${LOGSTASH_OUTPUT_USER:}"
      password => "${LOGSTASH_OUTPUT_PASS:}"
      manage_template => false
    }
  }
}
**We are getting most of the attributes, but some of the important attributes are missing in our Logstash output.** Please suggest how to fix this issue.
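The map['…'] ||= event.get('…') lines look like the code block of an aggregate filter. Values written only into the aggregate map are not part of the event itself, so they never reach Elasticsearch unless the map is pushed as an event or copied back onto the current event. A minimal sketch of the latter, assuming a hypothetical request_id field as the task_id:

filter {
  aggregate {
    task_id => "%{request_id}"   # hypothetical correlation field; use whatever ties your events together
    code => "
      map['username'] ||= event.get('username')
      map['error']    ||= event.get('message')
      map['filename'] ||= event.get('filename')
      map['line']     ||= event.get('line')
      # Copy the aggregated values back onto the current event so they are indexed.
      map.each { |k, v| event.set(k, v) unless v.nil? }
    "
  }
}

Note that the aggregate filter only works reliably with a single pipeline worker (-w 1).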

How do we configure module-wise configuration in Logstash (ELK)?

I need to create module-wise dashboards, such as user management and campaign management. How do I configure Logstash to pull all the logs from the different log files?
Logstash configuration:
input {
  beats {
    port => 5044
    ssl => false
  }
  file {
    path => "C:/data/logs/OCDE.log"
    type => "ocde"
  }
  file {
    path => "C:/data/logs/CLM.log"
    type => "clm"
  }
}
filter {
  if [type] == "ocde" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}" ]
    }
  }
  else if [type] == "clm" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}" ]
    }
  }
}
output {
  if [document_type] == "backendlog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "enliven_be_log_yyyymmdd"
      document_type => "%{[@metadata][type]}"
    }
  }
}
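One way to get the module-wise separation (a sketch, not the only approach) is to keep the per-file type set in the inputs and let it drive both a module field and the target index, so each Kibana dashboard can filter on its own module or index:

filter {
  if [type] == "ocde" {
    mutate { add_field => { "module" => "user_management" } }       # hypothetical module name
  } else if [type] == "clm" {
    mutate { add_field => { "module" => "campaign_management" } }   # hypothetical module name
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    # One index per type, with a daily date suffix.
    index => "enliven_%{type}_log_%{+YYYY.MM.dd}"
  }
}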

ELK Stack - Kibana showing only logs of one log file

I parsed two access log files using Logstash, and the command prompt shows both log files being parsed. When I check elasticsearch-head in the "Any Request" tab, it also shows all the parsed logs.
But when I try to view them in Kibana, it shows the logs of only the first file. How can I view the logs of the other file too?
This is my .conf:
input {
  file {
    path => ["G:/logstash-1.5.0/bin/tmp/*_log"]
    start_position => "beginning"
  }
}
filter {
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
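One thing worth checking, as a hedged guess since the config itself looks valid: the file input remembers how far it has read each file in a sincedb file, so start_position => "beginning" only applies to files Logstash has never seen before. To force both files to be re-read from the start on Windows, you can point sincedb at the null device:

input {
  file {
    path => ["G:/logstash-1.5.0/bin/tmp/*_log"]
    start_position => "beginning"
    # NUL is the Windows equivalent of /dev/null, so no read position is persisted.
    sincedb_path => "NUL"
  }
}

Also check the time picker in Kibana: the access events get their @timestamp from the parsed Apache timestamp, while the error and random_logs events keep the ingestion time, so the two sets of logs can land in different time ranges.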

Logstash causes data loss

I use Logstash to process my web logs, but I am running into data loss.
I have a log of 100 lines, and sometimes the result after processing via Logstash has fewer than 100 lines. The weird thing is that no error message is displayed.
The following is my Logstash config:
input {
  file {
    path => "/home/jhowliu/Work/Log/201506/testing.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    columns => ["ip", "time", "request", "status", "refer", "browser"]
  }
  grok {
    match => {
      "time" => "%{MONTHDAY:day}/%{MONTH:month}/%{YEAR:year}:%{TIME:time}"
    }
    overwrite => ["time"]
  }
  mutate {
    replace => { "time" => "%{day}-%{month}-%{year} %{time}" }
  }
  if [request] != "-" {
    grok {
      match => {
        "request" => "%{URIPATH:dest_path}"
      }
    }
  }
  if [refer] != "-" {
    grok {
      match => {
        "refer" => "%{URIHOST}%{URIPATH:source_path}"
      }
    }
  }
}
output {
  csv {
    fields => ["time", "ip", "dest_path", "source_path", "status"]
    path => "/home/jhowliu/testing.log"
  }
}
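To see whether the missing lines are being dropped or are simply failing to parse, one hedged diagnostic is to tap events that the csv or grok filters tagged as failures into a separate file (the failure tag names below are the plugin defaults; the extra output path is hypothetical):

output {
  if "_csvparsefailure" in [tags] or "_grokparsefailure" in [tags] {
    # Hypothetical path for inspecting lines that failed to parse.
    file { path => "/home/jhowliu/testing_failures.log" }
  } else {
    csv {
      fields => ["time", "ip", "dest_path", "source_path", "status"]
      path => "/home/jhowliu/testing.log"
    }
  }
}

If the failure file stays empty and lines are still missing, the gap is more likely on the input side (for example, the last line of the file lacking a trailing newline, which keeps the file input from emitting it).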

Can Logstash process multiple outputs simultaneously?

I'm very new to Logstash and Elasticsearch. I am trying to store log files both in Elasticsearch and in a flat file. I know that Logstash supports both outputs, but are they processed simultaneously? Or is it done periodically through a job?
Yes, you can do this by tagging and cloning your inputs with the "add_tag" option in your shipper config:
input
{
tcp { type => "linux" port => "50000" codec => plain { charset => "US-ASCII" } }
tcp { type => "apache_access" port => "50001" codec => plain { charset => "US-ASCII" } }
tcp { type => "apache_error" port => "50002" codec => plain { charset => "US-ASCII" } }
tcp { type => "windows_security" port => "50003" codec => plain { charset => "US-ASCII" } }
tcp { type => "windows_application" port => "50004" codec => plain { charset => "US-ASCII" } }
tcp { type => "windows_system" port => "50005" codec => plain { charset => "US-ASCII" } }
udp { type => "network_equipment" port => "514" codec => plain { charset => "US-ASCII" } }
udp { type => "firewalls" port => "50006" codec => plain }
}
filter
{
grok { match => [ "host", "%{IPORHOST:ipaddr}(:%{NUMBER})?" ] }
mutate { replace => [ "fqdn", "%{ipaddr}" ] }
dns { reverse => [ "fqdn", "fqdn" ] action => "replace" }
if [type] == "linux" { clone { clones => "linux.log" add_tag => "savetofile" } }
if [type] == "apache_access" { clone { clones => "apache_access.log" add_tag => "savetofile" } }
if [type] == "apache_error" { clone { clones => "apache_error.log" add_tag => "savetofile" } }
if [type] == "windows_security" { clone { clones => "windows_security.log" add_tag => "savetofile" } }
if [type] == "windows_application" { clone { clones => "windows_application.log" add_tag => "savetofile" } }
if [type] == "windows_system" { clone { clones => "windows_system.log" add_tag => "savetofile" } }
if [type] == "network_equipment" { clone { clones => "network_%{fqdn}.log" add_tag => "savetofile" } }
if [type] == "firewalls" { clone { clones => "firewalls.log" add_tag => "savetofile" } }
}
output
{
#stdout { debug => true }
#stdout { codec => rubydebug }
redis { host => "1.1.1.1" data_type => "list" key => "logstash" }
}
And on your main logstash instance you would do this:
input {
  redis {
    host => "1.1.1.1"
    data_type => "list"
    key => "logstash"
    type => "redis-input"
    # We use the 'json' codec here because we expect to read json events from redis.
    codec => json
  }
}
output
{
  if "savetofile" in [tags] {
    file {
      path => [ "/logs/%{fqdn}/%{type}" ]
      message_format => "%{message}"
    }
  }
  else {
    elasticsearch { host => "2.2.2.2" }
  }
}
FYI, you can read "The life of a Logstash event" to learn more about how events flow through Logstash.
The output worker model is currently a single thread. Outputs receive events in the order they are defined in the config file.
However, outputs may buffer events temporarily before publishing them; for example, a file output may buffer two or three events before writing them to disk.
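If that buffering matters for your use case, the file output has a flush_interval setting; a small sketch (0 flushes on every message, at the cost of more IO):

output {
  if "savetofile" in [tags] {
    file {
      path => [ "/logs/%{fqdn}/%{type}" ]
      # Flush after every event instead of batching writes (the default is every 2 seconds).
      flush_interval => 0
    }
  }
}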
First you need to install output plugins:
/usr/share/logstash/bin/logstash-plugin install logstash-output-elasticsearch
/usr/share/logstash/bin/logstash-plugin install logstash-output-file
Then create conf files for output:
cat /etc/logstash/conf.d/nfs-output.conf
output {
file {
path => "/your/path/filebeat-%{+YYYY-MM-dd}.log"
}
}
cat /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
elasticsearch {
hosts => ["elasitc_ip:9200"]
manage_template => true
user => "elastic"
password => "your_password"
}
}
Then:
service logstash restart
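Keep in mind that all files under /etc/logstash/conf.d are concatenated into a single pipeline by default, so with the two output files above every event is written both to the file and to Elasticsearch. If you only want some events in each place, wrap the outputs in conditionals; a sketch, assuming a hypothetical type value set on the input side:

output {
  if [type] == "filebeat" {    # hypothetical type value
    file { path => "/your/path/filebeat-%{+YYYY-MM-dd}.log" }
  } else {
    elasticsearch {
      hosts => ["elastic_ip:9200"]
      user => "elastic"
      password => "your_password"
    }
  }
}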
