How to make multiple outputs in Logstash?

I want to send events to multiple destinations from Logstash; here is my configuration:
output {
  elasticsearch {
    hosts => "10.10.10.7:9200"
    index => "ubuntu18"
  }
  kafka {
    bootstrap_server => "10.10.10.6:9092"
    codec => json
    topic_id => "beats"
  }
}
But it is not working. Any ideas?

It seems that your kafka output configuration is wrong: the option is bootstrap_servers, not bootstrap_server.
output {
  elasticsearch {
    hosts => "10.10.10.7:9200"
    index => "ubuntu18"
  }
  kafka {
    bootstrap_servers => "10.10.10.6:9092"
    codec => json
    topic_id => "beats"
  }
}
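Note that with this config every event goes to both Elasticsearch and Kafka, because each output in the same output block receives every event. If only a subset of events should reach Kafka, the output can be wrapped in a conditional; a minimal sketch, assuming a hypothetical "to_kafka" tag set earlier in the pipeline:
output {
  elasticsearch {
    hosts => "10.10.10.7:9200"
    index => "ubuntu18"
  }
  # only events carrying the (hypothetical) "to_kafka" tag are sent to Kafka
  if "to_kafka" in [tags] {
    kafka {
      bootstrap_servers => "10.10.10.6:9092"
      codec => json
      topic_id => "beats"
    }
  }
}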

Related

Logstash config with filebeat issue when using both beats and file input

I am trying to configure Filebeat with Logstash. I have managed to get Filebeat working with Logstash, but I am running into issues when creating multiple conf files in Logstash.
So currently I have one Beats input, which looks something like this:
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
    }
  }
}
And a file-input Logstash config, which looks like this:
input {
  file {
    path => "/var/log/foldername/number.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "numberlogtest"
  }
}
The grok filter is working, as I successfully managed to create two index patterns in Kibana and view the data correctly.
The problem is that when I run Logstash with both configs applied, Logstash fetches the data from number.log multiple times and the Logstash plain logs fill up with warnings, which uses a lot of computing resources and pushes CPU over 80% (this is an Oracle instance). If I remove the file config from Logstash, the system runs properly.
I managed to run Logstash with each of these config files applied individually, but not with both at once.
I already added an exclusion in the Filebeat config:
exclude_files:
- /var/log/foldername/*.log
Logstash plain logs when running both config files:
[2023-02-15T12:42:41,077][WARN ][logstash.outputs.elasticsearch][main][39aca10fa204f31879ff2b20d5b917784a083f91c2eda205baefa6e05c748820] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"numberlogtest", :routing=>nil}, {"service"=>{"type"=>"system"}
"caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:607"}}}}}
Fixed by creating a single Logstash config with both inputs:
input {
  beats {
    port => 5044
  }
  file {
    path => "**path**"
    start_position => "beginning"
  }
}
filter {
  if [path] == "**path**" {
    grok {
      match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
    }
  }
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index1"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    if [path] == "**path**" {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index2"
      }
    } else {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index1"
      }
    }
  }
}
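The underlying cause is that Logstash merges all the config files it loads into a single pipeline, so with two separate files every event from both inputs was passing through both filter and output sections, which explains the duplicated documents and the mapping warnings. An alternative to matching on [path] is to tag the file input and route on the tag; a sketch of the same combined config under that assumption (the **path** placeholder and index names are unchanged from above):
input {
  beats {
    port => 5044
  }
  file {
    path => "**path**"
    start_position => "beginning"
    tags => ["numberlog"]          # tag events coming from this input
  }
}
filter {
  if "numberlog" in [tags] {
    grok {
      match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
    }
  }
}
output {
  if "numberlog" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index2"
    }
  } else if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index1"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index1"
    }
  }
}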

Logstash Multiple index based on multiple path

I'm using the following configuration file for Logstash to create multiple indices, but they are not visible in Kibana. The logs are parsed, but the indices are not created. What do I need to change to make this work?
input {
  stdin {
    type => "stdin-type"
  }
  file {
    tags => ["prod"]
    type => ["json"]
    path => ["C:/Users/DELL/Downloads/log/prod/*.log"]
  }
  file {
    tags => ["dev"]
    type => ["json"]
    path => ["C:/Users/DELL/Downloads/log/test/*.log"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  if "prod" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => ["prod-log"]
    }
  }
  if "dev" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => ["dev-log"]
    }
  }
}
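For what it's worth, a frequent stumbling block with the file input on Windows is that start_position defaults to "end" (so existing file content is never read) and the sincedb remembers read positions across restarts. A minimal sketch of one of the file inputs with those options added for testing; sincedb_path => "NUL" is the Windows equivalent of /dev/null and forces a full re-read on every run:
input {
  file {
    tags => ["prod"]
    type => "json"                    # type expects a string rather than an array
    path => ["C:/Users/DELL/Downloads/log/prod/*.log"]
    start_position => "beginning"     # read existing content, not only new lines
    sincedb_path => "NUL"             # testing only: do not persist read positions
  }
}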

Issue in renaming Json parsed field in Logstash

I am parsing a JSON log file in Logstash. There is a field named @person.name. I tried to rename this field before sending it to Elasticsearch. I also tried to remove the field, but I could not remove or delete it, and because of that my data is not getting indexed in Elasticsearch.
Error recorded in Elasticsearch:
MapperParsingException[Field name [@person.name] cannot contain '.']
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:276)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:196)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:308)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:138)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:119)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:100)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:435)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)
My Logstash config
input {
  beats {
    port => 11153
  }
}
filter {
  if [type] == "person_get" {
    ## Parsing JSON input with the JSON filter
    json {
      source => "message"
    }
    mutate {
      rename => { "@person.name" => "@person-name" }
      remove_field => [ "@person.name" ]
    }
    fingerprint {
      source => ["ResponseTimestamp"]
      target => "fingerprint"
      key => "78787878"
      method => "SHA1"
      concatenate_sources => true
    }
  }
}
output {
  if [type] == "person_get" {
    elasticsearch {
      index => "logstash-person_v1"
      hosts => ["xxx.xxx.xx:9200"]
      document_id => "%{fingerprint}" # !!! prevent duplication
    }
    stdout {
      codec => rubydebug
    }
  }
}
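Elasticsearch 2.x rejects field names that contain dots, so renaming or dropping the field before output is the right idea. One option that is often used for this is the de_dot filter (installed separately with bin/logstash-plugin install logstash-filter-de_dot), which rewrites dots in field names. A minimal sketch, assuming the field really is a top-level field literally named "@person.name":
filter {
  if [type] == "person_get" {
    json {
      source => "message"
    }
    # de_dot replaces the "." in the listed field names with the separator,
    # turning "@person.name" into "@person-name" before it reaches Elasticsearch
    de_dot {
      fields    => ["@person.name"]
      separator => "-"
    }
  }
}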

logstash output to kafka - topic data in message

I want to create a conf file for Logstash that loads data from a file and sends it to Kafka.
The file is in JSON format and has the topicId in it.
This is what I have so far:
input {
  file {
    path => "~/file1.json"
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  kafka {
    bootstrap_servers => "localhost"
    codec => plain {
      format => "%{message}"
    }
    topic_id => "???"
  }
}
Can this be done?
Regards,
Ido
Yes, it can be done.
For example, if the message JSON contains a topicId key like:
"topicId": "topic1"
Then in the Logstash kafka output plugin:
output {
  kafka {
    bootstrap_servers => "localhost"
    codec => plain {
      format => "%{message}"
    }
    topic_id => "%{topicId}"
  }
}
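Two related details worth noting: the file input requires absolute paths (so "~/file1.json" will generally not be picked up), and when the input already uses codec => "json" the extra json filter on "message" is redundant. A fuller sketch under those assumptions, with a placeholder path and broker port:
input {
  file {
    path => "/home/ido/file1.json"          # placeholder: the file input needs an absolute path, "~" is not expanded
    start_position => "beginning"
    codec => "json"                         # parses each JSON line, so no separate json filter is needed
  }
}
output {
  kafka {
    bootstrap_servers => "localhost:9092"   # host:port of the Kafka broker
    codec => json                           # send the whole event as JSON
    topic_id => "%{topicId}"                # per-event topic read from the parsed field
  }
}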

Logstash: Failed parsing date

Summary:
Logstash -> Elasticsearch: "Failed parsing date" shown in debug output
Events in the log file contain the field @timestamp (format: 2014-06-18T11:52:45.370636+02:00)
Events are actually indexed into Elasticsearch, but "failed parsing" errors are shown.
Versions:
Logstash 1.4.1
Elasticsearch 1.20
Is there something I am doing wrong?
I have log files that contain events like this:
{"#timestamp":"2014-06-18T11:52:45.370636+02:00","Level":"Info","Machine":"X100","Session":{"MainId":0,"SubId":"5otec"},"Request":{"Url":"http://www.localhost:5000/Default.aspx","Method":"GET","Referrer":"http://www.localhost:5000/Default.aspx"},"EndRequest":{"Duration":{"Main":0,"Page":6720}}}
I use this logstash config:
input {
  file {
    path => "D:/testlogs/*.log"
    codec => "json"
    start_position => [ "beginning" ]
  }
}
filter {
  date {
    match => [ "@timestamp", "ISO8601" ]
  }
}
output {
  stdout {
    codec => json
  }
  elasticsearch {
    protocol => http
    host => "10.125.26.128"
  }
}
When I run Logstash with this config on the events in the log files, I get the following error:
Failed parsing date from field {:field=>"@timestamp", :value=>"2014-06-18T12:18:34.717+02:00", :exception=>#<TypeError: cannot convert instance of class org.jruby.RubyTime to class java.lang.String>
Now the thing is that the events actually are imported into Elasticsearch, but I still see these errors.
Is this a problem, or can these failed-parsing errors be ignored?
Your logs are already in JSON format, so you don't need to parse the date: the json codec has already turned @timestamp into a timestamp object, which is why the date filter complains that it cannot convert a RubyTime to a String.
You can use the json filter to parse all the fields and values.
For example, with this configuration:
input {
  file {
    path => "D:/testlogs/*.log"
    codec => "json"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
I can parse your log successfully, and the output is:
{
  "@timestamp" => "2014-06-18T17:52:45.370+08:00",
  "Level" => "Info",
  "Machine" => "X100",
  "Session" => {
    "MainId" => 0,
    "SubId" => "5otec"
  },
  "Request" => {
    "Url" => "http://www.localhost:5000/Default.aspx",
    "Method" => "GET",
    "Referrer" => "http://www.localhost:5000/Default.aspx"
  },
  "EndRequest" => {
    "Duration" => {
      "Main" => 0,
      "Page" => 6720
    }
  },
  "@version" => "1",
  "host" => "ABC",
  "path" => "D:/testlogs/*.log"
}
I also found that codec => json does not seem to work on the file input, although it does work on the stdin input; maybe this is a bug in Logstash.
Hope this helps.
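Put differently, the "Failed parsing date" warnings come from running the date filter on a field that has already been converted into a timestamp object, so dropping the date filter makes them go away. A minimal sketch, keeping the Logstash 1.4-era output options from the question:
input {
  file {
    path => "D:/testlogs/*.log"
    start_position => "beginning"
  }
}
filter {
  json {
    source => "message"             # parses the JSON line, including @timestamp
  }
  # no date filter: @timestamp is already a timestamp object after parsing
}
output {
  elasticsearch {
    protocol => http                # output options as in the question's Logstash 1.4 config
    host => "10.125.26.128"
  }
}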
