Logstash influxdb output with geoip filter

I’m trying to use logstash geoip filter to send data about location of IP address to InfluxDB via InfluxDB output plugin.
My logstash conf file is:
input {
  file {
    path => "/root/geoip_test.txt"
    start_position => "beginning"
  }
}
filter {
  geoip {
    source => "message"
    fields => ["latitude", "longitude"]
  }
}
output {
  stdout { codec => rubydebug }
  influxdb {
    host => "localhost"
    port => 8086
    db => "metrics"
    measurement => "geoip_test"
    codec => "json"
    use_event_fields_for_data_points => true
  }
}
geoip_test.txt file contains only one IP address:
14.143.35.10
The output with the error I receive is:
[2020-09-07T12:26:26,696][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
{
    "geoip" => {
        "latitude" => 12.9771,
        "longitude" => 77.5871
    },
    "message" => "14.143.35.10",
    "path" => "/root/geoip_test.txt",
    "@timestamp" => 2020-09-07T10:26:33.963Z,
    "host" => "test",
    "@version" => "1"
}
[2020-09-07T12:26:34,942][WARN ][logstash.outputs.influxdb][main][941178b6897abb80f9a5f7654e5e62ba752d5e20b68781bc62b466e489c2ce56] Non recoverable exception while writing to InfluxDB {:exception=>#<InfluxDB::Error: {"error":"unable to parse 'geoip_test,host=test geoip={\"latitude\"=\u003e12.9771, \"longitude\"=\u003e77.5871},path=\"/root/geoip_test.txt\" 1599474393963': invalid boolean"}
>}
I think the geoip filter generates some boolean field that InfluxDB is not able to work with.
Does anyone have any idea what to do with that? Is it possible to set up the geoip filter so that it generates nothing but the lon and lat fields?
Any help is really appreciated!
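A minimal sketch of one way around this, assuming the nested geoip hash is what InfluxDB rejects (the error shows the whole hash being serialized as geoip={...} in the line protocol): hoist the two values to flat top-level fields and drop the nested object before the influxdb output. The field references follow the rubydebug output above.
filter {
  geoip {
    source => "message"
    fields => ["latitude", "longitude"]
  }
  # hoist the nested values to flat, numeric top-level fields
  mutate {
    rename => {
      "[geoip][latitude]"  => "latitude"
      "[geoip][longitude]" => "longitude"
    }
  }
  # drop the now-empty nested object so only flat fields reach the output
  mutate {
    remove_field => ["geoip"]
  }
}
With use_event_fields_for_data_points enabled, the remaining event fields are still written; judging from the error line above, host is sent as a tag and path as a string field, so remove those too if you only want latitude and longitude.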

Related

Logstash config with filebeat issue when using both beats and file input

I am trying to configure Filebeat with Logstash. I have managed to get Filebeat working with Logstash, but I am running into issues when creating multiple conf files in Logstash.
So currently I have one Beats input, which is something like:
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
    }
  }
}
And a file Logstash config which is like :
input {
  file {
    path => "/var/log/foldername/number.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "numberlogtest"
  }
}
The grok filter is working, as I successfully managed to create two index patterns in Kibana and view the data correctly.
The problem is that when I run Logstash with both configs applied, it fetches the data from number.log multiple times and the Logstash plain logs fill up with warnings, which uses a lot of computing resources and pushes CPU over 80% (this is an Oracle instance). If I remove the file config from Logstash, the system runs properly.
I managed to run Logstash with each one of these config files applied individually, but not both at once.
I have already added an exclusion in the Filebeat config:
exclude_files:
  - /var/log/foldername/*.log
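As an aside, Filebeat's exclude_files option takes regular expressions rather than shell globs, so a glob like the one above may not match as intended; a sketch of the same exclusion written as a regex (the path is the placeholder from above):
exclude_files:
  - '^/var/log/foldername/.*\.log$'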
Logstash plain logs when running both config files:
[2023-02-15T12:42:41,077][WARN ][logstash.outputs.elasticsearch][main][39aca10fa204f31879ff2b20d5b917784a083f91c2eda205baefa6e05c748820] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"numberlogtest", :routing=>nil}, {"service"=>{"type"=>"system"}
"caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:607"}}}}}
Fixed by creating a single Logstash config with both inputs:
input {
  beats {
    port => 5044
  }
  file {
    path => "**path**"
    start_position => "beginning"
  }
}
filter {
  if [path] == "**path**" {
    grok {
      match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
    }
  }
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index1"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    if [path] == "**path**" {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index2"
      }
    } else {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index1"
      }
    }
  }
}
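For context, when Logstash loads several config files into a single pipeline it concatenates them, so every input is read by every filter and output unless conditionals separate them; that is why number.log events were reaching both elasticsearch outputs. An alternative to the single merged config above is to run the two files as separate pipelines via pipelines.yml; a sketch, where the pipeline ids and file paths are assumptions:
- pipeline.id: beats-pipeline
  path.config: "/etc/logstash/conf.d/beats.conf"
- pipeline.id: file-pipeline
  path.config: "/etc/logstash/conf.d/file.conf"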

Logstash grok pattern error with custom log

I'm new to the ELK stack. I need to parse my custom logs using grok and then analyze them. Here is a sample log line:
[11/Oct/2018 09:51:47] INFO [serviceManager.management.commands.serviceManagerMonitor:serviceManagerMonitor.py:114] [2018-10-11 07:51:47.527997+00:00] SmMonitoring Module : Launching action info over the service sysstat is delayed
with this grok pattern:
\[(?<timestamp>%{MONTHDAY}\/%{MONTH}\/%{YEAR} %{TIME})\] %{LOGLEVEL:loglevel} \[%{GREEDYDATA:messagel}\] \[%{GREEDYDATA:message2}\] %{GREEDYDATA:message3}
I tried a grok debugger and it matched.
Here is the Logstash configuration for the input:
input {
  beats {
    port => 5044
  }
}
and here is the output configuration:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
and here is the whole filter:
filter {
  grok {
    match => { "message" => "\[(?<timestamp>%{MONTHDAY}\/%{MONTH}\/%{YEAR} %{TIME})\] %{LOGLEVEL:loglevel} \[%{GREEDYDATA:messagel}\] \[%{GREEDYDATA:message2}\] %{GREEDYDATA:message3}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
But I don't get any results in Elasticsearch.
Thank you for helping me.
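One thing worth checking in the filter above: the date pattern "dd/MMM/yyyy:HH:mm:ss Z" does not match the bracketed timestamp in the sample line ("11/Oct/2018 09:51:47" has a space before the time and no timezone offset), so the date filter will add a _dateparsefailure tag. A sketch of a matching pattern, assuming the timestamps never carry an offset:
date {
  # matches "11/Oct/2018 09:51:47"
  match => [ "timestamp", "dd/MMM/yyyy HH:mm:ss" ]
}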

Issue in renaming Json parsed field in Logstash

I am parsing a JSON log file in Logstash. There is a field named @person.name. I tried to rename this field before sending it to Elasticsearch. I also tried to remove the field, but I could not remove or delete it, and because of that my data is not getting indexed in Elasticsearch.
Error recorded in Elasticsearch:
MapperParsingException[Field name [@person.name] cannot contain '.']
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:276)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:196)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:308)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:138)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:119)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:100)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:435)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
    at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)
My Logstash config
input {
  beats {
    port => 11153
  }
}
filter {
  if [type] == "person_get" {
    ## Parsing JSON input with the json filter
    json {
      source => "message"
    }
    mutate {
      rename => { "@person.name" => "@person-name" }
      remove_field => [ "@person.name" ]
    }
    fingerprint {
      source => ["ResponseTimestamp"]
      target => "fingerprint"
      key => "78787878"
      method => "SHA1"
      concatenate_sources => true
    }
  }
}
output {
  if [type] == "person_get" {
    elasticsearch {
      index => "logstash-person_v1"
      hosts => ["xxx.xxx.xx:9200"]
      document_id => "%{fingerprint}" # !!! prevent duplication
    }
    stdout {
      codec => rubydebug
    }
  }
}
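One option, assuming the dot in the field name is what Elasticsearch rejects (as the MapperParsingException above suggests) and that the logstash-filter-de_dot plugin is installed: let the de_dot filter rewrite the dot right after the json filter, before any mutate. A minimal sketch:
de_dot {
  # rewrites "@person.name" to "@person_name" (the default separator is "_")
  fields => ["@person.name"]
}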

Logstash keep syslog host

I have a syslog server and the ELK stack on the same server. I have a directory for each syslog source.
I'm trying to parse syslog files with Logstash, and I'd like to keep the IP address or the hostname of the syslog source in the "host" field. At the moment I get 0.0.0.0 as the source after Logstash parsing.
My logstash.conf:
input {
  file {
    path => ["path/to/file.log"]
    start_position => "beginning"
    type => "linux-syslog"
    ignore_older => 0
  }
}
filter {
  if [type] == "linux-syslog" {
    grok {
      match => { "message" => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["#IP_Elastic:Port_Elastic"]
  }
  stdout { codec => rubydebug }
}
You can overwrite your host with your IP variable once you have parsed it. Consider this example:
Pipeline main started
{"ip":"1.2.3.4"}
{
    "message" => "{\"ip\":\"1.2.3.4\"}",
    "@version" => "1",
    "@timestamp" => "2016-08-10T13:36:18.875Z",
    "host" => "pandaadb",
    "ip" => "1.2.3.4",
    "@host" => "1.2.3.4"
}
I am parsing the JSON to get the IP, then I write the IP into a separate @host field.
The filter:
filter {
  # this parses the ip json
  json {
    source => "message"
  }
  mutate {
    add_field => { "@host" => "%{ip}" }
  }
}
Replace %{ip} with whatever field contains your IP address.
Cheers,
Artur
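A further sketch on the same idea, assuming the goal is to overwrite the existing host field with the hostname already captured by the grok pattern in the question, rather than adding a new field:
mutate {
  # overwrite host with the value grok put into syslog_hostname
  replace => { "host" => "%{syslog_hostname}" }
}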

Logstash: Failed parsing date

Summary:
logstash -> elasticsearch: "Failed parsing date" shown in debug output
Events in the log file contain the field @timestamp (format: 2014-06-18T11:52:45.370636+02:00)
Events are actually processed into Elasticsearch, but "failed parsing" errors are shown.
Versions:
Logstash 1.4.1
Elasticsearch 1.20
Is there something I am doing wrong?
I have log files that contain events like this:
{"#timestamp":"2014-06-18T11:52:45.370636+02:00","Level":"Info","Machine":"X100","Session":{"MainId":0,"SubId":"5otec"},"Request":{"Url":"http://www.localhost:5000/Default.aspx","Method":"GET","Referrer":"http://www.localhost:5000/Default.aspx"},"EndRequest":{"Duration":{"Main":0,"Page":6720}}}
I use this logstash config:
input {
  file {
    path => "D:/testlogs/*.log"
    codec => "json"
    start_position => [ "beginning" ]
  }
}
filter {
  date {
    match => [ "@timestamp", "ISO8601" ]
  }
}
output {
  stdout {
    codec => json
  }
  elasticsearch {
    protocol => http
    host => "10.125.26.128"
  }
}
When I run logstash with this config on the events in the log files I get the following error:
Failed parsing date from field {:field=>"@timestamp", :value=>"2014-06-18T12:18:34.717+02:00", :exception=>#<TypeError: cannot convert instance of class org.jruby.RubyTime to class java.lang.String>
The thing is that the events actually are imported into Elasticsearch, but I see these errors.
Can this be a problem, or can these failed-parsing errors be ignored?
Your logs are already in JSON format, so you do not need to parse the date.
You can use the json filter to parse all the fields and values.
For example, with this configuration:
input {
  file {
    path => "D:/testlogs/*.log"
    codec => "json"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
I can parse your log successfully, and the output is:
{
    "@timestamp" => "2014-06-18T17:52:45.370+08:00",
    "Level" => "Info",
    "Machine" => "X100",
    "Session" => {
        "MainId" => 0,
        "SubId" => "5otec"
    },
    "Request" => {
        "Url" => "http://www.localhost:5000/Default.aspx",
        "Method" => "GET",
        "Referrer" => "http://www.localhost:5000/Default.aspx"
    },
    "EndRequest" => {
        "Duration" => {
            "Main" => 0,
            "Page" => 6720
        }
    },
    "@version" => "1",
    "host" => "ABC",
    "path" => "D:/testlogs/*.log"
}
I also tried codec => json in the file input; it does not work there, but it does work with the stdin input. Maybe this is a bug in Logstash.
Hope this can help you.
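Alternatively, a minimal sketch that keeps the json codec from the original input: the codec has already turned the event's @timestamp string into a timestamp object (which is exactly why the date filter complains about a RubyTime), so the date filter can simply be dropped:
input {
  file {
    path => "D:/testlogs/*.log"
    codec => "json"
    start_position => "beginning"
  }
}
# no date filter needed: @timestamp is already set by the json codec
output {
  elasticsearch {
    protocol => http
    host => "10.125.26.128"
  }
}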
