Kibana4 geo map Error - not showing the client_ip field - logstash

I am trying to get the Kibana 4 tile map to work with ELB logs.
When I click the Discover tab I can clearly see a field geoip.location with values of [lat, lon],
but when I click the Visualise tab -> Tile map -> new search -> Geo coordinates,
I get an error (what the error is isn't shown anywhere; I've also checked the Kibana logs, but nothing is there).
I checked Inspect Element - also nothing.
I then select Geohash, but the field dropdown is empty (when I click on it, it is blank with a check icon).
How can I see what the error is?
How can I get this map to work?
My config is:
input {
  file {
    path => "/logstash_data/logs/elb/**/*"
    exclude => "*.gz"
    type => "elb"
    start_position => "beginning"
    sincedb_path => "log_sincedb"
  }
}

filter {
  if [type] == "elb" {
    grok {
      match => [
        "message", '%{TIMESTAMP_ISO8601:timestamp} %{NGUSERNAME:loadbalancer} %{IP:client_ip}:%{POSINT:client_port} (%{IP:backend_ip}:%{POSINT:backend_port}|-) %{NUMBER:request_processing_time} %{NUMBER:backend_processing_time} %{NUMBER:response_processing_time} %{POSINT:elb_status_code} %{INT:backend_status_code} %{NUMBER:received_bytes} %{NUMBER:sent_bytes} "%{WORD:method} https?://%{WORD:request_subdomain}.server.com:%{POSINT:request_port}%{URIPATH:request_path}(?:%{URIPARAM:query_string})? %{NOTSPACE}"'
      ]
    }
    date {
      match => [ "timestamp", "ISO8601" ]
      target => "@timestamp"
    }
    if [query_string] {
      kv {
        field_split => "&?"
        source => "query_string"
        prefix => "query_string_"
      }
      mutate {
        remove_field => [ "query_string" ]
      }
    }
    if [client_ip] {
      geoip {
        source => "client_ip"
        add_tag => [ "geoip" ]
      }
    }
    if [timestamp] {
      ruby { code => "event['log_timestamp'] = event['@timestamp'].strftime('%Y-%m-%d')" }
    }
  }
}
output {
  elasticsearch {
    cluster => "ElasticSearch"
    host => "elasticsearch.server.com"
    port => 9300
    protocol => "node"
    manage_template => true
    template => "/etc/logstash/lib/logstash/outputs/elasticsearch/elasticsearch-template.json"
    index => "elb-%{log_timestamp}"
  }
}

The geoip mapping did not work in my case because my index names did not start with logstash-.
If you want a custom index name to get the geo_point mapping, you must create a template for that index name
and reference it in the elasticsearch output:
elasticsearch {
  manage_template => true
  template => "/etc/logstash/templates/custom_template.json"
}
Your template should look like this:
{
  "template" : "index_name-*",
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true, "omit_norms" : true },
      "dynamic_templates" : [ {
        "message_field" : {
          "match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fields" : {
              "raw" : { "type" : "string", "index" : "not_analyzed", "ignore_above" : 256 }
            }
          }
        }
      } ],
      "properties" : {
        "@version" : { "type" : "string", "index" : "not_analyzed" },
        "geoip" : {
          "type" : "object",
          "dynamic" : true,
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      }
    }
  }
}

On our maps, we specify a field geoip.location which, according to the documentation, is automatically created by the geoip filter.
Can you see that field in Discover? If not, can you try amending your geoip filter to
if [client_ip] {
  geoip {
    source => "client_ip"
    add_tag => [ "geoip" ]
    target => "geoip"
  }
}
and see if you can now see geoip.location in new entries?
The elasticsearch templates look for the "geoip" target when creating the associated geoip fields.
Once we have the geoip.location being created, we can create a new map with the following steps in Kibana 4.
1. Click on Visualise.
2. Choose 'Tile Map' from the list of visualisation types.
3. Select either a new search or a saved one - we're using a saved search that filters out Apache entries, but as long as the data contains geoip.location you should be good.
4. Select the 'Geo coordinates' bucket type - you'll have an error flagged at this point.
5. In the 'Aggregation' dropdown, select 'Geohash'.
6. In the 'Field' dropdown, select 'geoip.location'.

Related

LogStash Conf | Drop Empty Lines

The contents of Logstash's conf file look like this:
input {
  beats {
    port => 5044
  }
  file {
    path => "/usr/share/logstash/iway_logs/*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    #ignore_older => 0
    codec => multiline {
      pattern => "^\[%{NOTSPACE:timestamp}\]"
      negate => true
      what => "previous"
      max_lines => 2500
    }
  }
}

filter {
  grok {
    match => { "message" =>
      ['(?m)\[%{NOTSPACE:timestamp}\]%{SPACE}%{WORD:level}%{SPACE}\(%{NOTSPACE:entity}\)%{SPACE}%{GREEDYDATA:rawlog}']
    }
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd'T'HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
  grok {
    match => { "entity" => ['(?:W.%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}\.%{GREEDYDATA:workerid}|W.%{GREEDYDATA:channel}\.%{GREEDYDATA:workerid}|%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}\.%{GREEDYDATA:workerid}|%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}|%{GREEDYDATA:channel})'] }
  }
  dissect {
    mapping => {
      "[log][file][path]" => "/usr/share/logstash/iway_logs/%{serverName}#%{configName}#%{?ignore}.log"
    }
  }
}

output {
  elasticsearch {
    hosts => "${ELASTICSEARCH_HOST_PORT}"
    index => "iway_"
    user => "${ELASTIC_USERNAME}"
    password => "${ELASTIC_PASSWORD}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/certs/ca.crt"
  }
}
As one can make out, the idea is to parse a custom log employing multiline extraction, and the extraction does its job. However, the log occasionally contains an empty first line, for example:
[2022-11-29T12:23:15.073] DEBUG (manager) Generic XPath iFL functions use full XPath 1.0 syntax
[2022-11-29T12:23:15.074] DEBUG (manager) XPath 1.0 iFL functions use iWay's full syntax implementation
which naturally causes Kibana to report an empty line.
In an attempt to suppress this line from being sent to ES, I added the following as a last filter item:
if ![message] {
  drop { }
}
if [message] =~ /^\s*$/ {
  drop { }
}
The resulting JSON payload to ES:
{
  "@timestamp": [
    "2022-12-09T14:09:35.616Z"
  ],
  "@version": [
    "1"
  ],
  "@version.keyword": [
    "1"
  ],
  "event.original": [
    "\r"
  ],
  "event.original.keyword": [
    "\r"
  ],
  "host.name": [
    "xxx"
  ],
  "host.name.keyword": [
    "xxx"
  ],
  "log.file.path": [
    "/usr/share/logstash/iway_logs/localhost#iCLP#iway_2022-11-29T12_23_33.log"
  ],
  "log.file.path.keyword": [
    "/usr/share/logstash/iway_logs/localhost#iCLP#iway_2022-11-29T12_23_33.log"
  ],
  "message": [
    "\r"
  ],
  "message.keyword": [
    "\r"
  ],
  "tags": [
    "_grokparsefailure"
  ],
  "tags.keyword": [
    "_grokparsefailure"
  ],
  "_id": "oRc494QBirnaojU7W0Uf",
  "_index": "iway_",
  "_score": null
}
While this does drop the empty first line, it also unfortunately interferes with the multiline operation on other lines. In other words, the multiline operation does not work anymore. What am I doing incorrectly?
Use of the following variation resolved the issue:
if [message] =~ /\A\s*\Z/ {
  drop { }
}
The difference is that in a Ruby regex ^ and $ match at every embedded newline, so any multiline event containing a blank line matched /^\s*$/ and the whole event was dropped; \A and \Z anchor to the start and end of the entire string, so only events whose message is wholly whitespace are dropped.
This solution is based on Badger's answer provided on the Logstash forums, where this question was raised as well.
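For context, a minimal sketch (untested, pattern and field names taken from the config above) of where the check sits relative to the rest of the filter block:
filter {
  # drop events whose entire message is whitespace, before any further parsing
  if [message] =~ /\A\s*\Z/ {
    drop { }
  }
  grok {
    match => { "message" => ['(?m)\[%{NOTSPACE:timestamp}\]%{SPACE}%{WORD:level}%{SPACE}\(%{NOTSPACE:entity}\)%{SPACE}%{GREEDYDATA:rawlog}'] }
  }
  # ... date, entity grok and dissect filters unchanged from the original config ...
}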

Logstash: send event elsewhere if output failed

Given the following Logstash pipeline:
input {
  generator {
    lines => [
      '{"name" : "search", "product" : { "module" : "search" , "name" : "api"}, "data" : { "query" : "toto"}}',
      '{"name" : "user_interaction", "product" : { "module" : "search" , "name" : "front"}, "data" : { "query" : "toto"}}',
      '{"name" : "search", "product" : { "module" : "search" , "name" : "api"}, "data" : { "query" : "toto"}}',
      '{"hello": "world"}',
      '{"name" :"wrong data", "data" : "I am wrong !"}',
      '{"name" :"wrong data", "data" : { "hello" : "world" }}'
    ]
    codec => json
    count => 1
  }
}

filter {
  mutate {
    remove_field => ["sequence", "host", "@version"]
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "events-dev6-test"
    document_type => "_doc"
    manage_template => false
  }
  stdout {
    codec => rubydebug
  }
}
Elasticsearch has a strict mapping for this index; hence, some events get a 400 error "mapping set to strict, dynamic introduction of [hello] within [data] is not allowed" (which is expected).
How can I send failed events elsewhere (to text logs or another Elasticsearch index) so that I don't lose events?
Logstash 6.2 introduced dead letter queues, which can be used to do what you want. You'll need to set dead_letter_queue.enable: true in your logstash.yml.
And then just deal with it as an input:
input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"
    commit_offsets => true
    pipeline_id => "main"
  }
}
output {
  file {
    path => ...
    codec => line { format => "%{message}" }
  }
}
Prior to 6.2, I don't believe there was a way to do what you want.
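If you would rather keep the failed events in Elasticsearch instead of a file, a sketch of the same idea would point the dead letter queue at a separate index (the index name below is made up, and that index would need a lenient, non-strict mapping):
input {
  dead_letter_queue {
    path => "/path/to/data/dead_letter_queue"
    commit_offsets => true
    pipeline_id => "main"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # hypothetical catch-all index without the strict mapping
    index => "events-dev6-test-failed"
    manage_template => false
  }
}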

filebeat to logstash read json file with multiline

Trying to parse this multiline JSON file:
{
  "eventSource" : {
    "objectName" : "SYSTEM.ADMIN.CHANNEL.EVENT",
    "objectType" : "Queue"
  },
  "eventType" : {
    "name" : "Channel Event",
    "value" : 46
  },
  "eventReason" : {
    "name" : "Channel Blocked",
    "value" : 2577
  },
  "eventCreation" : {
    "timeStamp" : "2018/03/07 05:50:19.06 GMT",
    "epoch" : 1520401819
  },
  "eventData" : {
    "queueMgrName" : "QMG1",
    "connectionName" : "localhost (192.168.10.1)",
    "connectionNameList" : [
      "localhost"
    ],
    "reasonQualifier" : "Channel Blocked Noaccess",
    "channelName" : "SVR.TEST",
    "clientUserId" : "test1",
    "applName" : "WebSphere MQ Client for Java",
    "applType" : "Java"
  }
}
Filebeat is configured as:
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/test2.log
    fields:
      tags: ['json']
      logsource: mqjson
    fields_under_root: true
The beats input conf is as below:
input {
  beats {
    port => 5400
    host => "192.168.205.11"
    ssl => false
    #ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    #ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}

filter {
  if [tags][json] {
    json {
      source => "message"
    }
  }
}
In Elasticsearch each line becomes a separate record.
Questions:
How do I parse this multiline JSON?
Also, is there an option to extract only certain keys, like the eventData section?
Adding the processor below converts the JSON. There was an issue opened with Elastic which was corrected in 6.0:
processors:
  - decode_json_fields:
      fields: ['message']
      target: json
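For the second part of the question (pulling out only certain keys, such as those under eventData), one option is to do it on the Logstash side. The sketch below is only an illustration: it assumes the JSON is parsed into a hypothetical mq target field, and the copied key names are taken from the sample document above.
filter {
  if "json" in [tags] {
    json {
      source => "message"
      target => "mq"
    }
    # promote selected eventData keys to top-level fields
    mutate {
      add_field => {
        "queueMgrName" => "%{[mq][eventData][queueMgrName]}"
        "channelName"  => "%{[mq][eventData][channelName]}"
        "clientUserId" => "%{[mq][eventData][clientUserId]}"
      }
    }
  }
}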

converting a nested field to a list json in logstash

I want to convert a field to a JSON list, something like this:
"person": {
"name":"XX",
"adress":"124"
}
to
"person": [{"name":"XX",
"adress":"124"}]
Thank you for help.
A bit of ruby magic will do here:
input {
  stdin {}
}

filter {
  ruby {
    code => "
      require 'json'
      event['res'] = [JSON.parse(event['message'])['person']]
    "
  }
}

output {
  stdout { codec => rubydebug }
}
This will simply parse your message field containing your JSON document, then extract the person object and add it to a new field wrapped in a list.
The test looks as such:
artur@pandaadb:~/dev/logstash$ ./logstash-2.3.2/bin/logstash -f conf_json_list/
Settings: Default pipeline workers: 8
Pipeline main started
{ "person": { "name":"XX", "adress":"124" }}
{
       "message" => "{ \"person\": { \"name\":\"XX\", \"adress\":\"124\" }}",
      "@version" => "1",
    "@timestamp" => "2017-03-15T11:34:37.424Z",
          "host" => "pandaadb",
           "res" => [
        [0] {
              "name" => "XX",
            "adress" => "124"
        }
    ]
}
As you can see, your hash now lives in a list on index 0.
Hope that helps,
Artur
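As a side note, the hash-style event access used above only works on older Logstash releases (the test shows 2.3.2); on Logstash 5.x and later the ruby filter has to go through the event get/set API instead. A roughly equivalent sketch:
filter {
  ruby {
    code => "
      require 'json'
      # Logstash 5+ event API: use event.get / event.set instead of event[...]
      event.set('res', [JSON.parse(event.get('message'))['person']])
    "
  }
}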

elapsed + aggregate passing custom fields in Logstash

I am using the elapsed plugin to calculate time and then the aggregate plugin to display it.
I added custom fields to the elapsed filter.
You can see them below:
add_field => {
  "status" => "Status"
  "User" => "%{byUser}"
}
One is static; the other one is dynamic, coming with the event.
In the Logstash output it displays only the static value, not the dynamic one:
it displays the literal %{byUser} for the dynamic one.
But the task id and status fields work just fine and I get the right values.
Any idea why?
A little bit more code:
elapsed {
  unique_id_field => "assetId"
  start_tag => "tag1:tag2"
  end_tag => "tag3:tag4"
  add_field => {
    "wasInStatus" => "tag3"
    "User" => "%{byUser}"
  }
  add_tag => ["CustomTag"]
}
grok input:
grok {
  match => [
    "message", "%{TIMESTAMP_ISO8601:timestamp} %{NUMBER:assetId} %{WORD:event}:%{WORD:event1} User:%{USERNAME:byUser}"
  ]
}
if "CustomTag" in [tags] and "elapsed" in [tags] {
  aggregate {
    task_id => "%{assetId}"
    code => "event.to_hash.merge!(map)"
    map_action => "create_or_update"
  }
}
The problem is connected with the elapsed filter option:
new_event_on_match => true/false
Changing new_event_on_match to false (it was true in my pipeline) fixed the issue, but I still wonder why.
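Concretely, the fix amounts to setting that option on the elapsed filter from the question; a minimal sketch, with only new_event_on_match added and everything else unchanged:
elapsed {
  unique_id_field => "assetId"
  start_tag => "tag1:tag2"
  end_tag => "tag3:tag4"
  # keep the elapsed fields on the matched end event instead of emitting a separate new event
  new_event_on_match => false
  add_field => {
    "wasInStatus" => "tag3"
    "User" => "%{byUser}"
  }
  add_tag => ["CustomTag"]
}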
I also faced a similar issue and found a fix for it. When new_event_on_match => true is used, the elapsed event is separated from the original log and a new elapsed event is entered into Elasticsearch, as below:
{
  "_index": "elapsed_index_name",
  "_type": "doc",
  "_id": "DzO03mkBUePwPE-nv6I_",
  "_version": 1,
  "_score": null,
  "_source": {
    "execution_id": "dfiegfj3334fdsfsdweafe345435",
    "elapsed_timestamp_start": "2019-03-19T15:18:34.218Z",
    "tags": [
      "elapsed",
      "elapsed_match"
    ],
    "@timestamp": "2019-04-02T15:39:40.142Z",
    "host": "3f888b2ddeec",
    "cus_code": "Custom_name", [this is a custom field]
    "elapsed_time": 41.273,
    "@version": "1"
  },
  "fields": {
    "@timestamp": [
      "2019-04-02T15:39:40.142Z"
    ],
    "elapsed_timestamp_start": [
      "2019-03-19T15:18:34.218Z"
    ]
  },
  "sort": [
    1554219580142
  ]
}
For adding the "cus_code" to the elapsed event object from the original log (the log in which the elapsed filter end tag is detected), I added an aggregate filter as below:
if "elapsed_end_tag" in [tags] {
aggregate {
task_id => "%{execution_id}"
code => "map['cus_code'] = event.get('custom_code_field_name')"
map_action => "create"
}
}
and added the closing block of the aggregation, gated on the 'elapsed' tag:
if "elapsed" in [tags] {
aggregate {
task_id => "%{execution_id}"
code => "event.set('cus_code', map['cus_code'])"
map_action => "update"
end_of_task => true
timeout => 400
}
}
So, to add a custom field to the elapsed event, we need to combine the aggregate filter with the elapsed filter.
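Putting the pieces of this answer together, a rough combined sketch (the elapsed start/end tag values are placeholders, custom_code_field_name is the answer's stand-in for the real source field, and elapsed_end_tag is assumed to be added to the log line carrying the end tag, as in the snippets above):
filter {
  elapsed {
    unique_id_field => "execution_id"
    start_tag => "start_tag_value"   # placeholder
    end_tag => "end_tag_value"       # placeholder
    new_event_on_match => true
  }
  # on the original end-tag log, stash the custom field in the aggregate map
  if "elapsed_end_tag" in [tags] {
    aggregate {
      task_id => "%{execution_id}"
      code => "map['cus_code'] = event.get('custom_code_field_name')"
      map_action => "create"
    }
  }
  # on the generated elapsed event, copy the stashed field onto the event
  if "elapsed" in [tags] {
    aggregate {
      task_id => "%{execution_id}"
      code => "event.set('cus_code', map['cus_code'])"
      map_action => "update"
      end_of_task => true
      timeout => 400
    }
  }
}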
