NodeJS Bunyan Logstash Log Indexing

I am using Bunyan and bunyan-lumberjack to send my logs to Logstash and index them in Elasticsearch. The problem I am facing is when I am filtering the logs. I am using a basic filter for Logstash:
filter {
  if [type] == "json" {
    json {
      source => "message"
    }
  }
}
That puts the JSON from Bunyan into the _source.message field and indexes it in Elasticsearch. How can I index every field from Bunyan into its own Elasticsearch field so I can search over it or use it in Kibana?
I am attaching what I obtain now and what I want to obtain as an example.
Currently:
{
  "_index": "logstash-2015.10.26",
  "_type": "json",
  "_id": "AVCjvDHWHiX5VLMgQZIC",
  "_score": null,
  "_source": {
    "message": "{\"name\":\"myLog\",\"hostname\":\"atnm-4.local\",\"pid\":6210,\"level\":\"error\",\"message\":\"This should work!\",\"#timestamp\":\"2015-10-26T10:40:29.503Z\",\"tags\":[\"bunyan\"],\"source\":\"atnm-4.local/node\"}",
    "#version": "1",
    "#timestamp": "2015-10-26T10:40:31.184Z",
    "type": "json",
    "host": "atnm-4.local",
    "bunyanLevel": "50"
  },
Wanted:
{
  "_index": "logstash-2015.10.26",
  "_type": "json",
  "_id": "AVCjvDHWHiX5VLMgQZIC",
  "_score": null,
  "_source": {
    "message": {
      "name": example,
      "hostname": example,
      "etc": example

Each input in Logstash can have a different codec and type. In your case, if you want to index both Bunyan and syslog, you'll have two inputs with two different types. The syslog input will use the "plain" codec, the Bunyan one "json". You do not need any filter for the Bunyan messages: the JSON will be parsed and the fields will appear automagically. You will, however, need a filter to parse the syslog input.
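For illustration, a minimal input sketch along those lines (the ports, certificate paths, and the "syslog" type name are placeholders, not taken from the question; bunyan-lumberjack ships over the lumberjack protocol):
input {
  lumberjack {
    port            => 5000
    ssl_certificate => "/etc/logstash/logstash.crt"
    ssl_key         => "/etc/logstash/logstash.key"
    codec           => "json"
    type            => "json"      # matches the existing "json" conditional
  }
  # plain syslog over TCP keeps the default "plain" codec and needs its own filter
  tcp {
    port => 5514
    type => "syslog"
  }
}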

Related

Logback logstash appender add own field

I need to send application logs from multiple microservices directly to Logstash using the Logstash Logback Encoder. The problem is that when I send logs, Logstash receives them like this:
{
  "_index": "logstash-2021.01.21-000001",
  "_type": "_doc",
  "_id": "id",
  "_version": 1,
  "_score": 1.6928859,
  "_source": {
    "#timestamp": "2021-01-21T14:13:05.480Z",
    "#version": "1",
    "message": "message",
    "host": "gateway",
    "port": 43892
  },
  "fields": {
    "#timestamp": [
      "2021-01-21T14:13:05.480Z"
    ]
  },
  "highlight": {
    "message": [msg]
  },
  "sort": [ sort ]
}
I need to add a custom field in the "fields" section or in the general section. Do you have any idea how I can do this?
You can use the mutate filter in your Logstash configuration file.
For example, in your Logstash configuration file it looks like this:
filter {
  mutate { add_field => { "field_name" => "field_value" } }
}
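The value does not have to be a literal string; Logstash's sprintf references let you build it from other event fields. A small sketch (the "service_host" field name is just an example):
filter {
  mutate {
    # copy data from the existing host field into the new field
    add_field => { "service_host" => "gw-%{host}" }
  }
}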

In Elasticsearch, 2 docs with the same ID

Hi, I'm new to Elasticsearch and I'm using Elasticsearch version 5.6. As I know, the _id of every doc in Elasticsearch is unique,
but while re-indexing logs I found that some docs have the same _id. For example, the two docs below
have the same id. How is this possible?
{
  "_index": "orders",
  "_type": "pending",
  "_id": "1473531",
  "_score": 1,
  "_routing": "44540",
  "_parent": "44540",
  "_source": {
    "id": 1473531,
    "level": "info",
    "type": "pending",
    "status": "",
    "message": "Order marked cancelled by system"
  }
}
{
  "_index": "orders",
  "_type": "confirmed",
  "_id": "1473531",
  "_score": 1,
  "_source": {
    "id": 1473531,
    "source_address": "Independence, MO 64055",
    "dest_address": "MO 64138",
    "short_source": "Select Physical Therapy",
    "short_dest": "Home",
    "customer_remarks": null,
    "source_lat_long": ["39.0334554", "-94.3761432"],
    "dest_lat_long": ["38.986449", "-94.4661768"]
  }
}
This is because the type within the index is different.
The first document has index orders with type pending, while the other document has the same index orders but type confirmed. In 5.x the _id only has to be unique within a given index and type, so the same _id can exist under two different types.
In the latest ES versions types are removed; refer to the removal of types documentation for more info.
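To see that these really are two distinct documents, you can fetch each one by its type (the host and port here are assumptions):
curl -XGET 'localhost:9200/orders/pending/1473531?pretty'
curl -XGET 'localhost:9200/orders/confirmed/1473531?pretty'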

logstash grok pattern to monitor logstash itself

I would like to add the logstash.log log itself to my ELK stack, but I always get a _grokparsefailure.
My pattern is OK on http://grokconstructor.appspot.com/do/match#result
My Logstash conf file (filter part) is:
filter {
  if [application] == "logstash" {
    grok {
      match => { "message" => "\{:timestamp=>\"%{TIMESTAMP_ISO8601:timestamp}\", :message=>%{GREEDYDATA:errormessage}\}" }
    }
    date {
      match => [ "timestamp" , "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ" ]
    }
  }
}
But I still only get:
{
  "_index": "logstash-2016.05.03",
  "_type": "logs",
  "_id": "AVR3WUtpT8BPcJ-gVynN",
  "_score": null,
  "_source": {
    "#version": "1",
    "#timestamp": "2016-05-03T16:00:20.708Z",
    "path": "/var/log/logstash/logstash.log",
    "host": "xxx.arte.tv",
    "application": "logstash",
    "tags": [
      "_grokparsefailure"
    ]
I guess I have an issue with either { or ", but with or without backslashing them, I still get a grokparsefailure.
Shame on me, there is no error in my previous post; the problem was that there was no message field at all, because of a remove_field on message in another conf file.
Sorry guys for the waste of time.
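For anyone hitting the same thing: Logstash concatenates every file in its config directory into a single pipeline, so a filter in an unrelated conf file applies to these events too unless it is wrapped in a conditional. The culprit typically looks something like this (illustrative only):
filter {
  mutate {
    # silently drops the field the grok filter later tries to parse
    remove_field => [ "message" ]
  }
}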

Logstash/Kibana GeoIP not working

I am attempting to create GeoIP data using an ELK stack, which can be visualized in Kibana.
I have recently installed an ELK stack (Elastic Search, Logstash, and Kibana) on a virtual instance of Ubuntu Server 14.04. I am using Bro to capture logs.
Everything to do with capturing the logs, parsing them, and viewing them in Kibana is working great, except for GeoIP (one of the most interesting features!).
The GeoIP portion of my Logstash config file looks like this:
geoip {
  add_tag => [ "geoip" ]
  database => "/etc/logstash/GeoLiteCity.dat"
  source => "id.orig_h"
  target => "geoip"
  add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
  add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
  convert => [ "[geoip][coordinates]", "float" ]
}
I had gotten that part of the filter from a guide to setting up GeoIP with Kibana. I have also seen that filter used in a few other places.
I have attempted to simplify the filter (just source, target, database), but no luck.
When I do a curl request for my index, especially with the add_tag ["geoip"], it returns blank data:
"geoip" : {
"dynamic" : "true",
"properties" : {
"location" : {
"type" : "geo_point"
}
}
Finally, here is some scrubbed data that I have taken directly from Kibana, in JSON format:
{
  "_index": "logstash-2015.11.12",
  "_type": "bro-conn_log",
  "_id": "*****",
  "_score": null,
  "_source": {
    "message": [
      "*****"
    ],
    "#version": "1",
    "#timestamp": "2015-11-12T13:43:16.205Z",
    "host": "elk",
    "path": "/nsm/bro/logs/current/conn.log",
    "type": "bro-conn_log",
    "ts": "*****",
    "uid": "*****",
    "id.orig_h": "*****",
    "id.orig_p": *****,
    "id.resp_h": "*****",
    "id.resp_p": *****,
    "proto": "*****",
    "service": "*****",
    "duration": *****,
    "orig_bytes": *****,
    "resp_bytes": *****,
    "conn_state": "*****",
    "local_orig": "*****",
    "missed_bytes": *****,
    "history": "*****",
    "orig_pkts": *****,
    "orig_ip_bytes": *****,
    "resp_pkts": *****,
    "resp_ip_bytes": *****,
    "tunnel_parents": "*****",
    "column21": "(empty)",
    "conn_state_full": "*****"
  },
  "fields": {
    "#timestamp": [
      1447335796205
    ]
  },
  "sort": [
    1447335796205
  ]
}
To summarize: I am attempting to get GeoIP data working with an ELK stack. Despite following guides describing how to do exactly that, the GeoIP field will not display in Kibana. Any advice would be GREATLY appreciated.
Very silly solution. The IP addresses that I was looking at were all internal. I assumed (incorrectly) that it would just generate empty GeoIP data for any unresolvable IP addresses. But, as the GeoIP documentation states:
Starting with version 1.3.0 of Logstash, a [geoip][location] field is created if the GeoIP lookup returns a latitude and longitude.
So without the longitude and latitude, a GeoIP field is never created. This has been confirmed by moving the machine to a more open network and immediately seeing the GeoIP tag with the same filter above.
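If you are stuck on an internal network, one workaround is to run the lookup only for public addresses. A rough sketch, assuming the same id.orig_h source field as above (the RFC 1918 regex is illustrative, not exhaustive):
filter {
  # skip the lookup entirely for private addresses, which have no GeoIP data
  if [id.orig_h] !~ /^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)/ {
    geoip {
      source   => "id.orig_h"
      target   => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
    }
  }
}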

Indexing CouchDB using Elasticsearch

Hi, I have installed Elasticsearch version 0.18.7 and configured CouchDB according to these instructions. I am trying to create the index in the following way:
curl -XPUT '10.50.10.86:9200/_river/tasks/_meta' -d '{
  "type": "couchdb",
  "couchdb": {
    "host": "10.50.10.86",
    "port": 5984,
    "db": "tasks",
    "filter": null
  },
  "index": {
    "index": "tasks",
    "type": "tasks",
    "bulk_size": "100",
    "bulk_timeout": "10ms"
  }
}'
and I got a message like:
{
  "ok": true,
  "_index": "_river",
  "_type": "tasks",
  "_id": "_meta",
  "_version": 2
}
But when trying to access a URL like
curl -GET 'http://10.50.10.86:9200/tasks/tasks?q=*&pretty=true'
I get:
{
  "error": "IndexMissingException[[tasks] missing]",
  "status": 404
}
Please guide me on how to index CouchDB using Elasticsearch.
I'm not sure where es_test_db2 is coming from. What's the output of this?
curl 10.50.10.86:9200/_river/tasks/_status\?pretty=1
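As a further sanity check (same host and port as above; these are only the standard document-get and index-status calls), you could also verify the stored river definition and whether the target index was created at all:
curl '10.50.10.86:9200/_river/tasks/_meta?pretty=1'
curl '10.50.10.86:9200/tasks/_status?pretty=1'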
