I'm looking for some guidance on how to set up my Logstash/Elasticsearch/Kibana config to most effectively query my RabbitMQ logs.
What I want to do is query and visualise nested properties within my _source object.
For example, my _source might look like this:
{
  "ApplicationName": "MyApp",
  "JsonPayload": {
    "ExceptionType": "System.InvalidOperationException",
    "Message": "Some details about the exception"
  }
}
I am able to see ApplicationName and JsonPayload listed in my Discover fields in Kibana, but I am unable to do anything with the nested fields ExceptionType and Message.
I presume I need to flatten out the JsonPayload object for Kibana to work with it, but I'm unsure where to do this:
Can I write a query in Kibana for this?
Do I do something with Elasticsearch to enable this?
Do I need to adjust some mapping in Logstash?
I'm new to the stack, so I'm unsure which of the tools is the most appropriate place to do this.
I'm using Logstash and Kibana 5.5, if that helps.
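One option, if Logstash turns out to be the right layer, is to promote the nested fields to the top level with a mutate/rename filter. A minimal sketch, using the field names from the example above (renaming Message to ExceptionMessage is my own choice, to avoid clashing with the standard message field):
filter {
  mutate {
    # Promote the nested JsonPayload fields to top-level fields
    rename => {
      "[JsonPayload][ExceptionType]" => "ExceptionType"
      "[JsonPayload][Message]"       => "ExceptionMessage"
    }
  }
}
After reindexing, refreshing the index pattern in Kibana (Management > Index Patterns) should make the new top-level fields show up in Discover.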
OK, here is what I'm hoping is possible: given an input document with some JSON in it, I'd like to generate a schema document from that JSON and output it as a string, which I can then use as a reference for future validation. (We have a LOT of JSON passed around in our software, so creating a schema from what we have will let me catch in CI when changes have been made, so we can review them for privacy violations.)
I realise this isn't at all what the schema validator tools are for, but it looks like they do build the schema internally. Does anyone have any idea how I might be able to do this?
Thanks
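One possible route, rather than going through the validator tools: there are libraries that infer a schema from sample documents. A minimal sketch in Python using the genson package, assuming that fits your toolchain (the sample document here is made up):
import json
from genson import SchemaBuilder

# Feed one or more sample documents to the builder; it infers a JSON Schema.
builder = SchemaBuilder()
builder.add_object({
    "ApplicationName": "MyApp",
    "JsonPayload": {"ExceptionType": "System.InvalidOperationException"}
})

# Serialize deterministically so the checked-in schema diffs cleanly in CI.
schema_string = json.dumps(builder.to_schema(), indent=2, sort_keys=True)
print(schema_string)
Diffing schema_string against the checked-in schema in CI would then flag structural changes for review.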
I am using the Logstash 6.3.0, Elasticsearch 6.3.0, and Kibana 6.3.0 combination. I have some scripted fields in Kibana.
I need to send an alert based on these values. I can send alerts for Elasticsearch fields using the Watcher plugin for Kibana.
How do I configure Kibana to send an alert based on scripted field values?
I am also using ElastAlert, if there is a way through that.
A solution using ElastAlert is fine.
I don't think you can query scripted fields using ElastAlert: scripted field values are computed at query time (in Kibana only) and aren't indexed into Elasticsearch, so they cannot be queried, and ElastAlert only queries Elasticsearch directly.
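One workaround is to replicate the scripted field's logic as a Painless script query, which Elasticsearch evaluates at query time against indexed fields; ElastAlert rules accept raw query DSL in their filter section, so a clause like the following could be embedded there. A sketch (the field names and threshold are made up for illustration):
{
  "query": {
    "bool": {
      "filter": {
        "script": {
          "script": {
            "lang": "painless",
            "source": "doc['bytes_sent'].value - doc['bytes_received'].value > params.threshold",
            "params": { "threshold": 1000 }
          }
        }
      }
    }
  }
}
The other option is to compute the value at index time (e.g. in Logstash or an ingest pipeline), so it becomes a regular indexed field that ElastAlert can match on directly.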
I'm trying to integrate some code into an existing ELK stack, and we're limited to using Filebeat + Logstash. I'd like a way to configure a grok filter that allows different developers to log messages in a pre-defined format, so they can capture custom metrics and eventually build Kibana dashboards.
For example, one team might log the following messages:
metric_some.metric=2
metric_some.metric=5
metric_some.metric=3
And another team might log the following messages from another app:
metric_another.unrelated.value=17.2
metric_another.unrelated.value=14.2
Is there a way to configure a single grok filter that will capture everything after metric_ as a new field, along with the value? Everything I've read here seems to indicate that you need to know the field name ahead of time, but my goal is to be able to start logging new metrics without having to add or modify grok filters.
Note: I realize Metricbeat is probably a better solution here, but as we're integrating with an existing ELK cluster which we do not control, that's not an option for me.
As your messages seem to be a series of key-value pairs, you can use the kv filter instead of grok.
When using grok you need to name the destination field; with kv, the name of the destination field is the same as the key.
The following configuration should work for your case.
filter {
  # Strip the leading "metric_" marker from the raw line first; kv's prefix
  # option *prepends* text to extracted keys rather than removing it, so the
  # stripping has to happen before the kv parse.
  mutate { gsub => [ "message", "^metric_", "" ] }
  kv { }
}
For the event metric_another.unrelated.value=17.2, your output will contain a field like "another.unrelated.value" => "17.2". Note that kv extracts values as strings; convert them if you need numbers for Kibana aggregations.
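If you would rather stay with a single grok filter, as originally asked, another approach is to capture the metric name as a field value instead of a dynamic field name. A sketch (metric_name and metric_value are names I made up):
filter {
  grok {
    # Capture the part after "metric_" as metric_name, and the value as a float
    match => { "message" => "metric_%{NOTSPACE:metric_name}=%{NUMBER:metric_value:float}" }
  }
}
This yields metric_name => "another.unrelated.value" and a numeric metric_value, which is often easier to aggregate on in Kibana than an open-ended set of field names.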
I am starting to work with OtrosLogViewer to analyze my log4j logs,
but I couldn't find a way to filter the logs by method or class name, and couldn't find a way (for example) to count how many errors occurred in method "foo".
I would appreciate a nice solution or tip.
Thanks
Currently the class filter does not support filtering by method. You can implement your own filter; there is a wiki page describing how.
You can enable experimental features by adding <loadExperimentalFeatures>true</loadExperimentalFeatures>
to the config. This enables some experimental features like the Query filter, which works the same way as Query search. The wiki describes how to construct a query: https://code.google.com/p/otroslogviewer/wiki/SearchEvents. A query can look like this: method==run && class~=my.packet
You can also use search in "Query mode" with the query method==run && class~=my.packet and click "Mark all found"; the status bar will then show a message like "233 messages marked with ...".
I'm trying to access the analyzed/tokenized text in my Elasticsearch documents.
I know you can use the Analyze API to analyze arbitrary text according to your analysis modules, so I could copy and paste data from my documents into the Analyze API to see how it was tokenized.
This seems unnecessarily time-consuming, though. Is there any way to instruct Elasticsearch to return the tokenized text in search results? I've looked through the docs and haven't found anything.
This question is a little old, but I think an additional answer is warranted.
With Elasticsearch 1.0.0 the Term Vector API was added, which gives you direct access to the tokens Elasticsearch stores under the hood on a per-document basis. The API docs are not very clear on this (it is only mentioned in the example), but in order to use the API you first have to indicate in your mapping definition that you want to store term vectors, with the term_vector property on each field.
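A sketch of both steps, with made-up index, type, and field names (note that in the 1.x line the endpoint is spelled _termvector, singular):
curl -XPUT 'http://localhost:9200/my_index' -d '{
  "mappings": {
    "doc": {
      "properties": {
        "text": {
          "type": "string",
          "term_vector": "with_positions_offsets"
        }
      }
    }
  }
}'

curl 'http://localhost:9200/my_index/doc/1/_termvector?pretty=true'
The second call returns the stored tokens for document 1, along with their frequencies, positions, and offsets.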
Have a look at this other answer: elasticsearch - Return the tokens of a field. Unfortunately it requires re-analyzing the content of your field on the fly, using the script provided.
It should be possible to write a plugin to expose this feature. The idea would be to add two endpoints:
one to read the Lucene TermsEnum, like the Solr TermsComponent does, which would also be useful for auto-suggestions. Note that it wouldn't be per document; it would return every term in the index with its term frequency and document frequency (potentially expensive with a lot of unique terms)
one to read the term vectors, if enabled, like the Solr TermVectorComponent does. This would be per document, but it requires storing the term vectors (you can configure this in your mapping) and also allows retrieving positions and offsets, if enabled
You may want to use scripting; however, your server must have scripting enabled.
curl 'http://localhost:9200/your_index/your_type/_search?pretty=true' -d '{
  "query": {
    "match_all": {}
  },
  "script_fields": {
    "terms": {
      "script": "doc[field].values",
      "params": {
        "field": "field_x.field_y"
      }
    }
  }
}'
The default setting for allowing scripts depends on the Elasticsearch version, so please check the official documentation for your release.
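For example, on the 1.x line dynamic scripting was toggled in elasticsearch.yml with a setting like the following (the exact key changed in later versions, so treat this as illustrative):
script.disable_dynamic: false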