Mask part of a field's number using Logstash and copy the original field

I was looking for information about masking fields in logstash and I got the following solution:
filter {
  mutate {
    gsub => [
      "message", "[0-9]{16}", "################"
    ]
  }
}
However, I don't want to lose the original field's content. How do I copy the old content?
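One approach (a sketch, untested; message_raw is just an illustrative field name) is to copy the field before masking it. Two separate mutate blocks are used because options inside a single mutate run in a fixed internal order:

```
filter {
  mutate {
    # Preserve the original content in a new field first.
    copy => { "message" => "message_raw" }
  }
  mutate {
    # Then mask 16-digit sequences in the message field.
    gsub => [ "message", "[0-9]{16}", "################" ]
  }
}
```

On older Logstash versions without the copy option, add_field => { "message_raw" => "%{message}" } achieves the same effect.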

Related

How to make $elemMatch work for JSON array data in a Mango query?

I have a field in my application like below.
{
  "Ct": "HH",
  "Val": {
    "Count": "A",
    "Branch": "A"
  }
}
When I try to retrieve this using the selector below in CouchDB, I get no records.
{
  "selector": {
    "Val": {
      "$elemMatch": {
        "Count": "A"
      }
    }
  }
}
From the CouchDB documentation on $elemMatch[1]:
Matches and returns all documents that contain an array field with at
least one element that matches all the specified query criteria.
Val.Count is not an array field so $elemMatch is not appropriate.
Consider the CouchDB documentation regarding subfield queries[2]:
1.3.6.1.3. Subfields
A more complex selector enables you to specify the values for field of
nested objects, or subfields. For example, you might use a standard
JSON structure for specifying a field and subfield.
Example of a field and subfield selector, using a standard JSON
structure:
{
"imdb": {
"rating": 8
}
}
An abbreviated equivalent uses a dot notation to combine the field and
subfield names into a single name.
{
"imdb.rating": 8
}
Specifically:
{
  "selector": {
    "Val.Count": "A"
  }
}
[1] CouchDB: 1.3.6.1.7. Combination Operators
[2] CouchDB: 1.3.6.1.3. Subfields
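For contrast, $elemMatch would be the appropriate operator if Val were an array of objects (a hypothetical document shape, not the asker's actual data):

```
{
  "Ct": "HH",
  "Val": [
    { "Count": "A", "Branch": "A" },
    { "Count": "B", "Branch": "C" }
  ]
}
```

With that shape, the $elemMatch selector from the question would match this document, since at least one array element satisfies "Count": "A".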

mutate filter not working as expected

I have an issue with the mutate filter where I cannot combine any values in the newly created field.
Here is the code I use:
mutate {
  add_field => {
    "myfield" => "%{OrderDate} %{BusinessMinute}"
  }
}
The output in Kibana for this field is the literal %{OrderDate} %{BusinessMinute}, whereas I expected it to show the values of those fields.
As per the comments, this was a case-sensitivity issue: the field names I referenced did not match the actual field names.
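For example (assuming, hypothetically, that the actual event fields are lowercase orderdate and businessminute), the %{...} sprintf references must match the field names exactly, including case:

```
mutate {
  add_field => {
    # References must match the event's field names exactly, case included.
    "myfield" => "%{orderdate} %{businessminute}"
  }
}
```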

Logstash filter remove_field for all fields except a specified list of fields

I am parsing a set of data into an ELK stack for some non-tech folks to view. As part of this, I want to remove all fields except a specific known subset of fields from the events before sending into ElasticSearch.
I can explicitly specify each field to drop in a mutate filter like so:
filter {
  mutate {
    remove_field => [ "throw_away_field1", "throw_away_field2" ]
  }
}
In this case, anytime a new field gets added to the input data (which can happen often since the data is pulled from a queue and used by multiple systems for multiple purposes) it would require an update to the filtering, which is extra overhead that's not needed. Not to mention if some sensitive data made it through between when the input streams were updated and when the filtering was updated, that could be bad.
Is there a way using the logstash filter to iterate over each field of an object, and remove_field if it is not in a provided list of field names? Or would I have to write a custom filter to do this? Basically, for every single object, I just want to keep 8 specific fields, and toss absolutely everything else.
It looks like only very minimal if ![field] =~ /^value$/-style logic is available in the logstash.conf file, but I don't see any examples that iterate over the fields themselves in a for-each style and compare each field name against a list of values.
Answer:
After upgrading logstash to 1.5.0 to be able to use plugin extensions such as prune, the solution ended up looking like this:
filter {
  prune {
    interpolate => true
    whitelist_names => ["fieldtokeep1", "fieldtokeep2"]
  }
}
Prune whitelist should be what you're looking for.
For more specific control, dropping to the ruby filter is probably the next step.
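A minimal sketch of that ruby-filter approach (the whitelist below is hypothetical; event.to_hash and event.remove are the Event API methods available in Logstash 5+):

```
filter {
  ruby {
    code => '
      # Keep only a fixed set of fields and drop everything else.
      keep = ["@timestamp", "@version", "fieldtokeep1", "fieldtokeep2"]
      event.to_hash.keys.each do |name|
        event.remove(name) unless keep.include?(name)
      end
    '
  }
}
```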
Another option would be to move the parsed JSON into a new field and then use mutate, e.g.:
filter {
  json {
    source => "json"
    target => "parsed_json"
  }
  mutate {
    add_field => { "nested_field" => "%{[parsed_json][nested_field]}" }
    remove_field => [ "json", "parsed_json" ]
  }
}

elasticsearch prefix query for multiple words to solve the autocomplete use case

How do I get Elasticsearch to solve a simple autocomplete use case that involves multiple words?
Let's say I have a document with the following title: Elastic search is a great search tool built on top of lucene.
So if I use the prefix query and construct it in the form:
{
  "prefix": { "title": "Elas" }
}
It will return that document in the result set.
However, if I do a prefix search for:
{
  "prefix": { "title": "Elastic sea" }
}
I get no results.
What sort of query do I need to construct to present that result to the user for a simple autocomplete use case?
A prefix query made on Elastic sea would match a term like Elastic search in the index, but that doesn't appear in your index if you tokenize on whitespaces. What you have is elastic and search as two different tokens. Have a look at the analyze api to find out how you are actually indexing your text.
Using a boolean query like in your answer you wouldn't take into account the position of the terms. You would get as a result the following document for example:
Elastic model is a framework to store your Moose object and search
through them.
For auto-complete purposes you might want to make a phrase query and use the last term as a prefix. That's available out of the box using the match_phrase_prefix type in a match query, which was made available exactly for your usecase:
{
  "match": {
    "message": {
      "query": "elastic sea",
      "type": "phrase_prefix"
    }
  }
}
With this query your example document would match but mine wouldn't since elastic is not close to search there.
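For reference, more recent Elasticsearch versions expose the same behavior as a dedicated match_phrase_prefix query (a sketch; the title field name is taken from the example above, and the exact syntax depends on your Elasticsearch version):

```
{
  "query": {
    "match_phrase_prefix": {
      "title": "elastic sea"
    }
  }
}
```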
To achieve that result, you will need to use a Boolean query. The partial word needs to be in a prefix query and the complete word or phrase needs to be in a match clause. There are other tweaks available to the query, like must, should, etc., that can be applied as needed.
{
  "query": {
    "bool": {
      "must": [
        {
          "prefix": {
            "name": "sea"
          }
        },
        {
          "match": {
            "name": "elastic"
          }
        }
      ]
    }
  }
}

CouchDB view - key in a list

I want to query CouchDB with a specific need: my query should return the name field of documents matching this condition: the id is equal to, or contained in, a document field (a list).
For example, the output field looks like this:
"output": [
"doc_s100",
"doc_s101",
"doc_s102",
"doc_s103",
],
I want to get all the documents whose output field contains "doc_s102", for example.
I wrote a view in a design document :
"backward_by_docid": {
"map": "function(doc) {if(doc.output) emit(doc.output, doc.name)}"
}
but this view only works when the output field holds a single value.
How can I fix this query?
Thanks!
You have to iterate over the array:
if (doc.output) {
  for (var i = 0; i < doc.output.length; i++) {
    emit(doc.output[i], doc.name);
  }
}
Make sure that output is always an array (at least []).
... and, of course, query with key="xx" instead of key=["xx"].
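Putting the pieces together, the full map function can be exercised outside CouchDB with a stub emit (the emit stub and the sample document below are only for illustration):

```javascript
// Stub of CouchDB's emit() so the map function can run outside CouchDB.
var rows = [];
function emit(key, value) {
  rows.push({ key: key, value: value });
}

// Map function for the "backward_by_docid" view: one row per entry in
// doc.output, keyed by that entry, with doc.name as the value.
function map(doc) {
  if (doc.output) {
    for (var i = 0; i < doc.output.length; i++) {
      emit(doc.output[i], doc.name);
    }
  }
}

map({ name: "example", output: ["doc_s100", "doc_s101", "doc_s102"] });
// rows now contains one row per output entry,
// e.g. { key: "doc_s102", value: "example" }
```

Querying the view with key="doc_s102" would then return the name of every document whose output array contains "doc_s102".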
