Logstash Ruby filter for dividing one field by another and storing the result in a field

Below is my event:
{
  "system": {
    "cpu": {
      "cores": 2,
      "system": {
        "pct": 1.1988
      },
      "user": {
        "pct": 0.5487
      }
    }
  },
  "type": "metricsets"
}
The value of system.cpu.user.pct should be divided by system.cpu.cores and the result stored back in system.cpu.user.pct.
I tried the following, but none of it worked:
ruby {
  code => "event.set('system.cpu.user.pct', system.cpu.user.pct / system.cpu.cores)"
}
ruby {
  code => "event['system.cpu.user.pct'] = event['system.cpu.user.pct'] / event['system.cpu.cores']"
}
ruby {
  code => "event['[system][cpu][user][pct]'] = event['[system][cpu][user][pct]'] / event['[system][cpu][cores]']"
}

This worked (the first attempt references bare Ruby variables that don't exist, and the event['...'] hash syntax of the other two was removed in Logstash 5.x in favor of event.get and event.set with field-reference syntax):
ruby {
  code => "event.set('[system][cpu][user][pct]',
               event.get('[system][cpu][user][pct]') / event.get('[system][cpu][cores]'))"
}
You can read more about how this works in the Logstash documentation on accessing event data and fields.
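If the fields can be missing, it is worth guarding the division. A minimal defensive sketch (my own addition, not part of the original answer):
ruby {
  code => "
    pct   = event.get('[system][cpu][user][pct]')
    cores = event.get('[system][cpu][cores]')
    # Only divide when both fields exist and cores is non-zero
    if pct && cores && cores.to_f != 0.0
      event.set('[system][cpu][user][pct]', pct.to_f / cores.to_f)
    end
  "
}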

Related

Using Postgraphile in nodeJS, how to enable aggregate max of date field?

I am using Postgraphile in NodeJS for a GraphQL API based on a PostgreSQL database. I need to get max(date_field), but Postgraphile does not provide that option by default.
How can I enable the max aggregate on a date field?
I want something like the following, but the inspection_date field is not available under max:
query Query {
  allRooms {
    aggregates {
      max {
        inspection_date
      }
    }
  }
}
Using a slightly modified version of the approach outlined in the "defining your own aggregates" section of the pg-aggregates readme, you can create a new Graphile plugin that hooks the build to modify the existing "min" and "max" aggregate specs, swapping in an isSuitableType function that accepts temporal types as well as numeric types:
import type { Plugin } from "graphile-build";
import type { AggregateSpec } from "@graphile/pg-aggregates/dist/interfaces";
import type { PgType } from "graphile-build-pg";

const addTemporalAggregatesPlugin: Plugin = (builder) => {
  builder.hook(
    "build",
    (build) => {
      const { pgAggregateSpecs } = build;
      const isNumberLikeOrTemporal = (pgType: PgType): boolean =>
        pgType.category === "N" || pgType.category === "D";
      // Modify isSuitableType for the max and min aggregates to include
      // temporal types; see: https://www.postgresql.org/docs/current/catalog-pg-type.html
      const specs = (pgAggregateSpecs as AggregateSpec[] | undefined)?.map(
        (spec) => {
          if (spec.id === "min" || spec.id === "max") {
            return {
              ...spec,
              isSuitableType: isNumberLikeOrTemporal,
            };
          }
          return spec;
        }
      );
      if (!specs) {
        throw Error(
          "Please ensure that the pg-aggregates plugin is present and that AddTemporalAggregatesPlugin is appended AFTER it!"
        );
      }
      const newBuild = build.extend(build, {});
      // Add the modified aggregate specs to the build
      newBuild.pgAggregateSpecs = specs;
      return newBuild;
    },
    ["AddTemporalAggregatesPlugin"],
    // Ensure this hook fires before other hooks in the pg-aggregates plugin
    // that may depend on the "pgAggregateSpecs" extension.
    ["AddGroupByAggregateEnumsPlugin"],
    []
  );
};

export default addTemporalAggregatesPlugin;
Then append this new plugin after the pg-aggregates plugin:
postgraphile(pool, "my_schema", {
  pluginHook,
  appendPlugins: [
    PgAggregatesPlugin,
    AddTemporalAggregatesPlugin,
  ],
  // ...
})

List JSON parsing

I have a problem creating a list of JSON objects. This is my config:
input {
  file {
    # settings...
  }
}
filter {
  mutate {
    add_field => {
      "book_title" => "%{Book_title}"
      "book_price" => "%{Book_price}"
    }
  }
  mutate {
    rename => {
      "book_title" => "[book_market][book_title]"
      "book_price" => "[book_market][book_price]"
    }
  }
}
I want book_market to be a list, even when it holds only one object:
ruby {
  code => "
    event['book_market'] = [event['book_market']]
  "
}
The result after execution:
"book_market" : [ {
  "book_title": "title",
  "book_price": "price"
} ]
That's exactly what I want!
But then a new book arrives from another file, and with the same config I lose the first JSON object because book_market is overwritten. I want to insert the new object into the book_market list.
Thanks for your help!
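A side note on the Ruby snippet above: the event['book_market'] hash syntax was removed in Logstash 5.x. A sketch of the same list-wrapping step with the current event API, guarded so an existing list is not re-wrapped:
ruby {
  code => "
    bm = event.get('book_market')
    # Wrap in an array only when the field exists and is not already a list
    event.set('book_market', [bm]) if bm && !bm.is_a?(Array)
  "
}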

Evaluate field as expression in Logstash filter

I have a custom field in a Logstash event defined as an expression:
{ "customIndex" => "my-service-%{+YYYY.MM}" }
And a filter that calculates the index name for the elasticsearch output plugin:
filter {
  if [customIndex] {
    mutate {
      add_field => { "indexName" => "custom-%{customIndex}" }
    }
  } else {
    mutate {
      add_field => { "indexName" => "common-%{+YYYY.MM.dd}" }
    }
  }
}
But for the custom index this creates the invalid name custom-my-service-%{+YYYY.MM}; the %{+YYYY.MM} expression is not evaluated.
Is it possible to evaluate the field and get custom-my-service-2016.11?
If you can reformat your created field to this:
{ "customIndex" => "my-service-%Y.%m" }
Then this Ruby filter will do the trick:
ruby {
  init => "require 'date'"
  code => "event['indexName'] = 'custom-' + Date.today.strftime(event['customIndex'])"
}
The Ruby strftime documentation describes the placeholders you can use.
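On Logstash 5.x and later, the direct event['...'] access above no longer works; the equivalent filter with the event API would be (a sketch):
ruby {
  init => "require 'date'"
  code => "event.set('indexName', 'custom-' + Date.today.strftime(event.get('customIndex')))"
}
Note that Date.today uses the machine's current date rather than the event's @timestamp, which can matter for events that arrive late.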

How to call another filter from within a ruby filter in Logstash

I'm building out Logstash and would like to add functionality to anonymize fields as specified in the message itself.
Given the message below, the field fta is a list of fields to anonymize. I would like to just use %{fta} and pass it through to the anonymize filter, but that doesn't seem to work.
{ "containsPII":"True", "fta":["f1","f2"], "f1":"test", "f2":"5551212" }
My config is as follows:
input {
  stdin { codec => json }
}
filter {
  if [containsPII] {
    anonymize {
      algorithm => "SHA1"
      key => "123456789"
      fields => %{fta}
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
The output is:
{
  "containsPII" => "True",
  "fta" => [
    [0] "f1",
    [1] "f2"
  ],
  "f1" => "test",
  "f2" => "5551212",
  "@version" => "1",
  "@timestamp" => "2016-07-13T22:07:04.036Z",
  "host" => "..."
}
Does anyone have any thoughts? I have tried several permutations at this point with no luck.
Thanks,
-D
EDIT:
After posting in the Elastic forums, I found out that this is not possible using base Logstash functionality. I will try using the ruby filter instead. So, to amend my question: how do I call another filter from within the ruby filter? I tried the following with no luck and honestly can't even figure out where to look. I'm very new to Ruby.
filter {
  if [containsPII] {
    ruby {
      code => "event['fta'].each { |item| event[item] = LogStash::Filters::Anonymize.execute(event[item], '12345', 'SHA1') }"
      add_tag => ["Rubyrun"]
    }
  }
}
You can execute filters from a Ruby script. The steps are:
1. Create the required filter instance in the init block of the inline Ruby script.
2. For every event, call the filter method of that filter instance.
The following example applies this to the problem statement above: it replaces the my_ip field in the event with its SHA1. The same can be achieved using a Ruby script file.
Here is the sample config file:
input { stdin { codec => json_lines } }
filter {
  ruby {
    init => "
      require 'logstash/filters/anonymize'
      # Create an instance of the filter with the applicable parameters
      @anonymize = LogStash::Filters::Anonymize.new({'algorithm' => 'SHA1',
                                                     'key' => '123456789',
                                                     'fields' => ['my_ip']})
      # Make sure to call register
      @anonymize.register
    "
    code => "
      # Invoke the filter
      @anonymize.filter(event)
    "
  }
}
output { stdout { codec => rubydebug { metadata => true } } }
Well, I wasn't able to figure out how to call another filter from within a ruby filter, but I did get to the functional goal.
filter {
  if [fta] {
    ruby {
      init => "require 'openssl'"
      code => "event['fta'].each { |item| event[item] = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, '123456789', event[item]) }"
    }
  }
}
If the fta field exists, this computes an HMAC-SHA256 digest of each of the fields listed in that array.
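For Logstash 5.x and later the same logic needs the event API; an untested sketch:
ruby {
  init => "require 'openssl'"
  code => "
    event.get('fta').each do |item|
      digest = OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA256.new, '123456789', event.get(item))
      event.set(item, digest)
    end
  "
}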

Add data to dynamic nested hash in logstash

I want to put a value into part of a nested hash, but name that part depending on upstream filters. This is to refactor and reduce overall code size, as currently each of the 20+ incoming event types has its own section like this, 18 lines apiece, in the Logstash file (but currently the %{detail_part} bit is hard-coded).
# Working code
filter {
  if [result] == 0 {
    # Success
    mutate {
      add_field => {
        "[Thing][ThingDetail][OtherThing][MoreDetail]" => "true"
      }
    }
  }
  else {
    # Failed
    mutate {
      add_field => {
        "[Thing][ThingDetail][OtherThing][MoreDetail]" => "false"
      }
    }
  }
}
Above is hard-coded to "OtherThing". Below has a variable, but doesn't work.
# Non-working code
filter {
  if [result] == 0 {
    # Success
    mutate {
      add_field => {
        "[Thing][ThingDetail][%{detail_part}][MoreDetail]" => "true"
      }
    }
  }
  else {
    # Failed
    mutate {
      add_field => {
        "[Thing][ThingDetail][%{detail_part}][MoreDetail]" => "false"
      }
    }
  }
}
In the above (non-working) code, detail_part is set in an upstream filter to a string value like "OtherThing". This compiles and runs, but no XML is output from it, so I don't think anything is set into the hash as a result of these statements.
I know it can be done with embedded Ruby code, but I would like a way that is as simple as possible. The output of this process is going to XML so I am constrained to use this kind of nested hash.
Is this possible with Logstash?
It turns out that yes, Logstash supports this; the syntax was just wrong. The key is to use the full field-reference form %{[detail_part]} rather than %{detail_part} inside the key. Here is the fix:
filter {
  # This field could be set conditionally in an if statement ...
  mutate { add_field => { "[detail_part]" => "Result" } }
  if [result] == 0 {
    # Success
    mutate {
      add_field => {
        "[Thing][ThingDetail][%{[detail_part]}][MoreDetail]" => "true"
      }
    }
  }
  else {
    # Failed
    mutate {
      add_field => {
        "[Thing][ThingDetail][%{[detail_part]}][MoreDetail]" => "false"
      }
    }
  }
}
I just couldn't find any non-trivial examples that did things like this.
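For comparison, the embedded-Ruby route the question mentions could look roughly like this (a sketch, assuming detail_part and result are already set on the event):
ruby {
  code => "
    part  = event.get('detail_part')
    value = event.get('result') == 0 ? 'true' : 'false'
    # Build the field reference dynamically and set the nested value
    event.set('[Thing][ThingDetail][' + part + '][MoreDetail]', value)
  "
}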
