As always, the official docs lack examples.
I have a filter that calls an API and needs to add fields by parsing the API result:
http {
  url => "rest/api/subnet/check_subnet_from_ip/"
  query => { "ip" => "%{[source][ip]}" }
  verb => "GET"
}
The API response looks something like this:
{ "name": "SUBNET1", "cidr": "192.168.0.0/24" }
I need to add new fields based on these results, and I also need to handle an empty result ({}).
I can't find any example of parsing the result.
Thanks.
Your response is a JSON document, so you need the json filter to parse it; see the json filter documentation for all available options.
Basically you will need something like this:
http {
  url => "rest/api/subnet/check_subnet_from_ip/"
  query => { "ip" => "%{[source][ip]}" }
  verb => "GET"
  target_body => "api_response"
}
json {
  source => "api_response"
  target => "api_response"  # keep the parsed fields under [api_response]
}
To add new fields you need the mutate filter; see the mutate filter documentation for all available options.
To add a new field you need something like this:
mutate {
  add_field => { "newFieldName" => "newFieldValue" }
}
Or to add a new field with the value from an existing field:
mutate {
  add_field => { "newFieldName" => "%{existingField}" }
}
Considering a response in the format:
{ "name": "SUBNET1", "cidr": "192.168.0.0/24" }
Given that you also need to check for empty responses, you will need conditionals, so your pipeline should look something like this example:
http {
  url => "rest/api/subnet/check_subnet_from_ip/"
  query => { "ip" => "%{[source][ip]}" }
  verb => "GET"
  target_body => "api_response"
}
json {
  # parse the response string in place; target keeps the parsed object
  # under [api_response] so the conditionals below can reference it
  source => "api_response"
  target => "api_response"
}
if [api_response][name] {
  mutate {
    add_field => { "[source][subnet][name]" => "%{[api_response][name]}" }
  }
}
if [api_response][cidr] {
  mutate {
    add_field => { "[source][subnet][cidr]" => "%{[api_response][cidr]}" }
  }
}
This checks whether the name and cidr fields exist in the response and, if they do, adds the new fields; with an empty response ({}) neither conditional matches, so nothing is added.
You can also rename fields if you want; just use a mutate configuration like this instead:
mutate {
  rename => { "name" => "subnet_name" }
}
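For example, a sketch (assuming the [api_response] target used above) that moves the parsed values into place instead of copying them with add_field:
if [api_response][name] {
  mutate {
    # rename moves the value, so [api_response][name] no longer exists afterwards
    rename => { "[api_response][name]" => "[source][subnet][name]" }
  }
}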
I am setting up Logstash to ingest Airflow logs. The following config is giving me the output I need:
input {
  file {
    path => "/my_path/logs/**/*.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [path] =~ /\/my_path\/logs\/containers\/.*/ or [path] =~ /\/my_path\/logs\/scheduler\/.*/ {
    drop {}
  }
  else {
    grok {
      match => [ "message", "\[%{TIMESTAMP_ISO8601:log_task_execution_datetime}\]%{SPACE}\{%{DATA:log_file_line}\}%{SPACE}%{WORD:log_level}%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}" ]
      remove_field => [ "message" ]
    }
    date {
      match => [ "log_task_execution_datetime", "ISO8601" ]
      target => "log_task_execution_datetime"
      timezone => "UTC"
    }
    dissect {
      mapping => { "path" => "/my_path/logs/%{dag_id}/%{task_id}/%{dag_execution_datetime}/%{try_number}.%{}" }
      add_field => { "log_id_template" => "{%{dag_id}}-{%{task_id}}-{%{dag_execution_datetime}}-{%{try_number}}" }
    }
  }
}
output {
  stdout { codec => rubydebug { metadata => true } }
}
But I do not like having to specify the path "/my_path/logs/" multiple times.
In my input section, I tried to use:
add_field => { "[@metadata][base_path]" => "/my_path/logs/" }
and then, in the filter section:
if [path] =~ /[@metadata][base_path].*/ or [path] =~ /[@metadata][base_path].*/ {
  drop {}
}
...
dissect {
  mapping => { "path" => "[@metadata][base_path]%{dag_id}/%{task_id}/%{dag_execution_datetime}/%{try_number}.%{}" }
But it doesn't seem to work for the regex in the filter or in the dissect mapping. I get a similar issue when trying to use an environment variable as described here.
I have the - maybe naïve - notion that I should be able to use one variable for all references to the base path. Is there a way?
Using an environment variable in a conditional is not supported; there has been a GitHub issue open since 2016 requesting it as an enhancement. The workaround is to use mutate+add_field to add a field to [@metadata] and then test that.
"mapping" => { "path" => "${[#metadata][base_path]}%{dag_id}/%{task_id} ...
should work. The terms in a conditional are not sprintf'd, so you cannot use %{}, but you can do a substring match. If FOO is set to /home/user/dir then
mutate { add_field => { "[#metadata][base_path]" => "${FOO}" } }
mutate { add_field => { "[path]" => "/home/user/dir/file" } }
if [@metadata][base_path] in [path] { mutate { add_field => { "matched" => true } } }
results in the [matched] field getting added. I do not know of a way to anchor the string match, so if FOO were set to /dir/ then that would also match.
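Putting the pieces together for the pipeline above, here is a minimal sketch (assuming a hypothetical environment variable BASE_PATH set to /my_path/logs/, and using config-time ${} substitution in the dissect mapping):
filter {
  # Copy the environment variable into [@metadata] so the conditional can test it;
  # conditionals cannot read environment variables or sprintf %{} references.
  mutate { add_field => { "[@metadata][base_path]" => "${BASE_PATH}" } }

  # Substring match against the file path (it cannot be anchored, as noted above).
  if [@metadata][base_path] in [path] {
    dissect {
      # ${BASE_PATH} is substituted once, when the configuration is loaded.
      mapping => { "path" => "${BASE_PATH}%{dag_id}/%{task_id}/%{dag_execution_datetime}/%{try_number}.%{}" }
    }
  }
}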
I am new to Logstash manipulations and I have no idea how to do the following.
I have sample data like this:
Column:Type
Incident Response P3
Incident Resolution L1.5 P2
...
I want to extract the words 'Response' and 'Resolution' into a new column, 'SLA type'.
I'm looking for something very much like the following SQL statement:
case when Type like '%Resolution%' then 'Resolution'
     when Type like '%Response%' then 'Response'
end as SLA_Type
How do I manipulate this in Logstash?
Below is my conf. I'm using an API input.
input {
  http_poller {
    urls => {
      snowinc => {
        url => "https://service-now.com"
        user => "your_user"
        password => "yourpassword"
        headers => { Accept => "application/json" }
      }
    }
    request_timeout => 60
    metadata_target => "http_poller_metadata"
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
  }
}
filter {
  json { source => "result" }
  split { field => ["result"] }
  date {
    match => ["[result][sys_created_on]", "yyyy-MM-dd HH:mm:ss"]
    target => "sys_created_on"
  }
}
output {
  elasticsearch {
    hosts => ["yourelasticIP"]
    index => "incidentsnow"
    action => "update"
    document_id => "%{[result][number]}"
    doc_as_upsert => true
  }
  stdout { codec => rubydebug }
}
The JSON output from the API URL looks like this:
{"result":[
{
"made_sla":"true",
"Type":"incident resolution p3",
"sys_updated_on":"2019-12-23 05:00:00",
"number":"INC0010275",
"category":"Network"} ,
{
"made_sla":"true",
"Type":"incident resolution l1.5 p4",
"sys_updated_on":"2019-12-24 07:00:00",
"number":"INC0010567",
"category":"DB"}]}
You can use the following filter block in your pipeline to add a new field if a word is present in another field.
if "response" in [Type] {
mutate {
add_field => { "SLA_Type" => "Response" }
}
}
if "resolution" in [Type] {
mutate {
add_field => { "SLA_Type" => "Resolution" }
}
}
If the word response is present in the field Type, a new field named SLA_Type with the value Response will be added to your document; the same will happen with resolution.
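Note that since this pipeline splits on [result], the Type value may sit at [result][Type] rather than at the top level; if that is the case, the conditionals just need the longer field reference, for example:
if "resolution" in [result][Type] {
  mutate {
    add_field => { "SLA_Type" => "Resolution" }
  }
}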
I am currently using Logstash, Elasticsearch and Kibana 6.3.0.
My logs are generated at a path containing a unique id: /tmp/USER_DATA/FactoryContainer/images/(my unique id)/oar/oar_image_job(my unique id).stdout
What I want to do is match this unique id and create a field with it.
I am a bit of a novice with Logstash filters, and I don't know why it won't use my uid: it either returns %{uid} in my field or gives a 'Failed to execute action' error.
My filter:
input {
  file {
    path => "/tmp/USER_DATA/FactoryContainer/images/*/oar/oar_image_job*.stdout"
    start_position => "beginning"
    add_field => { "data_source" => "oar-image-job" }
  }
}
filter {
  grok {
    match => ["path", "%{UNIXPATH}%{NUMBER:uid}%{UNIXPATH}"]
  }
  mutate {
    add_field => [ "unique_id" => "%{uid}" ]
  }
}
output {
if [data_source] == "oar-image-job" {
elasticsearch {
index => "oar-image-job-%{+YYYY.MM.dd}"
hosts => ["localhost:9200"]
}
}
}
The data_source field is there to avoid this issue: when you put multiple config files in a directory for Logstash to use, they all get concatenated.
In the grok debugger, %{UNIXPATH}%{NUMBER:uid}%{UNIXPATH} returns the right value for my path.
Link to the solution: https://discuss.elastic.co/t/cant-create-a-field-with-a-variable-from-a-grok-match-regex/142613/7?u=thesmartmonkey
The correct filter:
input {
  file {
    path => "/tmp/USER_DATA/FactoryContainer/images/*/oar/oar_image_job*.stdout"
    start_position => "beginning"
    add_field => { "data_source" => "oar-image-job" }
  }
}
filter {
  grok {
    match => { "path" => [ "/tmp/USER_DATA/FactoryContainer/images/%{DATA:unique_id}/oar/oar_image_job%{DATA}.stdout" ] }
  }
}
output {
  if [data_source] == "oar-image-job" {
    elasticsearch {
      index => "oar-image-job-%{+YYYY.MM.dd}"
      hosts => ["localhost:9200"]
    }
  }
}
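As an aside (not part of the linked solution), dissect could extract the same id without a regex, assuming the fixed path layout shown above:
dissect {
  # %{unique_id} captures the directory name between images/ and /oar/;
  # the trailing %{} discards the rest of the path.
  mapping => { "path" => "/tmp/USER_DATA/FactoryContainer/images/%{unique_id}/oar/%{}" }
}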
How do I create a custom grok pattern filter in Logstash?
I want to create a pattern for the HTTP response status code.
Here is my pattern code:
STATUS_CODE __ %{NONNEGINT} __
What I really want is to capture all of my web server hits with the user IP, the request HTTP headers and payload, and also the web server's response.
And here is my logstash.conf:
input {
  file {
    type => "kpi-success"
    path => "/var/log/kpi_success.log"
    start_position => "beginning"
  }
}
filter {
  if [type] == "kpi-success" {
    grok {
      patterns_dir => ["./patterns"]
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:message} " }
    }
    multiline {
      pattern => "^\["
      what => "previous"
      negate => true
    }
    mutate {
      add_field => {
        "statusCode" => "[STATUS_CODE]"
      }
    }
  }
}
output {
  if [type] == "kpi-success" {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "kpi-success-%{+YYYY.MM.dd}"
    }
  }
}
You don't have to use a custom patterns file; you can define a new pattern directly in the grok filter.
grok {
  match => { "message" => "(?<STATUS_CODE>__ %{NONNEGINT} __)" }
}
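If you do want a patterns file, here is a minimal sketch (the file name ./patterns/custom is hypothetical). Put the definition on its own line in the file:
# ./patterns/custom: each line is NAME, a space, then the pattern
STATUS_CODE __ %{NONNEGINT:statusCode} __
and then reference it from grok with patterns_dir:
grok {
  patterns_dir => ["./patterns"]
  match => { "message" => "%{STATUS_CODE}" }
}
Here the statusCode field comes from the inner %{NONNEGINT:statusCode} capture, so the surrounding underscores are not included in its value.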
I am parsing a JSON log file in Logstash. There is a field named #person.name. I tried to rename this field before sending it to Elasticsearch. I also tried to remove the field, but I couldn't remove or delete it, and because of that my data is not getting indexed in Elasticsearch.
Error recorded in Elasticsearch:
MapperParsingException[Field name [#person.name] cannot contain '.']
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:276)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:196)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:308)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:138)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:119)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:100)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:435)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)
My Logstash config
input {
  beats {
    port => 11153
  }
}
filter {
  if [type] == "person_get" {
    ## Parsing JSON input to JSON Filter..
    json {
      source => "message"
    }
    mutate {
      rename => { "#person.name" => "#person-name" }
      remove_field => [ "#person.name" ]
    }
    fingerprint {
      source => ["ResponseTimestamp"]
      target => "fingerprint"
      key => "78787878"
      method => "SHA1"
      concatenate_sources => true
    }
  }
}
output {
  if [type] == "person_get" {
    elasticsearch {
      index => "logstash-person_v1"
      hosts => ["xxx.xxx.xx:9200"]
      document_id => "%{fingerprint}" # !!! prevent duplication
    }
    stdout {
      codec => rubydebug
    }
  }
}
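One possible approach for the dotted field name (a hedged suggestion, not something confirmed in this thread) is the de_dot filter plugin, which rewrites field names containing dots:
filter {
  # Replaces "." in field names with "_", e.g. "#person.name" becomes "#person_name".
  # de_dot is a separate plugin and may need to be installed first.
  de_dot { }
}
Note also that the mutate block above both renames and removes the same field; rename runs before the common remove_field option, so by the time remove_field is applied the original "#person.name" no longer exists and the removal is a no-op.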