Issue renaming a JSON-parsed field in Logstash

I am parsing a JSON log file in Logstash. The parsed events contain a field named #person.name. I tried to rename this field before sending the events to Elasticsearch, and I also tried to remove it, but neither worked; because of that field my data is not getting indexed in Elasticsearch.
Error recorded in Elasticsearch:
MapperParsingException[Field name [#person.name] cannot contain '.']
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:276)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:196)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:308)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:138)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:119)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:100)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:435)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)
My Logstash config
input {
  beats {
    port => 11153
  }
}
filter {
  if [type] == "person_get" {
    # Parsing JSON input with the json filter
    json {
      source => "message"
    }
    mutate {
      rename => { "#person.name" => "#person-name" }
      remove_field => [ "#person.name" ]
    }
    fingerprint {
      source => ["ResponseTimestamp"]
      target => "fingerprint"
      key => "78787878"
      method => "SHA1"
      concatenate_sources => true
    }
  }
}
output {
  if [type] == "person_get" {
    elasticsearch {
      index => "logstash-person_v1"
      hosts => ["xxx.xxx.xx:9200"]
      document_id => "%{fingerprint}" # !!! prevent duplication
    }
    stdout {
      codec => rubydebug
    }
  }
}
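There is no accepted answer shown here, so the following is only a sketch. A common way to get past this MapperParsingException on Elasticsearch 2.x is to strip the dots out of field names before the events reach Elasticsearch, for example with the de_dot filter (installing the logstash-filter-de_dot plugin first if it is not already bundled with your Logstash version). This is an illustration, not a confirmed fix for this exact setup:

filter {
  if [type] == "person_get" {
    json {
      source => "message"
    }
    # Replace "." in field names; the default separator is "_",
    # "-" is used here to match the desired "#person-name".
    de_dot {
      separator => "-"
    }
  }
}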

Related

Logstash: multiple indices based on multiple paths

I'm using the following configuration file for Logstash to create multiple indices, but they are not visible in Kibana. The logs are parsed, but no index is created. What do I need to change to make this work?
input {
  stdin {
    type => "stdin-type"
  }
  file {
    tags => ["prod"]
    type => ["json"]
    path => ["C:/Users/DELL/Downloads/log/prod/*.log"]
  }
  file {
    tags => ["dev"]
    type => ["json"]
    path => ["C:/Users/DELL/Downloads/log/test/*.log"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  if "prod" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => ["prod-log"]
    }
  }
  if "dev" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => ["dev-log"]
    }
  }
}
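No answer is given in this thread, but a common reason no index gets created with a config like this is that the file input only tails newly appended lines by default, so pre-existing log files are never read. A sketch of the usual first thing to try (the sincedb_path value is Windows-specific):

input {
  file {
    tags => ["prod"]
    type => ["json"]
    path => ["C:/Users/DELL/Downloads/log/prod/*.log"]
    start_position => "beginning"  # read existing file content, not only new lines
    sincedb_path => "NUL"          # on Windows, do not persist read offsets between test runs
  }
}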

Logstash configuration for word extraction

I am new to Logstash manipulation and I have no idea how to do the following.
I have sample data as below:
Column:Type
Incident Response P3
Incident Resolution L1.5 P2
...
I want to extract the words 'Response' and 'Resolution' into a new column 'SLA Type'.
I'm looking for something very similar to the SQL statement below:
case when Type like '%Resolution%' then Resolution
when Type like '%Response%' then Response
end as SLA_Type
How do I do this in Logstash?
Below is my conf; I'm using an API input.
input {
  http_poller {
    urls => {
      snowinc => {
        url => "https://service-now.com"
        user => "your_user"
        password => "yourpassword"
        headers => { Accept => "application/json" }
      }
    }
    request_timeout => 60
    metadata_target => "http_poller_metadata"
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
  }
}
filter {
  json { source => "result" }
  split { field => ["result"] }
  date {
    match => ["[result][sys_created_on]", "yyyy-MM-dd HH:mm:ss"]
    target => "sys_created_on"
  }
}
output {
  elasticsearch {
    hosts => ["yourelastuicIP"]
    index => "incidentsnow"
    action => "update"
    document_id => "%{[result][number]}"
    doc_as_upsert => true
  }
  stdout { codec => rubydebug }
}
The JSON output from the API URL looks like this:
{"result":[
{
"made_sla":"true",
"Type":"incident resolution p3",
"sys_updated_on":"2019-12-23 05:00:00",
"number":"INC0010275",
"category":"Network"} ,
{
"made_sla":"true",
"Type":"incident resolution l1.5 p4",
"sys_updated_on":"2019-12-24 07:00:00",
"number":"INC0010567",
"category":"DB"}]}
You can use the following filter block in your pipeline to add a new field if a word is present in another field.
if "response" in [Type] {
mutate {
add_field => { "SLA_Type" => "Response" }
}
}
if "resolution" in [Type] {
mutate {
add_field => { "SLA_Type" => "Resolution" }
}
}
If the word response is present in the field Type, a new field named SLA_Type with the value Response will be added to your document; the same will happen with resolution.
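Alternatively, if the Type values always contain the relevant word right after "incident", as in the sample data (an assumption; adjust the pattern otherwise), a single grok capture plus mutate's capitalize option can produce the same field without per-word conditionals:

filter {
  grok {
    # (?i) makes the match case-insensitive; WORD captures "resolution" or "response"
    match => { "Type" => "(?i)incident %{WORD:SLA_Type}" }
    tag_on_failure => ["_sla_type_not_found"]
  }
  mutate {
    capitalize => ["SLA_Type"]  # "resolution" -> "Resolution"
  }
}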

Can't create a field with a variable from a grok match regex

I am currently using Logstash, Elasticsearch and Kibana 6.3.0.
My logs are generated at a path containing a unique id: /tmp/USER_DATA/FactoryContainer/images/(my unique id)/oar/oar_image_job(my unique id).stdout
What I want to do is match this unique id and create a field from it.
I'm a bit of a novice with Logstash filters, and I don't know why it won't use my uid: I always get %{uid} back in the field, or a "Failed to execute action" error.
My filter:
input {
  file {
    path => "/tmp/USER_DATA/FactoryContainer/images/*/oar/oar_image_job*.stdout"
    start_position => "beginning"
    add_field => { "data_source" => "oar-image-job" }
  }
}
filter {
  grok {
    match => ["path", "%{UNIXPATH}%{NUMBER:uid}%{UNIXPATH}"]
  }
  mutate {
    add_field => [ "unique_id" => "%{uid}" ]
  }
}
output {
  if [data_source] == "oar-image-job" {
    elasticsearch {
      index => "oar-image-job-%{+YYYY.MM.dd}"
      hosts => ["localhost:9200"]
    }
  }
}
The data_source field is there to avoid this issue: when you put multiple config files in a directory for Logstash to use, they all get concatenated.
In the grok debugger, %{UNIXPATH}%{NUMBER:uid}%{UNIXPATH} returns the right value for my path.
Link to the solution: https://discuss.elastic.co/t/cant-create-a-field-with-a-variable-from-a-grok-match-regex/142613/7?u=thesmartmonkey
The correct filter:
input {
  file {
    path => "/tmp/USER_DATA/FactoryContainer/images/*/oar/oar_image_job*.stdout"
    start_position => "beginning"
    add_field => { "data_source" => "oar-image-job" }
  }
}
filter {
  grok {
    match => { "path" => [ "/tmp/USER_DATA/FactoryContainer/images/%{DATA:unique_id}/oar/oar_image_job%{DATA}.stdout" ] }
  }
}
output {
  if [data_source] == "oar-image-job" {
    elasticsearch {
      index => "oar-image-job-%{+YYYY.MM.dd}"
      hosts => ["localhost:9200"]
    }
  }
}

Logstash input filename as output elasticsearch index

Is there a way of using the filename of the file being read by Logstash as the index name for the output into Elasticsearch?
I am using the following config for Logstash.
input {
  file {
    path => "/logstashInput/*"
  }
}
output {
  elasticsearch {
    index => "FromfileX"
  }
}
I would like to be able to put in a file, e.g. log-from-20.10.2016.log, and have it indexed into the index log-from-20.10.2016. Does the Logstash "file" input plugin produce any variables for use in the filter or output?
Yes, you can use the path field for that and grok it to extract the filename into an index field:
input {
  file {
    path => "/logstashInput/*"
  }
}
filter {
  grok {
    match => ["path", "(?<index>log-from-\d{2}\.\d{2}\.\d{4})\.log$"]
  }
}
output {
  elasticsearch {
    index => "%{index}"
  }
}
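A second configuration from the thread takes a different approach and derives the index name from the file name with a ruby filter (the gunicorn log path, grok pattern, and credentials are specific to that example):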
input {
  file {
    path => "/home/ubuntu/data/gunicorn.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => {
      "message" => "%{USERNAME:u1} %{USERNAME:u2} \[%{HTTPDATE:http_date}\] \"%{DATA:http_verb} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:android_client}\""
    }
    remove_field => ["message"]
  }
  date {
    # e.g. 23/Dec/2019:05:00:00 +0000
    match => ["http_date", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
  ruby {
    # derive the index name from the file name, e.g. gunicorn.log -> gunicorn
    code => "event.set('index_name', event.get('path').split('/')[-1].gsub('.log',''))"
  }
}
output {
  elasticsearch {
    hosts => ["0.0.0.0:9200"]
    index => "%{index_name}-%{+yyyy-MM-dd}"
    user => "*********************"
    password => "*****************"
  }
  stdout { codec => rubydebug }
}
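One caveat worth adding (not from the original answer): Elasticsearch index names must be lowercase, so if the log file names can contain uppercase characters it is safer to normalize the derived field before the output stage, for example:

filter {
  mutate {
    lowercase => ["index_name"]  # e.g. "Gunicorn" -> "gunicorn"
  }
}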

logstash nil import errors

I'm getting some errors attempting to do a data import in Logstash. I'm seeing them for every "geo" field that I have. Here are some of my config files.
input {
  jdbc {
    jdbc_driver_library => "c:\binaries\driver\ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:#random:1521/random"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "select a.*, myfunc() as geo from foo a"
    type => "sometype"
  }
}
filter {
  if [type] == "sometype" {
    mutate {
      rename => { "sometype_id" => "id" }
      remove_field => ["gdo_geometry"]
      add_field => [ "display", "%{id}" ]
    }
    # parses string to json
    json {
      source => "geo"
      target => "geometry"
    }
  }
}
output {
  if [type] == "sometype" {
    elasticsearch {
      hosts => ["myesbox:80"]
      document_id => "%{id}"
      index => "sjw"
    }
  }
}
Here is a second one:
input {
  jdbc {
    jdbc_driver_library => "c:\binaries\driver\ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:#random:1521/random"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "select a.*, myfunc() as geo from foo2 a"
    type => "sometype2"
  }
}
filter {
  if [type] == "sometype2" {
    mutate {
      rename => { "sometype2_id" => "id" }
      remove_field => ["gdo_geometry"]
      add_field => [ "display", "%{id}" ]
    }
    # parses string to json
    json {
      source => "geo"
      target => "geometry"
    }
  }
}
output {
  if [type] == "sometype2" {
    elasticsearch {
      hosts => ["myesbox:80"]
      document_id => "%{id}"
      index => "sjw"
    }
  }
}
And here is the error message (repeated once for each record in my database tables).
{:timestamp=>"2016-01-05T13:33:18.258000-0800", :message=>"Trouble parsing json", :source=>"geo", :raw=>nil, :exception=>java.lang.ClassCastException: org.jruby.RubyNil cannot be cast to org.jruby.RubyIO, :level=>:warn}
Interestingly, the field DOES seem to import successfully; I can see the data populated as expected. But I don't know why this warning is being generated. I'm running Logstash as
logstash -f /my/logstash/dir
Also interesting to note: if I modify the first config file and change the json filter's source field to "geom" instead of "geo", the warning no longer occurs. It seems to happen only when I have multiple config files with the same field/json filter combination. If I then add a third config file that also parses a "geo" field with the json filter, the issue occurs again, though I still see no warning messages for the first config file, only for the second and third.
The issue here actually turned out to be a bug in Logstash 2.0. I'm not sure exactly what the problem was, but upgrading to 2.1 resolved it for me.
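Independent of the version fix, a defensive guard (a sketch, not from the original thread) avoids running the json filter when the geo column comes back empty, which would also trigger a :raw=>nil warning:

filter {
  if [type] == "sometype" and [geo] {
    # only parse when the geo field exists and is non-empty
    json {
      source => "geo"
      target => "geometry"
    }
  }
}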
