In my Logstash pipeline I have the following filter configuration:
filter {
  mutate {
    add_field => {
      "doclength" => "%{size}"
    }
    convert => { "doclength" => "integer" }
    remove_field => ["size"]
  }
}
I intend to store the field "doclength" in Elasticsearch as an integer, but in ES the mapping shows up as "string".
Not sure what I am missing here; the expected behavior does not match the actual one.
Within a single mutate filter the operations run in a fixed internal order, and add_field (like the other common options) is applied only after the filter's own mutations, so doclength does not exist yet when convert runs. Rename size and convert the renamed field instead; this worked on my machine:
filter {
  mutate {
    rename => { "size" => "doclength" }
    convert => { "doclength" => "integer" }
  }
}
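If you prefer to keep add_field, the same fix can be expressed as two mutate blocks, since separate filter blocks always run in the order they are written. A sketch of the same pipeline:

```
filter {
  mutate {
    add_field => { "doclength" => "%{size}" }
  }
  mutate {
    convert => { "doclength" => "integer" }
    remove_field => ["size"]
  }
}
```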
I have dates in my logs in the formats below:
YYYY-M-dd, YYYY-MM-d and YYYY-M-d, for example:
2020-9-21
2020-11-1
2020-9-1
My date filter plugin matches with:
date {
  match => [ "event_date", "yyyy-MM-dd" ]
}
For some logs I get a date parse exception because of this. Is it possible to match all of these, i.e. try this format and, if it does not match, fall back to another date format?
The error is
"failed to parse field [event_date] of type [date] in document with id '...'. Preview of field's value: '2017-11-2'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [2017-11-2] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"date_time_parse_exception: Failed to parse with all enclosed parsers"}}}}}}
How can I solve it? Thanks for answering.
One solution is a switch-like mechanism built by chaining date filters with the tag_on_failure option. It looks like this:
filter {
  date {
    match => [ "event_date", "yyyy-MM-dd" ]
    tag_on_failure => [ "not_format_date1" ]
  }
  if "not_format_date1" in [tags] {
    date {
      match => [ "event_date", "yyyy-MM-d" ]
      tag_on_failure => [ "not_format_date2" ]
    }
  }
  if "not_format_date2" in [tags] {
    date {
      match => [ "event_date", "yyyy-M-d" ]
      tag_on_failure => [ "no_format" ]
    }
  }
}
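Note that the date filter's match option accepts several formats after the field name and tries them in order, so the cascade above can likely be collapsed into a single filter. A sketch (and since Joda-style patterns such as a single M or d also accept two-digit values, fewer patterns may suffice):

```
filter {
  date {
    match => [ "event_date", "yyyy-MM-dd", "yyyy-MM-d", "yyyy-M-dd", "yyyy-M-d" ]
  }
}
```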
I tried the first answer, but it didn't solve my issue. @YLR's way is also a good way to improve it.
I solved my problem by changing fields like M to MM with if conditions. Below is an example.
if [monthday] == "1" {
  mutate {
    update => { "monthday" => "01" }
  }
} else if [monthday] == "2" {
  mutate {
    update => { "monthday" => "02" }
  }
} else if [monthday] == "3" {
  mutate {
    update => { "monthday" => "03" }
  }
}
....
That solved my problem, but it is a rather laborious way to do it.
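The long if/else chain above could probably be replaced by a single mutate filter using gsub to zero-pad any single-digit value. A sketch, not tested against this pipeline:

```
filter {
  mutate {
    # prepend a 0 when the field is exactly one digit
    gsub => [ "monthday", "^(\d)$", "0\1" ]
  }
}
```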
I am currently working on a project with the Elastic Stack for a log monitoring system. The logs I have to load are in a specific format, so I have to write my own Logstash configurations to read them. In one type of log there is a date at the start of the file, and the timestamps on the other lines have no date; my goal is to extract the date from the first line and add it to all the following ones. After some research I found that the aggregate filter could help, but I can't get it to work. Here is my config file:
input {
  file {
    path => "F:/ELK/data/testFile.txt"
    #path => "F:/ELK/data/*/request/*"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  mutate {
    add_field => { "taskId" => "all" }
  }
  grok {
    match => { "message" => "-- %{NOTSPACE} %{NOTSPACE}: %{DAY}, %{MONTH:month} %{MONTHDAY:day}, %{YEAR:year}%{GREEDYDATA}" }
    tag_on_failure => ["not_date_line"]
  }
  if "not_date_line" not in [tags] {
    mutate {
      replace => { "taskId" => "%{day}/%{month}/%{year}" }
      remove_field => ["day", "month", "year"]
    }
    aggregate {
      task_id => "%{taskId}"
      code => "map['taskId'] = event.get('taskId')"
      map_action => "create"
    }
  }
  else {
    dissect {
      mapping => { "message" => "%{sequence_index} %{time} %{pid} %{puid} %{stack_level} %{operation} %{params} %{op_type} %{form_event} %{op_duration}" }
    }
    aggregate {
      task_id => "%{taskId}"
      code => "event.set('taskId', map['taskId'])"
      map_action => "update"
      timeout => 0
    }
    mutate {
      strip => ["op_duration"]
      replace => { "time" => "%{taskId}-%{time}" }
    }
  }
  mutate {
    remove_field => ["@timestamp", "host", "@version", "path", "message", "tags"]
  }
}
output {
  stdout {}
}
The script reads the date correctly, but it does not manage to replace the value in the other events:
{
"taskId" => "22/October/2020"
}
{
"pid" => "45",
"sequence_index" => "10853799",
"op_type" => "1",
"time" => "all-16:23:29:629",
"params" => "90",
"stack_level" => "0",
"op_duration" => "",
"operation" => "10",
"form_event" => "0",
"taskId" => "all",
"puid" => "1724"
}
I am using only one worker to ensure the order of the events is kept intact. If you know of any other way to achieve this, I'm open to suggestions. Thank you!
For the lines which have a date you are setting the taskId to "%{day}/%{month}/%{year}", for the rest of the lines you are setting it to "all". The aggregate filter will not aggregate across events with different task ids.
I suggest you use a constant taskId and store the date in some other field, then in a single aggregate filter you can use something like
code => '
  date = event.get("date")
  if date
    @date = date
  else
    event.set("date", @date)
  end
'
@date is an instance variable, so its scope is limited to that aggregate filter, but it is preserved across events. It is not shared with other aggregate filters (that would require a class variable or a global variable).
Note that you require event order to be preserved, so you should set pipeline.workers to 1.
Thanks to @Badger and some other posts he answered on the Elastic forum, I found a solution using a single ruby filter and an instance variable. I couldn't get it to work with the aggregate filter, but that is not an issue for me.
ruby {
  init => '@date = ""'
  code => "
    event.set('date', @date) unless @date.empty?
    @date = event.get('date') unless event.get('date').empty?
  "
}
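The carry-forward logic of that filter can be illustrated outside Logstash with plain Ruby. This is a standalone sketch, with events modeled as hashes and the instance variable as a local, assuming (as in the init above) that lines without a date carry an empty string:

```ruby
# A variable that persists across events (like the filter's @date
# instance variable) fills in the date on every event after the header line.
events = [
  { "date" => "22/October/2020" },  # header line carries the date
  { "date" => "" },                 # data lines arrive with an empty date
  { "date" => "" }
]

date = ""
events.each do |event|
  event["date"] = date unless date.empty?            # backfill from the header
  date = event["date"] unless event["date"].empty?   # remember the latest date
end
```

After the loop every event carries "22/October/2020", which is the behavior the filter relies on.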
I have an issue using the Logstash mutate filter's gsub option.
Required
Remove the "ZC" characters from a field and convert it into a float.
{
"field" => "12.343,40ZC",
"#timestamp" => 2020-01-06T23:00:00.000Z
}
Expected output
{
"field" => "-12343,40",
"#timestamp" => 2020-01-06T23:00:00.000Z
}
Code not working
filter {
  if "ZC" in "field" {
    mutate { gsub => ["field", "ZC", ""] }
  }
}
Code working
filter {
  mutate { gsub => ["field", "ZC", ""] }
}
I need the "if" statement because whether the two characters exist inside the field determines whether the float is positive or negative.
Your conditional is wrong: if you use "field", Logstash understands it as a literal string with the value field. The correct way to reference a field is the format [field].
Change your conditional to the following.
filter {
  if "ZC" in [field] {
    mutate { gsub => ["field", "ZC", ""] }
  }
}
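Judging from the sample values, the "ZC" suffix marks a negative amount. Once the conditional works, the whole transformation could also be sketched as plain Ruby string logic; the helper name and the handling of "." as a thousands separator are assumptions based on the expected output above:

```ruby
# Turn "12.343,40ZC" into "-12343,40": the trailing "ZC" marks a negative
# amount; strip it, drop the "." thousands separator, and prefix a minus.
def normalize_amount(value)
  negative = value.include?("ZC")
  v = value.gsub("ZC", "").delete(".")
  negative ? "-#{v}" : v
end
```

For example, `normalize_amount("12.343,40ZC")` returns `"-12343,40"`, and a value without the suffix stays positive.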
I am doing a split on two fields and assigning different array elements to new fields. However, when an element doesn't exist, the literal reference ends up assigned to the field, e.g. "%{variable}".
I assume I could do 5 if statements on the array elements to see if each is present before assigning it to the new field, but this seems a very messy way of doing it. Is there a better way to assign only when populated?
split => { "HOSTALIAS" => ", " }
split => { "HOSTGROUP" => "," }
add_field => {
  "host-group" => "%{[HOSTGROUP][0]}"
  "ci_alias" => "%{[HOSTALIAS][0]}"
  "blueprint-id" => "%{[HOSTALIAS][1]}"
  "instance-id" => "%{[HOSTALIAS][2]}"
  "vm-location" => "%{[HOSTALIAS][3]}"
}
You could use a grok filter instead. Here we drop messages that fail to parse, but you could deal with them differently.
filter {
  grok {
    match => [ "HOSTALIAS", "%{WORD:ci_alias},%{WORD:blueprint-id},%{WORD:instance-id},%{WORD:vm-location}" ]
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}
I have a file named "Job Code.txt"
job_id=0001,description=Ship data from server to elknode1,result=OK
job_id=0002,description=Ship data from server to elknode2,result=Error: Msg...
job_id=0003,description=Ship data from server to elknode3,result=OK
job_id=0004,description=Ship data from server to elknode4,result=OK
Here is the filter part of my .conf file, but it doesn't work. How can I create new fields, i.e. jobID, description, result, so they can be seen in Kibana?
filter {
  grok {
    match => { "message" => ["JobID: %{NOTSPACE:job_id}", "description: %{NOTSPACE:description}", "result: %{NOTSPACE:message}"] }
    add_field => {
      "JobID" => "%{job_id}"
      "Description" => "%{description}"
      "Message" => "%{message}"
    }
  }
  if [job_id] == "0001" {
    aggregate {
      task_id => "%{job_id}"
      code => "map['time_elasped']=0"
      map_action => "create"
    }
  }
  if [job_id] == "0003" {
    aggregate {
      task_id => "%{job_id}"
      code => "map['time_elasped']=0"
      map_action => "update"
    }
  }
  if [job_id] == "0002" {
    aggregate {
      task_id => "%{job_id}"
      code => "map['time_elasped']=0"
      map_action => "update"
    }
  }
}
I know this is a couple of days old; perhaps you still require an answer. Change your grok statement to:
grok {
  match => { "message" => "job_id=%{DATA:job_id},description=%{DATA:description},result=%{GREEDYDATA:message}" }
}
You won't need the add_field option; grok will create the fields for you. The add_field option is for adding arbitrary extra fields. Check the pattern at https://grokdebug.herokuapp.com
Also, unless there are other messages you want to match, I don't think the aggregate statements you have will do what you want.
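The grok pattern can also be sanity-checked offline with an approximately equivalent Ruby regex, since DATA is roughly `.*?` and GREEDYDATA roughly `.*`:

```ruby
# Run the equivalent regex against one of the sample lines from "Job Code.txt".
line = "job_id=0002,description=Ship data from server to elknode2,result=Error: Msg..."
m = line.match(/job_id=(?<job_id>.*?),description=(?<description>.*?),result=(?<message>.*)/)
# m["job_id"], m["description"] and m["message"] hold the extracted fields.
```

Here `m["job_id"]` is "0002" and `m["message"]` is "Error: Msg...", matching what grok would emit for that line.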