Logstash - adding a certain field with a certain value - logstash-grok

Here is an example of an event message:
{
  "timestamp": "2016-03-29T22:35:44.770750-0400",
  "flow_id": 45385792,
  "in_iface": "eth1",
  "event_type": "alert",
  "src_ip": "3.3.3.8",
  "src_port": 21,
  "dest_ip": "2.2.2.2",
  "dest_port": 52934,
  "proto": "TCP",
  "alert": {
    "action": "allowed",
    "gid": 1,
    "signature_id": 4027,
    "rev": 0,
    "signature": "FTP Successful Login",
    "category": "",
    "severity": 3
  },
  "payload": "MjU3ICIvaG9tZS9uZXd1c2VyIg0K",
  "payload_printable": "257 newuser",
  "stream": 0,
  "packet": "AFBWo0NoAFBWoxZWCABFAABJKDpAAEAGCGcDAwMIAgICAgAVzsbd4MhqOBOjfoAYAOMYcwAAAQEIChHN4EQHnwugMjU3ICIvaG9tZS9uZXd1c2VyIg0K"
}
I'd like to identify the string "newuser" (it always comes right after the number "257"), create a new field named user, and put the "newuser" string into it.
My Logstash config file is the following:
input {
  beats {
    port => 5044
    codec => json
    type => "SuricataIDPS"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    #document_type => "%{[@metadata][type]}"
  }
}
How can I extract the "newuser" string and add a new field with that value?

You have to define a filter stanza in your config with a grok filter; see below:
input {
  beats {
    port => 5044
    codec => json
    type => "SuricataIDPS"
  }
}
filter {
  grok {
    match => ["payload_printable", "257 (?<user>.+)"]
    tag_on_failure => []
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    #document_type => "%{[@metadata][type]}"
  }
}
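As a side note, (?<user>.+) captures everything to the end of the line, including any trailing quotes or carriage returns that may appear in the printable payload. A slightly tighter variant (a sketch, assuming the username itself contains no whitespace) anchors the match and uses the stock NOTSPACE pattern:

filter {
  grok {
    # Anchor on "257 " and capture only the first non-whitespace token as "user".
    match => ["payload_printable", "^257 %{NOTSPACE:user}"]
    tag_on_failure => []
  }
}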

Related

Logstash config with filebeat issue when using both beats and file input

I am trying to configure Filebeat with Logstash. At the moment I have managed to configure Filebeat with Logstash successfully, but I am running into issues when creating multiple conf files in Logstash.
So currently I have one Filebeat input, which is something like:
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "systemsyslogs"
    }
  }
}
And a file input Logstash config, which is like:
input {
  file {
    path => "/var/log/foldername/number.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "numberlogtest"
  }
}
The grok filter is working: I successfully managed to create two index patterns in Kibana and view the data correctly.
The problem is that when I run Logstash with both configs applied, Logstash fetches the data from number.log multiple times and the Logstash plain logs fill with warnings, which uses a lot of computing resources and pushes CPU usage over 80% (this is an Oracle instance). If I remove the file config from Logstash, the system runs properly.
I managed to run Logstash with each of these config files applied individually, but not both at once.
I already added an exclusion in the Filebeat config:
exclude_files:
  - /var/log/foldername/*.log
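Note that Filebeat's exclude_files option takes a list of regular expressions rather than shell globs, so a glob-style pattern like the one above may not exclude what is intended. A hedged sketch of a regex form:

exclude_files:
  # regex, not glob: anchor on the directory and escape the dot before "log"
  - '/var/log/foldername/.*\.log$'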
Logstash plain logs when running both config files:
[2023-02-15T12:42:41,077][WARN ][logstash.outputs.elasticsearch][main][39aca10fa204f31879ff2b20d5b917784a083f91c2eda205baefa6e05c748820] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"numberlogtest", :routing=>nil}, {"service"=>{"type"=>"system"}
"caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:607"}}}}}
Fixed by creating a single Logstash config with both inputs:
input {
  beats {
    port => 5044
  }
  file {
    path => "**path**"
    start_position => "beginning"
  }
}
filter {
  if [path] == "**path**" {
    grok {
      match => { "message" => "%{WORD:username} %{INT:number} %{TIMESTAMP_ISO8601:timestamp}" }
    }
  }
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "index1"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    if [path] == "**path**" {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index2"
      }
    } else {
      elasticsearch {
        hosts => ["localhost:9200"]
        manage_template => false
        index => "index1"
      }
    }
  }
}
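The underlying reason, worth stating explicitly, is that Logstash concatenates every config file it loads into a single pipeline by default, so events from the file input were also flowing through the Beats outputs and vice versa. If keeping separate files is preferred, an alternative is to declare one pipeline per file in pipelines.yml; a minimal sketch (the file paths below are assumptions, not taken from the original post):

# pipelines.yml: run each config as an isolated pipeline so events don't mix
- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/beats.conf"
- pipeline.id: numberlog
  path.config: "/etc/logstash/conf.d/numberlog.conf"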

How to map array inside message in Logstash HTTP Output

I am using Logstash to update-by-query existing Elasticsearch documents with an additional field that contains aggregate values extracted from a PostgreSQL table.
I use the elasticsearch output to load one index using document_id, and the http output to update another index that has a different document_id, but I am receiving errors:
[2023-02-08T17:58:12,086][ERROR][logstash.outputs.http ][main][b64f19821b11ee0df1bd165920785876cd6c5fab079e27d39bb7ee19a3d642a4] [HTTP Output Failure] Encountered non-2xx HTTP code 400 {:response_code=>400, :url=>"http://localhost:9200/medico/_update_by_query", :event=>#LogStash::Event:0x19a14c08}
This is my pipeline configuration:
input {
  jdbc {
    # Postgres jdbc connection string to our database, mydb
    jdbc_connection_string => "jdbc:postgresql://handel:5432/mydb"
    statement_filepath => "D:\ProgrammiUnsupported\logstash-7.15.2\config\nota_sede.sql"
  }
}
filter {
  aggregate {
    task_id => "%{idCso}"
    code => "
      map['idCso'] = event.get('idCso')
      map['noteSede'] ||= []
      map['noteSede'] << {
        'id' => event.get('idNota'),
        'tipo' => event.get('tipoNota'),
        'descrizione' => event.get('descrizione'),
        'data' => event.get('data'),
        'dataInizio' => event.get('dataInizio'),
        'dataFine' => event.get('dataFine')
      }
      event.cancel()"
    push_previous_map_as_event => true
    timeout => 60
    timeout_tags => ['_aggregatetimeout']
  }
}
output {
  stdout { codec => rubydebug { metadata => true } }
  # this works
  elasticsearch {
    hosts => "https://localhost:9200"
    document_id => "STRUTTURA_%{idCso}"
    index => "struttura"
    action => "update"
    user => "user"
    password => "password"
    ssl => true
    cacert => "/usr/share/logstash/config/ca.crt"
  }
  http {
    url => "http://localhost:9200/medico/_update_by_query"
    user => "elastic"
    password => "changeme"
    http_method => "post"
    format => "message"
    content_type => "application/json"
    message => '{
      "query": {
        "term": {
          "idCso": "%{idCso}"
        }
      },
      "script": {
        "source": "ctx._source.noteSede=params.noteSede",
        "lang": "painless",
        "params": {
          "noteSede": "%{noteSede}"
        }
      }
    }'
  }
}
The stdout output shows me the docs sent to the output, like this:
{
  "query" => {
    "term" => {
      "idCso" => "859119"
    }
  },
  "script" => {
    "source" => "ctx._source.noteSede=params.noteSede",
    "lang" => "painless",
    "params" => {
      "noteSede" => "{dataFine=null, dataInizio=2020-02-13, descrizione=?, tipo=DB, id=6390644, data=2020-02-13 12:26:58.409},{dataFine=null, dataInizio=2020-02-13, descrizione=?, tipo=DE, id=6390645, data=2020-02-13 12:26:58.41}"
    }
  }
}
How can I put the noteSede array field into the message for _update_by_query?
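For what it's worth, %{noteSede} stringifies the array using Ruby's default formatting, which is why params ends up as the "{dataFine=null, ...}" string above instead of JSON. One hedged workaround (a sketch, not taken from the original post; the field name noteSedeJson is made up for illustration) is to serialize the array to a JSON string in a ruby filter and reference that field in the message template:

filter {
  ruby {
    # Serialize the aggregated array to a JSON string so it can be embedded
    # verbatim in the http output's message body.
    init => "require 'json'"
    code => "event.set('noteSedeJson', event.get('noteSede').to_json)"
  }
}

In the http message the field would then be referenced without surrounding quotes, e.g. "noteSede": %{noteSedeJson}, so the substituted value stays valid JSON.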

Logstash Multiple index based on multiple path

I'm using the following configuration file for Logstash to create multiple indices, but they are not visible in Kibana. Logs are parsed, but the indices are not created. What do I need to change for this to work?
input {
  stdin {
    type => "stdin-type"
  }
  file {
    tags => ["prod"]
    type => ["json"]
    path => ["C:/Users/DELL/Downloads/log/prod/*.log"]
  }
  file {
    tags => ["dev"]
    type => ["json"]
    path => ["C:/Users/DELL/Downloads/log/test/*.log"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  if "prod" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => ["prod-log"]
    }
  }
  if "dev" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => ["dev-log"]
    }
  }
}
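No answer is recorded here, but one common culprit in this kind of setup is that the file input has already read the files and remembers its position in the sincedb, so no new events (and therefore no indices) are produced on subsequent runs. A hedged debugging sketch for Windows, using the file input's start_position and sincedb_path options (NUL is the Windows null device, so the read position is never persisted; this is for troubleshooting only):

file {
  tags => ["prod"]
  type => ["json"]
  path => ["C:/Users/DELL/Downloads/log/prod/*.log"]
  start_position => "beginning"
  # Discard the sincedb so the files are re-read on every run (debugging only).
  sincedb_path => "NUL"
}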

Logstash not replacing "%type" with value

Hello, I have this configuration for a Logstash instance running on my computer:
input {
  exec {
    command => "powershell -executionpolicy unrestricted -f scripts/windows/process.ps1 command logstash"
    interval => 30
    type => "process_data"
    codec => line
    tags => [ "logstash" ]
  }
}
output {
  if "sometype-logs" in [tags] {
    elasticsearch {
      action => "index"
      doc_as_upsert => true
      index => "sometype-logs-%{+YYYY.MM.dd}"
      hosts => "localhost:9200"
      template_overwrite => true
    }
  } else {
    elasticsearch {
      action => "index"
      doc_as_upsert => true
      index => "%{type}"
      hosts => "localhost:9200"
      template_overwrite => true
    }
  }
}
When displaying the indexes, I see one literally named "%type".
Why is the index name "%type" and not "process_data"?
Probably just a syntax issue. To use a field of the data, you must use this syntax:
%{[somefield]}
(see example on this documentation page)
So, in your case, try this:
"%{[type]}"
in place of
"%{type}"

Issue in renaming Json parsed field in Logstash

I am parsing a JSON log file in Logstash. There is a field named @person.name. I tried to rename this field before sending it to Elasticsearch. I also tried to remove the field, but I couldn't remove or delete it, and because of that my data is not getting indexed in Elasticsearch.
Error recorded in Elasticsearch:
MapperParsingException[Field name [@person.name] cannot contain '.']
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:276)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:196)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:308)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:221)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:138)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:119)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:100)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:435)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:458)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)
My Logstash config
input {
  beats {
    port => 11153
  }
}
filter {
  if [type] == "person_get" {
    ## Parsing JSON input with the JSON filter..
    json {
      source => "message"
    }
    mutate {
      rename => { "@person.name" => "@person-name" }
      remove_field => [ "@person.name" ]
    }
    fingerprint {
      source => ["ResponseTimestamp"]
      target => "fingerprint"
      key => "78787878"
      method => "SHA1"
      concatenate_sources => true
    }
  }
}
output {
  if [type] == "person_get" {
    elasticsearch {
      index => "logstash-person_v1"
      hosts => ["xxx.xxx.xx:9200"]
      document_id => "%{fingerprint}" # !!! prevent duplication
    }
    stdout {
      codec => rubydebug
    }
  }
}
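No accepted answer is shown here, but since this Elasticsearch version rejects dots in field names, one commonly suggested alternative to mutate/rename is the de_dot filter, which rewrites the offending field name right after the json filter. A sketch, assuming the logstash-filter-de_dot plugin is installed (it is not bundled by default and may need bin/logstash-plugin install logstash-filter-de_dot):

filter {
  de_dot {
    # Replace the dot in the field name, e.g. "@person.name" -> "@person_name".
    fields => ["@person.name"]
    separator => "_"
  }
}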
