ConfigurationError in Logstash - logstash

When I run this configuration file:
input {
  file {
    path => "/tmp/linuxServerHealthReport.csv"
    start_position => "beginning"
    sincedb_path => "/home/infra/logstash-7.14.1/snowdb/health_check"
  }
  codec => multiline {
    pattern => "\""
    negate => true
    what => previous
  }
}
filter {
  csv {
    columns => ["Report_timestamp","Hostname","OS_Relese","Server_Uptime","Internet_Status","Current_CPU_Utilization","Current_Memory_Utilization","Current_SWAP_Utilization","FS_Utilization","Inode_Utilization","FS_Read_Only_Mode_Status","Disk_Multipath_Status","Besclient_Status","Antivirus_Status","Cron_Service_Status","Nagios_Status","Nagios_Heartbest_Status","Redhat_Cluster_Status"]
    separator => ","
    skip_header => true
  }
  mutate {
    remove_field => ["path", "host"]
  }
  skip_empty_columns => true
  skip_empty_row => true
}
# quote_char => "'"
output {
  stdout { codec => rubydebug }
}
I get this error:
[2021-09-22T15:57:04,929][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "{" at line 7, column 9 (byte 226) after input {\n file {\n\t\tpath => "/tmp/linuxServerHealthReport.csv"\n start_position => "beginning"\n sincedb_path => "/home/imiinfra/logstash-7.14.1/snowdb/health_check"\n }\n\t\tcodec ", :backtrace=>["/home/imiinfra/logstash-7.14.1/logstash-core/lib/logstash/compiler.rb:32:in compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in initialize'", "/home/imiinfra/logstash-7.14.1/logstash-core/lib/logstash/java_pipeline.rb:47:in initialize'", "/home/imiinfra/logstash-7.14.1/logstash-core/lib/logstash/pipeline_action/create.rb:52:in execute'", "/home/imiinfra/logstash-7.14.1/logstash-core/lib/logstash/agent.rb:391:in block in converge_state'"]}

You have to work on your formatting, but this is what I have reconstructed from the question. The main issue is with the parameters of the file input: you put the codec outside of the file block for some reason. The other issue is the csv options skip_empty_columns and skip_empty_rows (note the plural), which were also placed outside of the csv block.
So I did a little bit of formatting and fixed those issues, and it should work now:
input {
  file {
    path => "/tmp/linuxServerHealthReport.csv"
    codec => multiline {
      pattern => "\""
      negate => true
      what => previous
    }
    start_position => "beginning"
    sincedb_path => "/home/infra/logstash-7.14.1/snowdb/health_check"
  }
}
filter {
  csv {
    columns => ["Report_timestamp","Hostname","OS_Relese","Server_Uptime","Internet_Status","Current_CPU_Utilization","Current_Memory_Utilization","Current_SWAP_Utilization","FS_Utilization","Inode_Utilization","FS_Read_Only_Mode_Status","Disk_Multipath_Status","Besclient_Status","Antivirus_Status","Cron_Service_Status","Nagios_Status","Nagios_Heartbest_Status","Redhat_Cluster_Status"]
    separator => ","
    skip_header => true
    skip_empty_columns => true
    skip_empty_rows => true
  }
  mutate {
    remove_field => ["path", "host"]
  }
}
output {
  stdout { codec => rubydebug }
}
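For reference, this is what the multiline codec is doing once it sits inside the file block (same options as above, just annotated):
input {
  file {
    path => "/tmp/linuxServerHealthReport.csv"
    start_position => "beginning"
    sincedb_path => "/home/infra/logstash-7.14.1/snowdb/health_check"
    codec => multiline {
      pattern => "\""    # a line "matches" when it contains a double quote
      negate => true     # act on lines that do NOT match, i.e. lines without a quote
      what => previous   # append those lines to the previous line's event
    }
  }
}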

Related

Grok json parser with \n in it

I have a JSON log message as below:
{\n \"jobId\": \"12030845\",\n \"publicationId\": \"hpg01\",\n \"startDateTime\": \"2022-08-03T14:38:49.833\",\n \"endDateTime\": \"2022-08-03T14:48:55.420\",\n \"jobName\": \"import\",\n \"numberInputDocs\": \"12925\",\n \"numberOutputDocs\": \"12925\",\n \"numberUCMDocs\": \"1159\",\n \"state\": \"success\",\n \"numberDocErrors\": \"0\"\n}
I need to parse/convert this into key/value pairs. I am using Logstash and grok to parse it.
My logstash.conf is as follows:
input {
  file {
    codec => multiline {
      pattern => '^\n'
      negate => true
      what => previous
    }
    path => "C:/logs/gaimport.log"
  }
}
filter {
  mutate {
    replace => [ "message", "%{message}}" ]
    gsub => [ 'message', '\n', '' ]
  }
  if [message] =~ /^{.*}$/ {
    json { source => message }
  }
}
output {
  stdout { codec => rubydebug }
}
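If the whole message is that JSON document, an alternative sketch (the target name is my own choice, not from the question) is to let the json filter parse it directly and drop the raw text only when parsing succeeds:
filter {
  json {
    source => "message"          # field that holds the JSON text
    target => "job"              # put the parsed keys under [job] instead of the event root
    remove_field => ["message"]  # common option; only applied if the filter succeeds
  }
}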

Grok pattern for log with pipe separator

I'm trying to figure out the grok pattern for the log line below.
01/02AVDC190001|00001|4483850000152971|DATAPREP|PREPERATION/ENRICHEMENT |020190201|20:51:52|SCHED
What I've tried so far is:
input {
  file {
    path => "C:/Elasitcity/Logbase/July10_Logs_SDC/*.*"
    start_position => "beginning"
    sincedb_path => "NUL"
  }
}
filter {
  mutate {
    gsub => ["message", "\|", " "]
  }
  grok {
    match => ["message", "%{NUMBER:LOGID} %{NUMBER:LOGPHASE} %{NUMBER:LOGID} %{WORD:LOGEVENT} %{WORD:LOGACTIVITY} %{DATE_US: DATE} %{TIME:LOGTIME}"]
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "grokcsv"
    document_type => "gxs"
  }
  stdout {}
}
I'm also wondering if it's possible to combine the date and time, since they are separated by a pipe character, but that's not the primary question.
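Since the line is pipe-delimited, a dissect filter may be simpler than replacing the pipes and using grok; the field names below are my own guesses, and the mutate shows one way to combine the date and time fields:
filter {
  dissect {
    # split the pipe-delimited line into named fields (names are hypothetical)
    mapping => {
      "message" => "%{logid}|%{seq}|%{cardno}|%{logevent}|%{logactivity}|%{logdate}|%{logtime}|%{logstate}"
    }
  }
  mutate {
    # combine date and time into a single field for later date parsing
    add_field => { "log_timestamp" => "%{logdate} %{logtime}" }
  }
}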

Logstash input line by line

How can I read files in Logstash line by line using a codec?
I tried the configuration below, but it is not working:
file {
  path => "C:/DEV/Projects/data/*.csv"
  start_position => "beginning"
  codec => line {
    format => "%{[data]}"
  }
An example configuration with Elasticsearch in the output:
input {
  file {
    path => "C:/DEV/Projects/data/*.csv"
    start_position => beginning
  }
}
filter {
  csv {
    columns => [
      "COLUMN_1",
      "COLUMN_2",
      "COLUMN_3",
      .
      .
      "COLUMN_N"
    ]
    separator => ","
  }
  mutate {
    convert => {
      "COLUMN_1" => "float"
      "COLUMN_4" => "float"
      "COLUMN_6" => "float"
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    action => "index"
    index => "test_index"
  }
}
For the filter, see:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-csv.html
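For the original question about the line codec: the file input already emits one event per line by default, so the codec is usually unnecessary; if you want it explicitly anyway, a minimal sketch (same paths as in the question) nests it inside the file block:
input {
  file {
    path => "C:/DEV/Projects/data/*.csv"
    start_position => "beginning"
    sincedb_path => "NUL"                    # Windows equivalent of /dev/null, so files are re-read on every run
    codec => line { charset => "UTF-8" }     # explicit line codec; optional, since line splitting is the default
  }
}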

String conversion of special character

A JSON parsing exception is thrown from Logstash whenever ® is encountered.
I need to convert ® into its equivalent HTML-encoded value and then push it into ES through Logstash.
I found a few articles describing how to convert HTML codes into their equivalent symbols, but I am looking for the reverse case.
If I pass "®" then it should return "&reg;", but if "&reg;" is passed it should not be re-encoded and should still return "&reg;".
Update:
Below is the script I am using to push data into ES:
input {
  file {
    path => ["C:/input.json"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  mutate {
    replace => [ "message", "%{message}" ]
    gsub => [ 'message', '\n', '' ]
  }
  json { source => message }
  mutate {
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "index"
    document_type => "type"
    document_id => "%{id}"
  }
  stdout { codec => rubydebug }
}
How can I solve this issue?
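One way to handle the encoding itself (a sketch, not tested against this exact pipeline) is a mutate gsub that rewrites the raw symbol into its HTML entity; text that already contains &reg; is left untouched, which matches the requirement above:
filter {
  mutate {
    # replace the raw registered-trademark symbol with its HTML entity;
    # strings that already contain "&reg;" are not modified
    gsub => [ "message", "®", "&reg;" ]
  }
}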

Logstash input filename as output elasticsearch index

Is there a way of having the filename of the file being read by Logstash as the index name for the output into Elasticsearch?
I am using the following config for logstash.
input {
  file {
    path => "/logstashInput/*"
  }
}
output {
  elasticsearch {
    index => "FromfileX"
  }
}
I would like to be able to put a file e.g. log-from-20.10.2016.log and have it indexed into the index log-from-20.10.2016. Does the logstash input plugin "file" produce any variables for use in the filter or output?
Yes, you can use the path field for that and grok it to extract the filename into an index field:
input {
  file {
    path => "/logstashInput/*"
  }
}
filter {
  grok {
    match => ["path", "(?<index>log-from-\d{2}\.\d{2}\.\d{4})\.log$"]
  }
}
output {
  elasticsearch {
    index => "%{index}"
  }
}
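One caveat, not part of the original answer: if the grok pattern does not match a given path, the index field is never set and the literal text %{index} would become the index name, so a conditional fallback in the output can help:
output {
  if [index] {
    elasticsearch { index => "%{index}" }
  } else {
    # hypothetical catch-all index for files whose names did not match the grok pattern
    elasticsearch { index => "unmatched-filenames" }
  }
}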
Another approach derives the index name from the file path with a ruby filter:
input {
  file {
    path => "/home/ubuntu/data/gunicorn.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => {
      "message" => "%{USERNAME:u1} %{USERNAME:u2} \[%{HTTPDATE:http_date}\] \"%{DATA:http_verb} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:android_client}\""
    }
    remove_field => ["message"]
  }
  date {
    match => ["http_date", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
  ruby {
    code => "event.set('index_name', event.get('path').split('/')[-1].gsub('.log',''))"
  }
}
output {
  elasticsearch {
    hosts => ["0.0.0.0:9200"]
    index => "%{index_name}-%{+yyyy-MM-dd}"
    user => "*********************"
    password => "*****************"
  }
  stdout { codec => rubydebug }
}
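A small caveat for both approaches (my note, not from the original answers): Elasticsearch index names must be lowercase, so if the filenames can contain uppercase characters it is safer to normalise the derived field before the output:
filter {
  mutate {
    # Elasticsearch rejects index names that contain uppercase letters
    lowercase => [ "index_name" ]
  }
}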
