[ERROR][logstash.outputs.jdbc ] Unknown setting 'driver_library' for jdbc - logstash

I want to add my CSV file to PostgreSQL, but whenever I try to connect it shows this error:
[ERROR][logstash.outputs.jdbc ] Unknown setting 'driver_library' for jdbc
This is my Config.conf file:
input {
  file {
    path => "C:/Users/Desktop/Input.csv"
    start_position => "beginning"
    codec => plain
  }
}
filter {
  csv {
    separator => ","
    columns => ["Column","Metric","Source_Table","Output_Column_Alias","Method"]
  }
}
output {
  jdbc {
    connection_string => "jdbc:postgresql://hostname:5432/Database"
    username => "User"
    password => "Password"
    driver_library => "C:/Users/lib/postgresql-42.5.1.jar"
    driver_class => "org.postgresql.Driver"
    statement => "INSERT INTO CSV_to_Postgresql (Column,Metric,Source_Table,Output_Column_Alias,Method) VALUES (?, ?, ?, ?, ?)"
  }
}

Use driver_jar_path, not driver_library. The Elastic-supported plugins use jdbc_driver_library as the name of this option, but the jdbc output is a third-party-supported plugin, which uses different conventions.
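For reference, a minimal sketch of the corrected output block, with the option renamed as described above; the connection details and statement are carried over from the question. If the plugin expects statement as an array pairing the SQL with event field names (as the community jdbc output's README shows), it would look roughly like this:
output {
  jdbc {
    connection_string => "jdbc:postgresql://hostname:5432/Database"
    username => "User"
    password => "Password"
    # renamed from driver_library, which this plugin does not recognise
    driver_jar_path => "C:/Users/lib/postgresql-42.5.1.jar"
    driver_class => "org.postgresql.Driver"
    # the SQL plus the event fields bound to the ? placeholders, in order
    # (field names taken from the csv filter's columns above)
    statement => [ "INSERT INTO CSV_to_Postgresql (Column,Metric,Source_Table,Output_Column_Alias,Method) VALUES (?, ?, ?, ?, ?)",
                   "Column", "Metric", "Source_Table", "Output_Column_Alias", "Method" ]
  }
}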

Related

jdbc_static: Getting a SQL syntax error I didn't write

With Logstash I am trying to Extract some tables, Transform them locally on the Logstash machine, and then Load the result to Elasticsearch. The reason for this approach is the very limited computing power on the source server, a MariaDB.
I have tested the input {} separately; it works, so the connection to the MariaDB is sound.
I have tested the jdbc_static filter against a Microsoft SQL server, so Logstash has write privileges in its current environment.
I have tested the SQL syntax on the MariaDB server directly.
I'm running Logstash 6.8 and Java 8 (java version "1.8.0_211").
I have tried earlier versions of the MariaDB JDBC client (mariadb-java-client-2.4.2.jar, mariadb-java-client-2.2.6-sources, mariadb-java-client-2.3.0-sources).
My config file:
input {
  jdbc {
    jdbc_driver_library => "C:/Logstash/logstash-6.8.0/plugin/mariadb-java-client-2.4.2.jar"
    jdbc_driver_class => "Java::org.mariadb.jdbc.Driver"
    jdbc_connection_string => "jdbc:mariadb://xx.xx.xx"
    jdbc_user => "me"
    jdbc_password => "its secret"
    schedule => "* * * * *"
    statement => "SELECT unqualifiedversionid__ FROM AuditEventFHIR WHERE myUnqualifiedId = '0000134b-fc7f-4c3a-b681-8150068d6dbb'"
  }
}
filter {
  jdbc_static {
    loaders => [
      {
        id => "auditevent"
        query => "SELECT
          myUnqualifiedId
          ,unqualifiedversionid__
          ,type_
          FROM AuditEventFHIR
          where myUnqualifiedId = '0000134b-fc7f-4c3a-b681-8150068d6dbb'
          "
        local_table => "l_ae"
      }
    ]
    local_db_objects => [
      {
        name => "l_ae"
        index_columns => ["myUnqualifiedId"]
        columns => [
          ["myUnqualifiedId", "varchar(256)"],
          ["unqualifiedversionid__", "varchar(24)"],
          ["type_", "varchar(256)"]
        ]
      }
    ]
    local_lookups => [
      {
        id => "rawlogfile"
        query => "
          select myUnqualifiedId from l_ae
        "
        target => "sql_output"
      }
    ]
    jdbc_driver_library => "C:/Logstash/logstash-6.8.0/plugin/mariadb-java-client-2.4.2.jar"
    jdbc_driver_class => "Java::org.mariadb.jdbc.Driver"
    jdbc_connection_string => "jdbc:mariadb://xx.xx.xx.xx"
    jdbc_user => "me"
    jdbc_password => "its secret"
  }
}
output {
  stdout { codec => rubydebug }
}
I am getting this and several other errors, but I suspect fixing the first will fix the rest. The key point is that nowhere in my code do the words "LIMIT 1" appear:
[ERROR][logstash.filters.jdbc.readonlydatabase] Exception occurred when executing loader Jdbc query count {:exception=>"Java::JavaSql::SQLSyntaxErrorException: (conn=1490) You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '\"T1\" LIMIT 1' at line 8", :backtrace=>["org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(org/mariadb/jdbc/internal/util/exceptions/ExceptionMapper.java:242)", "org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(org/mariadb/jdbc/internal/util/exceptions/ExceptionMapper.java:171)", "org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(org/mariadb/jdbc/MariaDbStatement.java:248)", "org.mariadb.jdbc.MariaDbStatement.executeInternal(org/mariadb/jdbc/MariaDbStatement.java:338)", "org.mariadb.jdbc.MariaDbStatement.executeQuery(org/mariadb/jdbc/MariaDbStatement.java:512)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:425)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:292)"]}
The jdbc_static loader makes a hidden SQL query, select count(*) from table limit 1, to get a count when downloading the table. This query quotes identifiers with double quotes ("), and MariaDB does not accept that, UNLESS you add 'ANSI_QUOTES' to the sql_mode.
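For context, the hidden count query looks roughly like this (a reconstruction from the error message, which ends in '"T1" LIMIT 1', wrapped around the loader query from the config above); the double-quoted alias is exactly what MariaDB rejects without ANSI_QUOTES:
SELECT COUNT(*) FROM (
  -- the loader query from the jdbc_static filter above
  SELECT myUnqualifiedId, unqualifiedversionid__, type_
  FROM AuditEventFHIR
  WHERE myUnqualifiedId = '0000134b-fc7f-4c3a-b681-8150068d6dbb'
) AS "T1" LIMIT 1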
Batch command
SET GLOBAL sql_mode = 'ANSI_QUOTES'
Another option is to set the session to allow ANSI_QUOTES via the connection string:
jdbc_connection_string => "jdbc:mariadb://xx.xx.xx/databasename?sessionVariables=sql_mode=ANSI_QUOTES"

How to combine filters with Logstash?

I'm currently discovering Elasticsearch, Kibana, and Logstash with Docker (version 7.1.1). The three containers are running well.
I have some data files containing some lines like this one:
foo=bar type=alpha T=20180306174204527
My logstash.conf contains:
input {
  file {
    path => "/tmp/data/*.txt"
    start_position => "beginning"
  }
}
filter {
  kv {
    field_split => "\t"
    value_split => "="
  }
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
  stdout {
    codec => rubydebug
  }
}
I get this data:
{
    "host" => "07f3051a3bec",
    "foo" => "bar",
    "message" => "foo=bar\ttype=alpha\tT=20180306174204527",
    "T" => "20180306174204527",
    "@timestamp" => 2019-06-17T13:47:14.589Z,
    "path" => "/tmp/data/ucL12018_03_06.txt",
    "type" => "alpha",
    "@version" => "1"
}
The first step of the job is done.
Now I want to add a filter to transform the value of the key T into a timestamp:
{
    ...
    "T" => "2018-03-06T17:42:04.527Z",
    "@timestamp" => 2019-06-17T13:47:14.589Z,
    ...
}
I do not know how to do it. I tried to add a second filter just after the kv filter, but nothing changes when I add new files.
Add this filter after the kv filter:
date {
  match => [ "T", "yyyyMMddHHmmssSSS" ]
  target => "T"
}
The date filter will try to parse the field T using the provided pattern to create a date, which will be written to the T field (without a target it overwrites the @timestamp field by default).
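Putting the two together, the whole filter block becomes (a sketch combining the kv filter from the question with the date filter above; the pattern yyyyMMddHHmmssSSS matches values like 20180306174204527):
filter {
  kv {
    field_split => "\t"
    value_split => "="
  }
  date {
    # parse the raw value of T, e.g. 20180306174204527
    match => [ "T", "yyyyMMddHHmmssSSS" ]
    # write the parsed date back into T instead of overwriting @timestamp
    target => "T"
  }
}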

How to feed CSV to logstash with Dynamic Index mapping

Trying to feed Logstash a CSV for Elasticsearch indexing, I am facing a mapping error. The conf code uses autodetect_column_names so I don't have to feed in the column names. Also, I haven't created any index or mapping for the data from the dev console, and I am expecting Logstash to create the index and dynamic mapping at run time.
input {
  file {
    path => "/Users/amansingh/SELECT_______orca_OpID_as_op_id________t.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    autodetect_column_names => true
    convert => {
      "is_cancelled_2" => "boolean"
      "is_cancelled_14" => "boolean"
      "is_cancelled_7" => "boolean"
      "is_cancelled_30" => "boolean"
      "is_cancelled" => "boolean"
      "is_dispute" => "boolean"
      "is_return" => "boolean"
      "is_large_parcel" => "boolean"
      "is_managed" => "boolean"
    }
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "bit_prices"
    document_type => "doc"
  }
  stdout {}
}
error:
[2018-07-27T10:05:25,172][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"bit_prices", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x5568f8e6>], :response=>{"index"=>{"_index"=>"bit_prices", "_type"=>"doc", "_id"=>"C1wO3GQByymnO3qY9KTy", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [0] of different type, current_type [text], merged_type [ObjectMapper]"}}}}
The CSV file looks like:
op_id,is_cancelled_2,is_cancelled_14,is_cancelled_7,is_cancelled_30,revenue_adjustment_2,revenue_adjustment_7,revenue_adjustment_14,revenue_adjustment_30,cost_adjustment_2,cost_adjustment_7,cost_adjustment_14,cost_adjustment_30,order_date,is_cancelled,update_date,is_dispute,orcompletionstatus,is_return,is_large_parcel,is_managed
1627151503,0,0,0,0,0.0000,0.0000,0.0000,0.0000,0.0000,17.5100,17.5100,17.5100,2018-02-10 13:19:19.000,0,2018-02-14 02:00:41.003,0,3,0,0,0
1627151503,0,0,0,0,0.0000,0.0000,0.0000,0.0000,0.0000,17.5100,17.5100,17.5100,2018-02-10 13:19:19.000,0,2018-02-14 02:00:41.003,0,3,0,0,0

Data missed in Logstash?

Data is missed a lot in Logstash version 5.0.
Is it a serious bug? I have changed the config file so many times and it is useless; data loss happens again and again. How do I use Logstash to collect log events properly?
Any reply will be appreciated.
Logstash is all about reading logs from a specific location; based on the information you are interested in, you can create an index in Elasticsearch, and other outputs are also possible.
Example of a Logstash conf:
input {
  file {
    # PLEASE SET APPROPRIATE PATH WHERE LOG FILE AVAILABLE
    #type => "java"
    type => "json-log"
    path => "d:/vox/logs/logs/vox.json"
    start_position => "beginning"
    codec => json
  }
}
filter {
  if [type] == "json-log" {
    grok {
      match => { "message" => "UserName:%{JAVALOGMESSAGE:UserName} -DL_JobID:%{JAVALOGMESSAGE:DL_JobID} -DL_EntityID:%{JAVALOGMESSAGE:DL_EntityID} -BatchesPerJob:%{JAVALOGMESSAGE:BatchesPerJob} -RecordsInInputFile:%{JAVALOGMESSAGE:RecordsInInputFile} -TimeTakenToProcess:%{JAVALOGMESSAGE:TimeTakenToProcess} -DocsUpdatedInSOLR:%{JAVALOGMESSAGE:DocsUpdatedInSOLR} -Failed:%{JAVALOGMESSAGE:Failed} -RecordsSavedInDSE:%{JAVALOGMESSAGE:RecordsSavedInDSE} -FileLoadStartTime:%{JAVALOGMESSAGE:FileLoadStartTime} -FileLoadEndTime:%{JAVALOGMESSAGE:FileLoadEndTime}" }
      add_field => ["STATS_TYPE", "FILE_LOADED"]
    }
  }
}
filter {
  mutate {
    # here converting data type
    convert => { "FileLoadStartTime" => "integer" }
    convert => { "RecordsInInputFile" => "integer" }
  }
}
output {
  elasticsearch {
    # PLEASE CONFIGURE ES IP AND PORT WHERE LOG DOCs HAS TO PUSH
    document_type => "json-log"
    hosts => ["localhost:9200"]
    # action => "index"
    # host => "localhost"
    index => "locallogstashdx_new"
    # workers => 1
  }
  stdout { codec => rubydebug }
  #stdout { debug => true }
}
To know more you can go through the many available resources, like
https://www.elastic.co/guide/en/logstash/current/first-event.html

logstash configuration grok parse timestamp

I am trying to parse
[7/1/05 13:41:00:516 PDT]
This is the grok configuration I have written for it:
\[%{DD/MM/YY HH:MM:SS:S Z}\]
With the date filter:
input {
  file {
    path => "logstash-5.0.0/bin/sta.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match =>" \[%{DATA:timestamp}\] "
  }
  date {
    match => ["timestamp","DD/MM/YY HH:MM:SS:S ZZZ"]
  }
}
output {
  stdout{codec => "json"}
}
Above is the configuration I have used.
And consider this as my sta.log file content:
[7/1/05 13:41:00:516 PDT]
Getting this error:
[2017-01-31T12:37:47,444][ERROR][logstash.agent ] fetched an invalid config {:config=>"input {\nfile {\npath => \"logstash-5.0.0/bin/sta.log\"\nstart_position => \"beginning\"\n}\n}\nfilter {\ngrok {\nmatch =>\"\\[%{DATA:timestamp}\\]\"\n}\ndate {\nmatch => [\"timestamp\"=>\"DD/MM/YY HH:MM:SS:S ZZZ\"]\n}\n}\noutput {\nstdout{codec => \"json\"}\n}\n\n", :reason=>"Expected one of #, {, ,, ] at line 12, column 22 (byte 184) after filter {\ngrok {\nmatch =>\"\\[%{DATA:timestamp}\\]\"\n}\ndate {\nmatch => [\"timestamp\""}
Can anyone help here?
You forgot to specify the input field for your grok filter. A correct configuration would look like this:
input {
  file {
    path => "logstash-5.0.0/bin/sta.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => {"message" => "\[%{DATA:timestamp} PDT\]"}
  }
  date {
    match => ["timestamp","dd/MM/yy HH:mm:ss:SSS"]
  }
}
output {
  stdout{codec => "json"}
}
For further reference, check out the grok documentation.
