Logstash JDBC class cannot be found - logstash

Can you help me solve this problem?
I'm using:
elasticsearch-7.4.2
kibana-7.4.2
logstash-7.4.2
Windows 10
Error: com.mysql.cj.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::PluginLoadingError
Stack: D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:190:in `open_jdbc_connection'
D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:253:in `execute_statement'
D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:309:in `execute_query'
D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:281:in `run'
D:/elasticsearch/logstash-7.4.2/logstash-core/lib/logstash/java_pipeline.rb:314:in `inputworker'
D:/elasticsearch/logstash-7.4.2/logstash-core/lib/logstash/java_pipeline.rb:306:in `block in start_input'
[2019-11-28T15:08:50,858][ERROR][logstash.javapipeline ][main] A plugin had an unrecoverable error. Will restart this plugin.
My conf:
input {
  jdbc {
    jdbc_driver_library => "D:\elasticsearch\mysql-connector-java-8.0.18\mysql-connector-java-8.0.18.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sakila"
    jdbc_user => ""
    jdbc_password => "**"
    statement => "SELECT * FROM actor"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    action => "index"
    index => "actor"
    document_type => 'text'
    document_id => '%{id}'
  }
}

Add the relevant MySQL jar file to [logstash_folder]\logstash-core\lib\jars and provide only the jar name in the config file, as follows:
jdbc_driver_library => "mysql-connector-java-8.0.18.jar"

Related

LIMIT keyword is unknown by Teradata Database

I am using the jdbc_static plugin with a simple select query:
jdbc_static {
  id => "JDBC_STATIC_APPLICATION_MIND_MAPPING"
  loaders => [
    {
      id => "REMOTE_MAPPING"
      query => "select field1, field2 FROM DB.view"
      local_table => "LOCAL__MAPPING_COLUMNS"
    }
  ]
  ...
  jdbc_user => "USR"
  jdbc_password => "PW"
  jdbc_connection_string => "jdbc:teradata://SCH/database=DB"
  jdbc_driver_class => "com.teradata.jdbc.TeraDriver"
  ...
I get the data from a Teradata DB, but the count query executed by the plugin is causing me an issue.
The error:
[2022-09-12T11:45:36,171+02:00][ERROR][logstash.filters.jdbc.readonlydatabase] Exception occurred when executing loader Jdbc query count {:exception=>"Java::JavaSql::SQLException: [Teradata Database] [TeraJDBC 16.00.00.23] [Error 3706] [SQLState 42000] Syntax error: expected something between the word 'T1' and the 'LIMIT' keyword.", :backtrace=>["com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(com/teradata/jdbc/jdbc_4/util/ErrorFactory.java:309)", "com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(com/teradata/jdbc/jdbc_4/statemachine/ReceiveInitSubState.java:103)", "com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(com/teradata/jdbc/jdbc_4/statemachine/StatementReceiveState.java:311)", "com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(com/teradata/jdbc/jdbc_4/statemachine/StatementReceiveState.java:200)"
My Logstash version is 6.5.4.
Do you have a solution for that issue?
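For context: judging from the error text, the loader's internal count step wraps the configured query in a derived table (the "T1" in the message) and appends a LIMIT clause, roughly like the first statement below; Teradata has no LIMIT keyword and uses TOP (or SAMPLE) instead. The generated SQL shown here is an assumption reconstructed from the error message, not taken from the plugin source:
-- What the count step appears to generate (rejected by Teradata):
SELECT COUNT(*) FROM (select field1, field2 FROM DB.view) T1 LIMIT 1
-- The equivalent Teradata syntax would use TOP instead of LIMIT:
SELECT TOP 1 COUNT(*) FROM (select field1, field2 FROM DB.view) T1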

How to restart multiple conf files in logstash

I have 16 conf files, all of them scheduled to run every day at 09:05 AM. Today these files could not run at the intended time. After I fixed the problem, I tried to restart Logstash, but the conf files are not able to generate indices.
Example dash_KPI_1.conf file:
input {
  jdbc {
    jdbc_driver_library => "/var/OJDBC-Full/ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:#a/b"
    jdbc_user => "KIBANA"
    jdbc_password => "pass"
    statement => "
      SELECT /*+ PARALLEL(16) */
      * from
      dual"
    # jdbc_paging_enabled => "true"
    # jdbc_page_size => "50000"
    type => "dash_kpi_1"
    schedule => "05 09 * * *"
  }
}
output {
  if [type] == "dash_kpi_1" {
    # stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200"]
      index => "dash_kpi_1-%{+YYYY.ww}"
      user => "elastic"
      password => "pass2"
    }
  }
}
How I start and stop Logstash:
systemctl stop logstash.service
systemctl start logstash.service -r
What I have tried:
/usr/share/logstash/bin/logstash -f dash_KPI_1.conf
How can I restart these 16 conf files and make them generate indices as intended in the first place?
I see you are creating the index weekly. If you want to create it daily, you need to change the index pattern to "dash_kpi_1-%{+YYYY.MM.dd}".
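If the goal is one index per day, only the index line changes (a sketch based on the output block above):
output {
  if [type] == "dash_kpi_1" {
    elasticsearch {
      hosts => ["http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200"]
      # %{+YYYY.ww} rolls over weekly; %{+YYYY.MM.dd} creates a new index each day
      index => "dash_kpi_1-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "pass2"
    }
  }
}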

MSSQL JDBC driver library path is not recognized when using ~/ while running Logstash manually

Currently trying to populate the employee index with the below settings:
CONF
input {
  jdbc {
    jdbc_driver_library => "~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://SERVER;user=USER;password=PASSWORD"
    jdbc_user => "DB_USER"
    jdbc_password => "DB_PASSWORD"
    jdbc_validate_connection => true
    jdbc_validation_timeout => -1
    statement => "SELECT * FROM [dbo].Employee ORDER BY ID"
    type => "employee"
  }
}
filter {
}
output {
}
NOTE: the filter and output sections of the conf file are purposely blank
LINUX COMMAND
sudo /usr/share/logstash/bin/logstash -f /home/ubuntu/Employee-pipeline.conf --path.settings /etc/logstash/ --path.data /var/lib/logstash_new
RESULT
It looks like Logstash does not know about, or does not have access to, ~/sqljdbc...*.jar.
I also confirmed that mssql-jdbc-6.2.1.jre8.jar exists.
However, when I changed the path to /home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar, it ran successfully.
So ~/ should be the same as /home/ubuntu.
This started to occur after upgrading our Elastic Stack from v5.5 to v5.6. Also, note that this does not happen when we run the same conf file through the Logstash service.
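A likely explanation (my assumption; the post itself only reports the behavior): tilde expansion is performed by the shell before a command runs, never inside a quoted string in a conf file, so Logstash hands the literal path ~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar to the JVM, which finds no such file; under sudo, any expansion that does happen would also resolve against root's home rather than /home/ubuntu. Spelling out the absolute path, as the poster found, avoids both problems:
jdbc_driver_library => "/home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar"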

logstash-input-jdbc: “Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}”

I am trying to access a MySQL service with Logstash. I installed logstash-input-jdbc (/opt/logstash/bin/logstash-plugin install logstash-input-jdbc) and created /etc/logstash/conf.d/sample.conf:
input {
  lumberjack {
    ...
  }
  jdbc {
    type => "jdbc_hfc"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_validate_connection => true
    jdbc_driver_libary => "mysql-connector-java-5.1.40-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM hfc"
    schedule => "00 07 * * *"
  }
  file {
    ...
  }
}
output {
  if [type] == "jdbc_hfc" {
    elasticsearch {
      protocl => http
      hosts => ["localhost:9200"]
      index => "logstash-jdbc-hfc-%{+YYYY.MM.dd}"
    }
  }
}
When I execute the configtest (/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/sample.conf), I get the following error:
Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}
The given configuration is invalid. Reason: Something is wrong with your configuration. {:level=>:fatal}
When I comment out the jdbc_driver_libary line, the configtest returns:
Configuration OK
But when I then executed the sample.conf file, Logstash returned the following error:
Pipeline aborted due to error {:exception=>"LogStash::ConfigurationError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/plugin_mixins/jdbc.rb:159:in `prepare_jdbc_connection'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/inputs/jdbc.rb:187:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:330:in `start_inputs'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:329:in `start_inputs'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:180:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
Where is my mistake? What can I do to resolve this problem?
Thanks a lot and best regards.
PS: If you need more information, please ask.
The first error says it all:
Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}
So you just have a typo in your configuration:
jdbc_driver_libary => "mysql-connector-java-5.1.40-bin.jar"
should read
jdbc_driver_library => "mysql-connector-java-5.1.40-bin.jar"
(note the added "r": libary → library)
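After adding the missing "r", re-running the same configtest from the question should come back clean:
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/sample.conf
Configuration OK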

How to see the requests sent by LogStash to the elasticsearch output in Fiddler?

I have LS_JAVA_OPTS = -DproxySet=true -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888
And yet, I see no traffic to my elasticsearch node from logstash in Fiddler.
I know my elasticsearch is up and running. When I curl it, Fiddler clearly shows the requests, so it is something about jruby that does not route requests through Fiddler.
I am not calling jruby directly. Rather I use the bin\logstash.bat script.
Appendix
My conf file:
input {
  file {
    path => 'c:/log/bje-Error.log'
    sincedb_path => "NUL"
    codec => plain {
      charset => "ISO-8859-1"
    }
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
    start_position => beginning
    ignore_older => 0
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{BASE10NUM:thread:int}] %{WORD:machine}:%{WORD:service} \[%{BASE10NUM:localId:int}?:%{UUID:logId}?:(?<jobKind>[^:]+)?:%{BASE10NUM:jobDefinitionId:int}? %{WORD:namespace}?:%{WORD:job}?:(?<customCtx>[^\]]*)\] %{LOGLEVEL:level} %{NOTSPACE:logger} - (?<text>(?m:.*))" }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    document_type => 'logs_bje'
    hosts => ["ncesearch01"]
  }
}
Testing in PowerShell:
PS E:\logstash-2.3.2\bin> (ConvertFrom-Json((Invoke-WebRequest "http://ncesearch01:9200/logstash-*/_count").Content)).count
24666
PS E:\logstash-2.3.2\bin> .\logstash.bat -f C:\dayforce\DayforceDEV\elk\logstach.conf
LS_JAVA_OPTS was set to [-DproxySet=true -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888]. This will be appended to the JAVA_OPTS [ -XX:HeapDumpPath="$LS_HOME/heapdump.hprof"]
io/console not supported; tty will not be manipulated
Settings: Default pipeline workers: 12
Pipeline main started
{
"message" => "2016-05-02 16:00:05.7079 [111] CANWS212:MyBJE [2251:e2737eeb-40d6-4b0e-9608-75ee3de894d3:ScheduledInstance:16 DFUnitTest:BillingDataCollectionJob:] ERROR
SharpTop.Engine.BackgroundJobs.Billing.BillingDataCollectionJob - The client database version is not defined in DFDatabaseIdentification \r",
"#version" => "1",
"#timestamp" => "2016-05-03T03:40:50.531Z",
"path" => "c:/log/bje-Error.log",
"host" => "CANWS212",
"timestamp" => "2016-05-02 16:00:05.7079",
"thread" => 111,
"machine" => "CANWS212",
"service" => "MyBJE",
"localId" => 2251,
"logId" => "e2737eeb-40d6-4b0e-9608-75ee3de894d3",
"jobKind" => "ScheduledInstance",
"jobDefinitionId" => 16,
"namespace" => "DFUnitTest",
"job" => "BillingDataCollectionJob",
"level" => "ERROR",
"logger" => "SharpTop.Engine.BackgroundJobs.Billing.BillingDataCollectionJob",
"text" => "The client database version is not defined in DFDatabaseIdentification \r"
}
^CTerminate batch job (Y/N)? ←[33mSIGINT received. Shutting down the agent. {:level=>:warn}←[0m
stopping pipeline {:id=>"main"}
Pipeline main has been shutdown
The signal HUP is in use by the JVM and will not work correctly on this platform
^CPS E:\logstash-2.3.2\bin> (ConvertFrom-Json((Invoke-WebRequest "http://ncesearch01:9200/logstash-*/_count").Content)).count
24667
PS E:\logstash-2.3.2\bin>
As you can see, http://ncesearch01:9200/logstash-*/_count returns an incremented count, so running Logstash did send a request to Elasticsearch. However, the request bypassed Fiddler, despite LS_JAVA_OPTS.
I can think of some possible reasons for this, although I have not tried them myself, so this answer is more of a discussion, sorry.
1. You may need a Linux OS instead of Windows: LS_JAVA_OPTS has historically not been applied consistently on Windows, and I am not sure whether this has been dealt with in the latest Logstash version. You may be interested in the issue "Make JAVA_OPTS and LS_JAVA_OPTS work consistently on Windows".
2. As far as I can see, the most likely cause is that the elasticsearch output plugin sends its messages over HTTP using its own client since logstash-2.0 (are you perhaps on an older version?). For more information, see the logstash-output-elasticsearch plugin documentation.
If anyone has other ideas, please share.
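Expanding on point 2: because the elasticsearch output ships its own HTTP client, the JVM-wide -Dhttp.proxyHost/-Dhttp.proxyPort flags may never reach it. The plugin exposes a proxy option of its own; assuming your plugin version supports it, pointing that at Fiddler would be worth a try (a sketch, not a verified fix):
output {
  elasticsearch {
    hosts => ["ncesearch01"]
    # Route the plugin's own HTTP client through Fiddler; the JVM proxy
    # flags do not apply to this client.
    proxy => "http://127.0.0.1:8888"
  }
}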
