I am using Logstash to transfer data from PostgreSQL to MySQL.
There are 10 config files in /etc/logstash/conf.d, and I run Logstash as a service with systemctl start logstash.
I have set the retry options connection_retry_attempts and connection_retry_attempts_wait_time (retry 10 times, every 2 minutes), as shown below:
input {
  jdbc {
    jdbc_driver_library => "/etc/logstash/lib/postgresql-42.2.14.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_paging_enabled => true
    jdbc_page_size => "50000"
    connection_retry_attempts => 10
    connection_retry_attempts_wait_time => 120
    jdbc_connection_string => "jdbc:postgresql://xxx/xxx"
    jdbc_user => "user"
    jdbc_password => "xxx"
    jdbc_default_timezone => "UTC"
    statement => "select ... from ... where ... and login_time >= :sql_last_value"
    tracking_column => "login_time"
    tracking_column_type => "timestamp"
    use_column_value => true
    last_run_metadata_path => "/home/data/logs/..."
    schedule => "* * * * *"
    type => "xxx"
  }
}
However, the setup did not work as expected. A connection error occurred: 8 of the 10 table transfers stopped and only 2 kept working. Restarting the Logstash service fixed it, but according to the log there was not even one retry attempt after the connection failure.
How can I make the jdbc input retry the connection automatically? Thanks in advance for your help.
[2022-06-07T18:08:03,763][ERROR][logstash.inputs.jdbc ][main] Unable to connect to database. Trying again {:error_message=>"Java::OrgPostgresqlUtil::PSQLException: The connection attempt failed."}
[2022-06-07T18:10:03,914][INFO ][logstash.inputs.jdbc ][main] (0.015812s) SELECT CAST(current_setting('server_version_num') AS integer) AS v
[2022-06-07T18:10:03,940][INFO ][logstash.inputs.jdbc ][main] (0.024598s) SELECT count(*) AS "count" FROM (select * from xxx where added_time >= '2022-06-06 11:47:00' and added_time >= '2022-06-07 10:06:48.221000+0000' ) AS "t1" LIMIT 1
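One option sometimes suggested for connections that drop between scheduled runs (an assumption worth testing here, not a guaranteed fix for this case) is to have the plugin validate the connection before each use, via the jdbc_validate_connection and jdbc_validation_timeout settings:

```
input {
  jdbc {
    # ... existing jdbc settings ...
    # Check the connection before each use and reopen it if it has
    # gone stale; re-validate at most every 60 seconds.
    jdbc_validate_connection => true
    jdbc_validation_timeout => 60
  }
}
```

Note that connection_retry_attempts only governs how often a single connection attempt is retried; validation is what notices a previously good connection has died.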
Related
I am using the jdbc_static plugin with a simple select query:
jdbc_static {
  id => "JDBC_STATIC_APPLICATION_MIND_MAPPING"
  loaders => [
    {
      id => "REMOTE_MAPPING"
      query => "select field1, field2 FROM DB.view"
      local_table => "LOCAL__MAPPING_COLUMNS"
    }
  ]
  ...
  jdbc_user => "USR"
  jdbc_password => "PW"
  jdbc_connection_string => "jdbc:teradata://SCH/database=DB"
  jdbc_driver_class => "com.teradata.jdbc.TeraDriver"
  ...
I get the data from a Teradata DB, but the count query executed by the plugin is causing an issue.
The error:
[2022-09-12T11:45:36,171+02:00][ERROR][logstash.filters.jdbc.readonlydatabase] Exception occurred when executing loader Jdbc query count {:exception=>"Java::JavaSql::SQLException: [Teradata Database] [TeraJDBC 16.00.00.23] [Error 3706] [SQLState 42000] Syntax error: expected something between the word 'T1' and the 'LIMIT' keyword.", :backtrace=>["com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(com/teradata/jdbc/jdbc_4/util/ErrorFactory.java:309)", "com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(com/teradata/jdbc/jdbc_4/statemachine/ReceiveInitSubState.java:103)", "com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(com/teradata/jdbc/jdbc_4/statemachine/StatementReceiveState.java:311)", "com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(com/teradata/jdbc/jdbc_4/statemachine/StatementReceiveState.java:200)"
My Logstash version is 6.5.4.
Do you have a solution for this issue?
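For context, the error suggests the plugin wraps the loader query in a count query that ends in a LIMIT clause, which Teradata's SQL dialect does not support (it uses TOP instead). Roughly (the exact generated SQL is an assumption inferred from the error message):

```sql
-- Generated by the plugin; LIMIT is not valid Teradata SQL:
SELECT count(*) AS "count" FROM (select field1, field2 FROM DB.view) AS "T1" LIMIT 1

-- The equivalent in Teradata's own dialect would be:
SELECT TOP 1 count(*) AS "count" FROM (select field1, field2 FROM DB.view) AS "T1"
```

Since the count query is built by the plugin rather than taken from the config, this cannot be fixed by rewriting the loader query alone.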
I have 16 conf files, all scheduled to run every day at 09:05. Today these files did not run at the intended time. After I fixed the problem I tried to restart Logstash, but the conf files are not able to generate indices.
Example dash_KPI_1.conf file:
input {
  jdbc {
    jdbc_driver_library => "/var/OJDBC-Full/ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:#a/b"
    jdbc_user => "KIBANA"
    jdbc_password => "pass"
    statement => "
      SELECT /*+ PARALLEL(16) */
      * from
      dual"
    # jdbc_paging_enabled => "true"
    # jdbc_page_size => "50000"
    type => "dash_kpi_1"
    schedule => "05 09 * * *"
  }
}
output {
  if [type] == "dash_kpi_1" {
    # stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200"]
      index => "dash_kpi_1-%{+YYYY.ww}"
      user => "elastic"
      password => "pass2"
    }
  }
}
How I start and stop Logstash:
systemctl stop logstash.service
systemctl start logstash.service -r
What I have tried:
/usr/share/logstash/bin/logstash -f dash_KPI_1.conf
How can I restart these 16 conf files and make them generate indices as intended in the first place?
I see you are creating the index weekly. If you want to create it daily, change the index pattern to "dash_kpi_1-%{+YYYY.MM.dd}".
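With that change, the elasticsearch output from the example above would look like this (only the index line differs):

```
elasticsearch {
  hosts => ["http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200"]
  index => "dash_kpi_1-%{+YYYY.MM.dd}"   # one index per day instead of per ISO week
  user => "elastic"
  password => "pass2"
}
```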
Can you help me solve this problem?
I'm using:
elasticsearch-7.4.2
kibana-7.4.2
logstash-7.4.2
windows 10
Error: com.mysql.cj.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::PluginLoadingError
Stack: D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:190:in `open_jdbc_connection'
D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/plugin_mixins/jdbc/jdbc.rb:253:in `execute_statement'
D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:309:in `execute_query'
D:/elasticsearch/logstash-7.4.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.16/lib/logstash/inputs/jdbc.rb:281:in `run'
D:/elasticsearch/logstash-7.4.2/logstash-core/lib/logstash/java_pipeline.rb:314:in `inputworker'
D:/elasticsearch/logstash-7.4.2/logstash-core/lib/logstash/java_pipeline.rb:306:in `block in start_input'
[2019-11-28T15:08:50,858][ERROR][logstash.javapipeline ][main] A plugin had an unrecoverable error. Will restart this plugin.
My conf:
input {
  jdbc {
    jdbc_driver_library => "D:\elasticsearch\mysql-connector-java-8.0.18\mysql-connector-java-8.0.18.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sakila"
    jdbc_user => ""
    jdbc_password => "**"
    statement => "SELECT * FROM actor"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    action => "index"
    index => "actor"
    document_type => 'text'
    document_id => '%{id}'
  }
}
Add the relevant MySQL jar file to [logstash_folder]\logstash-core\lib\jars and provide only the jar name in the config file, as follows:
jdbc_driver_library => "mysql-connector-java-8.0.18.jar"
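Alternatively, if you keep the jar outside the Logstash folder, one thing worth checking (an assumption, since backslash paths do work on many setups) is the Windows path separator: Java accepts forward slashes on Windows, and using them in jdbc_driver_library avoids any backslash-escaping ambiguity in the config string:

```
jdbc_driver_library => "D:/elasticsearch/mysql-connector-java-8.0.18/mysql-connector-java-8.0.18.jar"
```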
Currently trying to populate the employee index with the below settings:
CONF
input {
  jdbc {
    jdbc_driver_library => "~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://SERVER;user=USER;password=PASSWORD"
    jdbc_user => "DB_USER"
    jdbc_password => "DB_PASSWORD"
    jdbc_validate_connection => true
    jdbc_validation_timeout => -1
    statement => "SELECT * FROM [dbo].Employee ORDER BY ID"
    type => "employee"
  }
}
filter {
}
output {
}
NOTE: the filter and output sections of the conf file are intentionally blank.
LINUX COMMAND
sudo /usr/share/logstash/bin/logstash -f /home/ubuntu/Employee-pipeline.conf --path.settings /etc/logstash/ --path.data /var/lib/logstash_new
RESULT
It looks like Logstash does not know about, or does not have access to, ~/sqljdbc...*.jar.
I also confirmed that mssql-jdbc-6.2.1.jre8.jar exists.
However, when I changed the path to /home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar, it ran successfully.
So ~/ is the same as /home/ubuntu.
This started to occur after upgrading our Elastic Stack from v5.5 to v5.6. Also note that it does not occur if we run the same conf file with the Logstash service.
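The likely explanation (an assumption about shell behaviour, not something the Logstash docs state) is that ~ is expanded by the shell, not by Java: it only works where the shell substitutes it before the program sees the path. A value inside a quoted config string is never shell-expanded, so Logstash receives the literal ~/... path and cannot find the jar:

```shell
# The shell expands an unquoted leading tilde to $HOME...
echo ~/sqljdbc_6.2/enu
# ...but leaves it alone inside quotes, which is effectively what
# Logstash receives from the quoted value in the config file:
echo "~/sqljdbc_6.2/enu"
```

That is why spelling out /home/ubuntu/... works: it is the same path, just already expanded.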
I am trying to access a MySQL service with Logstash. I installed logstash-input-jdbc (/opt/logstash/bin/logstash-plugin install logstash-input-jdbc) and created /etc/logstash/conf.d/sample.conf:
input {
  lumberjack {
    ...
  }
  jdbc {
    type => "jdbc_hfc"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_validate_connection => true
    jdbc_driver_libary => "mysql-connector-java-5.1.40-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM hfc"
    schedule => "00 07 * * *"
  }
  file {
    ...
  }
}
output {
  if [type] == "jdbc_hfc" {
    elasticsearch {
      protocl => http
      hosts => ["localhost:9200"]
      index => "logstash-jdbc-hfc-%{+YYYY.MM.dd}"
    }
  }
}
When I execute the configtest (/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/sample.conf), I get the following error:
Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}
The given configuration is invalid. Reason: Something is wrong with your configuration. {:level=>:fatal}
When I comment out the jdbc_connection_string line, the configtest returns:
Configuration OK
But when I execute the sample.conf file, Logstash returns the following error:
Pipeline aborted due to error {:exception=>"LogStash::ConfigurationError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/plugin_mixins/jdbc.rb:159:in `prepare_jdbc_connection'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/inputs/jdbc.rb:187:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:330:in `start_inputs'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:329:in `start_inputs'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:180:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
Where is my mistake? What can I do to resolve this problem?
Thanks a lot and best regards.
PS: if you need more information, please ask.
The first error says it all:
Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}
So you just have a typo in your configuration:
jdbc_driver_libary => "mysql-connector-java-5.1.40-bin.jar"
should read
jdbc_driver_library => "mysql-connector-java-5.1.40-bin.jar"
(note the added "r" in "library").
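Independently of the typo, the value here is a bare jar name, which is resolved relative to Logstash's working directory. If the runtime error persists after fixing the spelling, an absolute path is a safer bet (the location below is hypothetical):

```
jdbc_driver_library => "/opt/logstash/mysql-connector-java-5.1.40-bin.jar"
```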