I'm trying to deploy a jdbc-sink to a Helm-based Kubernetes install of Spring Cloud Data Flow.
How would I go about adding JDBC jars in order to make use of the starters? I'm getting the following error when trying to deploy the app (in this case MySQL):
***************************
APPLICATION FAILED TO START
***************************
Description:
Failed to bind properties under '' to com.zaxxer.hikari.HikariDataSource:
Property: driverclassname
Value: com.mysql.cj.jdbc.Driver
Origin: "driverClassName" from property source "source"
Reason: Failed to load driver class com.mysql.cj.jdbc.Driver in either of HikariConfig class loader or Thread context classloader
Would I need to extend the existing starter and manually add the driver, since there's no way of guaranteeing which driver it should be using?
Thanks!
We ship OSS license-friendly drivers for a few databases in SCDF and in the app-starters that require database access, including the jdbc applications.
For proprietary drivers, there's a procedure to patch the out-of-the-box app-starters that we maintain and ship; more details are in the reference guide here.
Once you bundle the relevant driver on the classpath, you'd produce a Docker image and then use it in SCDF.
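As a rough illustration of the bundling step (the reference guide is authoritative for the full patching procedure), adding the MySQL driver usually amounts to declaring it as a dependency of the patched app-starter project before rebuilding the image. The coordinates below are the standard MySQL Connector/J ones; the version is a placeholder:

<!-- added to the patched app-starter's pom.xml; version is a placeholder,
     pick one matching your MySQL server -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.x</version>
</dependency>

With the driver on the classpath, com.mysql.cj.jdbc.Driver becomes resolvable and the Hikari binding error above goes away.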
Related
I have created a custom trigger.jar and placed it in the directory where the Cassandra service can read it. After performing "nodetool reloadtriggers" it just prints "Loading new /trigger.jar" in system.log, but it's not executing the Java class in the jar.
Note: I have also created the trigger on the table, which should fire this custom trigger jar on inserting data into the table, but it's not loading the class.
The Java class implements ITrigger and overrides the augment method. Any pointers for debugging this would be helpful.
Cassandra version: DSE 6.7.5
From the DSE 6.0 upgrade guide:
The org.apache.cassandra.triggers.ITrigger interface was modified from augment to augmentNonBlocking for non-blocking internal architecture. Updated trigger implementations must be provided on upgraded nodes.
So you need to change your implementation to comply.
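For reference, an OSS Cassandra 3.x-style trigger looks like the sketch below; under DSE 6 the method to implement is augmentNonBlocking instead (check the DSE javadocs for its exact signature, as this sketch assumes the OSS one):

import java.util.Collection;
import java.util.Collections;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.triggers.ITrigger;

// OSS Cassandra 3.x signature. Per the DSE 6.0 upgrade guide quoted above,
// DSE 6 expects augmentNonBlocking, so a jar built against this interface
// loads but is never invoked on write.
public class MyTrigger implements ITrigger {
    @Override
    public Collection<Mutation> augment(Partition update) {
        // Inspect the written partition and return any extra mutations;
        // an empty collection means "add nothing".
        return Collections.emptyList();
    }
}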
I would like to use the prestosql/presto container for automated tests. For this purpose I want the ability to programmatically create a catalog/schema/table. Unfortunately, I didn't find an option to do this via Docker environment variables. If I try to do it via the JDBC connector, I receive the following error: "This connector does not support creating tables"
How can I create schemas or tables using prestosql/presto container?
If you are writing tests in Java (as suggested by the JDBC tag), you can use the Testcontainers library. It comes with a Presto module that:
- uses the prestosql/presto container under the hood
- comes with the Presto memory connector pre-installed, so you can create schemas and tables there (see the sketch below)
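A minimal sketch, assuming the org.testcontainers:presto module is on the test classpath; the image tag is illustrative, so pin whatever version you actually test against:

import java.sql.Connection;
import java.sql.Statement;
import org.testcontainers.containers.PrestoContainer;

public class PrestoMemoryExample {
    public static void main(String[] args) throws Exception {
        // Starts a throwaway Presto server in Docker; the tag is an assumption.
        try (PrestoContainer<?> presto = new PrestoContainer<>("prestosql/presto:329")) {
            presto.start();
            try (Connection conn = presto.createConnection("");
                 Statement stmt = conn.createStatement()) {
                // DDL and DML work against the pre-installed memory connector.
                stmt.execute("CREATE SCHEMA memory.test");
                stmt.execute("CREATE TABLE memory.test.events (id bigint, name varchar)");
                stmt.execute("INSERT INTO memory.test.events VALUES (1, 'hello')");
            }
        }
    }
}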
I am using Presto version 179 and I need to manually create a database.properties file in /etc/presto/catalog through the CLI.
Can I do the same from the Presto GUI?
Presto's built-in web interface does not provide any configuration capabilities.
Usually, such things are handled as part of deployment/configuration management on a cluster. Thus, configuration is provided by some external means, just as the Presto installation itself is.
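For context, a catalog file such as /etc/presto/catalog/database.properties is just a small key/value file that has to be deployed to every node. A hypothetical MySQL catalog might look like this (property names are from the Presto MySQL connector docs; the values are illustrative):

# /etc/presto/catalog/database.properties (illustrative values)
connector.name=mysql
connection-url=jdbc:mysql://db.example.com:3306
connection-user=presto
connection-password=secret

Presto typically needs a restart to pick up new or changed catalog files.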
I have an application that connects to Cassandra using the Java Driver, fetches some configuration, and based on the results generates and executes some Pig scripts.
Now, I am able to successfully connect to Cassandra when the jars required for Pig are not on the classpath. Similarly, I am able to launch the PigServer class and execute scripts/statements using the entire DSE stack when I am not connecting to Cassandra via the Java driver to retrieve the configuration.
When I use both of them, I get the following exception:
org.jboss.netty.channel.ChannelPipelineException: Failed to initialize a pipeline.
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:181)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:570)
... 35 more
Caused by: org.jboss.netty.channel.ChannelPipelineException: Failed to initialize a pipeline.
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:208)
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
at com.datastax.driver.core.Connection.<init>(Connection.java:100)
at com.datastax.driver.core.Connection.<init>(Connection.java:51)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:376)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:207)
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:170)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:87)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:576)
at com.datastax.driver.core.Cluster$Manager.access$100(Cluster.java:520)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:67)
at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:94)
at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:501)
I see others have hit a similar exception, but when trying to execute Cassandra statements from MapReduce tasks, which is not my case:
https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/FhW_8e4FyAI
http://www.datastax.com/dev/blog/the-native-cql-java-driver-goes-ga#comment-297187
Thanks!
The DSE stack connects to Cassandra through the Thrift API, which is different from the Cassandra Java Driver.
You can't use the Cassandra Java Driver for Pig/Hadoop until CASSANDRA-6311 is resolved.
There may also be a bad or expired security certificate issue if you are using certificates.
I am connecting from WSO2 DSS to Cassandra. I added the jar files (apache-cassandra-cql-1.0.3, cassandra-all-0.8.0-beta2), but I am still getting the following error:
java.sql.SQLException: org.apache.cassandra.cql.jdbc.CassandraDriver.
How can I solve this error?
If you are using the latest versions of DSS (> v3.0.0), the Cassandra JDBC driver used to connect to Cassandra via JDBC is shipped with DSS by default. Therefore, it's just a matter of configuring your data source in DSS (as a Carbon datasource or an inline datasource in the data service descriptor file) with the driverClassName "org.apache.cassandra.cql.jdbc.CassandraDriver" and other relevant parameters such as the JDBC URL, username, and password, and pointing to it within the data service descriptor (.dbs file).
However, if you're using any other WSO2 product such as ESB, or an older version of DSS, then you will have to download the Cassandra JDBC driver and any other dependency jars into CARBON_HOME/repository/components/lib, restart the server, and then configure your datasources pointing to Cassandra.
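As a hypothetical illustration only (the property names and the URL format are assumptions to verify against your DSS version's documentation), an inline datasource in the .dbs file might look roughly like:

<config id="CassandraDS">
   <!-- hypothetical inline datasource; confirm property names for your DSS version -->
   <property name="driverClassName">org.apache.cassandra.cql.jdbc.CassandraDriver</property>
   <property name="url">jdbc:cassandra://localhost:9160/mykeyspace</property>
   <property name="username">admin</property>
   <property name="password">admin</property>
</config>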
Hope this helps.
Regards,
Prabath