I am starting the HiveThriftServer in a Spark application by using:
HiveThriftServer2.startWithContext(session.sqlContext());
I can see that I need to include the hive-jdbc-1.2.1.spark2, hive-exec-1.2.1.spark2, and hive-metastore-1.2.1.spark2 JARs on the classpath to start it.
So far so good: the server starts, and I can see the "JDBC/ODBC" tab in the Spark UI.
Now, on the client side (where I need to connect to this server to access the data), I have newer versions of the JARs, such as hive-jdbc-2.1.1.
When I try to connect to the server with the code below, I get an exception:
try {
    // Register the Hive JDBC driver
    Class.forName("org.apache.hive.jdbc.HiveDriver");
} catch (ClassNotFoundException e) {
    System.out.println("Driver not found");
}
Connection con = DriverManager.getConnection("jdbc:hive2://<server-name>:10015/default", "", "");
Exception: Caused by: org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:168)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:155)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:576)
If I use the same JAR versions as the server, it works.
Now I have two options:
1. Change the JARs on the server side to the newer versions (2.1.1), but then the server doesn't start (it complains about ClassNotFound issues). The HiveThriftServer2 class comes from the spark-hive-thriftserver JAR, and its pom.xml declares dependencies on the 1.2.1 JARs, which makes it obvious that the server won't start on the 2.1.1 versions.
2. Change the versions on the client side, but I don't have that option, as other applications on the app server depend on those versions.
Can anyone suggest a possible way to fix this? (Ideally the newer versions of the JARs should be backward compatible.)
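A possible workaround, sketched below, is to load the 1.2.1 client JARs in an isolated classloader so they never collide with the app server's 2.1.1 JARs. This is only a minimal sketch under assumptions: the JAR path is hypothetical, and it presumes a self-contained 1.2.1 driver JAR is available. Since DriverManager refuses drivers loaded by a foreign classloader, the Driver must be invoked directly:

import java.net.URL;
import java.net.URLClassLoader;
import java.sql.Connection;
import java.sql.Driver;
import java.util.Properties;

public class IsolatedHiveClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical path -- point this at the 1.2.1.spark2 client JARs,
        // kept in a directory the app server's 2.1.1 JARs never touch.
        URL[] jars = { new URL("file:///opt/hive-client/hive-jdbc-1.2.1-standalone.jar") };

        // A null parent hides the app server's hive-jdbc-2.1.1 classes from this loader.
        try (URLClassLoader loader = new URLClassLoader(jars, null)) {
            Driver driver = (Driver) Class
                    .forName("org.apache.hive.jdbc.HiveDriver", true, loader)
                    .getDeclaredConstructor()
                    .newInstance();

            // DriverManager ignores drivers from foreign classloaders,
            // so call the Driver directly instead of DriverManager.getConnection.
            try (Connection con = driver.connect("jdbc:hive2://<server-name>:10015/default", new Properties())) {
                System.out.println("Connected: " + !con.isClosed());
            }
        }
    }
}

The null parent is the important design choice here: it keeps the 2.1.1 classes on the app server's classpath invisible to the Hive client code.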
I have a Java EE 8 web app using DeltaSpike with the core, data, and JSF modules.
I also added a CDI bean (RegisterService, annotated with @Transactional) which calls a UserRepository:
public void createNewUser(User newUser) {
    try {
        userRepository.save(newUser);
    } catch (PersistenceException ex) {
        throw new RegisterException("Error", ex); // it never reaches this point
    }
}
When the service layer calls the repository, I catch PersistenceException and rethrow a service-layer exception, but the repository never throws PersistenceException, even when the primary key is duplicated (I can only see the stack trace in the console output). Of course, I am deliberately triggering a constraint violation to exercise the expected flow.
I am using a DeltaSpike exception handler, but I have not added a @Handler for PersistenceException.
Does somebody know what could be happening here?
This is a reproducible example: https://github.com/gdiazs/javaee8-fullstack
Since there are no tables in the DB, it will throw a PersistenceException.
I ran this example on two laptops.
Mine: macOS 10.14.6
JDK: OpenJDK Zulu 1.8
Eclipse, and also Maven terminal execution
Work: macOS 10.14.6 (it works as I expect here)
JDK: Oracle 1.8
IntelliJ, and also Maven terminal execution
So now I'm really confused; I have no idea why it works on only one of them.
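For what it's worth, one plausible explanation is JPA flush timing: inside a JTA @Transactional method, the INSERT is usually deferred until the transaction commits, which happens after createNewUser() has returned, so the catch block never sees the PersistenceException. Below is a minimal sketch under that assumption; the injected EntityManager is a hypothetical detail, not taken from the project above:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceException;
import javax.transaction.Transactional;

@ApplicationScoped
public class RegisterService {

    @Inject
    private UserRepository userRepository;

    @Inject
    private EntityManager entityManager; // hypothetical: assumes an EntityManager producer exists

    @Transactional
    public void createNewUser(User newUser) {
        try {
            userRepository.save(newUser);
            // Force the pending INSERT to hit the database now, inside the try
            // block; without this, the flush can happen at commit time, after
            // this method has returned, and the catch below never fires.
            entityManager.flush();
        } catch (PersistenceException ex) {
            throw new RegisterException("Error", ex);
        }
    }
}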
I'm trying to connect to Cassandra from Java code using a JDBC connection. Here are the JARs I'm using:
Now this is the code, which I found on Stack Overflow, to do this:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

String serverIP = "localhost";
String keyspace = "mykeyspace";

// Build a cluster handle and open a session bound to the keyspace
Cluster cluster = Cluster.builder()
        .addContactPoints(serverIP)
        .build();
Session session = cluster.connect(keyspace);

String cqlStatement = "SELECT * FROM users";
for (Row row : session.execute(cqlStatement)) {
    System.out.println(row.toString());
}
But unfortunately it's throwing the following exception:
log4j:WARN No appenders could be found for logger (com.datastax.driver.core.Cluster).
log4j:WARN Please initialize the log4j system properly.
Exception in thread "main" java.lang.NoSuchMethodError: org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.<init>(IIIIIZ)V
at com.datastax.driver.core.Frame$Decoder.<init>(Frame.java:130)
at com.datastax.driver.core.Connection$PipelineFactory.getPipeline(Connection.java:795)
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:212)
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:188)
at com.datastax.driver.core.Connection.<init>(Connection.java:93)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:432)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:216)
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:171)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1104)
at com.datastax.driver.core.Cluster.init(Cluster.java:121)
at com.datastax.driver.core.Cluster.connect(Cluster.java:198)
at com.datastax.driver.core.Cluster.connect(Cluster.java:226)
at com.mabsisa.resources.Demo.main(Demo.java:28)
I searched the internet for this exception scenario but did not find much information. Please help me solve this issue, as I need to fix it as soon as possible.
I think the problem comes from the netty version you are using. You are using version 2.3.0 of netty, and in that version the class
org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder
does not have the constructor the Cassandra driver needs. In the Maven repository, cassandra-driver-core has a dependency on version 3.9.0.Final of netty:
http://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core/2.0.2
So, try to update your version of netty.
Make sure you don't have two versions of netty lying in your final build.
I had the same problem where I had two versions of netty, 3.2.2 and 3.9.0; the latest DataStax driver needs 3.9.0.
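If you're not sure which copy of netty wins on the classpath, a quick check (a minimal diagnostic sketch, not taken from the posts above) is to ask the JVM which JAR the conflicting class was actually loaded from:

import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

public class NettyVersionCheck {
    public static void main(String[] args) {
        // Prints the JAR that LengthFieldBasedFrameDecoder was loaded from,
        // e.g. .../netty-3.2.2.Final.jar, exposing a duplicate-netty problem.
        System.out.println(LengthFieldBasedFrameDecoder.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}

If this prints the 3.2.2 JAR, excluding that artifact from the build should resolve the NoSuchMethodError.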
The environment is Red Hat, Cassandra 2.1, DataStax Java driver 2.1.1.
I have developed custom authentication/authorization plugins for Cassandra, and they work beautifully when I try them with cqlsh - I can see my plugins being called, users are authenticated/authorized accordingly, etc. - bottom line, everything works exactly as expected.
Then I tried to test using the DataStax driver. I'm connecting to Cassandra with:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CassandraConnection {

    private final Cluster cluster;
    private final Session session;

    public CassandraConnection(final String node, final int port) {
        this.cluster = Cluster.builder()
                .addContactPoint(node)
                .withPort(port)
                .withCredentials("someuser", "somepassword")
                .build();
        this.session = cluster.connect();
    }

    // Etc....
}
The call to cluster.connect() generates an exception:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: localhost/127.0.0.1:9042 (com.datastax.driver.core.TransportException: [localhost/127.0.0.1:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:196)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:80)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1145)
at com.datastax.driver.core.Cluster.init(Cluster.java:149)
at com.datastax.driver.core.Cluster.connect(Cluster.java:225)
at com.<company...packages...>.CassandraConnection.<init>(CassandraConnection.java:21)
Here is the puzzling part: although I can see my plugins being called when I test them using cqlsh, they are never accessed when I use the DataStax driver - I have added log messages at the beginning of each method, and they are never called. There are no errors in the logs indicating any sort of initialization problem, and I do see a message indicating that my plugins will be used.
That exact same client code works with no problem when:
1. I don't have my plugin running.
2. I use Cassandra's PasswordAuthenticator.
So it looks like there is some problem with my plugins, but how can that be if 1) they work fine with cqlsh and 2) none of their methods are called when the DataStax driver is used?
A couple of additional points: if I try to connect using DataStax's DevCenter, I see the same behavior as my client, with the exact same exception, which rules out my (very simple) client code. I have also tried:
cluster.getConfiguration().getSocketOptions().setReadTimeoutMillis(10000);
before calling connect(), as suggested in other posts, but that didn't help either - when I step through the client with the debugger, I see the error as soon as I call cluster.connect(), so it's not a timeout issue either.
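One more diagnostic that might narrow this down (a minimal sketch; the contact point is assumed to be localhost, as in the stack trace): NoHostAvailableException records the underlying per-host cause, which is often more specific than the one-line summary:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class ConnectDiagnostics {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("localhost").build();
        try {
            Session session = cluster.connect();
            System.out.println("Connected");
        } catch (NoHostAvailableException e) {
            // getErrors() maps each attempted host to the Throwable that
            // actually caused its failure (e.g. the root TransportException)
            System.err.println(e.getErrors());
        } finally {
            cluster.close();
        }
    }
}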
Any help is appreciated.
While connecting to Cassandra 1.2.1 using DataStax Java driver version 1.0.2, I am getting the error:
Exception in thread "main" java.lang.IllegalArgumentException: populate_io_cache_on_flush is not a column defined in this metadata
at com.datastax.driver.core.ColumnDefinitions.getIdx(ColumnDefinitions.java:268)
at com.datastax.driver.core.Row.isNull(Row.java:84)
at com.datastax.driver.core.TableMetadata$Options.<init>(TableMetadata.java:440)
at com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
at com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:124)
at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:88)
at com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:265)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:220)
The error occurs at the line below:
cluster = Cluster.builder().addContactPoint("localhost").build();
I tried deleting the /var/lib/cassandra folder and then restarting the Cassandra server, which means there is no previous data. The server starts without any error, but I am still getting the above error when I try to connect to it.
OK, just discovered that the error went away when I used the latest version of Cassandra (1.2.8). So it might be because of a version incompatibility.
I need to upgrade a SQL Server Compact database from version 3.5 to 4.0. I am using LINQ to SQL.
I tried some things that I found on Stack Overflow that did not help:
I tried the Add 4.0 Connection dialog (no error messages; a .bak file was created).
I tried upgrading in code (no error messages):
System.Data.SqlServerCe.SqlCeEngine engine = new System.Data.SqlServerCe.SqlCeEngine("Data Source = ...");
engine.Upgrade();
I checked for database corruption (the system reported that there are no corruption problems):
System.Data.SqlServerCe.SqlCeEngine engine = new System.Data.SqlServerCe.SqlCeEngine("Data Source = ...");
engine.Verify();
After these operations I wanted to recreate the .dbml file, and I received the error message:
Incompatible Database Version (..) DB version 4000000, Requested version 3505053 (..)
In debug mode I checked db.Connection.ServerVersion - it returns 3.5.8080.0.
In the database connection properties the version is 4.0.8876.1.
Any suggestions?
Once you have upgraded your database to 4.0, you can no longer create a .dbml file, as the tool responsible for this only works with 3.5 database files. One possible workaround is to keep two versions of the database: one 3.5 file for .dbml generation and another for actual use. Remember to initialize the DataContext object with a SqlCeConnection object; otherwise 4.0 will not work with LINQ to SQL. Alternatively, you can try my SQL Server Compact Toolbox, which allows you to generate a DataContext directly from a 4.0 database file (you must still initialize it with a SqlCeConnection object).
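To make that last point concrete, here is a minimal sketch of passing an explicit SqlCeConnection to the DataContext; MyDataContext, User, and the file path are hypothetical names, not taken from the question:

using System.Data.SqlServerCe;

class Program
{
    static void Main()
    {
        // Hypothetical path and context name -- adjust to your project.
        using (var connection = new SqlCeConnection(@"Data Source=C:\data\MyDatabase.sdf"))
        using (var db = new MyDataContext(connection))
        {
            // LINQ to SQL now talks to the 4.0 file through the 4.0 provider.
            foreach (var user in db.GetTable<User>())
            {
                System.Console.WriteLine(user);
            }
        }
    }
}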