While creating a database using the Cassandra CLI, I am getting the following error. Any idea what I am doing wrong here? More details are listed below:
Command - create keyspace pcpro;
output - org.apache.thrift.transport.TTransportException
When I execute the same command again, exception changes to
output - org.apache.thrift.transport.TTransportException:
java.net.SocketException: Broken pipe
FYI, I am using cassandra 2.0.1
Thank you.
If you restart Cassandra, you also have to restart the CLI to reset the connection it uses to communicate with Cassandra; otherwise you get the broken pipe or connection reset error:
// start cassandra
[default@unknown] create keyspace pcpro;
5d344e5d-635e-3745-a1a6-d82ef68bdf28
// restart cassandra
[default@unknown] create keyspace pcpro2;
org.apache.thrift.transport.TTransportException:
java.net.SocketException: Connection reset
// try the query a second time
[default@unknown] create keyspace pcpro2;
org.apache.thrift.transport.TTransportException:
java.net.SocketException: Broken pipe
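For reference, restarting the CLI just means launching it again so it opens a fresh connection; with a default local install that is something like the following (9160 is the default Thrift port; adjust host and port to your setup):
bin/cassandra-cli -h localhost -p 9160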
We are trying to connect to two keyspaces of Cassandra (3.x) in the same application with the same Kerberos credentials. The application is able to connect to one keyspace but not the other. Access to both keyspaces has been verified.
Error on connection:
2022-08-22 13:15:10,972 [cluster-reconnection-0] DEBUG c.d.d.c.ControlConnection [--]- [Control connection] error on 169.24.167.109:9042 connection, trying next host
javax.security.auth.login.LoginException: No LoginModules configured for CassandraJavaClient
at javax.security.auth.login.LoginContext.init(LoginContext.java:264)
at javax.security.auth.login.LoginContext.<init>(LoginContext.java:417)
The JAAS configuration referencing the ticket cache is:
CassandraJavaClient {
com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true ticketCache="/var//krb5cc_userlogin";
};
The same ticket cache file is used by the first connection, which succeeds, while the second connection fails. I am not even sure how to debug it (I tried remote debugging, but since the initial control connection is an async call, I was unable to get to the actual error).
We are using com.datastax.cassandra:cassandra-driver-core:jar:3.6.0
Any ideas or help to debug/resolve this would be highly appreciated.
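One way to narrow this down, independent of the driver, is to run the JAAS login by hand: the LoginException above means the JVM could not find a login entry named CassandraJavaClient at the moment the second connection authenticated. A minimal sketch, assuming the configuration above is saved at a hypothetical path /etc/cassandra-jaas.conf:

import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasCheck {
    public static void main(String[] args) throws LoginException {
        // Normally passed on the command line as
        // -Djava.security.auth.login.config=/etc/cassandra-jaas.conf
        // (path is hypothetical; point it at the file holding the entry above)
        System.setProperty("java.security.auth.login.config", "/etc/cassandra-jaas.conf");

        // Looks up the "CassandraJavaClient" entry and runs Krb5LoginModule
        // against the ticket cache; this throws the same
        // "No LoginModules configured" LoginException if the entry is missing.
        LoginContext lc = new LoginContext("CassandraJavaClient");
        lc.login();
        System.out.println("JAAS login OK: " + lc.getSubject());
    }
}

If this succeeds standalone but the second connection still fails, something in the application is likely clearing or overriding that system property between the two connections.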
While performing a concurrent bulk load operation, I received this error. Subsequently, all my queries failed, and I kept getting the same error.
The exception I got is as follows:
java.lang.NullPointerException: Could not find type for id: 52237
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:250)
    at org.janusgraph.graphdb.types.vertices.JanusGraphSchemaVertex.name(JanusGraphSchemaVertex.java:57)
    at org.janusgraph.graphdb.vertices.AbstractVertex.label(AbstractVertex.java:121)
    at org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceElement.<init>(ReferenceElement.java:57)
    at org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceVertex.<init>(ReferenceVertex.java:46)
    at org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceFactory.detach(ReferenceFactory.java:48)
    at org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceFactory.detach(ReferenceFactory.java:69)
    at org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceFactory.detach(ReferenceFactory.java:80)
    at org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.HaltedTraverserStrategy.halt(HaltedTraverserStrategy.java:60)
    at org.apache.tinkerpop.gremlin.server.util.TraverserIterator.next(TraverserIterator.java:64)
    at org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor.handleIterator(TraversalOpProcessor.java:529)
    at org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor.lambda$iterateBytecodeTraversal$4(TraversalOpProcessor.java:382)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Some additional context:
storage.batch-loading was NOT enabled
The bulk write operation I was running was highly concurrent and with high load
I used about 100 instances of gremlin server connecting to Cassandra/ES backend
I did not explicitly define a schema
It would be great if someone could give me an idea about what could have caused this.
Thanks!
This happens if multiple instances of gremlin-server are running. That can be because a gremlin server was not shut down or killed properly, or because the VM on which gremlin-server is running was restarted. The solution is to log in to the Gremlin console and run the commands appropriate for your backend; in my case that is Cassandra and Elasticsearch, so I run one of the following.
Method 1:
:remote connect tinkerpop.server conf/remote.yaml session
:remote console session
or
graph=JanusGraphFactory.open('conf/janusgraph-cql-es.properties');
g=graph.traversal()
If you are running containers, then your command will be similar to this:
graph=JanusGraphFactory.open('/etc/opt/janusgraph/janusgraph.properties');
g=graph.traversal()
After running those, you can run:
mgmt = graph.openManagement()
mgmt.getOpenInstances()
It will display all the instances, e.g.:
ac12000231-a9ffbcbb0e921
ac12000230-a9ffbcbb0e921(current)
Close every instance except the current one:
mgmt.forceCloseInstance('ac12000231-a9ffbcbb0e921')
After closing all the other instances, commit the changes:
mgmt.commit()
Now restart your Gremlin server and run your query; it should work.
Method 2:
If the problem persists, just kill your gremlin-server and start it again a few times; it should work, and the load command should then work as well.
Another reason this happens is that the data was not restored properly. If you are using a cluster, take the backup on all the nodes, then restore it on your destination node or nodes. I used nodetool for the backup and sstableloader for restoring the data.
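For example, a sketch with placeholder keyspace, table, and host names (take the snapshot on every node of the source cluster):
nodetool snapshot mykeyspace
sstableloader -d <target-node-ip> /path/to/mykeyspace/mytable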
I configured Hive with MySQL as my metastore. I can enter the hive shell and create tables successfully.
Spark version: 2.4.0
Hive version: 3.1.1
When I try to run a SparkSQL program using spark-submit, I'm getting the below error.
2019-03-02 15:43:41 WARN HiveMetaStore:622 - Retrying creating default database after error: Error creating transactional connection factory
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
......
......
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;
......
......
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "HikariCP" plugin to create a ConnectionPool gave an error : The connection pool plugin of type "HikariCP" was not found in the CLASSPATH!
Please let me know if anyone can help me in this regard.
I don't know if you have already solved this problem, but here is my advice: the default database connection pool in hive-site.xml is HikariCP. Search hive-site.xml for datanucleus.connectionPoolingType; its value will be HikariCP. Change it to dbcp, since you are using MySQL as your metastore.
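The resulting hive-site.xml entry would look something like this:
<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>dbcp</value>
</property>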
And finally, don't forget to add the mysql-connector-java-5.x.x.jar to a path like
/home/hadoop/spark-2.3.0-bin-hadoop2.7/jars
I am not able to connect to a Cassandra cluster using this code:
public static boolean tableCreate() {
    // Query
    String query = "CREATE KEYSPACE store WITH replication "
            + "= {'class':'SimpleStrategy', 'replication_factor':1};";
    // Creating the Cluster object
    Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").withPort(9042).build();
    // Creating the Session object
    Session session = cluster.connect("tutorialspoint");
    // Executing the query
    session.execute(query);
    // Using the keyspace
    session.execute("USE store");
    System.out.println("Keyspace created with store name");
    return true;
}
It is giving me this error:
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1 (null))
What is my mistake in the code above?
Cassandra is running locally on my Windows 10 64-bit machine, and I have also disabled the firewall.
You may need to check and possibly update the version of the DataStax driver that you are using. I faced exactly the same error (i.e., the same error message while connecting), and after upgrading the DataStax driver version the problem went away and I could connect to the DB.
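As a sanity check after upgrading, a minimal 3.x-driver sketch of the question's flow would be something like the following; note that it connects without a keyspace first, since connecting to a keyspace ("tutorialspoint") that does not exist yet can itself fail:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class ConnectCheck {
    public static void main(String[] args) {
        // Connect without naming a keyspace; "store" does not exist yet
        // and "tutorialspoint" may not exist at all.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9042)
                .build();
             Session session = cluster.connect()) {
            session.execute("CREATE KEYSPACE IF NOT EXISTS store WITH replication "
                    + "= {'class':'SimpleStrategy', 'replication_factor':1}");
            session.execute("USE store");
            System.out.println("Keyspace created with store name");
        }
    }
}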
Similar Issue: Unable to connect to Cassandra cluster running on local host
In our dev cluster, which had been running smoothly before, when we replace a node (which we have been doing constantly), the following failure occurs and prevents the replacement node from joining.
The Cassandra version is 2.0.7.
What can be done about it?
ERROR [STREAM-IN-/10.128.---.---] 2014-11-19 12:35:58,007 StreamSession.java (line 420) [Stream #9cad81f0-6fe8-11e4-b575-4b49634010a9] Streaming error occurred
java.lang.AssertionError: Unknown keyspace system_traces
at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:260)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
at org.apache.cassandra.streaming.StreamSession.addTransferRanges(StreamSession.java:239)
at org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:436)
at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:368)
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:289)
at java.lang.Thread.run(Thread.java:745)
I got the same error while I was trying to set up my cluster. As I was experimenting with different switches in cassandra.yaml, I restarted the service multiple times and removed the system dir under the data directory (/var/lib/cassandra/data, as mentioned here).
I guess for some reason Cassandra tries to load the system_traces keyspace (the other dir under /var/lib/cassandra/data) and fails, and nodetool throws this error. You can just remove both system and system_traces before starting the Cassandra service, or even better, delete all the content of the commitlog, data, and saved_caches directories there.
Obviously, this works only if you don't have any data in the system yet.
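For reference, with the default paths mentioned above, the cleanup amounts to something like this (destructive, so only do it on a node whose data you can afford to lose):
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/system /var/lib/cassandra/data/system_traces
sudo service cassandra start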