I have a simple single-node Cassandra setup (1.1.0, default settings). Whenever I try to create a keyspace in cassandra-cli, I get the error:
[default@unknown] create keyspace tax;
org.apache.thrift.transport.TTransportException
In the Cassandra server log, the exception stack trace is:
ERROR 12:15:04,722 Exception in thread Thread[MigrationStage:1,5,main]
java.lang.AssertionError
at org.apache.cassandra.db.DefsTable.updateKeyspace(DefsTable.java:441)
at org.apache.cassandra.db.DefsTable.mergeKeyspaces(DefsTable.java:339)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:269)
at org.apache.cassandra.service.MigrationManager$1.call(MigrationManager.java:214)
I tried deleting the contents of ./var/lib/cassandra/data and restarting the server and my Mac, but I still end up with the same issue.
Looks like the system keyspace was corrupted. Removing the data files from
/var/lib/cassandra/data
/var/lib/cassandra/commitlog
/var/lib/cassandra/saved_caches
and restarting the cassandra server fixed the issue. (The above directories are defined in $CASSANDRA_HOME/conf/cassandra.yaml)
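For reference, a minimal sketch of that cleanup as shell commands, assuming the default directory locations and that losing all local data is acceptable:

# stop the Cassandra server first, then:
rm -rf /var/lib/cassandra/data/*
rm -rf /var/lib/cassandra/commitlog/*
rm -rf /var/lib/cassandra/saved_caches/*
# start the Cassandra server again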
Following is the flow when adding a keyspace to Cassandra (as per comments in the Cassandra source code; correct me if I'm getting it wrong):
1) In the first step, it checks whether any new keyspaces were added.
2) In the second step, it checks whether any keyspaces were re-created; in this context, re-created means they were previously deleted but still exist in the low-level schema as empty keys.
3) In the final step, it updates modified keyspaces and saves keyspaces to drop them later.
While modifying a keyspace it calls the function "updateKeyspace", and here it seems that if the keyspace metadata is corrupt it throws an AssertionError.
So in your case it might be that you deleted the same keyspace and are trying to recreate it, which was causing this issue, or, as you mentioned, it was metadata corruption.
I have a Spark + Hive application.
It works fine. But at some point I had to create another Hive environment.
So I ran show create table ... and recreated the same view (with underlying tables). And added some data.
I can query the data from hive cli, etc.
but whenever I run my application it fails with
ERROR Failed to execute 'table' on 'org.apache.spark.sql.SparkSession' with args=([Type=java.lang.String, Value: <view name>])
I believe it refers to the line of code where I call sparkSession.table(<view-name>).
What steps can I take to troubleshoot such an issue?
UPD
Session declaration (I have also tried creating a session without this configuration):
.Config("spark.hadoop.google.cloud.auth.service.account.enable", "true")
.Config("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "some.file")
.Config("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
.Config("spark.sql.debug.maxToStringFields", int64 2048)
.Config("spark.debug.maxToStringFields", int64 2048)
Maybe a bit trivial, but when it comes to troubleshooting this kind of issue, really try and get to the root of the problem with a minimal setup:
I generally start off by starting the spark-shell.
Check whether it is possible to run spark.sql("SHOW DATABASES").show(20, false) (see the sketch after this list). If this fails, it's probably an issue with your Hive configuration.
Try and see whether you can run spark.table("your_table"). If not, it'll probably give you a clearer error (such as Table or view not found: ...).
If all of the above works, try to strip your application such that it only does that spark.table, which did work in your spark-shell at that point in time. If that suddenly doesn't work, it might have to do with how the SparkSession is created in your application.
If that works, try and uncomment the code piece by piece, until you're back to your original code to better pinpoint where it fails.
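For illustration, the first two checks might look like this in a spark-shell session (the view name is a placeholder):

$ spark-shell
scala> spark.sql("SHOW DATABASES").show(20, false)  // fails here => likely a Hive configuration issue
scala> spark.table("your_view").show(5)             // fails here => usually a clearer error, e.g. Table or view not found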
Using sstableloader to load a new table from a keyspace snapshot on a different cluster, I get an error.
Steps to recreate:
create this table
cp snapshot files to a temp directory temp_dir.
run sstableloader (it errors out)
Anybody know what the problem is? How can I fix it? Thank you.
Details:
sstableloader --nodes vm_cdb01 -u dba -p xxx /xxx/temp_dir/snapshot_directory
WARN 21:21:42,124 Small cdc volume detected at /cdc_raw; setting cdc_total_space_in_mb to 1773. You can override this in cassandra.yaml
WARN 21:21:42,302 Only 45.202GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
All host(s) tried for query failed (tried: vm-cdb01/10.28.60.76:9042 (com.datastax.driver.core.exceptions.TransportException: [vm-cdb01/xx.xxx.76] Cannot connect))
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: vm-cdb01/xx.xxx.76:9042 (com.datastax.driver.core.exceptions.TransportException: [vm-cdb01/xx.xxx.76] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:73)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.load(BulkLoader.java:80)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:48)
Exception in thread "main" org.apache.cassandra.tools.BulkLoadException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: vm-cdb01/xx.xxx.76:9042 (com.datastax.driver.core.exceptions.TransportException: [vm-cdb01/xx.xxx.76] Cannot connect))
at org.apache.cassandra.tools.BulkLoader.load(BulkLoader.java:93)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:48)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: vm-cdb01/xx.xxx.76:9042 (com.datastax.driver.core.exceptions.TransportException: [vm-cdb01/xx.xxx.76] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.init(Cluster.java:163)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:334)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:309)
at com.datastax.driver.core.Cluster.connect(Cluster.java:251)
at org.apache.cassandra.utils.NativeSSTableLoaderClient.init(NativeSSTableLoaderClient.java:73)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:159)
at org.apache.cassandra.tools.BulkLoader.load(BulkLoader.java:80)
... 1 more
How anyone thought sstableloader was easy enough to run is, to me, quite ridiculous. It makes assumptions that should be covered by program switches. It has been a while since I have run sstableloader, but because of how it works, I ended up creating a shell script to do the work - especially if you want to copy, say, multiple tables from multiple keyspaces to different locations. At a high level, here is how my script runs the command:
sstableloader -u ${targetUser} -pw ${cassandraCopyTargetPassword} -d ${targetHost} $(pwd)
Everything you supply is for the target. If you are running on a different port than the default, you'll need to specify "-p ####". I've noticed you have something "off" for your "-p" (port) value - like a directory path.
Now as for what it's actually loading, that's where I think the entire process falls apart - and someone should address it as it's ridiculous what the assumptions are (again, instead of switches).
sstableloader looks at which directory you're in - that has to match up directly with the keyspace and table where your TARGET table will reside.
For example, on the source, let's assume I want to copy all of the sstables from, say, the /opt/cassandra/data/sourceKeyspace/sourceTable directory, BUT they should be mapped to targetKeyspace/targetTable on the TARGET system. I would need to create a directory that matches targetKeyspace/targetTable somewhere on the source host. For example, I could create that directory as /tmp/targetKeyspace/targetTable (it doesn't have to be in /tmp, but it's as good a place as any). I would then change directories to that location, create soft links from that directory to all of the sstables in /opt/cassandra/data/sourceKeyspace/sourceTable, and run sstableloader, supplying the name of the target directory created above (or $(pwd) if you're sitting in the target directory, as I do with my script). Confusing, to say the least.
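To make that concrete, here is a sketch of the workflow; the keyspace/table names, paths, and credential variables are all hypothetical:

# source sstables live in /opt/cassandra/data/sourceKeyspace/sourceTable;
# they should land in targetKeyspace.targetTable on the target cluster
mkdir -p /tmp/targetKeyspace/targetTable
cd /tmp/targetKeyspace/targetTable
ln -s /opt/cassandra/data/sourceKeyspace/sourceTable/* .
sstableloader -u "$targetUser" -pw "$targetPassword" -d "$targetHost" "$(pwd)"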
Again, the idea that this was a good idea on how to make it work is beyond me. Anyway, hopefully this helps you get it working.
We tried adding a new column to an existing table in Cassandra. It ended up giving an exception "org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch".
When we execute the command "describe " --> the new column shows as added.
When we try to insert data --> it throws an exception saying the newly added column does NOT exist.
We tried to recreate the table by dropping it --> the table gets dropped, but recreating it says the table already exists.
Seems like some issue with Cassandra sync.
I want this issue to be resolved without any need to restart the Cassandra Nodes.
Can someone suggest the right approach to resolve this?
Thanks.
The rolling restart of the cluster resolved this issue. Thanks.
Flushing memtables (nodetool flush) should resolve the issue.
Flushing does not require restarting Cassandra, whereas draining does.
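For example (the keyspace and table names are placeholders):

# flush memtables for the affected table, on each node
nodetool flush my_keyspace my_table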
See:
Column family ID mismatch during ALTER TABLE
I have used sstableloader many times successfully, but this time I got the following error:
[root@localhost pengcz]# /usr/local/cassandra/bin/sstableloader -u user -pw password -v -d 172.21.0.131 ./currentdata/keyspace/table
Could not retrieve endpoint ranges:
java.lang.IllegalArgumentException
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267)
at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
at org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1833)
at org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1126)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
... 2 more
I don't know whether this error is related to a Linux crash on one of the cluster nodes.
Any advice will be appreciated!
Are you running a different version of sstableloader than the version of your cluster? This looks like https://issues.apache.org/jira/browse/CASSANDRA-9324, which occurs when using the 2.1 loader on a 2.0 cluster.
In the command you mentioned, /usr/local/cassandra/bin/sstableloader -u user -pw password -v -d 172.21.0.131 ./currentdata/keyspace/table, the backup directory is ./currentdata/keyspace/table. sstableloader treats the parent of the backup directory as the keyspace name (here, parent -> keyspace and backup dir -> table), so rename the parent directory of the table dir to the name of the keyspace you are restoring into, and likewise rename the table dir to the target table name; the parent directory must have the same name as the keyspace into which you are restoring the data. Apart from this, please make sure your table name and keyspace name are not Cassandra reserved words.
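As a sketch, assuming (hypothetically) you are restoring into keyspace my_ks and table my_table, the rename would look like:

mv ./currentdata/keyspace ./currentdata/my_ks
mv ./currentdata/my_ks/table ./currentdata/my_ks/my_table
/usr/local/cassandra/bin/sstableloader -u user -pw password -v -d 172.21.0.131 ./currentdata/my_ks/my_table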
I realise this is an old question but I'm posting the answer here for posterity. It looks like you're hitting CASSANDRA-10700.
TL;DR - When sstableloader tries to read the schema, it fails when it comes across a dropped collections column.
The problem only exists in the sstableloader utility, and you can easily work around it by getting a copy from Cassandra 2.1.13+ as documented here. Cheers!
I'm trying to connect to a Datastax Community Edition server 2.1.2 via JDBC but I keep getting the following error no matter what I try to do, even when issuing a very basic command like select * from system_traces.events;
InvalidRequestException(why:Keyspace 'keyspace1' does not exist)
Issuing that same command via cqlsh works properly, so it seems to be a JDBC issue.
InvalidRequestException(why:Keyspace 'keyspace1' does not exist)
at org.apache.cassandra.cql.jdbc.CassandraConnection.<init>(CassandraConnection.java:229)
at org.apache.cassandra.cql.jdbc.CassandraDriver.connect(CassandraDriver.java:92)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at railo.commons.db.DBUtil.getConnection(DBUtil.java:109)
at railo.runtime.db.DatasourceConnectionPool.loadDatasourceConnection(DatasourceConnectionPool.java:89)
at railo.runtime.db.DatasourceConnectionPool.getDatasourceConnection(DatasourceConnectionPool.java:81)
at railo.runtime.db.DatasourceManagerImpl.getConnection(DatasourceManagerImpl.java:65)
at railo.runtime.tag.Query.executeDatasoure(Query.java:696) ...
Any ideas? TIA!
InvalidRequestException(why:Keyspace 'keyspace1' does not exist)
This exception means you are trying to query for a keyspace (in this case "Keyspace1") that hasn't yet been added to Cassandra. Try creating the keyspace before querying it.
You're probably doing a select (SELECT * FROM "Keyspace1"."Standard1") that you're not seeing, or passing initialisation parameters to JDBC telling it to connect to Keyspace1. Verify that your code isn't looking for the non-existent keyspace by searching through the queries you have, specifically looking for Keyspace1 (or "Keyspace1", since in this case the keyspace name is case-sensitive).
On a side note, "Keyspace1"."Standard1" tends to be the standard ks.cf pair used in Cassandra examples, so it would be good to scan your code for them to make sure they are created before they are queried.
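For example, you could create the keyspace (and, if needed, the table) in cqlsh before connecting over JDBC; the replication settings and table schema below are only illustrative:

cqlsh> CREATE KEYSPACE "Keyspace1" WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> CREATE TABLE "Keyspace1"."Standard1" (key blob PRIMARY KEY, value blob);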