Astyanax ALTER KEYSPACE CQL - cassandra

I'm not managing to execute a CQL statement that should update the replication factor of a SimpleStrategy keyspace. It's annoying because the same statement works fine in cqlsh with all three CQL versions.
The keyspace context I'm using is set to use CQL v3:
.setCqlVersion("3.0.0")
The CQL:
"ALTER KEYSPACE \"" + ksContext.getKeyspaceName() + "\" WITH REPLICATION = { " +
"'class' : 'SimpleStrategy', 'replication_factor' : 3 };";
Stack trace:
InvalidRequestException(why:line 1:108 no viable alternative at character '}')
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:27)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$1.execute(ThriftSyncConnectionFactoryImpl.java:140)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:69)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:255)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$6.execute(ThriftColumnFamilyQueryImpl.java:694)
at smail.cli.astyanax.Astyanax.execCQL(Astyanax.java:75)
at smail.cli.astyanax.Astyanax.alterReplicationFactor(Astyanax.java:307)
at smail.cli.test.SchemaTest.alterReplicationFactor(SchemaTest.java:25)
at smail.cli.test.TestSuite.runTests(TestSuite.java:39)
at smail.cli.Main.main(Main.java:22)

Astyanax does have some problems with CQL3, but it's making great progress. Create a new keyspace, use CQL v2 (by setting it in the context), and try it without the " quotes:
String query = "ALTER KEYSPACE " + keyspaceName + " WITH REPLICATION = " +
    "{ 'class' : 'SimpleStrategy', 'replication_factor' : 2 };";
Note: this query sets the replication factor to 2, not 3.
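For reference, this is roughly how the context would be pointed at CQL2; a minimal sketch with hypothetical cluster, keyspace, and seed values (API details vary slightly across Astyanax versions, so adjust to your setup):

AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
    .forCluster("TestCluster")                    // hypothetical cluster name
    .forKeyspace("testks")                        // hypothetical keyspace
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        .setCqlVersion("2.0.0"))                  // CQL2 instead of "3.0.0"
    .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("pool")
        .setPort(9160)
        .setMaxConnsPerHost(1)
        .setSeeds("127.0.0.1:9160"))              // hypothetical seed
    .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
    .buildKeyspace(ThriftFamilyFactory.getInstance());
context.start();
Keyspace keyspace = context.getClient();

// any column family handle works for sending raw CQL through Astyanax
ColumnFamily<String, String> cf =
    ColumnFamily.newColumnFamily("cf", StringSerializer.get(), StringSerializer.get());
keyspace.prepareQuery(cf).withCql(query).execute();   // query = the ALTER KEYSPACE string above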

Related

INSERT OVERWRITE in Databricks overwriting complete data in table?

I have two tables, one with 50K records and the other with 2.5K records, and I want to update table one with those 2.5K records. Currently I do this using an INSERT OVERWRITE statement in a Spark MapR cluster, and I want to do the same in Azure Databricks, where I created the two tables, read the data from the on-prem servers into Azure, and then used an INSERT OVERWRITE statement. But when I do this, my previous/history data is completely replaced with the new data.
In the MapR cluster:
src_df_name.write.mode("overwrite").format("hive").saveAsTable(s"cs_hen_mbr_stg") //stage table with 2.5K records.
spark.sql(s"INSERT OVERWRITE TABLE cs_hen_mbr_hist " +
s"SELECT NAMED_STRUCT('INID',stg.INID,'SEG_NBR',stg.SEG_NBR,'SRC_ID',stg.SRC_ID, "+
s"'SYS_RULE',stg.SYS_RULE,'MSG_ID',stg.MSG_ID, " +
s"'TRE_KEY',stg.TRE_KEY,'PRO_KEY',stg.PRO_KEY, " +
s"'INS_DATE',stg.INS_DATE,'UPDATE_DATE',stg.UPDATE_DATE,'STATUS_KEY',stg.STATUS_KEY) AS key, "+
s"stg.MEM_KEY,stg.INDV_ID,stg.MBR_ID,stg.SEGO_ID,stg.EMAIL, " +
s"from cs_hen_mbr_stg stg" )
By doing the above in the MapR cluster I was able to update values, but when I try the same in Azure Databricks, my history data gets lost.
In Databricks:
val VW_HISTORY_MAIN=spark.read.format("parquet").option("header","true").load(s"${SourcePath}/VW_HISTORY")
VW_HISTORY_MAIN.write.mode("overwrite").format("hive").saveAsTable(s"demo.cs_hen_mbr_stg") //writing this to table in databricks.
spark.sql(s"INSERT OVERWRITE TABLE cs_hen_mbr_hist " +
s"SELECT NAMED_STRUCT('INID',stg.INID,'SEG_NBR',stg.SEG_NBR,'SRC_ID',stg.SRC_ID, "+
s"'SYS_RULE',stg.SYS_RULE,'MSG_ID',stg.MSG_ID, " +
s"'TRE_KEY',stg.TRE_KEY,'PRO_KEY',stg.PRO_KEY, " +
s"'INS_DATE',stg.INS_DATE,'UPDATE_DATE',stg.UPDATE_DATE,'STATUS_KEY',stg.STATUS_KEY) AS key, "+
s"stg.MEM_KEY,stg.INDV_ID,stg.MBR_ID,stg.SEGO_ID,stg.EMAIL, " +
s"from cs_hen_mbr_stg stg" )
Why is it not working with Databricks?
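For context, INSERT OVERWRITE on a non-partitioned Hive table replaces all existing rows rather than merging them, which matches the behavior described above. A minimal Java sketch (hypothetical table name hist) that demonstrates the semantics:

SparkSession spark = SparkSession.builder()
    .appName("overwrite-demo")
    .enableHiveSupport()
    .getOrCreate();

spark.sql("CREATE TABLE IF NOT EXISTS hist (id INT) USING hive");
spark.sql("INSERT INTO hist VALUES (1), (2)");        // pre-existing history rows
spark.sql("INSERT OVERWRITE TABLE hist VALUES (3)");  // replaces rows 1 and 2
spark.sql("SELECT * FROM hist").show();               // only row 3 remains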

Cassandra issuing the error "NoHostAvailable:" while selecting data from a table

I have created a keyspace and a table using a Cassandra 3.0 server, in a 3-node architecture. All three servers are working and I am able to connect to the 3 nodes. However, when I insert or select data using CQL, it shows the error "NoHostAvailable:". Could anyone please provide the reason and a solution for this issue?
Topology
nodetool status output
UN 172.30.1.7 230.22 KB 256 ? 2103dcd3-f09b-47da-a187-bf28b42b918e rack1
DN 172.30.1.20 ? 256 ? 683db65d-0836-40e4-ab5b-fa0db20bae30 rack1
DN 172.30.1.2 ? 256 ? 2b1f15d1-2f92-41ef-a03e-0e5f5f578cf4 rack1
Schema
Keyspace
CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 2};
Table
CREATE TABLE testrep(id INT PRIMARY KEY);
Note that from your nodetool status output, 2 nodes out of your 3-node cluster are down (DN).
You might be inserting with a Consistency Level that cannot be satisfied.
nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 237.31 MiB 256 ? 3c8a8d8d-992c-4b7c-a220-6951e37870c6 rack1
cassandra#cqlsh> create KEYSPACE qqq WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
cassandra#cqlsh> use qqq;
cassandra#cqlsh:qqq> CREATE TABLE testrep(id INT PRIMARY KEY);
cassandra#cqlsh:qqq> insert into testrep (id) VALUES ( 1);
cassandra#cqlsh:qqq> CONSISTENCY
Current consistency level is ONE.
cassandra#cqlsh:qqq> CONSISTENCY TWO ;
Consistency level set to TWO.
cassandra#cqlsh:qqq> insert into testrep (id) VALUES (2);
NoHostAvailable:
cassandra#cqlsh:qqq> exit
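The same applies from application code: with replication_factor 2 and two of the three nodes down, a write at consistency TWO cannot be acknowledged, while ONE usually can. A hedged sketch with the DataStax Java driver (3.x-style API; contact point taken from the topology above):

Cluster cluster = Cluster.builder().addContactPoint("172.30.1.7").build();
Session session = cluster.connect("qqq");

Statement insert = new SimpleStatement("INSERT INTO testrep (id) VALUES (3)")
    .setConsistencyLevel(ConsistencyLevel.ONE);  // satisfiable by the single live node
session.execute(insert);  // the same statement at ConsistencyLevel.TWO would raise NoHostAvailable

// Caveat: even at ONE, the write still fails if both replicas owning the
// partition happen to be on the two downed nodes.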

Cassandra keyspace with capital letter

Using the Achilles framework for Cassandra, it generates the CQL below, but it works only when the keyspace name is lowercase:
Session session = cluster.connect();
session.execute("DROP KEYSPACE IF EXISTS Smart;");
session.execute("CREATE KEYSPACE Smart WITH replication = { "
+ " 'class': 'SimpleStrategy', 'replication_factor': '3' }; ");
session.execute("CREATE TYPE IF NOT EXISTS smart.bio_udt("+
"birthplace text,"+
"diplomas list<text>,"+
"description text);");
session.execute("CREATE TABLE IF NOT EXISTS Smart.users("+
"id bigint,"+
"age_in_year int,"+
"bio frozen<\"Smart\".bio_udt>,"+
"favoritetags set<text>,"+
"firstname text,"+
"lastname text,"+
"preferences map<int, text>,"+
"PRIMARY KEY(id))");
with the error:
com.datastax.driver.core.exceptions.InvalidQueryException:
Statement on keyspace smart cannot refer to a user type in keyspace Smart;
user types can only be used in the keyspace they are defined in
What is the problem?
The issue is that the UDT reference in the failing CREATE TABLE query is case sensitive, whereas all the other queries are not. Because you quoted it as "Smart", Cassandra looked for a keyspace with exactly that casing, while every unquoted name in the other statements was lowercased to smart.
So to make your final query work, all you have to do is write it like this:
CREATE TABLE IF NOT EXISTS Smart.users(
id bigint,
age_in_year int,
bio frozen<"smart".bio_udt>,
favoritetags set<text>,
firstname text,
lastname text,
preferences map<int, text>,
PRIMARY KEY(id)
);
You actually have several options: you can use smart, Smart, or "smart", but not "Smart". The first three all refer to the same thing, namely smart; the last variant, however, tells Cassandra "I'm looking for a keyspace with this exact casing, starting with a capital S."
Without the quote notation, Cassandra treats keyspace names as case insensitive and lowercases them by default.
As proof, try this in cqlsh:
cqlsh> CREATE KEYSPACE THISISUPPER WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '3' };
cqlsh> DESCRIBE KEYSPACE thisisupper ;
CREATE KEYSPACE thisisupper WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true;
If you really want it to be case sensitive, use quotes; then you won't be able to access it unless you input the exact name of the keyspace.
cqlsh> CREATE KEYSPACE "HEYAQUOTES" WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '3' };
cqlsh> DESCRIBE KEYSPACE heyaquotes;
Keyspace 'heyaquotes' not found.
cqlsh> DESCRIBE KEYSPACE "heyaquotes";
Keyspace 'heyaquotes' not found.
cqlsh> DESCRIBE KEYSPACE HEYAQUOTES;
Keyspace 'heyaquotes' not found.
cqlsh> DESCRIBE KEYSPACE "HEYAQUOTES";
CREATE KEYSPACE "HEYAQUOTES" WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'} AND durable_writes = true;

Kundera Cassandra Delete a row based on Indexed column

How to delete rows in Cassandra based on an indexed column?
Tried (upload_id is added as an index on the table):
"Delete from table where upload_id = '" + uploadId + "'"
But this gives me the error "NON PRIMARY KEY found in where clause".
As a workaround I run
String selectQuery = "Select hashkey from table where upload_id='" + uploadId + "'";
entityManager.createNativeQuery(selectQuery).getResultList();
and delete all the elements in the returned list using a for loop.
This query is changed by Kundera to append LIMIT 100 ALLOW FILTERING.
I found a similar question at "Kundera for Cassandra - Deleting record by row key", but that was asked in 2012, and since then there have been a lot of changes to Cassandra and Kundera.
Kundera by default uses LIMIT 100. You can use query.setMaxResults(<integer>) to modify the limit accordingly and then run the loop.
Example:
Query findQuery = entityManager.createQuery("Select p from PersonCassandra p where p.age = 10", PersonCassandra.class);
findQuery.setMaxResults(5000);
List<PersonCassandra> allPersons = findQuery.getResultList();
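From there, the delete loop the question describes is plain JPA; a short sketch, assuming PersonCassandra is the mapped entity fetched above:

for (PersonCassandra p : allPersons) {
    entityManager.remove(p);  // deletes each row found via the indexed column
}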

Inserting data in table with umlaut is not possible

I am using Cassandra 1.2.5 (cqlsh 3.0.2) and trying to insert data with German characters into a small test database, which is not possible. I get back this message from cqlsh: "Bad Request: Input length = 1".
Below is the setup of the keyspace, the table, and the insert.
CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
use test;
CREATE TABLE testdata (
id varchar,
text varchar,
PRIMARY KEY (id)
);
This is working:
insert into testdata (id, text) values ('4711', 'test');
This is not allowed:
insert into testdata (id, text) values ('4711', 'töst`);
->Bad Request: Input length = 1
my locale is :de_DE.UTF-8
Does Cassandra 1.2.5 have a problem with umlauts?
I just did what you posted and it worked for me. The one thing that was different, however, is that instead of a single quote you finished 'töst` with a backtick. That doesn't allow me to finish the statement in cqlsh. When I replace it with 'töst' it succeeds and I get:
cqlsh:test> select * from testdata;
id | text
------+------
4711 | töst
