I am new to Cassandra and I am using version 3.11.
I have the UDT below and I am trying to write an ALTER query to change it. I am able to write separate commands for RENAME and ADD, but I want to have everything in one query. Could I get help combining this into one query?
create type if not exists payment_types (
billing_type int,
payer_id text
);
I want to ALTER TYPE payment_types to have the following fields:
billing_type text,
billing_id text,
biller_name text,
payer_name text
Also, when I was executing it I got:
Error from server: code=2200 [Invalid query] message="Altering of types is not allowed"
You are using Cassandra 3.11. Here are the details:
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlAlterTable.html
There are restrictions on which data type changes are allowed; see the compatibility table here:
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cql_data_types_c.html#cql_data_types_c__cql_data_type_compatibility
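As far as I know, ADD and RENAME cannot be combined into a single ALTER TYPE statement, and the "Altering of types is not allowed" error most likely comes from trying to change billing_type from int to text, which 3.11 no longer permits. A sketch with separate statements, using the type from your question (billing_type_text is just a made-up name for an extra field):
ALTER TYPE payment_types ADD billing_id text;
ALTER TYPE payment_types ADD biller_name text;
ALTER TYPE payment_types RENAME payer_id TO payer_name;
-- billing_type cannot be changed from int to text in 3.11;
-- if you need a text version, add a new field instead:
ALTER TYPE payment_types ADD billing_type_text text;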
I'm a beginner in Cassandra and I have a table like this:
CREATE TABLE Books(
Title text PRIMARY KEY,
Authors set<text>,
Family set <text>,
Publisher text,
Price decimal
);
(the other options are missing because it's only an example)
now I would like to execute this query:
DELETE Price FROM Books WHERE Authors CONTAINS 'J.K. Rowling' IF EXISTS;
But it doesn't work. I searched on Google but found nothing.
I hope somebody can help me, and sorry if my English is not very good.
"but it doesn't work."
That doesn't really give us enough information to help you. Usually, you'll want to provide an error message. I built your table locally, inserted data, and tried your approach. This is the error that I see:
InvalidRequest: Error from server: code=2200 [Invalid query]
message="Some partition key parts are missing: title"
DELETE requires that the appropriate PRIMARY KEY components be specified in the WHERE clause. In your case, Authors is not part of the PRIMARY KEY definition. Given the error message returned (and the table definition), specifying title is the only way to delete rows from this table.
aploetz#cqlsh:stackoverflow> DELETE FROM Books
WHERE title = 'Harry Potter and the Chamber of Secrets'
IF EXISTS;
[applied]
-----------
True
Can I do a query like this? UPDATE Books SET Family = Family + {'Fantasy'} WHERE Authors CONTAINS 'J.K. Rowling';
No. This fails for the same reason. Writes in Cassandra (INSERTs, UPDATEs, DELETEs are all writes) require the primary key (specifically, the partition key) in the WHERE clause. Without that, Cassandra can't figure out which node holds the data, and it needs that to perform the write.
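For illustration, the same update works once you supply the partition key, e.g. using the title from the DELETE example above:
UPDATE Books SET Family = Family + {'Fantasy'}
WHERE Title = 'Harry Potter and the Chamber of Secrets';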
I am using the Cassandra database (cassandra:2.2.3) in my application, and I need to use the LIKE operator to get all names that start with 'A' (i.e. LIKE 'A%').
CREATE CUSTOM INDEX empnames_idx ON d.emp (ename)
USING 'org.apache.cassandra.index.sasi.SASIIndex';
After creating the index, when I run the query below:
select * from d.emp where ename like 'A%';
Error from server: code=2000 [Syntax error in CQL query] message="line 1:49 no viable alternative at input 'LIKE' (...* from d.emp where ename like .....)"
LIKE queries are not supported until Cassandra 3.4+, but you should upgrade to the latest 3.11 release. You may want to configure your index as well; an excellent walkthrough is here: http://www.doanduyhai.com/blog/?p=2058
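Once you are on 3.11, the index and query should work roughly like this (a sketch using the keyspace and table names from your question; PREFIX mode is, as far as I know, the SASI default and is what allows 'A%'-style patterns):
CREATE CUSTOM INDEX empnames_idx ON d.emp (ename)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = { 'mode': 'PREFIX' };

SELECT * FROM d.emp WHERE ename LIKE 'A%';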
How can I append a new element to a set that is inside a custom type in Cassandra?
The custom type is:
CREATE TYPE custom_type (
normal_type TEXT,
set_type Set<TEXT>
);
and the table to be updated is:
CREATE TABLE test_table (
id TEXT,
my_type FROZEN<custom_type>,
clustering_key TEXT,
PRIMARY KEY ((id),clustering_key)
);
I tried the query below but it did not work.
#Query("update test_table set my_type.set_type = my_type.set_type + {'newelement'} where id=?1 and clustering_key=?2")
Any idea on how to do that?
Using cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4
When you declare the field as frozen, the whole value is treated as one piece (a blob), so you can't update parts of this field. The official documentation states:
When using the frozen keyword, you cannot update parts of a user-defined type value. The entire value must be overwritten. Cassandra treats the value of a frozen, user-defined type like a blob.
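So to add 'newelement' you have to read the current value on the client side and write the whole UDT back, something like this sketch in plain CQL (the other field values are placeholders for whatever is currently stored):
UPDATE test_table
SET my_type = { normal_type: 'current value',
                set_type: {'existing element', 'newelement'} }
WHERE id = ? AND clustering_key = ?;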
I'm trying to run the following example from here
CREATE TYPE address (
street text,
city text,
zip int
);
CREATE TABLE user_profiles (
login text PRIMARY KEY,
first_name text,
last_name text,
email text,
addresses map<text, address>
);
However, when I try to create the user_profiles table, I get the following error:
InvalidRequest: code=2200 [Invalid query] message="Non-frozen collections are not
allowed inside collections: map<text, address>"
Any thoughts on why this could be happening?
I am running 2.1.8 and I get the same error message. To fix this, you need the frozen keyword:
CREATE TABLE user_profiles (
login text PRIMARY KEY,
first_name text,
last_name text,
email text,
addresses map<text, frozen <address>>
);
Frozen is necessary for UDTs (for now) as it serializes them into a single value. A similar, better example for you to follow might be the one in the User Defined Type documentation. Give that a try.
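As a quick sketch, writing into the frozen UDT inside the map would then look something like this (the login and address values are just placeholders):
INSERT INTO user_profiles (login, first_name, addresses)
VALUES ('jsmith', 'John',
        {'home': {street: '123 Main St', city: 'Springfield', zip: 12345}});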
I was getting this message when I mistakenly used "string" instead of "text" in a Cassandra map, like:
mymap map<bigint, string>
I found this Stack Overflow thread from Google, and I thought this information could save someone a few minutes of their time.
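In other words, the declaration just needs the CQL type name text:
mymap map<bigint, text>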
Non-frozen UDTs are not yet supported. The reason for asking the user to explicitly specify this keyword for each UDT is to be able to introduce mutable UDTs in 3.x without breaking existing code.
We have a CQL table that looks something like this:
CREATE table data (
occurday text,
seqnumber int,
occurtimems bigint,
unique bigint,
fields map<text, text>,
primary key ((occurday, seqnumber), occurtimems, unique)
)
I can query this table from cqlsh like this:
select * from data where seqnumber = 10 AND occurday = '2013-10-01';
This query works and returns the expected data.
If I execute this query as part of a LOAD from within Pig, however, things don't work.
-- Need to URL encode the query
data = LOAD 'cql://ks/data?where_clause=seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27' USING CqlStorage();
gives
InvalidRequestException(why:seqnumber cannot be restricted by more than one relation if it includes an Equal)
at org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
at org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.prepareQuery(CqlPagingRecordReader.java:591)
at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader$RowIterator.executeQuery(CqlPagingRecordReader.java:621)
Shouldn't these behave the same? Why is the version through Pig failing where the straight cqlsh command works?
Hadoop is using CqlPagingRecordReader to try to load your data. This is leading to queries that are not identical to what you have entered. The paging record reader is trying to obtain small slices of Cassandra data at a time to avoid timeouts.
This means that your query is executed as:
SELECT * FROM "data" WHERE token("occurday","seqnumber") > ? AND
token("occurday","seqnumber") <= ? AND occurday='A Great Day'
AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
And this is why you are seeing your repeated key error. I'll submit a bug to the Cassandra Project.
Jira:
https://issues.apache.org/jira/browse/CASSANDRA-6151