I use Cassandra 3.7 and have a text column with a SASI index.
Let's assume I want to find column values that contain the '%' character somewhere in the middle.
The problem is that '%' is a wildcard character in LIKE clauses.
How do I escape the '%' character in a query like LIKE '%%%'?
Here is a test script:
DROP keyspace if exists kmv;
CREATE keyspace if not exists kmv WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor':'1'} ;
USE kmv;
CREATE TABLE if not exists kmv (id int, c1 text, c2 text, PRIMARY KEY(id, c1));
CREATE CUSTOM INDEX ON kmv.kmv ( c2 ) USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = {
'analyzed' : 'true',
'analyzer_class' : 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
'case_sensitive' : 'false',
'mode' : 'CONTAINS'
};
INSERT into kmv (id, c1, c2) values (1, 'f22', 'qwe%asd');
SELECT c2 from kmv.kmv where c2 like '%$$%$$%';
The select query returns nothing.
I think you can use the $$ syntax to achieve this. Your WHERE clause would be:
LIKE '%$$%$$%'
Source: https://docs.datastax.com/en/cql/3.3/cql/cql_reference/escape_char_r.html
The percent sign is treated as a special character only at the beginning and the end of the pattern, so LIKE '%%%' works fine for your case.
cqlsh:kmv> SELECT c2 from kmv.kmv where c2 like '%%%';
c2
----------
qwe%asd
Looking at the source, however, I don't think there is a way to escape the percent sign when it is the first or last character of the pattern, which means you can't use LIKE queries to find values that start with '%'.
a is a value entered by the user.
cur.execute('SELECT * FROM table WHERE column_name LIKE ?', ('%{}%'.format(a),))
The query must return rows matching the entered value. It currently searches one column only; I want to match against more than one column. I need to use the OR operator but am not sure of the syntax. Could anyone please help me?
Just expand the WHERE clause with more conditions, separated by OR; accordingly, you need to repeat the bind parameter once per condition:
cur.execute('SELECT * FROM table WHERE col1 LIKE ? OR col2 LIKE ?',
            ('%{}%'.format(a), '%{}%'.format(a)))
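A runnable sketch of this pattern against an in-memory SQLite database (the table and column names here are made up for the demo):

```python
import sqlite3

# In-memory database with two searchable text columns (demo schema).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE people (name TEXT, city TEXT)")
cur.executemany("INSERT INTO people VALUES (?, ?)",
                [("Alice", "London"), ("Bob", "Berlin"), ("Carol", "Lisbon")])

a = "ber"  # user-entered search term
pattern = "%{}%".format(a)
# Repeat the bind parameter once per LIKE condition.
cur.execute("SELECT name, city FROM people WHERE name LIKE ? OR city LIKE ?",
            (pattern, pattern))
rows = cur.fetchall()
print(rows)  # [('Bob', 'Berlin')]
```

Note that SQLite's LIKE is case-insensitive for ASCII by default, so 'ber' also matches 'Berlin'.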
I am trying to insert a large JSON document into a column of a Cassandra table.
The table schema looks like:
Table1 (EmployeeName text, EmployeeID text, EmployeeJSON text)
INSERT INTO Table1 (EmployeeName, EmployeeID, EmployeeJSON)
VALUES ('Razzaq', '234', 'JSON string');
Note: the JSON string is a huge one, about 212 KB in size. How can I insert it into this table? Should I use the same method or something else?
You could insert it using the fromJson() function for a single column value.
It may only be used in the VALUES clause of an INSERT statement or as one of the column values in an UPDATE, DELETE, or SELECT statement. For example, it cannot be used in the selection clause of a SELECT statement.
Example:
Table1 (EmployeeName text, EmployeeID text, EmployeeJSON text )
INSERT INTO Table1 (EmployeeName, EmployeeID, EmployeeJSON)
VALUES ('Razzaq','234',fromJson('{
"employeeCompany" : "Acme Corp",
"employeeCountry" : "Egypt",
"employeeSalary" : [{
"currency" : "Dollar",
"salaryVariance" : { "cashPay" : 90%, "equity" : 10% }
}]
}'))
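Since EmployeeJSON is a plain text column, another simple route is to serialize the document client-side and bind the resulting string as an ordinary parameter; a 212 KB value is well within a text column's limits. A minimal sketch in Python (stdlib only; the field names and the padding are stand-ins for the real payload):

```python
import json

# Build a large-ish JSON document (stand-in for the real 212 KB payload).
employee_doc = {
    "employeeCompany": "Acme Corp",
    "employeeCountry": "Egypt",
    "employeeSalary": [{"currency": "Dollar",
                        "salaryVariance": {"cashPay": "90%", "equity": "10%"}}],
    "notes": "x" * 200_000,  # pad to roughly 200 KB
}

# Serialize once; this string is what gets bound to the text column.
payload = json.dumps(employee_doc)
print(len(payload))

# Round-trip check: the stored text parses back unchanged.
assert json.loads(payload) == employee_doc
```

With the DataStax Python driver (not shown here, and assuming a live cluster) you would prepare the INSERT once and pass `payload` as a regular bind parameter, e.g. `session.execute(prepared, ('Razzaq', '234', payload))`.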
Json Support in Cassandra
In Sqlite we have table1 with column column1
there are 4 rows with following values for column1
(p1,p10,p11,p20)
DROP TABLE IF EXISTS table1;
CREATE TABLE table1(column1 NVARCHAR);
INSERT INTO table1 (column1) values ('p1'),('p10'),('p11'),('p20');
Select instr(',p112,p108,p124,p204,p11,p1124,p1,p10,p20,',','+column1+',') from table1;
We have to get the position of each value of column1 in the given string:
,p112,p108,p124,p204,p11,p1124,p1,p10,p20,
the query
Select instr(',p112,p108,p124,p204,p11,p1124,p1,p10,p20,',column1) from table1;
returns values
(2,7,2,17)
which is not what we want
the query
Select instr(',p112,p108,p124,p204,p11,p1124,p1,p10,p20,',','+column1+',') from table1;
returns 9 for all rows;
it turned out that 9 is the position of the first '0' character. Why?
How can we get the exact positions of the column1 values in the given string in SQLite?
In SQLite the concatenation operator is ||, not + (as in SQL Server), so do this:
Select instr(',p112,p108,p124,p204,p11,p1124,p1,p10,p20,',',' || column1 || ',') from table1;
What your code did was numeric addition, which resulted in 0 because neither string operand could be successfully converted to a number;
so instr() was searching for '0' and always found it at position 9 of the string ',p112,p108,p124,p204,p11,p1124,p1,p10,p20,'.
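The whole exchange can be reproduced with Python's built-in sqlite3 module; wrapping the value in commas via || forces whole-token matches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (column1 NVARCHAR)")
cur.executemany("INSERT INTO table1 (column1) VALUES (?)",
                [("p1",), ("p10",), ("p11",), ("p20",)])

haystack = ",p112,p108,p124,p204,p11,p1124,p1,p10,p20,"
# || concatenates; ',' || column1 || ',' makes instr() match whole tokens.
cur.execute("SELECT column1, instr(?, ',' || column1 || ',') FROM table1",
            (haystack,))
positions = cur.fetchall()
print(positions)  # [('p1', 31), ('p10', 34), ('p11', 21), ('p20', 38)]
```

Each reported position is the 1-based index of the leading comma of the matched token.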
I have a column in a Cassandra database of type map<text,text>.
I insert data into this table as:
INSERT INTO "Table1" (col1) VALUES ({'abc':'abc','hello':'world','flag':'true'});
So, in my code i can get the data as :
{
"abc":"abc",
"hello":"world",
"flag":"true"
}
But now I want it like this:
{
"abc":"abc",
"hello":"world",
"flag":{
"data":{ "hi":"cassandra"},
"working":"no"
}
}
For this, when I try the INSERT query, it says the value does not match the type map<text,text>.
How can I make this work?
The problem here (in your second example) is that the type of col1 is a map<text,text> but flag is a complex type and no longer matches that definition. One way to solve this would be to create individual TEXT columns for each property, as well as a user defined type for flag and the data it contains:
> CREATE TYPE flagtype (data map<text,text>,working text);
> CREATE TABLE table1 (abc text,
hello text,
flag frozen<flagtype>,
PRIMARY KEY (abc));
Then INSERTing the JSON text from your second example works.
> INSERT INTO table1 JSON '{"abc":"abc",
"hello":"world",
"flag":{"data":{"hi":"cassandra"},
"working":"no"}}';
> SELECT * FROM table1;
abc | flag | hello
-----+--------------------------------------------+-------
abc | {data: {'hi': 'cassandra'}, working: 'no'} | world
(1 rows)
If you are stuck on using the map<text,text> type, and want the JSON sub-properties of the value to be treated as one large text string, you could try a simple table like this:
CREATE TABLE stackoverflow.table2 (
key1 text PRIMARY KEY,
col1 map<text, text>);
And on your INSERTs just escape out the inner quotes:
> INSERT INTO table2 JSON '{"key1":"1","col1":{"abc":"abc","hello":"world"}}';
> INSERT INTO table2 JSON '{"key1":"2","col1":{"abc":"abc","hello":"world",
"flag":"{\"data\":{\"hi\":\"cassandra\"},\"working\":\"no\"}"}}';
> SELECT * FROM table2;
key1 | col1
------+----------------------------------------------------------------------------------------
2 | {'abc': 'abc', 'flag': '{"data":{"hi":"cassandra"},"working":"no"}', 'hello': 'world'}
1 | {'abc': 'abc', 'hello': 'world'}
(2 rows)
That's a little hacky and will probably require some additional parsing on your application side. But it gets you around the problem of having to define each column.
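The inner-quote escaping in that INSERT does not have to be written by hand; serializing the nested object first and then the outer document produces exactly that doubly-encoded string. A sketch in Python:

```python
import json

# Nested object that must be stored as a string inside map<text,text>.
flag = {"data": {"hi": "cassandra"}, "working": "no"}

# Serialize the inner object first, then embed the resulting string.
col1 = {"abc": "abc", "hello": "world", "flag": json.dumps(flag)}
row = {"key1": "2", "col1": col1}

# The outer dumps() escapes the inner quotes automatically (\" ... \").
payload = json.dumps(row)
print(payload)

# Reading it back requires the matching double decode.
decoded_flag = json.loads(json.loads(payload)["col1"]["flag"])
```

The `payload` string is what you would pass to `INSERT INTO table2 JSON '...'`, and the double `json.loads` is the "additional parsing on the application side" mentioned above.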
Is it possible for a Cassandra map to hold different data types? For example, if I have a table like
(id int, value map<text,text>)
Now I want to insert values into this table like
(1,{'test':'test1'})
(2,{'a':1})
(3,{'c':2})
The Cassandra Map type does not support values (or keys) of differing types. However, you could create a User Defined Type to handle that.
aploetz#cqlsh:stackoverflow2> CREATE TYPE testac (test text, a int, c int);
aploetz#cqlsh:stackoverflow2> CREATE TABLE testactable (
key int,
values frozen<testac>,
PRIMARY KEY (key));
aploetz#cqlsh:stackoverflow2> INSERT INTO testactable (key,values)
VALUES (1,{test: 'test1', a: 1, c: 2});
aploetz#cqlsh:stackoverflow2> SELECT * FROM testactable;
key | values
-----+-----------------------------
1 | {test: 'test1', a: 1, c: 2}
(1 rows)
Instead of using a map, in your case store it as a text (String) column, which will save you a lot of space. Keep the data in JSON format by stringifying it.
No, Cassandra does not support this. A Cassandra map is like a typed Java map, which does not support it either: every value you put in the map must match the declared data type.
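If you stay with map<text,text>, the stringify approach suggested above can be sketched in Python (client-side encoding only, no Cassandra calls; the row data comes straight from the question):

```python
import json

# Desired heterogeneous rows from the question.
rows = [(1, {"test": "test1"}), (2, {"a": 1}), (3, {"c": 2})]

# Encode each value to a text representation so it fits map<text,text>.
encoded = [(rid, {k: json.dumps(v) for k, v in m.items()})
           for rid, m in rows]
print(encoded)

# Decoding with json.loads restores the original types on read.
decoded = [(rid, {k: json.loads(v) for k, v in m.items()})
           for rid, m in encoded]
assert decoded == rows
```

Every value in `encoded` is a string, so each map now satisfies map<text,text>, at the cost of a decode step in the application.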