Cassandra CQL not equal operator on any column

Hi, is there any way I can use the != operator in CQL with Cassandra?
I am trying to use != on my column family, but when I run:
cqlsh:EPCContent> select * from "MediaCategoryGroup" where "MCategoryID"!=1;
I get this error:
Invalid syntax at line 1, char 55
select * from "MediaCategoryGroup" where "MCategoryID"!=1;
^

If you look at the Cassandra SELECT syntax, you will see that the WHERE clause must be "composed of relations on the columns that are part of the PRIMARY KEY and/or have a secondary index defined on them." Does your column conform to that specification?
Just for your information, the operators allowed in a WHERE relation are: '=' | '<' | '>' | '<=' | '>=' | IN | CONTAINS | CONTAINS KEY. Note that != is not among them; in CQL it only appears in lightweight-transaction IF conditions, not in WHERE clauses.

Use <> instead of !=. It works for me.

Related

Cassandra find records where list is empty [duplicate]

How do I query Cassandra for columns that are != null?
Select * from tableA where id != null;
Select * from tableA where name != null;
I then want to store these values and insert them into a different table.
I don't think this is possible with Cassandra. First of all, Cassandra CQL doesn't support the use of NOT or not equal to operators in the WHERE clause. Secondly, your WHERE clause can only contain primary key columns, and primary key columns will not allow null values to be inserted. I wasn't sure about secondary indexes though, so I ran this quick test:
create table nullTest (id text PRIMARY KEY, name text);
INSERT INTO nullTest (id,name) VALUES ('1','bob');
INSERT INTO nullTest (id,name) VALUES ('2',null);
I now have a table and two rows (one with null data):
SELECT * FROM nullTest;
 id | name
----+------
  2 | null
  1 | bob
(2 rows)
I then try to create a secondary index on name, which I know contains null values.
CREATE INDEX nullTestIdx ON nullTest(name);
It lets me do it. Now, I'll run a query on that index.
SELECT * FROM nullTest WHERE name=null;
Bad Request: Unsupported null value for indexed column name
And again, this demonstrates the premise: you can't expect to query for "not null" values when you can't even query for column values that might be null.
So, I'm thinking this can't be done. Also, if null values are a possibility in your primary key, then you may want to re-evaluate your data model. Again, I know the OP's question is about querying where data is not null. But as I mentioned before, Cassandra CQL doesn't have a NOT or != operator, so that's going to be a problem right there.
Another option, is to insert an empty string instead of a null. You would then be able to query on an empty string. But that still doesn't get you past the fundamental design flaw of having a null in a primary key field. Perhaps if you had a composite primary key, and only part of it (the clustering columns) had the possibility of being empty (certainly not part of the partitioning key). But you'd still be stuck with the problem of not being able to query for rows that are "not empty" (instead of not null).
NOTE: Inserting null values was done here for demonstration purposes only. It is something you should do your best to avoid, as inserting a null column value WILL create a tombstone. Likewise, inserting lots of null values will create lots of tombstones.
1) select * from test;
 name             | id | address
------------------+----+------------------
 bangalore        |  3 | ramyam_lab
 bangalore        |  4 | bangalore_ramyam
 bangalore        |  5 | jasgdjgkj
 prasad           | 11 | null
 prasad           | 12 | null
 india            |  6 | karnata
 india            |  7 | karnata
 ramyam-bangalore |  3 | jasgdjgkj
 ramyam-bangalore |  5 | jasgdjgkj
2) Cassandra doesn't support selecting on null values; null is shown in the output only for readability.
3) To handle missing values, store a sentinel string such as "not-available" or "null" instead; you can then select on that value.
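The sentinel-string approach from point 3 can be sketched in plain Python before the insert is issued (illustrative only; the function name and sentinel value are my own, not from any Cassandra API):

```python
def with_sentinel(value, sentinel="not-available"):
    """Replace None with a sentinel string before inserting,
    so rows can later be selected by the sentinel value."""
    return sentinel if value is None else value

# Example row with a missing address, as in the table above.
row = {"name": "prasad", "id": 11, "address": None}
prepared = {k: with_sentinel(v) for k, v in row.items()}
print(prepared["address"])  # not-available
```

A later `SELECT ... WHERE address = 'not-available'` (with a secondary index on the column) can then find these rows.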

Conversion of bigint unsigned column to bigint signed fails

I get a syntax error in a MySQL database (version 7.0; the error message reports MariaDB):
SELECT
r.id,
r.number,
r.numbertype,
r.forhandler,
LAG(r.number) OVER (PARTITION BY r.numbertype ORDER BY r.number) AS last_row_number,
LEAD(r.number) OVER (PARTITION BY r.numbertype ORDER BY r.number) AS next_row_number,
r.number -(LAG(r.number) OVER (PARTITION BY r.numbertype ORDER BY r.number)) AS gap_last_rk,
CAST (r.number-(LEAD(r.number) OVER (PARTITION BY r.numbertype ORDER BY r.`number`)) AS BIGINT SIGNED) AS gap_next_rk
FROM admin.numberranges r
WHERE r.status=2
ORDER BY r.number;
The syntax error is in the CAST part. My column number is a BIGINT UNSIGNED.
I tried CONVERT as well :-(
Error Code: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'BIGINT SIGNED) AS neg_number
First, you have a space after CAST, which can itself cause parse issues; write CAST(...). Second, the type BIGINT SIGNED is not allowed; check the list of types for CAST(expr AS type). When you want a signed number, use SIGNED or SIGNED INTEGER, as described in the documentation:
The type can be one of the following values:
[...]
SIGNED [INTEGER]
See the following queries on how to use the CAST() function (examples run on MySQL 8.0.23, the result might not be the same for MariaDB but the type restrictions are similar, see the MySQL documentation of CONVERT(expr, type)):
mysql> EXPLAIN Dummy;
+-------+-----------------+------+-----+---------+-------+
| Field | Type            | Null | Key | Default | Extra |
+-------+-----------------+------+-----+---------+-------+
| Test  | bigint unsigned | YES  |     | NULL    |       |
+-------+-----------------+------+-----+---------+-------+
1 row in set (0.01 sec)
mysql> SELECT Test, CAST(Test AS BIGINT SIGNED) FROM Dummy;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax
to use near 'BIGINT SIGNED) FROM Dummy' at line 1
mysql> SELECT Test, CAST(Test AS SIGNED) FROM Dummy;
+------+----------------------+
| Test | CAST(Test AS SIGNED) |
+------+----------------------+
| 1234 |                 1234 |
+------+----------------------+
1 row in set (0.00 sec)
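What CAST(x AS SIGNED) effectively does for a BIGINT UNSIGNED is reinterpret the 64-bit value as two's complement. A small Python sketch of that reinterpretation (the function name is my own, for illustration):

```python
def to_signed64(u: int) -> int:
    """Reinterpret an unsigned 64-bit integer as signed
    (two's complement), as MySQL's CAST(x AS SIGNED) does."""
    return u - (1 << 64) if u >= (1 << 63) else u

print(to_signed64(1234))        # 1234
print(to_signed64(2**64 - 5))   # -5
```

This is why the gap columns in the question need the cast: subtracting two UNSIGNED values that would go negative is otherwise an out-of-range error.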

SparkSQL Column Query not showing column contents?

I have created a persistent table via df.saveAsTable.
When I run the following query:
spark.sql("""SELECT * FROM mytable """).show()
I get a view of the DataFrame, all of its columns, and all of the data.
However when I run
spark.sql("""SELECT 'NameDisplay' FROM mytable """).show()
I receive results that look like this
| NameDisplay|
|--|
| NameDisplay |
| NameDisplay |
| NameDisplay |
| NameDisplay |
| NameDisplay |
| NameDisplay |
NameDisplay is definitely one of the columns in the table as it's shown when I run select * - how come this is not shown in the second query?
The issue was using single quotes around the column name. It needs to be escaped with backticks: `NameDisplay`.
Selecting 'NameDisplay', in SQL, selects the literal text "NameDisplay". Given that, the results you got are in fact valid.
To select values of the "NameDisplay" column, then you must issue:
"SELECT NameDisplay FROM mytable "
Or, if you need to quote it (maybe in case the column was created like this or has spaces, or is case-sensitive):
"""SELECT `NameDisplay` FROM mytable"""
This is SQL syntax, nothing specific to Spark.
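The literal-vs-identifier distinction can be shown without Spark at all, using Python's built-in sqlite3 (which quotes identifiers with double quotes instead of backticks, but follows the same rule for single quotes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (NameDisplay TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)", [("alice",), ("bob",)])

# Single quotes: a string literal, repeated once per row.
literal = conn.execute("SELECT 'NameDisplay' FROM mytable").fetchall()
# No quotes: the column's values.
values = conn.execute("SELECT NameDisplay FROM mytable").fetchall()

print(literal)  # [('NameDisplay',), ('NameDisplay',)]
print(values)   # [('alice',), ('bob',)]
```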

How to do negation for 'CONTAINS'

I have a Cassandra table with one column defined as a set.
How can I achieve something like this:
SELECT * FROM <table> WHERE <set_column_name> NOT CONTAINS <value>
A proper secondary index was already created.
From the documentation:
SELECT select_expression FROM keyspace_name.table_name WHERE
relation AND relation ... ORDER BY ( clustering_column ( ASC | DESC
)...) LIMIT n ALLOW FILTERING
then later:
relation is:
column_name op term
and finally:
op is = | < | > | <= | >= | CONTAINS | CONTAINS KEY
So there's no native way to perform such query. You have to workaround by designing a new table to specifically satisfy this query.
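Until such a table exists, the negation has to happen client-side, after fetching candidate rows. A minimal Python sketch (purely illustrative; assumes the rows have already been retrieved from Cassandra):

```python
def not_contains(rows, column, value):
    """Keep only rows whose set-valued column does NOT contain value."""
    return [row for row in rows if value not in row[column]]

rows = [
    {"id": 1, "tags": {"a", "b"}},
    {"id": 2, "tags": {"b", "c"}},
]
filtered = not_contains(rows, "tags", "a")
print([r["id"] for r in filtered])  # [2]
```

Note that this scans every fetched row, so it only works for result sets small enough to pull to the client; the table-per-query design is the idiomatic Cassandra answer.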

Cassandra CQL2: Unable to update a column

Hi, I have a column family in a Cassandra DB, and when I check the contents of the table, it is shown differently depending on the cqlsh version used:
./cqlsh -2
select * from table1;
KEY,31281881-1bef-447a-88cf-a227dae821d6 | A,0xaa| Cidr,10.10.12.0/24 | B,0xac | C,0x01 | Ip,10.10.12.1 | D,0x00000000 | E,0xace | F,0x00000000 | G,0x7375626e657431 | H,0x666230363 | I,0x00 | J,0x353839
While the output is this for:
./cqlsh -3
select * from table1;
key | Cidr | Ip
--------------------------------------+---------------+------------
31281881-1bef-447a-88cf-a227dae821d6 | 10.10.12.0/24 | 10.10.12.1
These values are inserted by a running Java program.
I want to manually update the value of column "B", which is only visible when using the -2 option, but it gives me an error about the hex value.
I am using this command to update but always getting error
cqlsh:sdnctl_db> update table1 SET B='0x7375626e657431' where key='31281881-1bef-447a-88cf-a227dae821d6';
Bad Request: cannot parse '0x7375626e657431' as hex bytes
cqlsh:sdnctl_db> update table1 SET B=0x7375626e657431 where key='31281881-1bef-447a-88cf-a227dae821d6';
Bad Request: line 1:30 no viable alternative at input 'x7375626e657431'
cqlsh:sdnctl_db> update table1 SET B=7375626e657431 where key='31281881-1bef-447a-88cf-a227dae821d6';
Bad Request: line 1:37 mismatched character '6' expecting '-'
I need to insert the hex value, which will be picked up by the application, but I am not able to.
Kindly help me correct the syntax.
It depends on what data type column B has. Here is the reference for the CQL data types. The documentation says that blob data is represented as a hexadecimal string, so my assumption is that your column B is also a blob. This other Cassandra question (and answer) shows how to insert strings into blobs.
cqlsh:so> CREATE TABLE test (a blob, b int, PRIMARY KEY (b));
cqlsh:so> INSERT INTO test(a,b) VALUES (textAsBlob('a'), 0);
cqlsh:so> SELECT * FROM test;
b | a
---+------
0 | 0x61
cqlsh:so> UPDATE test SET a = textASBlob('b') WHERE b = 0;
cqlsh:so> SELECT * FROM test;
b | a
---+------
0 | 0x62
In your case, you could convert your hex string to a char string in code, and then use the textAsBlob function of cqlsh.
If the data type of column B is int instead, why don't you convert the hex number to an int before executing the insert?
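The hex-to-text conversion mentioned above is a one-liner in Python; the value from the question, 0x7375626e657431, decodes to 'subnet1':

```python
hex_value = "7375626e657431"  # the questioner's value, without the 0x prefix
text = bytes.fromhex(hex_value).decode("ascii")
print(text)  # subnet1
# The update could then use cqlsh's conversion function, e.g.:
#   UPDATE table1 SET "B" = textAsBlob('subnet1') WHERE key = '...';
```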