How do I escape the ampersand character (&) in CQL? - cassandra

I am inserting a row into a table with a statement that looks something like this:
insert into db.table (field1, field2) values (1, 'eggs&cheese')
but when I later query this value on our servers, the query returns
eggs\u0026cheese instead.
Not sure whether to escape with \ or '.
If anyone can help, that would be great. Thank you!

This doesn't appear to be a problem with CQL but with the way your app displays the value. \u0026 is simply the Unicode escape sequence for &; if the CQL column type is text, the value is stored as a plain UTF-8 string.
Using this example schema:
CREATE TABLE unicodechars (
    id int PRIMARY KEY,
    randomtext text
)
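No escaping is needed on the CQL side; a minimal insert against the schema above (a sketch, matching the row shown next):
INSERT INTO unicodechars (id, randomtext) VALUES (1, 'eggs&cheese');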
cqlsh displays the ampersand as expected:
cqlsh> SELECT * FROM unicodechars ;

 id | randomtext
----+-------------
  1 | eggs&cheese

Related

How to LOWER elements in postgresql array in SELECT statement to do case insensitive search

Can I lowercase the elements in an array column like I can a varchar or text column?
SELECT * FROM a_tbl a
WHERE LOWER(%(uw)s) = ANY (a.n_lst) -- how to lower?
   OR LOWER(a.x) = LOWER(%(uw)s)
I saw in the documentation that I could use citext as my datatype. I might try to alter my table if there isn't a way to use LOWER.
a_tbl
-----
id    SERIAL PRIMARY KEY
n_lst TEXT[]
x     VARCHAR(50)
You can lowercase the whole array by casting it to text and back, like below:
select lower('John') = any(lower(n_lst::text)::text[]) from a_tbl;
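If you'd rather avoid the text-cast round trip, an alternative sketch is to unnest the array and compare element by element (keeping the question's %(uw)s placeholder):
SELECT *
FROM a_tbl a
WHERE EXISTS (
        SELECT 1
        FROM unnest(a.n_lst) AS elem
        WHERE LOWER(elem) = LOWER(%(uw)s)
      )
   OR LOWER(a.x) = LOWER(%(uw)s);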

Turning a Comma Separated string into individual rows in Teradata

I read the post:
Turning a Comma Separated string into individual rows
and really liked the solution:
SELECT A.OtherID,
       Split.a.value('.', 'VARCHAR(100)') AS Data
FROM (
    SELECT OtherID,
           CAST('<M>' + REPLACE(Data, ',', '</M><M>') + '</M>' AS XML) AS Data
    FROM Table1
) AS A CROSS APPLY Data.nodes('/M') AS Split(a);
But it did not work when I tried to apply the method in Teradata for a similar question. Here is the summarized error message:
SELECT failed 3707: expected something between '.' and the 'value' keyword.
So is the code only valid in SQL Server? Would anyone help me make it work in Teradata or SAS SQL? Your help will be really appreciated!
This is SQL Server syntax.
In Teradata there's a table UDF named STRTOK_SPLIT_TO_TABLE,
e.g.
SELECT * FROM dbc.DatabasesV AS db
JOIN (
    SELECT token AS DatabaseName, tokennum
    FROM TABLE (STRTOK_SPLIT_TO_TABLE(1, 'dbc,systemfe', ',')
         RETURNS (outkey INTEGER,
                  tokennum INTEGER,
                  token VARCHAR(128) CHARACTER SET UNICODE)
         ) AS d
) AS dt
ON db.DatabaseName = dt.DatabaseName
ORDER BY tokennum;
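Applied to the original question's Table1, a sketch might look like this (assuming OtherID is an INTEGER; the declared outkey type must match the type of the key column you pass in):
SELECT d.outkey AS OtherID,
       d.token  AS Data
FROM TABLE (STRTOK_SPLIT_TO_TABLE(Table1.OtherID, Table1.Data, ',')
     RETURNS (outkey INTEGER,
              tokennum INTEGER,
              token VARCHAR(100) CHARACTER SET UNICODE)
     ) AS d
ORDER BY d.outkey, d.tokennum;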
Or see my answer to this similar question

How should I insert decimal values via cqlsh (DecimalSerializer.deserialize error when attempting)?

In cqlsh I'm trying to insert a record with:
INSERT INTO data
  (order_id, order_ts, transaction_id, transaction_discount, transaction_qty,
   transaction_total, product_category, product_profit, product_upc,
   product_name, product_price, product_distributor, store_id, store_name,
   store_state, store_region, id)
VALUES ('YBC8RWE18', 1368438171000, 'LQKLVVI4E', 0, 1, 140.72,
        'Office Supplies', 12.42, 'YT8899H3357', 'Casio USB Model FX-9860GII',
        140.72, 'QR0', '2YOYWMR28Q', 'BigLots', 'AZ', 'Southwest',
        2259a88e-b62d-4625-a86e-b86d77418a34);
Looks fine, but I'm getting a number exception:
Caused by: java.lang.NumberFormatException: Zero length BigInteger
    at java.math.BigInteger.<init>(BigInteger.java:190)
    at org.apache.cassandra.serializers.DecimalSerializer.deserialize(DecimalSerializer.java:41)
    at org.apache.cassandra.serializers.DecimalSerializer.deserialize(DecimalSerializer.java:26)
    at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:142)
    at org.apache.cassandra.db.marshal.DecimalType.compare(DecimalType.java:46)
Looks like DecimalSerializer.deserialize is the real issue here. If I try to surround the decimals in quotes (worth a shot, I thought) I get:
Bad Request: Invalid STRING constant (140.72) for product_price of type decimal
So that didn't help. What do I need to do to insert a decimal? Should I post up the COLUMNFAMILY def?
Here's the DESCRIBE TABLE:
CREATE TABLE data (
    id uuid,
    order_id text,
    order_ts timestamp,
    product_category text,
    product_distributor text,
    product_name text,
    product_price decimal,
    product_profit decimal,
    product_upc text,
    store_id text,
    store_name text,
    store_region text,
    store_state text,
    transaction_discount decimal,
    transaction_id text,
    transaction_qty int,
    transaction_total decimal,
    PRIMARY KEY (id)
)
If I take off the quotes around 140.72 I get Request did not complete within rpc_timeout, and the logs show the deserialize error. If I try to insert just a few columns it's fine -- until I try to insert the product_price field.
Haven't got any answers, but did get some help.
The answer to this one isn't very satisfying. None of the errors really clued me in to what was going on, but I thought: well, there are really only a few moving parts here, the table def and the indexes. So I dropped the table and recreated it without indexes. That fixed the issue. I then recreated the indexes and the issue stayed fixed. So perhaps the indexes were corrupted?
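A minimal sketch of that drop-and-recreate sequence (the index name and indexed column here are hypothetical, since the original index definitions aren't shown):
DROP TABLE data;
-- recreate the table with the same definition shown above, then rebuild the indexes:
CREATE INDEX data_store_state_idx ON data (store_state);  -- hypothetical index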
Got the same exception even without an index on the table. If you look at the stack trace, in the BigInteger constructor public BigInteger(byte[] val) the exception is thrown when the passed-in byte array is empty. Maybe there is a bug in the Cassandra driver's decimal deserialization? However, the following comment from the JDK source is interesting:
Translates a byte array containing the two's-complement binary representation of a BigInteger into a BigInteger. The input array is assumed to be in big-endian byte-order: the most significant byte is in the zeroth element.
Does the Cassandra driver guarantee the passed byte array is big-endian? A workaround could be to use another type such as double instead.

cassandra : name provided was not in the list of valid column labels error

I'm using Cassandra 1.2.8. I have a column family like below:
CREATE TABLE word_probability (
    word text,
    category text,
    probability double,
    PRIMARY KEY (word, category)
);
When I use a query like this:
String query = "SELECT * FROM word_probability WHERE word='%s' AND category='%s';";
it works well, but for some words I get this message:
name provided was not in the list of valid column labels
Everything looks fine and I don't know why I get this error :(
You're not doing anything wrong except mixing up CQL with SQL. CQL doesn't support % wildcards.
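For reference, once the %s placeholders are substituted, the statement sent to Cassandra is plain CQL with quoted string literals; note that a single quote inside a word must be doubled within the literal (the values here are made up):
SELECT * FROM word_probability WHERE word = 'cheese' AND category = 'food';
SELECT * FROM word_probability WHERE word = 'don''t' AND category = 'contraction';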

How to replace one or more consecutive symbols with one symbol in DB2

I am using DB2 LUW 9.5. In a field, I have a value like this one:
Test^test^^test^^^^test^^test^test
In a SELECT query, I would like to replace the duplicated ^ with only one ^. This would produce:
Test^test^test^test^test^test
The delimiter is known and static (can be hardcoded). Would you know a way to obtain the desired output using DB2 functions?
Thank you
You need one other character that can be used as a delimiter, for example the pipe character (|).
Let's say the table is defined as
create table myTable (
    myColumn varchar(400)
);
Add a value for a test:
insert into myTable (myColumn) values
('Test^^^^^^^^test^^^^^^^test^^^^^^test^^^^^test^^^^test^^^test^^test^test');
Then do a smart replacement using the other delimiter:
select replace(replace(replace(myColumn, '^^', '^|^'), '|^^', ''), '^|^', '^')
from myTable;
The result:
Test^test^test^test^test^test^test^test^test
Instead of a one-character delimiter, you can use a string that you are sure will not occur in the values, for example 'xy'. The next query gives the same result:
select replace(replace(replace(myColumn, '^^', '^xy^'), 'xy^^', ''), '^xy^', '^')
from myTable;
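To see why the three nested REPLACE calls collapse any run of ^ to a single ^, here is a trace of each stage on a short sample value:
-- trace each stage of the nested REPLACE for the value 'Test^^^^test'
select replace('Test^^^^test', '^^', '^|^')            as step1  -- 'Test^|^^|^test'
     , replace(replace('Test^^^^test', '^^', '^|^'),
               '|^^', '')                              as step2  -- 'Test^|^test'
     , replace(replace(replace('Test^^^^test', '^^', '^|^'),
               '|^^', ''), '^|^', '^')                 as step3  -- 'Test^test'
from sysibm.sysdummy1;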
