How to read Cassandra data case-insensitively

I need to get data from Cassandra without case sensitivity. Please help me.

There is no case-sensitivity concept in Cassandra. All the data is stored as byte[], so it's not even a String.
You can make a custom comparator (see the API) which transforms byte[] to String and disregards case.
The other thing to do is just get the data and transform it on the client side.
Actually, your question is quite unclear as to what your goal is, so I can't give more details.
Update: Run a one-time job that fetches all records from the database and updates them, setting the values to lower case. Then continue inserting everything in lower case.
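A minimal sketch of such a one-time job, assuming the DataStax Python driver and a hypothetical users table with id and name columns:

from cassandra.cluster import Cluster

# Hypothetical keyspace, table and column names; adjust to your schema.
cluster = Cluster(['127.0.0.1'])
session = cluster.connect('my_keyspace')

# Fetch every record and rewrite its name in lower case.
for row in session.execute('SELECT id, name FROM users'):
    session.execute(
        'UPDATE users SET name = %s WHERE id = %s',
        (row.name.lower(), row.id),
    )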

If you have Solr enabled (DSE Search), this can be resolved using:
CREATE SEARCH INDEX ON tableName WITH COLUMNS *, camelCaseColumn { lowerCase : true };
An index is created that allows the SELECT statement to use lowercase in the WHERE clause. For more details, search for LowerCaseStrField.
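With that index in place, a lowercase literal in the WHERE clause should match rows regardless of their stored casing, along these lines (table and column names as in the statement above):

SELECT * FROM tableName WHERE camelCaseColumn = 'lowercase search term';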

Related

Cassandra using 'like' in query condition

When I query from Cassandra with a CQL statement of:
select * from abctpl where tpl like '1-1'
In the table, the content of tpl which I want is '1-1-1', and it's unique.
But actually I get 3 rows. The other two rows' tpl values do not contain the string '1-1-1'; I guess Cassandra regards '-' as a wildcard character, since a tpl value like '11111111' can also be selected.
So how can I edit the CQL to make it query the exact data?
select * from abctpl where tpl like '1-1';
I think the problem here is that you're not providing the LIKE wildcard character %. If your SASI index defaults to PREFIX mode, this should work:
select * from abctpl where tpl like '1-1%';
Take a look through the DataStax docs on using SASI indexes: https://docs.datastax.com/en/dse/6.7/cql/cql/cql_using/useSASIIndex.html . That has some query examples, along with how to specify the mode at index creation.
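For reference, a SASI index that also supports leading wildcards would be created in CONTAINS mode; a sketch with a hypothetical index name:

CREATE CUSTOM INDEX tpl_sasi_idx ON abctpl (tpl)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = { 'mode': 'CONTAINS' };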
"make it query the exact data?"
And if it's exact data that you're after, using equals (=) does a better job of that than LIKE does.
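For the row from the question, that would be:

select * from abctpl where tpl = '1-1-1';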

U-SQL Error - Change the identifier to use at least one lower case letter

I am fairly new to U-SQL and am trying to run a U-SQL script in Azure Data Lake Analytics to process a Parquet file using the Parquet extractor functionality. I am getting the below error and can't find a way to get around it.
Error - Change the identifier to use at least one lower case letter. If that is not possible, then escape that identifier (for example: '[ACTIVITY]'), or embed it in a CSHARP() block (e.g CSHARP(ACTIVITY)).
Unfortunately, all the fields generated in the Parquet file are capitalized, and I don't want to escape these identifiers. I tried wrapping the identifier in a CSHARP() block and that fails as well (E_CSC_USER_RESERVEDKEYWORDASIDENTIFIER: Reserved keyword CSHARP is used as an identifier.) Is there any way I could extract the Parquet file? Thanks for your help!
Code Snippet:
SET @@FeaturePreviews = "EnableParquetUdos:on";

@var1 =
    EXTRACT ACTIVITY string,
            AUTHOR_NAME string,
            AFFLIATION string
    FROM "adl://xxx.azuredatalakestore.net/Abstracts/FY2018_028"
    USING Extractors.Parquet();

@var2 =
    SELECT *
    FROM @var1
    ORDER BY ACTIVITY ASC
    FETCH 5 ROWS;

OUTPUT @var2
TO "adl://xxx.azuredatalakestore.net/Results/AbstractsResults.csv"
USING Outputters.Csv();
Based on your description, you are trying to say
EXTRACT ALLCAPSNAME int FROM "/data.parquet" USING Extractors.Parquet();
In U-SQL, we reserve all caps identifiers so we can add new keywords in the future without invalidating old scripts.
To work around, you just have to quote the name (escape it) like in any other SQL dialect:
EXTRACT [ALLCAPSNAME] int FROM "/data.parquet" USING Extractors.Parquet();
Note that this is not changing the name of the field. It is just the syntactic way to address the field.
Also note, that in most SQL communities, it is considered a best practice to always quote identifiers to avoid reserved keyword clashes.
If all fields in the Parquet file are all caps, you will have to quote them all... In a future update you will be able to say EXTRACT * FROM … for Parquet (and Orc) files, but you still will need to quote the columns when you refer to them explicitly.
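Applied to the script from the question, the quoted form would look like this (same paths and columns as the original snippet):

@var1 =
    EXTRACT [ACTIVITY] string,
            [AUTHOR_NAME] string,
            [AFFLIATION] string
    FROM "adl://xxx.azuredatalakestore.net/Abstracts/FY2018_028"
    USING Extractors.Parquet();

@var2 =
    SELECT *
    FROM @var1
    ORDER BY [ACTIVITY] ASC
    FETCH 5 ROWS;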

How do I make a WHERE clause with SQLalchemy to compare to a string?

Objective
All I am trying to do is retrieve a single record from a specific table where the primary key matches. I have a feeling I'm greatly overcomplicating this, as it seems to be a simple enough task. I have a theory that it may not know the variable's value because it isn't actually pulling it from the Python code, but is instead trying to find a variable of the same name in the database.
EDIT: Is it possible that I need to wrap my where clause in an expression statement?
Attempted
My Python code is
def get_single_record(name_to_search):
    my_engine = super_secret_inhouse_engine_constructor("sample_data.csv")
    print("Searching for " + name_to_search)
    statement = my_engine.tables["Users"].select().where(
        my_engine.tables["Users"].c.Name == name_to_search
    )
    # Print out the raw SQL so we can see what exactly it's checking for
    print("You are about to run: " + str(statement))
    # Print out each result (should only be one)
    print("Results:")
    for item in my_engine.execute(statement):
        print(item)
I tried hard coding a string in its place.
I tried using like instead of where.
All to the same end result.
Expected
I expect it to generate something along the lines of SELECT * FROM MyTable WHERE Name='Todd'.
Actual Result
Searching for Todd
STATEMENT: SELECT "Users"."Name", ...
FROM "Users"
WHERE "Users"."Name" = ?
That is an actual question mark appearing in my statement, not simply my own confusion. This is then followed by it printing out a collection of all the records from the table, as though it successfully matched everything.
EDIT 2: Running either my own hard-coded SQL string or the query generated by SQLAlchemy returns every record from the table. I'm beginning to think the issue may be with the engine I've set up not accepting the query.
Why I'm Confused
According to the official documentation and third party sources, I should be able to compare to hardcoded strings and then, by proxy, be able to compare to a variable.
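For what it's worth, the ? itself is expected: str(statement) renders the DBAPI placeholder, and the value is only bound at execution time. A minimal sketch (with a hypothetical in-memory table, not the asker's engine) showing how to render the statement with the literal value inlined for debugging:

from sqlalchemy import Column, MetaData, String, Table, create_engine

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
users = Table("Users", metadata, Column("Name", String, primary_key=True))
metadata.create_all(engine)

statement = users.select().where(users.c.Name == "Todd")
print(statement)  # ... WHERE "Users"."Name" = ?  (placeholder form)
# Inline the bound value to see what will actually be compared:
print(statement.compile(compile_kwargs={"literal_binds": True}))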

fetching compact version of JSONB in PostgreSQL

How to fetch compact JSONB from PostgreSQL?
All I get when fetching has spaces in it:
SELECT data FROM a_table WHERE id = 1; -- data is JSONB column
{"unique": "bla bla", "foo": {"bar": {"in ...
(note the spaces after the colons and commas)
What I want is:
{"unique":"bla bla","foo":{"bar":{"in ...
json_strip_nulls() does exactly what you're looking for:
SELECT json_build_object('a', 1);
returns
{"a" : 1}
But
SELECT json_strip_nulls(json_build_object('a', 1));
returns
{"a":1}
This function not only strips nulls, as indicated by its name and as documented, but incidentally also strips insignificant whitespace. The latter is not explicitly documented in the PostgreSQL manual.
Tested in PostgreSQL 11.3, but probably works with earlier versions too.
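One caveat implied by the name: the function also removes any object fields whose value is null, so this trick is only safe when that is acceptable. For example:

SELECT json_strip_nulls('{"a": 1, "b": null}'::json);
-- returns {"a":1} (the "b" key is dropped along with the whitespace)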
jsonb is rendered in a standardized format on output. You would have to use json instead to preserve insignificant white space. Per documentation:
"Because the json type stores an exact copy of the input text, it will preserve semantically-insignificant white space between tokens, as well as the order of keys within JSON objects. Also, if a JSON object within the value contains the same key more than once, all the key/value pairs are kept. (The processing functions consider the last value as the operative one.) By contrast, jsonb does not preserve white space, does not preserve the order of object keys, and does not keep duplicate object keys."
The whitespace really shouldn't matter for JSON values.
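You can see the normalized rendering directly; whatever spacing goes in, jsonb comes back out in its one standard form:

SELECT '{"a":1,"b":    2}'::jsonb::text;
-- returns {"a": 1, "b": 2}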
There is a discussion, started in 2016, about a jsonb_compact() function that would solve the problem... but it could take years (!).
Pretty solution (a real solution for this question and this other one)
We must accept PostgreSQL's convention for CAST(var_jsonb AS text). When you need another cast convention, for example for debugging or human-readable output, the built-in jsonb_pretty() function is a good choice.
Unfortunately, PostgreSQL does not offer other choices, such as a compact one. So you can overload jsonb_pretty() with a compact option:
CREATE OR REPLACE FUNCTION jsonb_pretty(
  jsonb,           -- input
  compact boolean  -- true for compact format
) RETURNS text AS $$
  SELECT CASE
    WHEN $2 = true THEN json_strip_nulls($1::json)::text
    ELSE jsonb_pretty($1)
  END
$$ LANGUAGE SQL IMMUTABLE;

SELECT jsonb_pretty( jsonb_build_object('a',1, 'bla','bla bla'), true );
-- returns {"a":1,"bla":"bla bla"}
Rationale
The JSON standard, RFC 8259 says "... Insignificant whitespace is allowed before or after any of the six structural characters". In other words, the cast from jsonb datatype to text has no canonical form. The PostgreSQL cast convention (using spaces) is arbitrary.
A lot of applications need to minimize a big JSONB output. Two typical cases: minimizing the file size of a big JSONB saved by pg_file_write(), and online output in a REST interface.
The PostgreSQL team should consider a real CAST procedure: not a parser, but direct text production from the JSONB internal representation.
The workaround (removing spaces from the JSON text) is not a simple task: it needs a real parser to avoid corrupting content, not a regular-expression hack. And nowadays the built-in parser for this is json_strip_nulls(), even if only as incidental behavior.

Neo4j - index lookup issue

I was trying to set an index type from exact to fulltext in the neo4j shell, so I can do a case-insensitive search with a Lucene query. So I used this command:
index --set-config Destination type fulltext
but it didn't work. I still couldn't do a case-insensitive search, so I played around and changed some other values, like _blueprints:type and to_lower_case.
That didn't do any good.
Now it somehow ignores the first character of the name value (weird!). So if I am searching for "London", for example, and I type "Lon", it returns nothing. But if I type "ond", it returns the node. The same goes for every node.
I tried setting everything back to normal. That didn't help.
What did I mess up? What am I missing?
I am using the Everyman PHP library to communicate with the database.
I created a new index with the "to_lower_case" property.
I think that will solve my problem; I just have to convert the string to lower case before inserting it into the query. It seems to work.
Setting configuration afterwards doesn't update already-indexed values (as the shell notes, I think). If you've created your index with "to_lower_case=true", then additions as well as queries will have their values converted to lower case. Calling Index#get will still require you to lower-case the value yourself.
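A minimal sketch of that approach with the Everyman neo4jphp client (index and field names are hypothetical); the fulltext index is created once, and query terms are lower-cased on the client side:

<?php
use Everyman\Neo4j\Client;
use Everyman\Neo4j\Index\NodeFulltextIndex;

$client = new Client('localhost', 7474);

// Create (or load) a fulltext index; fulltext indexes analyze values in lower case.
$index = new NodeFulltextIndex($client, 'places');
$index->save();

// Lower-case the search term yourself so it matches the indexed form.
$term = 'Lon';
$results = $index->query('name:' . strtolower($term) . '*');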
