DataStax Java Driver Date Filter Issue - Cassandra

I have a table with one timestamp column. When I try to execute a date filter using this timestamp column, it doesn't return any results. The table structure and code segment follow.
create table status_well
(
wid int,
data_time timestamp,
primary key (wid, data_time)
)
SimpleDateFormat DATE_FORMAT = new SimpleDateFormat("yyyy-MM-dd");
PreparedStatement statement = session.prepare("select from status_well where data_time>? and data_time<?");
BoundStatement boundStatement=new BoundStatement(statement);
statement.setDate("data_time", DATE_FORMAT.parse("2015-05-01"));
statement.setDate("data_time", DATE_FORMAT.parse("2015-05-10"));
Data exists for the specified date range, but no rows are returned. I tried passing a string instead of DATE_FORMAT.parse("2015-05-01"), but that gives an invalid type error.
Please advise me on this.

There are two placeholders for the data_time column, so you need to use index-based setters or named placeholders:
PreparedStatement statement = session.prepare("select * from status_well "
+ "where wid = :wid "
+ "and data_time > :min and data_time < :max");
BoundStatement boundStatement = new BoundStatement(statement)
.setInt("wid", 1)
.setDate("min", DATE_FORMAT.parse("2015-05-01"))
.setDate("max", DATE_FORMAT.parse("2015-05-10"));
(I also added a restriction on wid to make this a valid CQL query, as mentioned in the comments.)
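If you prefer to keep positional ? markers, a minimal sketch of the index-based alternative (same session, DATE_FORMAT, and driver version as the answer above) would be:
PreparedStatement statement = session.prepare(
        "select * from status_well where wid = ? and data_time > ? and data_time < ?");
// Indices start at 0 and follow the order of the ? markers.
BoundStatement boundStatement = new BoundStatement(statement)
        .setInt(0, 1)
        .setDate(1, DATE_FORMAT.parse("2015-05-01"))
        .setDate(2, DATE_FORMAT.parse("2015-05-10"));
ResultSet rs = session.execute(boundStatement);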

Related

Inserting Timestamp Into Snowflake Using Python 3.8

I have an empty table defined in Snowflake as:
CREATE OR REPLACE TABLE db1.schema1.table(
ACCOUNT_ID NUMBER NOT NULL PRIMARY KEY,
PREDICTED_PROBABILITY FLOAT,
TIME_PREDICTED TIMESTAMP
);
It creates the correct table, which has been verified using the DESC command in SQL. Then, using the Snowflake Python connector, we try to execute the following query:
insert_query = f'INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) VALUES ({accountId}, {risk_score},{ct});'
ctx.cursor().execute(insert_query)
The variables are defined just before this query. The main challenge is getting the current timestamp written into Snowflake. Here the value of ct is defined as:
import datetime
ct = datetime.datetime.now()
print(ct)
2021-04-30 21:54:41.676406
But when we try to execute this INSERT query, we get the following error message:
ProgrammingError: 001003 (42000): SQL compilation error:
syntax error line 1 at position 157 unexpected '21'.
Can I kindly get some help on how to format the datetime value here? Help is appreciated.
In addition to the answer @Lukasz provided, you could also consider defining current_timestamp() as the default for the TIME_PREDICTED column:
CREATE OR REPLACE TABLE db1.schema1.table(
ACCOUNT_ID NUMBER NOT NULL PRIMARY KEY,
PREDICTED_PROBABILITY FLOAT,
TIME_PREDICTED TIMESTAMP DEFAULT current_timestamp
);
And then just insert ACCOUNT_ID and PREDICTED_PROBABILITY:
insert_query = f'INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES(ACCOUNT_ID, PREDICTED_PROBABILITY) VALUES ({accountId}, {risk_score});'
ctx.cursor().execute(insert_query)
It will automatically assign the insert time to TIME_PREDICTED.
Educated guess: when performing the insert with:
insert_query = f'INSERT INTO ...(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED)
VALUES ({accountId}, {risk_score},{ct});'
This is string interpolation. The ct is interpolated as the unquoted string representation of a datetime, which does not form a valid timestamp literal, hence the syntax error.
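A quick way to see the problem is to print what the f-string actually produces (hypothetical values; the table name is shortened here):
import datetime

accountId, risk_score = 1, 0.25
ct = datetime.datetime(2021, 4, 30, 21, 54, 41)
print(f'INSERT INTO t(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) VALUES ({accountId}, {risk_score},{ct});')
# INSERT INTO t(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) VALUES (1, 0.25,2021-04-30 21:54:41);
# The timestamp lands in the SQL unquoted, so the parser fails at the space before '21'.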
I would suggest using proper variable binding instead:
# Requires snowflake.connector.paramstyle = 'numeric' (set before connecting)
# so that the :1, :2, :3 markers are bound server-side.
ctx.cursor().execute(
    "INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES "
    "(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) "
    "VALUES(:1, :2, :3)",
    (accountId,
     risk_score,
     ("TIMESTAMP_LTZ", ct)))
Avoid SQL Injection Attacks
Avoid binding data using Python’s formatting function because you risk SQL injection. For example:
# Binding data (UNSAFE EXAMPLE)
con.cursor().execute(
    "INSERT INTO testtable(col1, col2) "
    "VALUES({col1}, '{col2}')".format(
        col1=789,
        col2='test string3')
)
Instead, store the values in variables, check those values (for example, by looking for suspicious semicolons inside strings), and then bind the parameters using qmark or numeric binding style.
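For reference, a minimal sketch of the safe qmark style (assuming an existing connection con; the table comes from the example above):
import snowflake.connector
snowflake.connector.paramstyle = 'qmark'  # must be set before connect()

# con = snowflake.connector.connect(...)
con.cursor().execute(
    "INSERT INTO testtable(col1, col2) VALUES (?, ?)",
    (789, 'test string3'))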
You forgot to place quotes before and after {ct}. The code should be:
insert_query = "INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) VALUES ({accountId}, {risk_score},'{ct}');".format(accountId=accountId,risk_score=risk_score,ct=ct)
ctx.cursor().execute(insert_query)

Cosmos DB Cassandra API - SELECT IN query throws exception

SELECT partition_int, clustering_int, value_string
FROM test_ks1.test WHERE partition_int = ? AND clustering_int IN ?
A prepared SELECT query with an IN clause throws the following exception:
java.lang.ClassCastException: class com.datastax.oss.driver.internal.core.type.PrimitiveType cannot be cast to class com.datastax.oss.driver.api.core.type.ListType
(com.datastax.oss.driver.internal.core.type.PrimitiveType and com.datastax.oss.driver.api.core.type.ListType are in unnamed module of loader 'app')
at com.datastax.oss.driver.internal.core.type.codec.registry.CachingCodecRegistry.inspectType(CachingCodecRegistry.java:343)
at com.datastax.oss.driver.internal.core.type.codec.registry.CachingCodecRegistry.codecFor(CachingCodecRegistry.java:256)
at com.datastax.oss.driver.internal.core.data.ValuesHelper.encodePreparedValues(ValuesHelper.java:112)
at com.datastax.oss.driver.internal.core.cql.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:159)
I am using the DataStax OSS driver version 4.5.1 and Cosmos DB.
The query works against Cassandra running in Docker, and it works in cqlsh against Cosmos DB.
Queries used:
CREATE TABLE IF NOT EXISTS test_ks1.test (partition_int int, value_string text, clustering_int int, PRIMARY KEY ((partition_int),clustering_int))
Prepare the statement: INSERT INTO test_ks1.test (partition_int,clustering_int,value_string) values (?,?,?)
Insert values: 1,1,"a" | 1,2,"b"
Prepare the statement: SELECT partition_int, clustering_int, value_string FROM test_ks1.test WHERE partition_int = ? AND clustering_int IN ?
Execute with parameters 1,List.of(1,2)
The expected parameter is reported as an integer, not a list of integers, which triggers the cast exception.
Sample code of the select prepared statement:
final CqlSessionBuilder sessionBuilder = CqlSession.builder()
.withConfigLoader(loadConfig(sessionConfig));
CqlSession session = sessionBuilder.build();
PreparedStatement statement = session.prepare(
"SELECT partition_int, clustering_int,"
+ "value_string FROM test_ks1.test WHERE partition_int = ? "
+ "AND clustering_int IN ?");
com.datastax.oss.driver.api.core.cql.ResultSet rs =
session.execute(statement.bind(1,List.of(1, 2)));
Is there a workaround to use prepared SELECT queries with an IN clause?
Thanks.
My suggestion would be to use a workaround like this:
// IDs you want to use
List<Integer> list = Arrays.asList(1, 2);
// Multiply the question-mark markers (StringUtils comes from org.apache.commons.lang3)
String markers = StringUtils.repeat("?,", list.size() - 1);
// Final query
final String query = "SELECT * FROM test_ks1.test where clustering_int in (" + markers + " ?)";
PreparedStatement prepared = session.prepare(query);
BoundStatement bound = prepared.bind(list.toArray()).setIdempotent(true);
List<Row> rows = session.execute(bound).all();
I tried this on my end and it works for me.
In your case you also have another parameter before the IN clause; include it at the front of the bound parameter list, after building the markers placeholder, as in the sketch below.
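A minimal sketch of that combination, reusing the names from the question (partition_int = 1, IN over clustering_int):
// Assumes java.util.* and org.apache.commons.lang3.StringUtils are imported.
List<Integer> ids = Arrays.asList(1, 2);
String markers = StringUtils.repeat("?,", ids.size() - 1);
PreparedStatement prepared = session.prepare(
    "SELECT partition_int, clustering_int, value_string FROM test_ks1.test "
        + "WHERE partition_int = ? AND clustering_int IN (" + markers + " ?)");

// Partition key first, then the IN values, in marker order.
List<Object> params = new ArrayList<>();
params.add(1);        // partition_int
params.addAll(ids);   // clustering_int values
List<Row> rows = session.execute(prepared.bind(params.toArray())).all();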

Python Oracle WHERE clause containing date greater-than comparison

I am trying to use cx_Oracle to query a table in an Oracle DB (version 11.2) and get rows where a column's value falls within a datetime range.
I have tried the following approaches:
I tried the BETWEEN clause as described here, but the cursor gets 0 rows:
parameters = (startDateTime, endDateTime)
query = "select * from employee where joining_date between :1 and :2"
cur = con.cursor()
cur.execute(query, parameters)
I tried the TO_DATE() function and DATE'' literals. Still no results with BETWEEN or the >= operator; notably, the < operator works. I also took the same query and ran it in a SQL client, where it returns results. Code:
# returns no rows:
query = "select * from employee where joining_date >= TO_DATE('" + startDateTime.strftime("%Y-%m-%d") + "','yyyy-mm-dd')"
cur = con.cursor()
cur.execute(query)
# tried the following just to ensure that some query runs fine; it returns results:
query = query.replace(">=", "<")
cur.execute(query)
Any pointers on why the BETWEEN and >= operators are failing for me? (My second approach was in line with the answer in "Oracle date comparison in where clause", but it still doesn't work for me.)
I am using Python 3.4.3 with cx_Oracle 5.3 and 5.2, Oracle Client 11g, on a Windows 7 machine.
Assume that your employee table contains the field emp_id and that the row with emp_id=1234567 should be retrieved by your query.
Make two copies of your program that execute the following queries:
query = "select to_char(:1,'YYYY-MM-DD HH24:MI:SS')||' >= '||to_char(joining_date,'YYYY-MM-DD HH24:MI:SS')||' >= '||to_char(:2,'YYYY-MM-DD HH24:MI:SS') resultstring from employee where emp_id=1234567"
and
query="select to_char(joining_date,'YYYY-MM-DD HH24:MI:SS')||' >= '||to_char(TO_DATE('" + startDateTime.strftime("%Y-%m-%d") + "','yyyy-mm-dd'),'YYYY-MM-DD HH24:MI:SS') resultstring from employee where emp_id=1234567"
Show us the code and the value of the resultstring column.
You are constructing SQL queries as strings when you should be using parameterized queries. You can't use parameterization to substitute the comparison operators, but you should use it for the dates.
Also, note that the referenced answer uses the PostgreSQL parameterization format, whereas Oracle requires the ":name" format.
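A minimal sketch of the parameterized version (assuming an existing connection con; the bound values are plain Python datetime objects, which cx_Oracle maps to Oracle DATEs):
import datetime

start_dt = datetime.datetime(2015, 1, 1)
end_dt = datetime.datetime(2015, 12, 31)

cur = con.cursor()
cur.execute(
    "select * from employee "
    "where joining_date >= :start_dt and joining_date < :end_dt",
    {"start_dt": start_dt, "end_dt": end_dt})
rows = cur.fetchall()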

Cassandra QueryBuilder not returning any result, whereas same query works fine in CQL shell

SELECT count(*) FROM device_stats
WHERE orgid = 'XYZ'
AND regionid = 'NY'
AND campusid = 'C1'
AND buildingid = 'C1'
AND floorid = '2'
AND year = 2017;
The above CQL query returns the correct result, 32032, in the CQL shell.
But when I run the same query using the QueryBuilder Java API, I see the count as 0:
BuiltStatement summaryQuery = QueryBuilder.select()
    .countAll()
    .from("device_stats")
    .where(eq("orgid", "XYZ"))
    .and(eq("regionid", "NY"))
    .and(eq("campusid", "C1"))
    .and(eq("buildingid", "C1"))
    .and(eq("floorid", "2"))
    .and(eq("year", "2017"));
try {
    ResultSetFuture summaryResults = session.executeAsync(summaryQuery);
    summaryResults.getUninterruptibly().all().stream().forEach(result -> {
        System.out.println(" totalCount > " + result.getLong(0));
    });
} catch (Exception e) {
    e.printStackTrace();
}
I have only 20 partitions and 32032 rows per partition.
What could be the reason QueryBuilder is not executing the query correctly?
Schema :
CREATE TABLE device_stats (
orgid text,
regionid text,
campusid text,
buildingid text,
floorid text,
year int,
endofwindow timestamp,
categoryid timeuuid,
devicestats map<text,bigint>,
PRIMARY KEY ((orgid, regionid, campusid, buildingid, floorid,year),endofwindow,categoryid)
) WITH CLUSTERING ORDER BY (endofwindow DESC,categoryid ASC);
// Using the keys function to index the map keys
CREATE INDEX ON device_stats (keys(devicestats));
I am using Cassandra 3.10 and com.datastax.cassandra:cassandra-driver-core:3.1.4.
Moving my comment to an answer since that seems to solve the original problem:
Changing .and(eq("year", "2017")) to .and(eq("year", 2017)) solves the issue, since year is an int column and not text.
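For completeness, the corrected builder call (only the last predicate changes):
BuiltStatement summaryQuery = QueryBuilder.select()
    .countAll()
    .from("device_stats")
    .where(eq("orgid", "XYZ"))
    .and(eq("regionid", "NY"))
    .and(eq("campusid", "C1"))
    .and(eq("buildingid", "C1"))
    .and(eq("floorid", "2"))
    .and(eq("year", 2017)); // int literal, matching the schema's int column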

Having trouble querying by dates using the Java Cassandra Spark SQL Connector

I'm trying to use Spark SQL to query a table by a date range. For example, I'm trying to run a SQL statement like: SELECT * FROM trip WHERE utc_startdate >= '2015-01-01' AND utc_startdate <= '2015-12-31' AND deployment_id = 1 AND device_id = 1. When I run the query, no error is thrown, but I'm not receiving any results back when I would expect some. Running the query without the date range does return results.
SparkConf sparkConf = new SparkConf().setMaster("local").setAppName("SparkTest")
.set("spark.executor.memory", "1g")
.set("spark.cassandra.connection.host", "localhost")
.set("spark.cassandra.connection.native.port", "9042")
.set("spark.cassandra.connection.rpc.port", "9160");
JavaSparkContext context = new JavaSparkContext(sparkConf);
JavaCassandraSQLContext sqlContext = new JavaCassandraSQLContext(context);
sqlContext.sqlContext().setKeyspace("mykeyspace");
String sql = "SELECT * FROM trip WHERE utc_startdate >= '2015-01-01' AND utc_startdate < '2015-12-31' AND deployment_id = 1 AND device_id = 1";
JavaSchemaRDD rdd = sqlContext.sql(sql);
List<Row> rows = rdd.collect(); // rows.size() is zero when I would expect it to contain numerous rows.
Schema:
CREATE TABLE trip (
device_id bigint,
deployment_id bigint,
utc_startdate timestamp,
other columns....
PRIMARY KEY ((device_id, deployment_id), utc_startdate)
) WITH CLUSTERING ORDER BY (utc_startdate ASC);
Any help would be greatly appreciated.
What does your table schema (in particular, your PRIMARY KEY definition) look like? Even without seeing it, I am fairly certain you are seeing this behavior because you are not qualifying your query with a partition key. The ALLOW FILTERING directive will let you filter rows by date (assuming that is your clustering key), but that is not a good solution on a big cluster or a large dataset.
Let's say that you are querying users in a certain geographic region. If you used region as a partition key, you could run this query, and it would work:
SELECT * FROM users
WHERE region='California'
AND date >= '2015-01-01' AND date <= '2015-12-31';
Give Patrick McFadin's article on Getting Started with Timeseries Data a read. It has some good examples that should help you.
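Applying the same pattern to the trip table from the question, whose partition key is (device_id, deployment_id) with utc_startdate as the clustering column, a query of this shape runs directly in CQL:
SELECT * FROM trip
WHERE device_id = 1
  AND deployment_id = 1
  AND utc_startdate >= '2015-01-01'
  AND utc_startdate < '2016-01-01';
(The exclusive upper bound of '2016-01-01' also picks up rows timestamped during 2015-12-31, which a <= '2015-12-31' comparison against a timestamp column would miss.)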
