I have a table TestTable with partition key TestColumn.
Input dates:
from_date= "2017-04-20T16:31:54.451071+00:00"
to_date = "2018-04-20T16:31:54.451071+00:00"
When I query the date with an equality condition, it works:
key_expr = Key('TestColumn').eq(to_date)
query_resp = table.query(KeyConditionExpression=key_expr)
But when I use a between query, it does not work:
key_expr = Key('TestColumn').between(from_date, to_date)
query_resp = table.query(KeyConditionExpression=key_expr)
Error:
Unknown err_msg while querying dynamodb: An error occurred (ValidationException) when calling the Query operation: Query key condition not supported
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html
DynamoDB Query will return data from one and only one partition, meaning you have to supply a single partition key in the request.
KeyConditionExpression
The condition that specifies the key value(s) for items to be retrieved by the Query action. The condition must perform an equality test on a single partition key value.
You can optionally use a BETWEEN operator on a sort key (but you still have to supply a single partition key).
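For illustration, a minimal boto3 sketch of a valid key condition, assuming a hypothetical table whose partition key is 'pk' and whose sort key is 'created_at' (neither name appears in the question):
from boto3.dynamodb.conditions import Key

# equality on the partition key, BETWEEN on the sort key
key_expr = Key('pk').eq('some-partition') & Key('created_at').between(from_date, to_date)
query_resp = table.query(KeyConditionExpression=key_expr)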
If you use a Scan instead, you can apply a FilterExpression with the BETWEEN operator on TestColumn, as in the sketch below.
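A hedged sketch of that Scan, reusing the table and dates from the question (note that a Scan reads the entire table, so it can be slow and costly on large tables):
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource('dynamodb').Table('TestTable')
scan_resp = table.scan(
    FilterExpression=Attr('TestColumn').between(from_date, to_date)
)
items = scan_resp['Items']  # pagination via LastEvaluatedKey omitted for brevity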
I have an empty table defined in Snowflake as:
CREATE OR REPLACE TABLE db1.schema1.table(
ACCOUNT_ID NUMBER NOT NULL PRIMARY KEY,
PREDICTED_PROBABILITY FLOAT,
TIME_PREDICTED TIMESTAMP
);
It creates the correct table, which has been verified with the DESC command in SQL. Then, using the Snowflake Python connector, we are trying to execute the following query:
insert_query = f'INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) VALUES ({accountId}, {risk_score},{ct});'
ctx.cursor().execute(insert_query)
Just before this query the variables are defined. The main challenge is getting the current timestamp written into Snowflake. The value of ct is defined as:
import datetime
ct = datetime.datetime.now()
print(ct)
2021-04-30 21:54:41.676406
But when we try to execute this INSERT query, we get the following error message:
ProgrammingError: 001003 (42000): SQL compilation error:
syntax error line 1 at position 157 unexpected '21'.
Can I kindly get some help on how to format the datetime value here? Help is appreciated.
In addition to the answer @Lukasz provided, you could also consider defining current_timestamp() as the default for the TIME_PREDICTED column:
CREATE OR REPLACE TABLE db1.schema1.table(
ACCOUNT_ID NUMBER NOT NULL PRIMARY KEY,
PREDICTED_PROBABILITY FLOAT,
TIME_PREDICTED TIMESTAMP DEFAULT current_timestamp
);
And then just insert ACCOUNT_ID and PREDICTED_PROBABILITY:
insert_query = f'INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES(ACCOUNT_ID, PREDICTED_PROBABILITY) VALUES ({accountId}, {risk_score});'
ctx.cursor().execute(insert_query)
It will automatically assign the insert time to TIME_PREDICTED.
Educated guess: when performing the insert with:
insert_query = f'INSERT INTO ...(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED)
VALUES ({accountId}, {risk_score},{ct});'
This is string interpolation: ct is interpolated as the unquoted string representation of a datetime, which is not valid SQL, hence the syntax error.
I would suggest using proper variable binding instead:
ctx.cursor().execute(
    "INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES "
    "(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) "
    "VALUES(:1, :2, :3)",
    (accountId, risk_score, ("TIMESTAMP_LTZ", ct))
)
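One caveat: the :1, :2, :3 placeholders are numeric binding, and the Snowflake Python connector defaults to 'pyformat', so the paramstyle has to be switched before connecting. A sketch, with connection details elided:
import snowflake.connector

snowflake.connector.paramstyle = 'numeric'  # must be set before connect()
ctx = snowflake.connector.connect(user='...', password='...', account='...')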
Avoid SQL Injection Attacks
Avoid binding data using Python’s formatting function because you risk SQL injection. For example:
# Binding data (UNSAFE EXAMPLE)
con.cursor().execute(
"INSERT INTO testtable(col1, col2) "
"VALUES({col1}, '{col2}')".format(
col1=789,
col2='test string3')
)
Instead, store the values in variables, check those values (for example, by looking for suspicious semicolons inside strings), and then bind the parameters using qmark or numeric binding style.
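For completeness, safe binding also works with the connector's default 'pyformat' style; a sketch reusing the names from the question:
ctx.cursor().execute(
    "INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES "
    "(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) "
    "VALUES (%s, %s, %s)",
    (accountId, risk_score, ct)
)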
You forgot to place quotes around {ct}. The code should be:
insert_query = "INSERT INTO DATA_LAKE.CUSTOMER.ACT_PREDICTED_PROBABILITIES(ACCOUNT_ID, PREDICTED_PROBABILITY, TIME_PREDICTED) VALUES ({accountId}, {risk_score},'{ct}');".format(accountId=accountId,risk_score=risk_score,ct=ct)
ctx.cursor().execute(insert_query)
I am shifting my database from MongoDB to DynamoDB. I have a problem with the delete function on a table where labName is the partition key and serialNumber is the sort key; each record also carries an id called feedId. I want to delete all records from the table where labName is a given value and feedId is NOT IN (an array of ids).
I am doing it in MongoDB with the code below.
Is there a way with BatchWriteItem where I can add a condition on feedId without the sort key?
let dbHandle = await getMongoDbHandle(dbName);
let query = {
feedid: {$nin: feedObjectIds}
}
let output = await dbModule.removePromisify(dbHandle,
dbModule.collectionNames.feeds, query);
While working with DynamoDB, you can perform a conditional retrieval (GET) / deletion (DELETE) of records if and only if you provide all of the attributes of the primary key. For example:
For a simple primary key, you only need to provide a value for the partition key.
For a composite primary key, you must provide values for both the partition key and the sort key.
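In practice, the NOT IN requirement cannot be expressed in BatchWriteItem itself: you can Query the partition, filter out the feedIds you want to keep, and then batch-delete the rest by full composite key. A minimal boto3 sketch (the table name 'Feeds' and the literal values are assumptions, not from the question):
import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource('dynamodb').Table('Feeds')  # hypothetical table name

# fetch items in the partition whose feedId is NOT in the keep-list
resp = table.query(
    KeyConditionExpression=Key('labName').eq('lab-1'),
    FilterExpression=~Attr('feedId').is_in(['id-1', 'id-2'])
)

# delete each match by its full composite key
with table.batch_writer() as batch:
    for item in resp['Items']:
        batch.delete_item(Key={'labName': item['labName'],
                               'serialNumber': item['serialNumber']})
# query pagination via LastEvaluatedKey omitted for brevity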
I am new to Presto, and can't quite figure out how to check if a key is present in a map. When I run a SELECT query, this error message is returned:
Key not present in map: element
SELECT value_map['element'] FROM
mytable
WHERE name = 'foobar'
Adding AND contains(value_map, 'element') does not work.
The data type is a string array:
SELECT typeof('value_map') FROM mytable
returns varchar(9).
How would I only select records where 'element' is present in the value_map?
You can look up a value in a map with element_at, which returns NULL when the key is absent, like this:
SELECT element_at(value_map, 'element')
FROM ...
WHERE element_at(value_map, 'element') IS NOT NULL
element_at is ambiguous in that case: it will return NULL both when there is no such key and when the key exists but has a NULL value associated with it. A guaranteed approach is contains(map_keys(my_map), 'mykey'), which admittedly should be a bit slower than the original variant.
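Putting that together for the query in question, a sketch of the guaranteed variant (the presto-python-client is an assumption here; only the SQL itself matters):
import prestodb  # pip install presto-python-client

# connection details are placeholders
conn = prestodb.dbapi.connect(host='localhost', port=8080, user='me',
                              catalog='hive', schema='default')
cur = conn.cursor()
cur.execute(
    "SELECT value_map['element'] "
    "FROM mytable "
    "WHERE name = 'foobar' "
    "AND contains(map_keys(value_map), 'element')"
)
rows = cur.fetchall()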
I am trying to query DynamoDB for modified_time within the last 1 day and jstatus = Error.
Here, JobStatusIndex is the Global Secondary Index, with modified_time as the partition key and jstatus as the sort key.
dynamo.query({
TableName: "Jobs",
IndexName: "JobStatusIndex", // Global Secondary Index
KeyConditionExpression: `modified_time >= :ter and jstatus = :ste`,
ExpressionAttributeValues: {
':ter': moment().subtract(1, 'day').unix(),
':ste': "Error"
}
},(err) => console.log(err))
But I get an error that says:
ValidationException: Query key condition not supported
What could be the reason for this? I just don't get it.
I have gone through some SO questions, but they didn't solve the problem. I already have the required keys in place. What am I missing?
You can only use an equality operator on your partition key (modified_time), and to perform a Query you must specify a single partition key value.
You have a lot of different options depending on what you are trying to achieve. Most likely, you have your partition key and sort key the wrong way around for your GSI. If you make jstatus your partition key and modified_time your sort key, you can do the query, as in the sketch below.
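A hedged boto3 sketch of that flipped design, assuming the GSI is recreated with jstatus as the partition key and modified_time as the sort key (the question uses Node.js; this is the Python equivalent):
import time
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('Jobs')
one_day_ago = int(time.time()) - 86400  # same cutoff as moment().subtract(1, 'day').unix()

resp = table.query(
    IndexName='JobStatusIndex',
    KeyConditionExpression=Key('jstatus').eq('Error')
                           & Key('modified_time').gte(one_day_ago)
)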
Alternatively you could consider doing a scan, which does not require you to specify any index or keys.
In my Cassandra table I have a map collection, and I have indexed the map keys.
CREATE TABLE IF NOT EXISTS test.collection_test(
name text,
year text,
attributeMap map<text,text>,
PRIMARY KEY ((name, year))
);
CREATE INDEX ON collection_test (attributeMap);
The QueryBuilder syntax is as below:
select().all().from("test", "collection_test")
.where(eq("name", name)).and(eq("year", year));
How should I put a WHERE condition on attributeMap?
First of all, you will need to create an index on the keys in your map. By default, an index created on a map indexes the values of the map, not the keys. There is special syntax to index the keys:
CREATE INDEX attributeKeyIndex ON collection_test (KEYS(attributeMap));
Next, to SELECT from a map with indexed keys, you'll need the CONTAINS KEY keyword. But currently, there is no definition for this functionality in the query builder API. However, there is an open ticket to support it: JAVA-677.
Currently, to accomplish this with the Java Driver, you'll need to build your own query or use a prepared statement:
PreparedStatement statement = _session.prepare("SELECT * " +
"FROM test.collection_test " +
"WHERE attributeMap CONTAINS KEY ?");
BoundStatement boundStatement = statement.bind(yourKeyValue);
ResultSet results = _session.execute(boundStatement);
Finally, you should read through the DataStax doc on When To Use An Index. Secondary indexes are known to not perform well. I can't imagine that a secondary index on a collection would be any different.