Query with multiple values for the same attribute in DynamoDB (Node.js)

Is there any way to query a dynamodb table with multiple values for a single attribute?
TableName: "sdfdsgfdg"
IndexName: 'username-category-index',
KeyConditions: {
"username": {
"AttributeValueList": { "S": "aaaaaaa#gmail.com" }
,
"ComparisonOperator": "EQ"
},
"username": {
"AttributeValueList": { "S": "hhhhh#gmail.com" }
,
"ComparisonOperator": "EQ"
},
"category": {
"AttributeValueList": { "S": "Coupon" }
,
"ComparisonOperator": "EQ"
}
}

The BatchGetItem API can be used to get multiple items from a DynamoDB table. However, it can't be used in your case because you are reading the data from an index.
The BatchGetItem operation returns the attributes of one or more items
from one or more tables. You identify requested items by primary key.
From an API perspective, there is no other solution. You may need to look at this from a data-modelling perspective and design the table/index to satisfy your Query Access Pattern (QAP).
Also, note that querying the index multiple times, once per partition key value, won't hurt performance as long as the number of values stays small.
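As a minimal sketch of that approach, assuming the AWS SDK for JavaScript v2 and the table/index names from the question: since KeyConditions can't carry two values for the same key attribute (duplicate keys in a JS object literal silently collapse into one), issue one Query per username and merge the results.

const AWS = require("aws-sdk");
const dynamodb = new AWS.DynamoDB();

const usernames = ["aaaaaaa@gmail.com", "hhhhh@gmail.com"];

Promise.all(usernames.map((username) =>
  dynamodb.query({
    TableName: "sdfdsgfdg",
    IndexName: "username-category-index",
    KeyConditions: {
      username: {
        AttributeValueList: [{ S: username }],
        ComparisonOperator: "EQ"
      },
      category: {
        AttributeValueList: [{ S: "Coupon" }],
        ComparisonOperator: "EQ"
      }
    }
  }).promise()
))
  .then((results) => {
    // Merge the per-username result sets into a single item list.
    const items = results.reduce((all, r) => all.concat(r.Items), []);
    console.log(items);
  })
  .catch(console.error);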

Related

Cosmos DB SQL query: how to count sub-properties?

I have this kind of JSON document in a CosmosDB database.
{
  "Version": 0,
  "Entity": {
    "ID": "xxxxxxx",
    "EventHistory": {
      "2020-04-28T16:30:35.6887561Z": "NEW",
      "2020-04-28T16:35:21.1811993Z": "PROCESSED"
    },
    "SourceSystem": "xxxx",
    "SourceSystemIdentifier": "xxxx",
    "PCC": "xxx",
    "StorageReference": "xxxxxxxxxxxx",
    "SupplementaryData": {
      "eTicketCount": "2"
    }
  }
}
The number of sub-properties within the EventHistory node is dynamic; in the example there are two, but it can be any number.
I couldn't find a way to count how many sub-properties the node contains. At a minimum, I need to query the documents that have only one property declared.
FYI: I'm not able to change the format of the documents. I know that it would be more convenient to store them as an array.
I tried the ARRAY_LENGTH and COUNT functions, but since EventHistory is not an array, they can't be applied.
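The thread leaves this one unanswered, but one possible workaround (an assumption, not from the thread): Cosmos DB's SQL dialect cannot enumerate an object's keys, while its JavaScript user-defined functions can. A sketch of such a UDF, whose name countProperties is hypothetical:

// Register this as a UDF on the collection (via the portal or an SDK).
// It counts the keys of any JSON object; the guard handles missing nodes.
function countProperties(obj) {
  return Object.keys(obj || {}).length;
}

Once registered, a query such as SELECT * FROM c WHERE udf.countProperties(c.Entity.EventHistory) = 1 should return only the documents with a single event.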

Cloudant Sorting on a nullable field

I want to sort on a field, say name, which is indexed in Cloudant. Using the index without a sort, I get all the documents, both those that have the name field and those that don't. But when I sort on the name field, the documents that lack it are not returned.
Is there any way to do this using the query indexes? I want all the documents in sorted order, including those that don't have the name field.
For Example :
Below are some documents:
{
  "_id": 1234,
  "classId": "abc",
  "name": "Happa"
}
{
  "_id": 12345,
  "classId": "abc",
  "name": "Prasanth"
}
{
  "_id": 123456,
  "classId": "abc"
}
Below is the query I am trying to execute:
{
  "selector": {
    "classId": "abc",
    "name": {
      "or": [
        { "$exists": true }, { "$exists": false }
      ]
    }
  },
  "sort": [{ "classId": "asc" }, { "name": "asc" }],
  "use_index": "idx-classId_name"
}
I am expecting all the documents to be returned in sorted order, including the document that doesn't have the name field.
Your query makes no sense to me as it stands. You're requesting a listing of documents which either have, or don't have, a specific field (meaning every document), and expecting to sort on a field that may or may not exist. Such an order isn't defined out of the box.
I'd remove the name clause from the selector, sorting only on the classId field, which appears in every document, and then do the secondary partial ordering on the client side, so you can decide how to mix the documents without the name field in with those that have it.
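A minimal sketch of that client-side step, assuming docs is a result set already sorted by classId and that documents missing name should sort last within their class:

function sortByClassIdThenName(docs) {
  return docs.slice().sort((a, b) => {
    if (a.classId !== b.classId) {
      return a.classId < b.classId ? -1 : 1;
    }
    // Decide here how missing names mix in; this variant puts them last.
    if (a.name === undefined) return b.name === undefined ? 0 : 1;
    if (b.name === undefined) return -1;
    return a.name < b.name ? -1 : a.name > b.name ? 1 : 0;
  });
}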
Another solution is to use a view instead of a Cloudant Query index. I've not tested this, but hopefully the intent is clear:
function (doc) {
  if (doc && doc.classId) {
    var name = doc.name || "[notfound]";
    emit(doc.classId + "-" + name, 1);
  }
}
This keys the docs on "classId-name", substituting a fixed sentinel value for docs with no name.
Querying the view should return the documents lexicographically ordered on this compound key (which you can reverse with a query parameter if you wish).
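A sketch of querying such a view from Node.js with the nano client; the database, design document, and view names are assumptions:

const nano = require("nano")("http://localhost:5984");
const db = nano.db.use("mydb");

// Rows come back ordered by the compound "classId-name" key; pass
// descending: true to reverse the order.
db.view("sorting", "byClassAndName", { descending: false })
  .then((body) => body.rows.forEach((row) => console.log(row.key, row.id)))
  .catch(console.error);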

Apply a filter on an array field in CouchDB

I'm working on Hyperledger Fabric. I need a particular element from an array in CouchDB, not the full document.
Example
{
  "f_id": "1",
  "History": [
    {
      "amount": "1",
      "contactNo": "-",
      "email": "i2@mail.com"
    },
    {
      "amount": "5",
      "contactNo": "-",
      "email": "i@gmail.com"
    }
  ],
  "size": "12"
}
I want only the object in the History array whose email is "i2@mail.com", not the full History array.
Mango query:
{
  "selector": {
    "History": {
      "$elemMatch": {
        "email": "i2@mail.com"
      }
    }
  }
}
Output:
{
  "f_id": "1",
  "History": [
    {
      "amount": "1",
      "contactNo": "-",
      "email": "i2@mail.com"
    },
    {
      "amount": "5",
      "contactNo": "-",
      "email": "i@gmail.com"
    }
  ],
  "size": "12"
}
This returns the full History array, but I need only the first object in it.
Can anyone guide me?
Thanks.
I think it's not possible, because rich queries retrieve complete records (key-value pairs) matching the given selector.
You may want to reconsider your design. For example, if you want to hold a history and query it, this approach may work out:
GetState of your special key my_record.
If the key exists:
Enrich the old value with additional attributes: {"DocType": "my_history", "time": "789546"}. With the help of these new attributes, it becomes possible to create indexes and search via querying.
PutState the enriched old value with a new key my_record_<uniqueId>.
PutState the new value with key my_record.
If the key doesn't exist, just put your value with key my_record, without any new attributes.
With this approach, the my_record key always holds the latest value. You can query the history on any attribute, with or without pagination, by using indexes (or not, based on your performance concerns).
This approach also consumes less space. If you accumulate history on a single key, the existing history is copied into every new version, so each entry costs previous_size + delta instead of just delta.
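A minimal sketch of those steps as Node.js chaincode, assuming the fabric-contract-api package; the contract name, key names, and timestamp handling are illustrative and may vary by shim version:

const { Contract } = require("fabric-contract-api");

class RecordContract extends Contract {
  async putRecord(ctx, newValueJson) {
    const key = "my_record";
    const existing = await ctx.stub.getState(key);

    if (existing && existing.length > 0) {
      // Enrich the old value so it can be indexed and queried as history.
      const old = JSON.parse(existing.toString());
      old.DocType = "my_history";
      old.time = ctx.stub.getTxTimestamp().seconds.toString();

      // Keep the old version under a unique history key.
      const historyKey = "my_record_" + ctx.stub.getTxID();
      await ctx.stub.putState(historyKey, Buffer.from(JSON.stringify(old)));
    }

    // The main key always holds the latest value.
    await ctx.stub.putState(key, Buffer.from(newValueJson));
  }
}

module.exports = RecordContract;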

How to reduce query execution time using mango query in CouchDB?

I am paginating 15,000 records using a Mango query in CouchDB, but as the skip value grows, the execution time increases.
Here is my query:
{
  "selector": {
    "name": { "$ne": "null" }
  },
  "fields": ["_id", "_rev", "name", "email"],
  "sort": [{ "name": "asc" }],
  "limit": 10,
  "skip": '.$skip.'
}
The skip value here is dynamic, depending on the page number, and as it increases the query execution time increases with it.
CouchDB "Mango" queries that use the $ne (not equal) operator tend to suffer performance issues because of the way the indexing works. One solution is to create and index that *only contains documents where name does not equal null by using CouchDB's relative new partial index feature.
Partial indexes allow the database to be filtered at index time, so that the built index only contains documents that pass the filter test you specify. The index can then be used with a query at query time to further winnow the data set down.
An index is created by calling the /db/_index endpoint:
POST /db/_index HTTP/1.1
Content-Type: application/json
Content-Length: 144
Host: localhost:5984
{
  "index": {
    "partial_filter_selector": {
      "name": {
        "$ne": "null"
      }
    },
    "fields": ["_id", "_rev", "name", "email"]
  },
  "ddoc": "mypartialindex",
  "type": "json"
}
This creates an index where only documents whose name is not null are included. We can then specify this index at query time:
{
  "selector": {
    "name": {
      "$ne": "null"
    }
  },
  "use_index": "mypartialindex"
}
In the above query, my selector is choosing all records, but the index it is accessing is already filtered. You may add additional clauses to the selector here to further filter the data at query time.
Partial indexing is described in the CouchDB documentation.
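A sketch of the same idea from Node.js with the nano client; the database name is an assumption, and the index is built on name alone so the sort can use it:

const nano = require("nano")("http://localhost:5984");
const db = nano.db.use("mydb");

async function run() {
  // Build the partial index once; only docs whose name is not "null"
  // are written into it, so $ne never has to scan the excluded docs.
  await db.createIndex({
    index: {
      partial_filter_selector: { name: { "$ne": "null" } },
      fields: ["name"]
    },
    ddoc: "mypartialindex",
    type: "json"
  });

  // Query against the pre-filtered index.
  const page = await db.find({
    selector: { name: { "$ne": "null" } },
    fields: ["_id", "_rev", "name", "email"],
    sort: [{ name: "asc" }],
    limit: 10,
    use_index: "mypartialindex"
  });
  console.log(page.docs);

  // For deep pagination, passing the returned bookmark into the next
  // request is generally cheaper than a growing skip value.
  // const next = await db.find({ ...same query..., bookmark: page.bookmark });
}

run().catch(console.error);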

How to create complex structure in Cassandra with CQL3

I have a problem representing a complex data structure in Cassandra.
A JSON example of the data:
{
  "A": {
    "A_ID": "1111",
    "field1": "value1",
    "field2": "value2",
    "field3": [
      {
        "id": "id1",
        "name": "name1",
        "segment": [
          {
            "segment_id": "segment_id_1",
            "segment_name": "segment_name_1",
            "segment_value": "segment_value_1"
          },
          {
            "segment_id": "segment_id_2",
            "segment_name": "segment_name_2",
            "segment_value": "segment_value_2"
          },
          ...
        ]
      },
      {
        "id": "id2",
        "name": "name2",
        "segment": [
          {
            "segment_id": "segment_id_3",
            "segment_name": "segment_name_3",
            "segment_value": "segment_value_3"
          },
          {
            "segment_id": "segment_id_4",
            "segment_name": "segment_name_4",
            "segment_value": "segment_value_4"
          },
          ...
        ]
      },
      ...
    ]
  }
}
Only one query will be used:
Find by A_ID.
I think this data should be stored in one table (column family), without serialization/deserialization operations, for efficiency.
How can I do this if CQL does not support nested maps and lists?
Cassandra 2.1 adds support for nested structures: https://issues.apache.org/jira/browse/CASSANDRA-5590
The downside to "just store it as a json/protobuf/avro/etc blob" is that you have to read-and-rewrite the entire blob to update any field. So at the very least you should pull your top level fields into Cassandra columns, leveraging collections as appropriate.
As you will be using it just as a key/value, you could store it as JSON or, to save space, as something like BSON or even Protobuf.
I would personally store it as a Protobuf record, since it doesn't store the field names, which repeat in your case.
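A sketch of the nested-structures route instead (the Cassandra 2.1 feature linked above): user-defined types plus frozen nested collections, issued here through the Node.js cassandra-driver. All keyspace, type, and table names are illustrative.

const cassandra = require("cassandra-driver");
const client = new cassandra.Client({
  contactPoints: ["127.0.0.1"],
  localDataCenter: "datacenter1",
  keyspace: "myks"
});

const statements = [
  `CREATE TYPE IF NOT EXISTS segment (
     segment_id text, segment_name text, segment_value text)`,
  `CREATE TYPE IF NOT EXISTS field3_item (
     id text, name text, segments frozen<list<frozen<segment>>>)`,
  // Top-level fields become real columns, so they can be updated without
  // rewriting the whole record; only the nested part is frozen.
  `CREATE TABLE IF NOT EXISTS a_records (
     a_id text PRIMARY KEY,
     field1 text,
     field2 text,
     field3 frozen<list<frozen<field3_item>>>)`
];

async function run() {
  for (const cql of statements) {
    await client.execute(cql);
  }
  // The single access pattern: find by A_ID.
  const result = await client.execute(
    "SELECT * FROM a_records WHERE a_id = ?", ["1111"], { prepare: true });
  console.log(result.rows);
  await client.shutdown();
}

run().catch(console.error);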
