Case insensitive search in arrays for CosmosDB / DocumentDB - azure

Let's say I have these documents in my CosmosDB (DocumentDB API, .NET SDK):
{
  // partition key of the collection
  "userId": "0000-0000-0000-0000",
  "emailAddresses": [
    "someaddress@somedomain.com",
    "Another.Address@someotherdomain.com"
  ]
  // some more fields
}
I now need to find out if I have a document for a given email address. However, I need the query to be case insensitive.
There are ways to do a case-insensitive search on a single field (though they do a full scan):
How to do a Case Insensitive search on Azure DocumentDb?
select * from json j where LOWER(j.name) = 'timbaktu'
e => e.Id.ToLower() == key.ToLower()
These do not work for arrays. Is there an alternative way? A user-defined function looks like it could help.
I am mainly looking for a temporary low-effort solution to support the scenario (I have multiple collections like this). I probably need to switch to a data structure like this at some point:
{
  "userId": "0000-0000-0000-0000",
  // Option A
  "emailAddresses": [
    {
      "displayName": "someaddress@somedomain.com",
      "normalizedName": "someaddress@somedomain.com"
    },
    {
      "displayName": "Another.Address@someotherdomain.com",
      "normalizedName": "another.address@someotherdomain.com"
    }
  ],
  // Option B
  "emailAddressesNormalized": [
    "someaddress@somedomain.com",
    "another.address@someotherdomain.com"
  ]
}
Unfortunately, my production database already contains documents that would need to be updated to support the new structure.
My production collections contain only 100s of these items, so I am even tempted to just get all items and do the comparison in memory on the client.

If performance matters, you should consider one of the normalization solutions you have proposed yourself in the question. Then you could index the normalized field and get results without doing a full scan.
If for some reason you really don't want to touch the documents, then perhaps the feature you are missing is a simple JOIN?
An example query which will do a case-insensitive search within the array, with a scan:
SELECT c FROM c
JOIN email IN c.emailAddresses
WHERE LOWER(email) = LOWER('ANOTHER.ADDRESS@someotherdomain.com')
You can find more examples of joining in Getting started with SQL commands in Cosmos DB.
Note that the WHERE criterion in the given example cannot use an index, so consider using it only alongside another, more selective (indexed) criterion.
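If it helps to see the query wired up in an SDK, here is a minimal sketch using the Python SDK (azure-cosmos), purely for illustration; the question uses the .NET SDK, and the endpoint, key, database and container names below are placeholders, not taken from the question:

from azure.cosmos import CosmosClient

# Placeholder connection details and names; adjust to your account.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

query = (
    "SELECT VALUE c FROM c "
    "JOIN email IN c.emailAddresses "
    "WHERE LOWER(email) = LOWER(@email)"
)
matches = list(container.query_items(
    query=query,
    parameters=[{"name": "@email", "value": "ANOTHER.ADDRESS@someotherdomain.com"}],
    enable_cross_partition_query=True,
))
print(matches)

As with the raw query above, this still scans the array values, so pair it with a selective, indexed filter (e.g. the partition key) where you can.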

Indexing arrays in CosmosDB

Why doesn't CosmosDB index arrays by default? The default index path is
"path": "/*"
Doesn't that mean "index everything", not "index everything except arrays"?
If I add my array field to the index with something like this:
"path": "/tags/[]/?"
It will work and start indexing that particular array field.
But my question is why doesn't "index everything" index everything?
EDIT: Here's a blog post that describes the behavior I'm seeing: http://www.devwithadam.com/2017/08/querying-for-items-in-array-in-cosmosdb.html ARRAY_CONTAINS queries are very slow, clearly not using the index. If you add the field in question to the index explicitly, the queries are fast (clearly they start using the index).
"New" index layout
As stated in Index Types:
Azure Cosmos containers support a new index layout that no longer uses the Hash index kind. If you specify a Hash index kind on the indexing policy, the CRUD requests on the container will silently ignore the index kind and the response from the container only contains the Range index kind. All new Cosmos containers use the new index layout by default.
The issue below does not apply to the new index layout; there, the default indexing policy works fine (and delivers the results in 36.55 RUs). However, pre-existing collections may still be using the old layout.
"Old" index layout
I was able to reproduce the issue with ARRAY_CONTAINS that you are asking about.
I set up a CosmosDB collection with 100,000 posts from the SO data dump (e.g. this question would be represented as below):
{
  "id": "50614926",
  "title": "Indexing arrays in CosmosDB",
  /* Other irrelevant properties omitted */
  "tags": [
    "azure",
    "azure-cosmosdb"
  ]
}
I then ran the following query:
SELECT COUNT(1)
FROM t IN c.tags
WHERE t = 'sql-server'
The query took over 2,000 RUs with the default indexing policy, and 93 RUs with the following addition (as shown in your linked article):
{
  "path": "/tags/[]/?",
  "indexes": [
    {
      "kind": "Hash",
      "dataType": "String",
      "precision": -1
    }
  ]
}
However, what you are seeing here is not that the array values aren't being indexed by default; it is just that the default range index is not useful for your query.
The range index uses keys based on partial forward paths, so it will contain paths such as the following:
tags/0/azure
tags/0/c#
tags/0/oracle
tags/0/sql-server
tags/1/azure-cosmosdb
tags/1/c#
tags/1/sql-server
With this index structure it starts at tags/0/sql-server and then reads all of the remaining tags/0/ entries and the entirety of the entries for tags/n/ where n is an integer greater than 0. Each distinct document mapping to any of these needs to be retrieved and evaluated.
By contrast, the hash index uses reverse paths (more details - PDF).
StackOverflow theoretically allows a maximum of 5 tags per question to be added through the UI, so in this case (ignoring the fact that a few questions have more tags through site admin activities) the reverse paths of interest are:
sql-server/0/tags
sql-server/1/tags
sql-server/2/tags
sql-server/3/tags
sql-server/4/tags
With the reverse-path structure, finding all paths with leaf nodes of value sql-server is straightforward.
In this specific use case, as the arrays are bounded to a maximum of 5 possible values, it is also possible to use the original range index efficiently by looking at just those specific paths.
The following query took 97 RUs with the default indexing policy in my test collection.
SELECT COUNT(1)
FROM c
WHERE 'sql-server' IN (c.tags[0], c.tags[1], c.tags[2], c.tags[3], c.tags[4])
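As a rough illustration only (not part of the original answer), the bounded-paths query can be run from the Python SDK and its request charge read from the response headers; the account, database and container names are assumptions:

from azure.cosmos import CosmosClient

# Assumed connection details; "posts" is the container holding the SO dump.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("posts")

query = (
    "SELECT COUNT(1) FROM c "
    "WHERE 'sql-server' IN (c.tags[0], c.tags[1], c.tags[2], c.tags[3], c.tags[4])"
)
results = list(container.query_items(query=query, enable_cross_partition_query=True))
print(results)

# The charge for the last page of results is reported in the
# x-ms-request-charge response header.
print(container.client_connection.last_response_headers.get("x-ms-request-charge"))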
Cosmos DB does index all the elements of an array. By default, all Azure Cosmos DB data is indexed. Read more here: https://learn.microsoft.com/en-us/azure/cosmos-db/indexing-policies

flask dynamo query for count fields

Consider this schema in DynamoDB; we need the count of questions:
[
    {
        'TableName': "user_detail",
        'KeySchema': [
            {'AttributeName': "timestamp", 'KeyType': "HASH"},
            {'AttributeName': "question", 'KeyType': "RANGE"},
        ],
        'AttributeDefinitions': [
            {'AttributeName': "timestamp", 'AttributeType': "S"},
            {'AttributeName': "question", 'AttributeType': "N"},
        ],
        'ProvisionedThroughput': {
            'ReadCapacityUnits': 40,
            'WriteCapacityUnits': 40
        }
    }
]
I'm a beginner with DynamoDB; can anyone give me an idea for this one? We need a query; the SQL equivalent would be: select count(question) from user_detail where question = 1
Thanks in advance
I will throw out some pointers. DynamoDB has two types of APIs:
Option 1:-
1) Scan API - scans the whole table. The Scan API should be used when the hash key value is not known.
2) Query API - queries the table using the hash key. The hash key is a must for the Query API.
In your case, the hash key value is not known, so you can't use the Query API. You can use the Scan API, but it is very costly in terms of performance and price, so it should be avoided if you have a table with millions of items.
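For completeness, a minimal boto3 sketch of that costly scan-and-count approach (the filter is applied after the read, so you still pay for every item scanned):

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("user_detail")

count = 0
kwargs = {"FilterExpression": Attr("question").eq(1), "Select": "COUNT"}
while True:
    response = table.scan(**kwargs)
    count += response["Count"]
    if "LastEvaluatedKey" not in response:
        break
    # Keep paging until the whole table has been scanned.
    kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]

print(count)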
The alternative is to create a global secondary index (GSI) with the question attribute as the hash key and some other field as the sort key (possibly timestamp). This way, you should be able to use the Query API on the GSI. However, this wouldn't solve the problem completely.
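A sketch of adding such a GSI with boto3; the index name "question-index" is made up for illustration:

import boto3

client = boto3.client("dynamodb")

client.update_table(
    TableName="user_detail",
    AttributeDefinitions=[
        {"AttributeName": "question", "AttributeType": "N"},
        {"AttributeName": "timestamp", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "question-index",
            "KeySchema": [
                {"AttributeName": "question", "KeyType": "HASH"},
                {"AttributeName": "timestamp", "KeyType": "RANGE"},
            ],
            # Keys only are enough for counting.
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5,
            },
        }
    }],
)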
DynamoDB doesn't have aggregate functions like count, min and max, so you need to count the number of items in the result set on the client side.
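Putting the two together, a boto3 sketch that queries the hypothetical "question-index" GSI and counts on the client by summing the per-page counts:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("user_detail")

count = 0
kwargs = {
    "IndexName": "question-index",
    "KeyConditionExpression": Key("question").eq(1),
    "Select": "COUNT",
}
while True:
    response = table.query(**kwargs)
    count += response["Count"]
    if "LastEvaluatedKey" not in response:
        break
    kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]

print(count)  # equivalent of: select count(question) from user_detail where question = 1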
Option 2:-
If you have an option to change the data model, you can redesign the above table as mentioned below:-
question - hash key
timestamp - range key
I have seen many use cases using timestamp as range key. Please analyse your query access patterns (QAP) for all your use cases and make the decision accordingly.
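With the redesigned table (question as the hash key), the same count needs no GSI and no scan; a short boto3 sketch, assuming the table keeps the name user_detail:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("user_detail")

# The hash key is now `question`, so the count query hits the base table directly.
response = table.query(
    KeyConditionExpression=Key("question").eq(1),
    Select="COUNT",
)
print(response["Count"])  # page with LastEvaluatedKey for very large result sets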

Marklogic QueryByExample in collection NodeJS

TLDR
Is there a way to limit queryByExample to a collection in NodeJS?
Problem faced
I have a complex query with some optional fields (i.e. sometimes some search fields will be omitted), so I need to create the query dynamically, e.g. in JSON. QueryByExample seems to be the right tool to use here as it gives me the flexibility to pass JSON. However, my problem is that I would like to limit my search to only one collection or directory.
e.g. I was hoping for something like
searchJSON = {
  title: { $word: "test" },
  description: { $word: "desc" }
};

// query
db.documents.query(qb.where(
  qb.collection("collectionName"),
  qb.byExample(searchJSON)
)).result()...
In this case searchJSON could have been built dynamically, for example maybe sometimes title may be omitted from the search.
This doesn't work because the query builder requires queryByExample to be the only query, but I'd instead like to build a dynamic search query that is limited to a collection or directory.
At present, I think you would have to express the query with QueryBuilder instead of Query By Example, using:
qb.and([
  qb.collection('collectionName'),
  qb.word('title', 'test'),
  qb.word('description', 'desc')
])
See http://docs.marklogic.com/jsdoc/queryBuilder.html#word
That said, it should be possible for the Node.js API to relax that restriction based on the fixes in MarkLogic 9.0-2
Please file an issue on https://github.com/marklogic/node-client-api

Apigee Cassandra, how to query property with array of objects?

Suppose this is my data:
{
  "info": [
    {
      "name": "Dr. DRE"
    }
  ]
}
How do I query in Cassandra all rows that have "Dr. DRE" as name?
select * where ..?
I have no idea, and didn't find anything useful on google...
Since we clarified that you're using usergrid and not a native Cassandra instance, this should work for you:
where info.name='Dr. DRE'
(Note that in Usergrid 1.0, select * is implied, but you can include it if you like)
While the use of the = in this case may seem a little strange, you can use that to look for values in an array.
In Cassandra 2.1 and above, you can index the collection and query it using CONTAINS in the WHERE clause. Read more here and here.
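For the native Cassandra route, a rough sketch with the Python driver; the keyspace, table and column names are invented for illustration (assume something like CREATE TABLE music.artists (id uuid PRIMARY KEY, info_names list<text>)):

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("music")

# Index the collection column, then filter with CONTAINS (Cassandra 2.1+).
session.execute("CREATE INDEX IF NOT EXISTS ON music.artists (info_names)")
rows = session.execute(
    "SELECT * FROM music.artists WHERE info_names CONTAINS %s", ("Dr. DRE",)
)
for row in rows:
    print(row)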

Referencing external doc in CouchDB view

I am scraping a 90K-record database using JSON-RPC and I am trying to put in some basic error checking. I want to start by scraping the database twice using two different settings and adding a prefix to the second scrape. This way I can check to ensure that the two settings are not producing different records (due to dropped updates, etc.). I wanted to implement the comparison using a view that compares each document from the first scrape with its twin produced by the second scrape and then emits the names of records with a difference between them.
However, I cannot quite figure out how to pull in another doc in the view; everything I have read only discusses external docs using the emit() function, which is too late to permit me to compare them. In the example below, the lookup() function would grab the referenced document.
Is this just not possible?
function(doc) {
  if (doc._id.slice(0, 1) !== '$' && doc._id.slice(0, 1) !== '_') {
    // lookup() is hypothetical; it would fetch the "$test"-prefixed twin.
    var otherDoc = lookup('$test' + doc._id);
    if (otherDoc) {
      var keys = Object.keys(doc);
      var same = true;
      keys.forEach(function(key) {
        if ((key.slice(0, 1) !== '_') && (key.slice(0, 1) !== '$') && (key !== 'expires')) {
          // Object.equal stands in for a deep-equality check.
          if (!Object.equal(otherDoc[key], doc[key])) {
            same = false;
          }
        }
      });
      if (!same) {
        emit(doc._id, 1);
      }
    }
  }
}
Context
You are correct that this is not possible in CouchDB. The whole point of the map function is that it must be idempotent, otherwise you lose all the other nice benefits of a pre-calculated index.
This is why you cannot access external resources in the map function, whether they be other records or the clock. Any time you run a map you must always get the same result if you put the same record into it. Since there are no relationships between records in CouchDB, you cannot promise that this is possible.
Solution
However, you can still achieve your end goal, just by different means. Some possibilities...
Assuming there is some meaningful numeric value in each doc, you could use a view to take the sum of all those values and group them by which import you did ({key: <batch id>, value: <meaningful number>}). Then compare the two numbers in your client or the browser to see if they match.
A brute force approach would be to use a view to pair the docs that should match. Each doc is on a different row, but they're grouped by a common field. Then iterate through the entire index comparing the pairs. This would certainly be the quickest to code and doesn't depend on your application or data (a rough client-side sketch follows after these options).
Implement a validation function to enforce a schema on your data. Just be warned that this will reduce your write throughput since each written record will be piped out of Erlang and into the JS engine. Also, this is only applicable if you're worried about properly formed records instead of their precise content, which might not be the case.
Instead of your different batch jobs creating different docs, have them place them into the same doc. The structure might look like this: { "_id": "something meaningful", "batch_one": { ..data.. }, "batch_two": { ..data.. } } Then your validation function could compare them or you could create a view that indexes all the docs that don't match. All depends on where in your pipeline you want to do the error checking and correction.
Personally I like the last option better, but only if you don't plan to use the database as-is in production. I.e., you wouldn't want to carry around all that extra data in each record.
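If you do go with the brute-force pairing approach from the second possibility above, here is a rough client-side sketch in Python against the plain HTTP API; the database URL, the view name, and the idea of keying the view on the shared identifier are all assumptions, not part of the original answer:

import requests

DB = "http://localhost:5984/mydb"  # assumed database URL

# Assumes a view that emits the shared identifier (the _id without the
# "$test" prefix) as the key for both the original doc and its twin.
rows = requests.get(
    DB + "/_design/compare/_view/by_pair",
    params={"include_docs": "true"},
).json()["rows"]

pairs = {}
for row in rows:
    pairs.setdefault(row["key"], []).append(row["doc"])

def comparable(doc):
    # Ignore CouchDB metadata and scrape-specific fields, as in the question.
    return {k: v for k, v in doc.items()
            if not k.startswith(("_", "$")) and k != "expires"}

for key, docs in pairs.items():
    if len(docs) != 2 or comparable(docs[0]) != comparable(docs[1]):
        print("mismatch:", key)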
Hope that helps.
Cheers.
