I have some Cosmos DB documents like the following:
{
"ProductId": 1,
"Status": true,
"Code": "123456",
"IsRecall": false,
"ScanLog": [
{
"Location": {
"type": "Point",
"coordinates": [
13.5957758,
42.7111538
]
},
"TimeStamp": 201602160957190600,
"ScanType": 0,
"UserId": "1004"
},
{
"Location": {
"type": "Point",
"coordinates": [
13.5957907,
42.7111359
]
},
"TimeStamp": 201602161246336640,
"ScanType": 0,
"UserId": "1004"
}
]
}
How can I order the query results by the TimeStamp property? I've tried using this query
SELECT c.Code, b.TimeStamp FROM c JOIN b IN c.ScanLog ORDER BY b.TimeStamp
but I receive this error
Order-by over correlated collections is not supported.
What is the correct way to do this?
JOINs with ORDER BY are currently not supported.
However, here is a user defined function (UDF) that will do the trick:
function sortScanLog (scanLog) {
function compareTimeStamps(a, b) {
return a.TimeStamp - b.TimeStamp;
}
return scanLog.sort(compareTimeStamps);
}
You use it with a query like this:
SELECT c.ProductId, udf.sortScanLog(c.ScanLog) as ScanLog FROM c
If you want the opposite sort order, simply swap the a and b. So, the signature of the compareTimeStamps inner function would be:
function compareTimeStamps(b, a)
Alternatively, you can sort client-side after the results are returned.
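For example, a minimal client-side sketch in Node.js (assuming the JOIN query above has already returned its rows into an array called results) would be:
// ascending by TimeStamp; swap a and b for a descending sort
const sorted = results.slice().sort((a, b) => a.TimeStamp - b.TimeStamp);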
Right now, ORDER BY clauses mixed with JOINs are not supported: the engine can use indexed properties for JOIN operations, but it cannot re-order results based on the JOIN result.
You'd have to go with something like Larry offered, or do the JOIN in the query and sort in your own code once the results arrive; if you use C#, for example, you can sort them with LINQ.
I have an Object-Array1 with some attributes that are Object-Array2. I want to filter my Object-Array1 down to only those elements that contain a specific value in Object-Array2. How do I do this? Example:
{
"value": [
{
"title": "aaa",
"ID": 1,
"Responsible": [
{
"EMail": "abc#def.de",
"Id": 1756,
},
{
"EMail: "xyz#xyz.com",
"Id": 289,
}
]
},
{
"title": "bbbb",
"ID": 2,
"Responsible": [
{
"EMail": "tzu#iop.de",
"Id": 1756,
}
]
}
]
}
I want to filter my Object-Array1 (with title & ID) down to only those elements that contain abc#def.de.
How do I do this in Power Automate with the "Filter Array" action? I tried the following, but it didn't work.
Firstly, you haven't entered an expression; you've entered text. That will never work.
Secondly, even if you did set that as an expression, I don't think you'll be able to make it work over an array, at least, not without specifying more properties and making it a little more complex.
I think the easiest way is to use a contains condition after turning each item into a string.
The expression I am using on the left-hand side is ...
string(item()?['Responsible'])
... with contains as the operator and the email address (abc#def.de) as the value on the right-hand side.
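In the advanced mode of the Filter Array action, the whole condition can be written as a single expression; a sketch using the example data above (the array fed into Filter Array is whatever your value array is) would be:
@contains(string(item()?['Responsible']), 'abc#def.de')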
I'm not sure how to write queries in Cosmos DB, as I'm used to SQL. My question is about how to get the maximum value of a property in an array of arrays. I've been trying subqueries so far, but apparently I don't understand very well how they work.
In a structure such as the one below, how do I query for the city with the largest population among all states using the Data Explorer in Azure?
{
"id": 1,
"states": [
{
"name": "New York",
"cities": [
{
"name": "New York",
"population": 8500000
},
{
"name": "Hempstead",
"population": 750000
},
{
"name": "Brookhaven",
"population": 500000
}
]
},
{
"name": "California",
"cities":[
{
"name": "Los Angeles",
"population": 4000000
},
{
"name": "San Diego",
"population": 1400000
},
{
"name": "San Jose",
"population": 1000000
}
]
}
]
}
This is currently not possible as far as I know.
It would look a bit like this:
SELECT TOP 1 state.name as stateName, city.name as cityName, city.population FROM c
join state in c.states
join city in state.cities
--order by city.population desc <-- this does not work in this case
You could write a user defined function that will allow you to write the query you probably expect, similar to this: CosmosDB sort results by a value into an array
The result could look like:
SELECT c.name, udf.OnlyMaxPop(c.states) FROM c
function OnlyMaxPop(states) {
    function compareStates(stateA, stateB) {
        // sort descending by the population of each state's largest city
        return stateB.cities[0].population - stateA.cities[0].population;
    }
    // reduce each state to only its most populous city
    var onlyWithOneCity = states.map(s => {
        var maxpop = Math.max.apply(Math, s.cities.map(o => o.population));
        return {
            name: s.name,
            cities: s.cities.filter(x => x.population === maxpop)
        };
    });
    return onlyWithOneCity.sort(compareStates)[0];
}
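For the sample document above, the UDF would evaluate to something like this (New York's largest city outranks California's):
{
  "name": "New York",
  "cities": [
    {
      "name": "New York",
      "population": 8500000
    }
  ]
}
Note that the sample document has no top-level name property, so the SELECT c.name part of the query would simply come back empty for it.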
You would probably need to adapt the function to your exact query needs, but I am not certain what your desired result would look like.
I have a map function
function (doc) {
for(var n =0; n<doc.Observations.length; n++){
emit(doc.Scenario, doc.Observations[n].Label);
}
}
The above returns the following:
{"key":"Splunk","value":"Organized"},
{"key":"Splunk","value":"Organized"},
{"key":"Splunk","value":"Organized"},
{"key":"Splunk","value":"Generate"},
{"key":"Splunk","value":"Ingest"}
I"m looking to design a reduce function that will then return the counts of the above values, something akin to:
Organized: 3
Generate: 1
Ingest: 1
My map function has to filter on my Scenario field, which is why I emit it as the key in the map function.
I've tried a number of the built-in reduce functions, but I end up getting a count of rows, or nothing at all, as the available functions don't apply.
I just need the counts of each of the elements that appear in the value field. Also, the values shown here are representative; there could be hundreds of different values in the value field, for what that's worth.
I really appreciate the help!
Here's sample input:
{
"_id": "dummyId",
"test": "test",
"Team": "Alpha",
"CreatedOnUtc": "2019-06-20T21:39:09.5940830Z",
"CreatedOnLocal": "2019-06-20T17:39:09.5940830-04:00",
"Participants": [
{
"Name": "A",
"Role": "Person"
}
],
"Observations": [
{
"Label": "Report",
},
{
"Label": "Ingest",
},
{
"Label": "Generate",
},
{
"Label": "Ingest",
}
]
}
You can make the value part of your map key and emit an increment (1) as the map value, so that a reduce function maintains a count per key. Grouping the view output then prints the map in exactly the shape you are asking for.
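A minimal sketch of that approach, assuming you keep Scenario in the key (so the view can still be filtered per scenario) and use CouchDB's built-in _sum reduce:
Map:
function (doc) {
  for (var n = 0; n < doc.Observations.length; n++) {
    // the label goes into the key together with the scenario; the value is an increment of 1
    emit([doc.Scenario, doc.Observations[n].Label], 1);
  }
}
Reduce:
_sum
Querying the view with group_level=2 then returns one row per [Scenario, Label] pair, e.g. {"key": ["Splunk", "Organized"], "value": 3}, which is the count you are after.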
I have a document like this:
{ "id": ....,
"Title": ""title,
"ZipCodes": [
{
"Code": "code01",
"Name": "Name01"
},
{
"Code": "code02",
"Name": "Name02"
},
{
"Code": "code03",
"Name": "Name03"
} ],
"_rid": .......,
"_self": .......,
"_etag": ......,
"_attachments": "attachments/",
"_ts": ......
I used this command:
select c.id, c.ZipCodes[ARRAY_LENGTH (c.ZipCodes) -1] as ZipCodes from c
But I got an error. How can I query the last element of ZipCodes in Cosmos DB?
You can use ARRAY_SLICE for this. When passed -1 it returns an array containing the last element of the original array. Then index into that with [0] to get the single element contained (i.e. the zip code itself.)
SELECT c.id,
ARRAY_SLICE(c.ZipCodes,-1)[0] AS LastZipCode
FROM c
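For the sample document in the question, that query returns the last zip code, along these lines (the id being whatever the document's id is):
[
  {
    "id": "...",
    "LastZipCode": {
      "Code": "code03",
      "Name": "Name03"
    }
  }
]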
There is no way to query the subdocument using SELECT alone; I think you should use a WHERE condition as follows:
SELECT value udf.sortZipCode(c.ZipCodes)
from c where c.id=2 and c.Title='title'
However, here is a user defined function (UDF) that will do the trick:
function sortZipCode(zipCodes) {
    function compareZipCodes(a, b) {
        return a.Code.localeCompare(b.Code); // implement your own comparison logic here
    }
    return zipCodes.sort(compareZipCodes);
}
But I got an error. How can I query the last element of ZipCodes in Cosmos DB?
I agree with Sajeetharan that we could use a UDF to do that, and we can do it quite easily.
UDF code
function getLastRecord(zipcodes) {
    // returns the last element of the array; register the UDF in Cosmos DB with the id "GetLastRecord" used in the query below
    return zipcodes[zipcodes.length - 1];
}
SQL query:
SELECT c.id,c.Title,udf.GetLastRecord(c.ZipCodes) as ZipCodes FROM c
Test Result:
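For the sample document in the question, the query should return something along these lines (id and Title come from the document; the id is shown abbreviated, as in the question):
[
  {
    "id": "...",
    "Title": "title",
    "ZipCodes": {
      "Code": "code03",
      "Name": "Name03"
    }
  }
]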
I have a web-form builder for science events. The event moderator creates a registration form with an arbitrary number of boolean, integer, enum and text fields.
The created form is used for:
registering a new member for the event;
searching through registered members.
What is the best search tool for the second task (searching the members of an event)? Is Elasticsearch well suited for this task?
I wrote a post about how to index arbitrary data into Elasticsearch and then search it by specific fields and values, all without blowing up your index mapping.
The post is here: http://smnh.me/indexing-and-searching-arbitrary-json-data-using-elasticsearch/
In short, you will need to do the following steps to get what you want:
Create a special index as described in the post (a minimal mapping sketch follows this list).
Flatten the data you want to index using the flattenData function:
https://gist.github.com/smnh/30f96028511e1440b7b02ea559858af4.
Create a document with the original and flattened data and index it into Elasticsearch:
{
"data": { ... },
"flatData": [ ... ]
}
Optional: use Elasticsearch aggregations to find which fields and types have been indexed.
Execute queries on the flatData object to find what you need.
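For step 1, the important part is that flatData is mapped as a nested field. A minimal sketch of such a mapping (Elasticsearch 7+ syntax, inferred from the fields used in the examples below; the exact mapping is defined in the linked post) could look like this, with the original data object stored but not indexed:
{
  "mappings": {
    "properties": {
      "data": {
        "type": "object",
        "enabled": false
      },
      "flatData": {
        "type": "nested",
        "properties": {
          "key": { "type": "keyword" },
          "type": { "type": "keyword" },
          "key_type": { "type": "keyword" },
          "value_string": {
            "type": "text",
            "fields": { "keyword": { "type": "keyword" } }
          },
          "value_long": { "type": "long" }
        }
      }
    }
  }
}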
Example
Based on your original question, let's assume that the first event moderator created a form with the following fields to register members for the science event:
name string
age long
sex long - 0 for male, 1 for female
In addition to this data, the related event probably has some sort of id, let's call it eventId. So the final document could look like this:
{
"eventId": "2T73ZT1R463DJNWE36IA8FEN",
"name": "Bob",
"age": 22,
"sex": 0
}
Now, before we index this document, we will flatten it using the flattenData function:
flattenData(document);
This will produce the following array:
[
{
"key": "eventId",
"type": "string",
"key_type": "eventId.string",
"value_string": "2T73ZT1R463DJNWE36IA8FEN"
},
{
"key": "name",
"type": "string",
"key_type": "name.string",
"value_string": "Bob"
},
{
"key": "age",
"type": "long",
"key_type": "age.long",
"value_long": 22
},
{
"key": "sex",
"type": "long",
"key_type": "sex.long",
"value_long": 0
}
]
Then we will wrap this data in a document as I showed before and index it.
Then the second event moderator creates another form with a new field, a field with the same name and type, and also a field with the same name but a different type:
name string
city string
sex string - "male" or "female"
This event moderator decided that instead of having 0 and 1 for male and female, his form will allow choosing between two strings - "male" and "female".
Let's try to flatten the data submitted by this form:
flattenData({
"eventId": "F1BU9GGK5IX3ZWOLGCE3I5ML",
"name": "Alice",
"city": "New York",
"sex": "female"
});
This will produce the following data:
[
{
"key": "eventId",
"type": "string",
"key_type": "eventId.string",
"value_string": "F1BU9GGK5IX3ZWOLGCE3I5ML"
},
{
"key": "name",
"type": "string",
"key_type": "name.string",
"value_string": "Alice"
},
{
"key": "city",
"type": "string",
"key_type": "city.string",
"value_string": "New York"
},
{
"key": "sex",
"type": "string",
"key_type": "sex.string",
"value_string": "female"
}
]
Then, after wrapping the flattened data in a document and indexing it into Elasticsearch we can execute complicated queries.
For example, to find members named "Bob" registered for the event with ID 2T73ZT1R463DJNWE36IA8FEN we can execute the following query:
{
"query": {
"bool": {
"must": [
{
"nested": {
"path": "flatData",
"query": {
"bool": {
"must": [
{"term": {"flatData.key": "eventId"}},
{"match": {"flatData.value_string.keyword": "2T73ZT1R463DJNWE36IA8FEN"}}
]
}
}
}
},
{
"nested": {
"path": "flatData",
"query": {
"bool": {
"must": [
{"term": {"flatData.key": "name"}},
{"match": {"flatData.value_string": "bob"}}
]
}
}
}
}
]
}
}
}
Elasticsearch automatically detects the field content in order to index it correctly, even if the mapping hasn't been defined previously. So, yes: Elasticsearch suits these cases well.
However, you may want to fine-tune this behavior, or maybe the default mapping applied by Elasticsearch doesn't correspond to what you need: in this case, take a look at the default mapping or, for even further control, the dynamic templates feature.
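For example, a dynamic template along these lines (a generic sketch, not specific to the form-builder scenario) tells Elasticsearch to map every newly seen string field as a keyword instead of analyzed text:
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}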
If you let your end users decide the keys you store things in, you'll have an ever-growing mapping and cluster state, which is problematic.
This case and a suggested solution is covered in this article on common problems with Elasticsearch.
Essentially, you want to have everything that can possibly be user-defined as a value. Using nested documents, you can have a key-field and differently mapped value fields to achieve pretty much the same.