Querying nested array in cosmos db - azure

I have a list of documents as shown below.
Document1:
{
    "id": 1,
    "PostList": [
        {
            "postname": "aaa",
            "lastdatetime": "2017-07-13T17:10:25+05:30",
            "sname": "sas"
        },
        {
            "postname": "aaa1",
            "lastdatetime": "2017-07-14T17:10:25+05:30",
            "sname": "sasadd"
        },
        {
            "postname": "aaa2",
            "lastdatetime": "2017-07-10T17:10:25+05:30",
            "sname": "weq"
        }
    ]
}
Document2:
{
    "id": 2,
    "PostList": [
        {
            "postname": "aaa",
            "lastdatetime": "2017-07-13T17:10:25+05:30",
            "sname": "sas"
        },
        {
            "postname": "aaa1",
            "lastdatetime": "2017-07-14T17:10:25+05:30",
            "sname": "sasadd"
        },
        {
            "postname": "aaa2",
            "lastdatetime": "2017-07-10T17:10:25+05:30",
            "sname": "weq"
        }
    ]
}
I need the list of posts whose postname equals "aaa", ordered by lastdatetime.
I am able to get the matching posts with this query:
select f.lastdatetime,f.postname
from c
join f in c.PostList
where f.postname='aaa'
But I need that list ordered by lastdatetime.
When I try the query below, I get an error:
Order-by over correlated collections is not supported
select f.lastdatetime,f.postname
from c
join f in c.PostList
where f.postname='aaa' ORDER BY f.lastdatetime ASC
Does anybody have an idea how to get around this?

As @Rafat Sarosh said in the comments: Order-by over correlated collections is not supported, and it will be enabled in the future.
However, I suggest a workaround for your solution: use an Azure Cosmos DB UDF.
You could pass the results of your query to the UDF, which sorts them.
Query SQL:
select f.lastdatetime,f.postname
from c
join f in c.PostList
where f.postname='aaa'
UDF sample code (a simple bubble sort; note the ">" comparison sorts ascending by lastdatetime, matching the ORDER BY ... ASC in the question):
function userDefinedFunction(arr) {
    var i = arr.length, j;
    var tempExchangVal;
    while (i > 0) {
        for (j = 0; j < i - 1; j++) {
            // Swap adjacent items that are out of order
            if (arr[j].lastdatetime > arr[j + 1].lastdatetime) {
                tempExchangVal = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tempExchangVal;
            }
        }
        i--;
    }
    return arr;
}
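Once registered, a UDF is invoked inline in a query with the udf. prefix. A minimal sketch of how this one might be called, assuming it is registered under the hypothetical name sortPosts:
select udf.sortPosts(c.PostList) from c
Each document's PostList then comes back sorted by lastdatetime; filtering the array down to postname = 'aaa' would still have to happen inside the UDF or on the client.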
Hope it helps you.

Related

NodeJS: Finding a Key that has the most key array matches in a string

I am trying to write a small amount of JavaScript code that can locate the key whose tags array has the most element occurrences in a string. It's a little hard to explain, but I have given an example below. I have tried several filters, finds, and lengthy code loops with no luck. Anything would help, thanks :)
const object = {
    keyone: {
        tags: ["game", "video", "tv", "playstation"]
    },
    keytwo: {
        tags: ["book", "sport", "camping", "out"]
    }
};
const string = "This is an example, out playstaion, tv and video games are cool!";
// I am trying to locate the key that contains the most tags in the string.
// In this case the result I am looking for would be "keyone",
// because its tags have more occurrences inside the string (playstaion, tv, video, game/s).
This should do it, though you might want to consider adding keyword stemming.
const object = {
    keyone: {
        tags: ["game", "video", "tv", "playstation"]
    },
    keytwo: {
        tags: ["book", "sport", "camping", "out"]
    }
};
const string = "This is an example, out playstaion, tv and video games are cool!";
const result = {};
for (const [key, value] of Object.entries(object)) {
    result[key] = value.tags.reduce((acc, item) => (acc += (string.match(item) || []).length), 0);
}
console.log(result)
Result:
{ keyone: 3, keytwo: 1 }
Edit:
How to count:
let result_key;
let result_count = 0;
for (const [key, value] of Object.entries(object)) {
    const result = value.tags.reduce((acc, item) => (acc += (string.match(item) || []).length), 0);
    if (result > result_count) {
        result_count = result;
        result_key = key;
    }
}
console.log(result_key, result_count)
Result:
keyone 3
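Note that string.match(item) without the g flag returns at most one match, so the reduce above really counts how many tags appear at least once, and it matches substrings ("game" matches "games", while the misspelled "playstaion" does not match "playstation"). If you need true occurrence counts with whole-word matching, a variant along these lines could work (a sketch; it assumes tags contain no regex metacharacters):
const countOccurrences = (text, tag) =>
    (text.match(new RegExp("\\b" + tag + "\\b", "gi")) || []).length;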

How to control the number of Oracle's request in Node.js?

I'm trying to do the following using Node.js and the oracledb module:
Consult Database A to get all bills released on a specific date.
Assign the result to a variable list.
Use .map() on list, and inside the map callback query Database B for the client's info by a common key, for each item of list.
The problem is that the Database B requests all fire at once, so if there are 1,000 bills to map, only about 100 succeed and the rest are treated as errors. It is probably related to the number of simultaneous requests.
So, given the details, I'd like to know if there's a way to throttle the requests (e.g. 100 at a time), or any other solution.
P.S.: I apologize in advance for my mistakes, and for not demonstrating with code.
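One generic way to cap concurrency in Node.js is to process the list in fixed-size chunks and await each chunk before starting the next. A minimal sketch, assuming a hypothetical getClientInfo(bill) helper that queries Database B for one bill:
async function mapInChunks(list, chunkSize) {
    const results = [];
    for (let i = 0; i < list.length; i += chunkSize) {
        const chunk = list.slice(i, i + chunkSize);
        // Only chunkSize requests are in flight at any one time
        results.push(...(await Promise.all(chunk.map(bill => getClientInfo(bill)))));
    }
    return results;
}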
Here's an example of how you can do this by leveraging the new executeMany in v2.2.0 (recently released) and global temporary tables to minimize round trips.
Given these objects:
-- Imagine this is the table you want to select from based on the common keys
create table t (
    common_key number,
    info       varchar2(50)
);

-- Add 10,000 rows with keys 1-10,000 and random data for info
insert into t (common_key, info)
select rownum,
       dbms_random.string('p', 50)
from dual
connect by rownum <= 10000;

commit;

-- Create a temp table
create global temporary table temp_t (
    common_key number not null
)
on commit delete rows;
The following should work:
const oracledb = require('oracledb');
const config = require('./dbConfig.js');

const startKey = 1000;
const length = 2000;

// Uses a promise to simulate async work.
function getListFromDatabaseA() {
    return new Promise((resolve) => {
        const list = [];
        const count = length - startKey;
        for (let x = 0; x < count; x += 1) {
            list.push(startKey + x);
        }
        resolve(list);
    });
}

// The list returned from A likely isn't in the right format for executeMany.
function reformatAsBinds(list) {
    const binds = [];
    for (let x = 0; x < list.length; x += 1) {
        binds.push({
            key: list[x]
        });
    }
    return binds;
}

async function runTest() {
    let conn;
    try {
        const listFromA = await getListFromDatabaseA();
        const binds = reformatAsBinds(listFromA);

        conn = await oracledb.getConnection(config);

        // Send the keys to the temp table with executeMany for a single round trip.
        // The data in the temp table will only be visible to this session and will
        // be deleted automatically at the end of the transaction.
        await conn.executeMany('insert into temp_t (common_key) values (:key)', binds);

        // Now get your common_key and info based on the common keys in the temp table.
        let result = await conn.execute(
            `select common_key, info
             from t
             where common_key in (
                 select common_key
                 from temp_t
             )
             order by common_key`
        );

        console.log('Got ' + result.rows.length + ' rows');
        console.log('Showing the first 10 rows');

        for (let x = 0; x < 10; x += 1) {
            console.log(result.rows[x]);
        }
    } catch (err) {
        console.error(err);
    } finally {
        if (conn) {
            try {
                await conn.close();
            } catch (err) {
                console.error(err);
            }
        }
    }
}

runTest();
After I posted the solution above, I thought I should provide an alternative that keeps the keys going from Node.js to the DB "in memory". You'll have to run tests and review explain plans to see which is the best option for you (will depend on a number of factors).
Given these objects:
-- This is the same as before
create table t (
    common_key number,
    info       varchar2(50)
);

-- Add 10,000 rows with keys 1-10,000 and random data for info
insert into t (common_key, info)
select rownum,
       dbms_random.string('p', 50)
from dual
connect by rownum <= 10000;

commit;

-- But here, we use a nested table instead of a temp table
create or replace type number_ntt as table of number;
This should work:
const oracledb = require('oracledb');
const config = require('./dbConfig.js');

const startKey = 1000;
const length = 2000;

// Uses a promise to simulate async work.
function getListFromDatabaseA() {
    return new Promise((resolve) => {
        const list = [];
        const count = length - startKey;
        for (let x = 0; x < count; x += 1) {
            list.push(startKey + x);
        }
        resolve(list);
    });
}

async function runTest() {
    let conn;
    try {
        const listFromA = await getListFromDatabaseA();
        const binds = {
            keys: {
                type: oracledb.NUMBER,
                dir: oracledb.BIND_IN,
                val: listFromA
            },
            rs: {
                type: oracledb.CURSOR,
                dir: oracledb.BIND_OUT
            }
        };

        conn = await oracledb.getConnection(config);

        // Now get your common_key and info based on the keys in the nested table.
        let result = await conn.execute(
            `declare
                type number_aat is table of number index by pls_integer;

                l_keys    number_aat;
                l_key_tbl number_ntt := number_ntt();
            begin
                -- Unfortunately, we have to bind in with this data type, but
                -- it can't be used as a table...
                l_keys := :keys;

                -- So we'll transfer the data to another array type that can. This
                -- variable's type was created at the schema level so that it could
                -- be seen by the SQL engine.
                for x in 1 .. l_keys.count
                loop
                    l_key_tbl.extend();
                    l_key_tbl(l_key_tbl.count) := l_keys(x);
                end loop;

                open :rs for
                    select common_key, info
                    from t
                    where common_key in (
                        select column_value
                        from table(l_key_tbl)
                    )
                    order by common_key;
            end;`,
            binds
        );

        const resultSet = result.outBinds.rs;

        console.log('Showing the first 10 rows');

        for (let x = 0; x < 10; x += 1) {
            let row = await resultSet.getRow();
            console.log(row);
        }

        // Release the result set before closing the connection
        await resultSet.close();
    } catch (err) {
        console.error(err);
    } finally {
        if (conn) {
            try {
                await conn.close();
            } catch (err) {
                console.error(err);
            }
        }
    }
}

runTest();
The formatting for the binds was different (a bit simpler here). Also, because I was executing PL/SQL, I needed to have an out bind cursor/result set type.
See this post regarding cardinality with nested tables:
http://www.oracle-developer.net/display.php?id=427
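The short version: the optimizer cannot estimate how many rows a nested table collection will return and may pick a poor plan, which the post above addresses with techniques such as the (undocumented but widely used) CARDINALITY hint. A rough sketch of what that could look like in the cursor above, assuming roughly 1,000 keys:
open :rs for
    select common_key, info
    from t
    where common_key in (
        select /*+ cardinality(k, 1000) */ column_value
        from table(l_key_tbl) k
    )
    order by common_key;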
If you try both, please leave some feedback about which worked better.

Duplicate result of Hazelcast Predicate while querying with "in"

I query a Hazelcast map using a Predicate with an "in" condition, like below.
List<String> liveVideoSourceIdList = new ArrayList<>(Arrays.asList("my_filter"));
Predicate predicate = Predicates.in("myField", liveVideoSourceIdList.toArray(new String[0]));
When I filter the map with the created predicate, all the values are duplicated.
If I use "like" instead of "in", as below, no duplication happens. Are there any thoughts on this problem?
Predicate predicate = Predicates.like("myField", liveVideoSourceIdList.get(0));
I have been unable to reproduce your issue using 3.9-SNAPSHOT. Which version are you using?
I am using:
int min = random.nextInt(keyDomain - range);
int max = min + range;
String values = "";
for (int i = min; i < max; i++) {
    // Avoid a trailing comma, which would break the IN (...) clause
    values += i + (i < max - 1 ? ", " : "");
}
SqlPredicate sqlPredicate = new SqlPredicate("personId IN (" + values + ")");
Collection<Personable> res = map.values(sqlPredicate);

if (res.size() != range) {
    throw new AssertionException(sqlPredicate + " on map " + map.getName() + " returned " + res.size() + " expected " + range);
}

Set<Personable> set = new HashSet<Personable>(res);
if (res.size() != set.size()) {
    throw new AssertionException(sqlPredicate + " on map " + map.getName() + " returned Duplicates");
}

for (Personable person : res) {
    if (person.getPersonId() < min || person.getPersonId() >= max) {
        throw new AssertionException(map.getName() + " " + person + " != " + sqlPredicate);
    }
}

Good way to handle Q Promises in Waterline with Sails.js

I have a problem: I'm importing some data, and each new row depends on the previous row having been added (since each row's order attribute is set based on the current maximum order among the other objects). The flow is that I first try to find an object with the same name; if it's not found, I check the maximum order and create a new object with order + 1 from that query.
I tried doing this with Q promises, which are available under Waterline. I tried the all method as well as combining queries with then, as in the Q docs:
var result = Q(initialVal);
funcs.forEach(function (f) {
    result = result.then(f);
});
return result;
But all the objects ended up with the same order, as if the queries were executed in parallel instead of each one waiting for the previous one to finish.
I finally found a solution using recursion, but I doubt it's the best way of working with promises. Here's the code that works (it still needs some refactoring and cleaning), to show the rough idea:
function findOrCreateGroup(groupsQuery, index, callback) {
    var groupName = groupsQuery[index];
    Group.findOne({ 'name': groupName }).then(function (group) {
        if (!group) {
            return Group.find().limit(1).sort('order DESC').then(function (foundGroups) {
                var maxOrder = 0;
                if (foundGroups.length > 0) {
                    maxOrder = foundGroups[0].order;
                }
                return Group.create({
                    'name': groupName,
                    'order': (maxOrder + 1)
                }).then(function (g) {
                    dbGroups[g.name] = g;
                    if (index + 1 < groupsQuery.length) {
                        findOrCreateGroup(groupsQuery, index + 1, callback);
                    } else {
                        callback();
                    }
                    return g;
                });
            });
        } else {
            dbGroups[group.name] = group;
            if (index + 1 < groupsQuery.length) {
                findOrCreateGroup(groupsQuery, index + 1, callback);
            } else {
                callback();
            }
            return group;
        }
    });
}
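For what it's worth, the forEach/then pattern from the Q docs does run sequentially, but only if each function in funcs returns a promise; if the functions fire their queries synchronously and return undefined, everything effectively runs in parallel and every row sees the same maximum order. A minimal sketch of the sequential version with reduce, assuming a promise-returning findOrCreateGroup(groupName) (hypothetical signature):
groupsQuery.reduce(function (chain, groupName) {
    // Each step starts only after the previous step's promise resolves
    return chain.then(function () {
        return findOrCreateGroup(groupName);
    });
}, Q());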

how to run stored procedure from groovy that returns multiple resultsets

I couldn't find any good example of doing this online.
Can someone please show how to run a stored procedure (that returns multiple result sets) from Groovy?
Basically I am just trying to determine how many result sets the stored procedure returns.
I have written a helper which allows me to work with stored procedures that return a single ResultSet, in a way that is similar to working with queries via groovy.sql.Sql. This could easily be adapted to process multiple ResultSets (I assume each would need its own closure).
Usage:
Sql sql = Sql.newInstance(dataSource)
SqlHelper helper = new SqlHelper(sql);
helper.eachSprocRow('EXEC sp_my_sproc ?, ?, ?', ['a', 'b', 'c']) { row ->
    println "foo=${row.foo}, bar=${row.bar}, baz=${row.baz}"
}
Code:
import groovy.sql.Sql

import java.sql.CallableStatement
import java.sql.Connection
import java.sql.ResultSet
import java.sql.ResultSetMetaData

class SqlHelper {
    private Sql sql;

    SqlHelper(Sql sql) {
        this.sql = sql;
    }

    public void eachSprocRow(String query, List parameters, Closure closure) {
        sql.cacheConnection { Connection con ->
            CallableStatement proc = con.prepareCall(query)
            try {
                parameters.eachWithIndex { param, i ->
                    proc.setObject(i + 1, param)
                }
                boolean result = proc.execute()
                boolean found = false
                while (!found) {
                    if (result) {
                        ResultSet rs = proc.getResultSet()
                        ResultSetMetaData md = rs.getMetaData()
                        int columnCount = md.getColumnCount()
                        while (rs.next()) {
                            // use a case-insensitive map so row.foo and row.FOO both work
                            Map row = new TreeMap(String.CASE_INSENSITIVE_ORDER)
                            for (int i = 0; i < columnCount; ++i) {
                                row[md.getColumnName(i + 1)] = rs.getObject(i + 1)
                            }
                            closure.call(row)
                        }
                        found = true;
                    } else if (proc.getUpdateCount() < 0) {
                        throw new RuntimeException("Sproc ${query} did not return a result set")
                    }
                    result = proc.getMoreResults()
                }
            } finally {
                proc.close()
            }
        }
    }
}
All Java classes are usable from Groovy. If Groovy does not give you a way to do it, then you can do it the Java way, using JDBC callable statements.
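For instance, counting how many result sets a procedure returns (the original question) only needs the standard execute/getMoreResults/getUpdateCount loop. A minimal sketch, with the call string as a placeholder:
import java.sql.CallableStatement
import java.sql.Connection

int countResultSets(Connection con, String call) {
    CallableStatement proc = con.prepareCall(call)
    try {
        int count = 0
        boolean isResultSet = proc.execute()
        // A true flag means the current result is a ResultSet; a false flag
        // with an update count of -1 means there are no more results.
        while (isResultSet || proc.getUpdateCount() != -1) {
            if (isResultSet) {
                count++
            }
            isResultSet = proc.getMoreResults()
        }
        return count
    } finally {
        proc.close()
    }
}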
I just stumbled across what could possibly be a solution to your problem. If an example is what you were after, have a look at the reply to this thread.
