AQL: Dynamic query with nested array dates - arangodb

I have been trying to get the dynamic query shown below to work with dates in ArangoDB 3.1.
It works perfectly when I'm not querying dates, but it returns an empty list as soon as I try to filter on a date like below...
{
  query: 'FOR c IN @@collectionName FILTER ( c.@p0 == @v0 AND c.@p1 >= @v1 AND c.@p2 <= @v2 ) LIMIT @count RETURN c',
  bindVars: {
    '@collectionName': 'Event',
    p0: 'isPublished',
    v0: true,
    p1: 'dates[*].startsAt',
    v1: '2018-06-01T04:00:00.000Z',
    p2: 'dates[*].startsAt',
    v2: '2018-07-01T03:59:59.999Z',
    count: 9
  }
}
Need some help getting past this

There are mistakes in your query, but they are actually not related to dates:
dates[*].startsAt is not a valid attribute path, but a shorthand expression for FOR date IN dates RETURN date.startsAt, which returns an array.
The comparison operator >= does not work on arrays as you might expect. null, true, false and every number and string are less than any array, see Type and Value Order. Your array of timestamps will therefore always be greater than any given timestamp string. What you probably want instead is an array comparison operator like ALL >=.
An expression like dates[*].startsAt cannot be used as a bind parameter. With a document structure without an array, like { "date": { "startsAt": "..." } }, it would be perfectly fine to bind ["date", "startsAt"] as p1 or p2. Note how the bind parameter value is an array of strings. "date.startsAt" on the other hand would describe the path for a top-level attribute
{ "date.startsAt": ... } and not a nested attribute startsAt of the top-level attribute date like { "date": { "startsAt": ... } }.
What you do with dates[*].startsAt is describe a top-level attribute like
{ "dates[*].startsAt": ... }, which does not exist. ["dates[*]", "startsAt"] does not work either. If you want to use the array expansion expression, then you have to write it as c.@p1a[*].@p1b in your query and use the bind parameters { "p1a": "dates", "p1b": "startsAt" }.
Query:
FOR c IN @@collectionName
  FILTER c.@p0 == @v0
  FILTER c.@p1a[*].@p1b ALL >= @v1
  FILTER c.@p2a[*].@p2b ALL < @v2
  LIMIT @count
  RETURN c
bindVars:
{
  "@collectionName": "Event",
  "p0": "isPublished",
  "v0": true,
  "p1a": "dates",
  "p1b": "startsAt",
  "v1": "2018-06-01T04:00:00.000Z",
  "p2a": "dates",
  "p2b": "startsAt",
  "v2": "2018-07-01T04:00:00.000Z",
  "count": 9
}
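For completeness, here is a minimal sketch of running the corrected query with the arangojs driver. The driver choice, server URL and database name are assumptions for illustration only; the original question shows just the raw query object.
const { Database } = require('arangojs');

// Hypothetical connection details (arangojs v7-style config) - adjust to your setup.
const db = new Database({ url: 'http://localhost:8529', databaseName: 'mydb' });

const query = `
  FOR c IN @@collectionName
    FILTER c.@p0 == @v0
    FILTER c.@p1a[*].@p1b ALL >= @v1
    FILTER c.@p2a[*].@p2b ALL < @v2
    LIMIT @count
    RETURN c`;

const bindVars = {
  '@collectionName': 'Event',
  p0: 'isPublished',
  v0: true,
  p1a: 'dates',
  p1b: 'startsAt',
  v1: '2018-06-01T04:00:00.000Z',
  p2a: 'dates',
  p2b: 'startsAt',
  v2: '2018-07-01T04:00:00.000Z',
  count: 9
};

// Run the query and collect all matching documents from the cursor.
async function fetchEvents() {
  const cursor = await db.query(query, bindVars);
  return cursor.all();
}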

Related

Replace multiple ids in a path after validation [Golang]

I have a path delimited by / and I expect multiple ids in it, which I want to replace with a constant value. The issue I am facing is that once it validates the first id, it performs the replacement and stops. My assumption is that I need some sort of do-while in Golang (see this resource - I know there is no such construct in Go) and I have attempted using:
for true {
    // do something
}
but still only the first id is replaced. Any idea? Thank you
Here is my Go Playground example with the original implementation
The problem is that you return early after the first match. Since you are iterating over the parts of the path, you should assign the result of strings.Replace() back to a variable and return after the for loop. Assign to path instead of returning, and it should work as expected.
func SubstituteById(path string, repl string) string {
    ids := strings.Split(path, "/")
    for _, id := range ids {
        if fastuuid.ValidHex128(id) {
            path = strings.Replace(path, id, repl, -1)
        }
    }
    return path
}

mongoDB - $lookup - code 4570: arguments to $lookup must be strings

I'm trying to write a MongoDB aggregation that does the following:
I have two collections that share two fields, and I want to write a query that adds a field that exists in only one of the collections, based on whether both shared fields are equal.
Say the first collection is A and the second is B, with these fields:
A: x, y, z, w.
B: x, y, u.
I want to add u to A wherever x and y are the same in A and B.
If this is a record in A: x=1, y=2, z=3, w=4 and in B: x=1, y=2, u=6, I want:
A: x=1, y=2, z=3, w=4, u=6 (because x and y are equal in both collections).
I wrote the following:
db.A.aggregate([
  {
    "$lookup": {
      "from": "B",
      "let": { "x": "$x", "y": "$y" },
      "pipeline": [
        {
          "$match": {
            "$expr": {
              "$and": [
                { "$eq": [ "$x", "$$x" ] },
                { "$eq": [ "$y", "$$y" ] }
              ]
            }
          }
        }
      ],
      "as": "res"
    }
  }
])
I need to add a $project stage, but the problem is that I get the following error (meaning res is not working):
"message" : "arguments to $lookup must be strings, let: { x: '$x', y: '$y' } is type object"
with code 4570.
Notes:
I use MongoDB version 4.4.5.
x in my db is of type string and y is of type int32. I tried converting y to string; that is not the problem.
If someone knows how to help I would appreciate it.

How to write a mango filter on the size of an array?

I would like to find documents with a mango query by specifying the minimal and maximal size of an array property. Given documents with an array property customers, I'd like to be able to find all documents whose number of customers is between 10 and 20.
Something like
mango_query = {
    "doc.customers": {"$size": {"gte": 10}},
    "doc.customers": {"$size": {"lte": 20}}
}
The response to a request like that is
Bad argument for operator $size: {[{<<36,108,116,101>>,10}]}')
So how should I write a mango filter on the size of an array?
Checking the code here, only an integer argument is supported for the $size operator, so it cannot be combined with other operators; only exact $size matches are supported.
norm_ops({[{<<"$size">>, Arg}]}) when is_integer(Arg), Arg >= 0 ->
    {[{<<"$size">>, Arg}]};
norm_ops({[{<<"$size">>, Arg}]}) ->
    ?MANGO_ERROR({bad_arg, '$size', Arg});
And when matching
match({[{<<"$size">>, Arg}]}, Values, _Cmp) when is_list(Values) ->
    length(Values) == Arg;
match({[{<<"$size">>, _}]}, _Value, _Cmp) ->
    false;
The clause length(Values) == Arg shows that only an exact length match is supported.
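As a rough illustration of what the operator does accept, here is a minimal sketch of an exact-length query against CouchDB's _find endpoint. The server URL and database name are hypothetical, and the field name simply mirrors the question.
// Sketch only: $size takes a plain non-negative integer, so the only supported
// query is an exact length match; nesting gte/lte inside it triggers the
// "Bad argument for operator $size" error from the question.
async function findDocsWithExactlyTenCustomers() {
  const response = await fetch('http://localhost:5984/mydb/_find', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      selector: { 'doc.customers': { $size: 10 } }
    })
  });
  return (await response.json()).docs;
}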

How to represent map with Groovy collectMaps

I have Java code that looks like the code below:
for (MyClass myclassObject : input.classes()) {
    if (myclassObject.getName().equals("Tom")) {
        outputMap.put("output", myclassObject.getAge());
    }
}
How do I write this efficiently with Groovy's collectEntries?
I can do
input.classes().collectEntries["output":it.getAge()]
But how do I include the if condition in it?
You could use findAll to keep only the items matching the condition, and then apply collectEntries to transform the items found:
@groovy.transform.ToString
class MyClass {
    int age
    String name
}

def classes = [
    new MyClass(age: 11, name: 'Tom'),
    new MyClass(age: 12, name: 'Jerry'),
]

classes.findAll { it.getName() == 'Tom' }.collectEntries { [output: it.getAge()] }
Since your resulting map is only retaining one value anyway, you can also just do this:
input.classes().findResult { it.name == 'Tom' ? [output: it.age] : null }
where findResult will return the first item in classes() for which the closure:
{ it.name == 'Tom' ? [output: it.age] : null }
returns a non-null value.
Since you mentioned efficiency in your question: this is more efficient than going through the whole collection with collectEntries or findAll, since findResult returns as soon as it finds the first item for which it.name == 'Tom'.
Which way to go really depends on your requirements.
collectEntries can take a closure as a parameter. You can apply your logic inside the closure, making sure you return a Map entry when the condition passes and an empty map when it fails. Therefore:
input.classes().collectEntries { MyClass myClassObject ->
    myClassObject.name == 'Tom' ? ['output': myClassObject.getAge()] : [:]
}
However, there is a caveat with your approach. Since you are using output as the key and a Map does not allow duplicate keys, you will always end up with only the last matching entry in the map. You will have to come up with a better plan if that is not your intention.

How do I write queries with a null constraint in pg-promise properly?

When writing Postgres queries, constraints are usually written like
WHERE a = $(a), or WHERE b IN $(b:csv) if you know it's a list. However, if a value is null, the constraint has to be written as WHERE x IS NULL. Is it possible to get the query to format itself correctly depending on whether the value is null?
Say I might want to find rows WHERE c = 1. If I know c is 1, I write the query like
db.oneOrNone('SELECT * FROM blah WHERE c = $(c)', { c })
But if c turns out to be null, the query would have to become ...WHERE c IS NULL.
Would it be possible to construct a general query like WHERE $(c), and it would automatically format to WHERE c = 1 if c is 1, and WHERE c IS NULL if c is set to null?
You can use Custom Type Formatting to help with dynamic queries:
const valueOrNull = (col, value) => ({
    rawType: true,
    toPostgres: () => pgp.as.format(`$1:name ${value === null ? 'IS NULL' : '= $2'}`, [col, value])
});
Then you can pass it in as a formatting value:
db.oneOrNone('SELECT * FROM blah WHERE $[cnd]', { cnd: valueOrNull('col', 123) })
UPDATE
Or you can use custom formatting just for the value itself:
const eqOrNull = value => ({
    rawType: true,
    toPostgres: () => pgp.as.format(`${value === null ? 'IS NULL' : '= $1'}`, value)
});
usage examples:
db.oneOrNone('SELECT * FROM blah WHERE $1:name $2', ['col', eqOrNull(123)])
//=> SELECT * FROM blah WHERE "col" = 123
db.oneOrNone('SELECT * FROM blah WHERE $1:name $2', ['col', eqOrNull(null)])
//=> SELECT * FROM blah WHERE "col" IS NULL
Note that for simplicity I didn't include a check for undefined, but you most likely will want one, because undefined is also formatted as null internally.
A very useful alternative to modifying the query depending on whether the value is NULL is to use IS [NOT] DISTINCT FROM. From the reference:
For non-null inputs, IS DISTINCT FROM is the same as the <> operator. However, if both inputs are null it returns false, and if only one input is null it returns true. Similarly, IS NOT DISTINCT FROM is identical to = for non-null inputs, but it returns true when both inputs are null, and false when only one input is null. Thus, these predicates effectively act as though null were a normal data value, rather than “unknown”.
In short, instead of =, use IS NOT DISTINCT FROM, and instead of <>, use IS DISTINCT FROM.
This becomes especially useful when comparing two columns, either of which may be null.
Note that IS [NOT] DISTINCT FROM cannot use indexes, so certain queries may perform poorly.
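As a rough sketch, following the table and column placeholders used in the examples above, the same pg-promise call then works unchanged for both a concrete value and null:
db.oneOrNone('SELECT * FROM blah WHERE $1:name IS NOT DISTINCT FROM $2', ['col', 123])
//=> SELECT * FROM blah WHERE "col" IS NOT DISTINCT FROM 123

db.oneOrNone('SELECT * FROM blah WHERE $1:name IS NOT DISTINCT FROM $2', ['col', null])
//=> SELECT * FROM blah WHERE "col" IS NOT DISTINCT FROM null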
