gcloud datastore: Can I filter with IN or Contains operator? - python-3.x

I am a newbie with Google Cloud Datastore, and I want to filter, in an entity kind called Score, all the scores related to a list of companies. My entity is formed as follows:
{
    "company_id": 1,
    "score": 100,
}
I have several entities with different company IDs. I tried to filter using the query.add_filter command but got a ValueError:
('Invalid expression: "IN"', 'Please use one of: =, <, <=, >, >=.')
The reason for the error is clear to me, but I have not found anything in the documentation on how to run filters with IN or CONTAINS, or with any operators other than those listed in the error message.

The filter operators you seek are not supported by Datastore. The only supported ones are listed in Filters:
The comparison operator can be any of the following:
Operator                 Meaning
EQUAL                    Equal to
LESS_THAN                Less than
LESS_THAN_OR_EQUAL       Less than or equal to
GREATER_THAN             Greater than
GREATER_THAN_OR_EQUAL    Greater than or equal to
Some of the client libraries may offer equivalents, but with limitations. For example, the standard-environment, GAE-specific ndb library (not usable in your context) offers such support. The example from "The != and IN Operations" can be useful, as you can implement the same idea in a similar manner in your code:
Similarly, the IN operation
property IN [value1, value2, ...]
which tests for membership in a list of possible values, is
implemented as
(property == value1) OR (property == value2) OR ...
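As a rough illustration, here is a minimal sketch of that approach with the google-cloud-datastore Python client (assuming a short list of company IDs; the kind and property names match the question). It emulates IN by running one equality query per value and merging the results client-side:

from google.cloud import datastore

client = datastore.Client()

def scores_for_companies(company_ids):
    # Emulate "company_id IN company_ids" with one equality query per value.
    results = []
    for company_id in company_ids:
        query = client.query(kind="Score")
        query.add_filter("company_id", "=", company_id)
        results.extend(query.fetch())
    return results

scores = scores_for_companies([1, 2, 3])

Since this issues one query per value and merges the results in your code, it only makes sense for short lists.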


What is the difference between filter with multiple arguments and chain filter in django?
As you can see in the generated SQL statements, the difference is not an "OR", as some may suspect; it is how the WHERE and JOIN clauses are placed.
Example 1 (same joined table), from https://docs.djangoproject.com/en/dev/topics/db/queries/#spanning-multi-valued-relationships:
Blog.objects.filter(
    entry__headline__contains='Lennon',
    entry__pub_date__year=2008)
This will give you all the Blogs that have one entry with both (entry__headline__contains='Lennon') AND (entry__pub_date__year=2008), which is what you would expect from this query.
Result:
Blog with {entry.headline: 'Life of Lennon', entry.pub_date: '2008'}
Example 2 (chained):
Blog.objects.filter(
    entry__headline__contains='Lennon'
).filter(
    entry__pub_date__year=2008)
This will cover all the results from Example 1, but it will generate slightly more results, because it first filters all the blogs with (entry__headline__contains='Lennon') and then, from that result, filters on (entry__pub_date__year=2008).
The difference is that it will also give you results like:
A single Blog with multiple entries:
{entry.headline: 'Lennon', entry.pub_date: 2000},
{entry.headline: 'Bill', entry.pub_date: 2008}
When the first filter was evaluated, the blog is included because of the first entry (even though it has other entries that don't match). When the second filter is evaluated, the blog is included because of the second entry.
One table: but if the query doesn't involve joined tables, like the example from Yuji and DTing, the result is the same; for example:
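As a minimal single-table sketch (using the Entry model from the Django docs, which has headline and pub_date fields), both forms compile to essentially the same WHERE clause and return the same rows:

# Same table, no joins: these two querysets are equivalent.
qs1 = Entry.objects.filter(headline__contains='Lennon', pub_date__year=2008)
qs2 = Entry.objects.filter(headline__contains='Lennon').filter(pub_date__year=2008)
print(qs1.query)  # compare the generated SQL
print(qs2.query)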
The case in which the results of a "multiple-arguments filter query" differ from a "chained-filter query" is the following: selecting referenced objects on the basis of referencing objects, where the relationship is one-to-many (or many-to-many).
Multiple filters:
Referenced.objects.filter(referencing1_a=x, referencing1_b=y)  # both conditions on the same referencing model
Chained filters:
Referenced.objects.filter(referencing1_a=x).filter(referencing1_b=y)
Both queries can output different results if more than one row in the referencing model Referencing1 can refer to the same row in the referenced model Referenced. This is the case when Referenced : Referencing1 is a 1:N (one-to-many) or N:M (many-to-many) relationship.
Example:
Consider an application my_company with two models, Employee and Dependent. An employee in my_company can have more than one dependent (in other words, a dependent is the son/daughter of a single employee, while an employee can have more than one son/daughter).
(Assuming a husband and wife can't both work at my_company, I took a 1:N example.)
So Employee is the referenced model, which can be referenced by more than one Dependent, the referencing model. Now consider the relation state as follows:
Employee:      Dependent:
+------+       +------+--------+-------------+--------------+
| name |       | name | E-name | school_mark | college_mark |
+------+       +------+--------+-------------+--------------+
| A    |       | a1   | A      | 79          | 81           |
| B    |       | b1   | B      | 80          | 60           |
+------+       | b2   | B      | 68          | 86           |
               +------+--------+-------------+--------------+
Dependent a1 refers to employee A, and dependents b1, b2 refer to employee B.
Now my query is:
Find all employees who have a son/daughter with distinction marks (say >= 75%) in both college and school.
>>> Employee.objects.filter(dependent__school_mark__gte=75,
... dependent__college_mark__gte=75)
[<Employee: A>]
The output is 'A': dependent 'a1' has distinction marks in both college and school and is a dependent of employee 'A'. Note that 'B' is not selected because neither of 'B''s children has distinction marks in both college and school. In relational algebra:
Employee ⋈(school_mark >= 75 AND college_mark >= 75) Dependent
In the second case, I need a query:
Find all employees who have some dependent with distinction marks in school and some dependent with distinction marks in college (not necessarily the same one).
>>> Employee.objects.filter(
... dependent__school_mark__gte=75
... ).filter(
... dependent__college_mark__gte=75)
[<Employee: A>, <Employee: B>]
This time 'B' is also selected because 'B' has two children (more than one!): one ('b1') has distinction marks in school and the other ('b2') has distinction marks in college.
The order of the filters doesn't matter; we can also write the above query as:
>>> Employee.objects.filter(
... dependent__college_mark__gte=75
... ).filter(
... dependent__school_mark__gte=75)
[<Employee: A>, <Employee: B>]
The result is the same! The relational algebra can be:
(Employee ⋈(school_mark >= 75) Dependent) ⋈(college_mark >= 75) Dependent
Note the following:
dq1 = Dependent.objects.filter(college_mark__gte=75, school_mark__gte=75)
dq2 = Dependent.objects.filter(college_mark__gte=75).filter(school_mark__gte=75)
Both output the same result: [<Dependent: a1>]
I checked the target SQL queries generated by Django using print dq1.query and print dq2.query; both are the same (Django 1.6).
But semantically the two are different to me: the first looks like a simple selection σ[school_mark >= 75 AND college_mark >= 75](Dependent), and the second like a slow nested query: σ[school_mark >= 75](σ[college_mark >= 75](Dependent)).
If you need the code: #codepad
By the way, this is given in the documentation under "Spanning multi-valued relationships"; I have just added an example. I think it will be helpful for someone new.
Most of the time, there is only one possible set of results for a query.
The use case for chaining filters comes when you are dealing with many-to-many (m2m) relations:
Consider this:
# will return all Model instances whose m2m_field contains 1
Model.objects.filter(m2m_field=1)
# will return Model instances whose m2m_field contains both 1 AND 2
Model.objects.filter(m2m_field=1).filter(m2m_field=2)
# this will NOT work (both conditions are applied to the same join)
Model.objects.filter(Q(m2m_field=1) & Q(m2m_field=2))
Other examples are welcome.
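For concreteness, a minimal sketch of that m2m case (hypothetical Article/Tag models; the names are mine, not from the question):

from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=50)

class Article(models.Model):
    tags = models.ManyToManyField(Tag)

# Articles tagged with tag 1:
Article.objects.filter(tags=1)
# Articles tagged with BOTH tag 1 and tag 2 (each .filter() adds its own join):
Article.objects.filter(tags=1).filter(tags=2)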
This answer is based on Django 3.1.
Environment
Models
class Blog(models.Model):
    blog_id = models.CharField(max_length=100)

class Post(models.Model):
    blog_id = models.ForeignKey(Blog, on_delete=models.CASCADE)
    title = models.CharField(max_length=100)
    pub_year = models.CharField(max_length=4)  # Don't use CharField for a date in production =]
Database tables
Filters call
Blog.objects.filter(post__title="Title A", post__pub_year="2020")
# Result: <QuerySet [<Blog: 1>]>
Blog.objects.filter(post__title="Title A").filter(post__pub_year="2020")
# Result: <QuerySet [<Blog: 1>, <Blog: 2>]>
Explanation
Before going any further, I have to note that this answer applies to the situation that uses a "ManyToManyField" or a reverse "ForeignKey" to filter objects.
If you are filtering on the same table or through a "OneToOneField", then there is no difference between using a "Multiple Arguments Filter" or a "Filter-chain". They both work like an "AND" condition filter.
The straightforward way to understand "Multiple Arguments Filter" and "Filter-chain" is to remember that, in a "ManyToManyField" or reverse "ForeignKey" filter, a "Multiple Arguments Filter" is an "AND" condition and a "Filter-chain" is an "OR" condition.
The reason that makes "Multiple Arguments Filter" and "Filter-chain" so different is that they fetch results from different join tables and use different conditions in the query statement.
The "Multiple Arguments Filter" uses "Post"."Public_Year" = '2020' to identify the publication year:
SELECT *
FROM "Blog"
INNER JOIN "Post" ON ("Blog"."id" = "Post"."blog_id")
WHERE "Post"."Title" = 'Title A'
AND "Post"."Public_Year" = '2020'
The "Filter-chain" query uses "T1"."Public_Year" = '2020' to identify the publication year:
SELECT *
FROM "Blog"
INNER JOIN "Post" ON ("Blog"."id" = "Post"."blog_id")
INNER JOIN "Post" T1 ON ("Blog"."id" = "T1"."blog_id")
WHERE "Post"."Title" = 'Title A'
AND "T1"."Public_Year" = '2020'
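If you want to reproduce these statements in your own project, a minimal sketch (assuming the Blog/Post models above) is to print the SQL Django compiles for each queryset:

# Multiple-arguments filter: a single join on Post.
print(Blog.objects.filter(post__title="Title A", post__pub_year="2020").query)

# Filter-chain: an additional aliased join on Post (shown as T1 above).
print(Blog.objects.filter(post__title="Title A").filter(post__pub_year="2020").query)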
But why do different conditions impact the result?
I believe most of us who come to this page, including me =], have the same assumption when first using "Multiple Arguments Filter" and "Filter-chain":
we assume the result is fetched from a join table like the following one, which is correct for the "Multiple Arguments Filter". So if you are using the "Multiple Arguments Filter", you will get the result you expect.
But while dealing with the "Filter-chain", Django creates a different query statement, which changes the above table to the following one. Also, the "Public_Year" is checked under the "T1" columns instead of the "Post" columns because of the query statement change.
But where does this weird "Filter-chain" join table diagram come from?
I'm not a database expert. The explanation below is what I understand so far after I created the exact structure of the database and made a test with the same query statement.
The following diagram will show where this weird "Filter-chain" join table diagram comes from.
The database first creates a join table by matching the rows of the "Blog" and "Post" tables one by one.
After that, the database does the same matching process again, but uses the step-1 result table to match against the "T1" table, which is just the same "Post" table.
And this is where this weird "Filter-chain" join table diagram comes from.
Conclusion
So two things make "Multiple Arguments Filter" and "Filter-chain" different:
Django creates different query statements for "Multiple Arguments Filter" and "Filter-chain", which makes their results come from different join tables.
The "Filter-chain" query statement checks its condition in a different place (the extra aliased join) than the "Multiple Arguments Filter".
The quick-and-dirty way to remember it: in a "ManyToManyField" or reverse "ForeignKey" filter, a "Multiple Arguments Filter" is an "AND" condition and a "Filter-chain" is an "OR" condition.
The performance difference is huge. Try it and see.
Model.objects.filter(condition_a).filter(condition_b).filter(condition_c)
is surprisingly slow compared to
Model.objects.filter(condition_a, condition_b, condition_c)
As mentioned in Effective Django ORM,
QuerySets maintain state in memory
Chaining triggers cloning, duplicating that state
Unfortunately, QuerySets maintain a lot of state
If possible, don’t chain more than one filter
You can use the connection module to see the raw SQL queries and compare. As explained in Yuji's answer, for the most part they are equivalent, as shown here:
>>> from django.db import connection
>>> samples1 = Unit.objects.filter(color="orange", volume=None)
>>> samples2 = Unit.objects.filter(color="orange").filter(volume=None)
>>> list(samples1)
[]
>>> list(samples2)
[]
>>> for q in connection.queries:
... print q['sql']
...
SELECT `samples_unit`.`id`, `samples_unit`.`color`, `samples_unit`.`volume` FROM `samples_unit` WHERE (`samples_unit`.`color` = orange AND `samples_unit`.`volume` IS NULL)
SELECT `samples_unit`.`id`, `samples_unit`.`color`, `samples_unit`.`volume` FROM `samples_unit` WHERE (`samples_unit`.`color` = orange AND `samples_unit`.`volume` IS NULL)
>>>
If you end up on this page looking for how to dynamically build up a Django queryset with multiple chained filters, but you need the filters to be of the AND type instead of OR, consider using Q objects.
An example:
# First filter by type.
filters = None
if param in CARS:
    objects = app.models.Car.objects
    filters = Q(tire=param)
elif param in PLANES:
    objects = app.models.Plane.objects
    filters = Q(wing=param)

# Now filter by location.
if location == 'France':
    filters = filters & Q(quay=location)
elif location == 'England':
    filters = filters & Q(harbor=location)

# Finally, generate the actual queryset.
queryset = objects.filter(filters)
If the result must satisfy both a and b (on the same related row), then:
and_query_set = Model.objects.filter(a=a, b=b)
If the result must satisfy a as well as b (possibly on different related rows), then:
chained_query_set = Model.objects.filter(a=a).filter(b=b)
Official Documents:
https://docs.djangoproject.com/en/dev/topics/db/queries/#spanning-multi-valued-relationships
Related Post: Chaining multiple filter() in Django, is this a bug?
There is a difference when you filter on a related object, for example:
class Book(models.Model):
    author = models.ForeignKey('Author', on_delete=models.CASCADE)
    name = models.CharField(max_length=100)

class Author(models.Model):
    name = models.CharField(max_length=100)
The request
Author.objects.filter(book__name='name1', book__name='name2')
returns an empty set, and the request
Author.objects.filter(book__name='name1').filter(book__name='name2')
returns authors that have books named both 'name1' and 'name2'.
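Since Python does not allow repeating a keyword argument in a single call, the first form is in practice written with Q objects; a minimal sketch, which yields the same empty result:

from django.db.models import Q

# Both conditions apply to the same joined Book row, so no author can match.
Author.objects.filter(Q(book__name='name1') & Q(book__name='name2'))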
for details look at
https://docs.djangoproject.com/en/dev/topics/db/queries/#s-spanning-multi-valued-relationships

ArangoDB: Traversal condition on related document

Been stuck for days with this concern, trying to accomplish this:
See the provided picture.
The black is the start vertex. Trying to get:
1: All child parts OUTBOUND (from) the start vertex
2: Condition: the children MUST have an INBOUND edge from the "types" collection whose other end is a document of type "type" with an attribute set to "true".
3: When a document of type "part" fails to meet that requirement (an INBOUND document of type "type" with an attribute set to "true"), the expansion stops for that path then and there.
4: The documents that failed aren't included in the result either.
5: Should be compatible with any depths.
6: No subqueries (if possible without).
Example of graph
With the given information, the data model seems questionable. Why are there true and false vertices instead of a boolean edge attribute per partScrew? Is there a reason why it is modeled like this?
Using this data model, I don't see how this would be possible without subqueries. The traversal down a path can be stopped early with PRUNE, but that does not support subqueries. That only leaves FILTER for post-filtering as option, but be careful, you need to check all vertices on the path and not just the emitted vertex whether it has an inbound false type.
Not sure if it works as expected in all cases, but here is what I came up with and the query result, which looks good to me:
LET startScrew = FIRST(FOR doc IN screw LIMIT 1 RETURN doc) // Screw A
FOR v, e, p IN 1..2 OUTBOUND startScrew partScrew
  FILTER (
    FOR v_id IN SHIFT(p.vertices[*]._id) // ignore start vertex
      FOR v2 IN 1..1 INBOUND v_id types
        RETURN v2.value
  ) NONE == false
  RETURN {
    path: CONCAT_SEPARATOR(" -- ", p.vertices[*].value)
  }
path
Screw A -- Part D
Screw A -- Part E
Screw A -- Part E -- Part F
Dump with test data: https://gist.github.com/Simran-B/6bd9b154d1d1e2e74638caceff42c44f

Count distinct nodes from traversal in AQL

I am able to get all distinct nodes from a query, but not the count:
FOR v in 2..2 OUTBOUND "starting_node" GRAPH "some_graph"
return DISTINCT v._key
I want to get only the count of the result. I tried to use LENGTH(DISTINCT v._key) as suggested in the docs, but it's not valid AQL syntax:
syntax error, unexpected DISTINCT modifier near 'DISTINCT v._key)'
The naive solution is to get all keys and count it on the client side, but I am curious how to do it on the server side?
What RETURN DISTINCT does is to remove duplicate values, but only after the traversal.
You can set traversal options to eliminate paths during the traversal, which can be more efficient especially if you have a highly interconnected graph and a high traversal depth:
RETURN LENGTH(
  FOR v IN 2..2 OUTBOUND "starting_node" GRAPH "some_graph"
    OPTIONS { uniqueVertices: "global", bfs: true }
    RETURN v._key
)
The traversal option uniqueVertices can be set to "global" so that you don't get the same vertex returned twice from this traversal. The option for breadth-first search bfs needs to be enabled to use uniqueVertices: "global". The reason why depth-first search does not support this uniqueness option is that the result would not be deterministic, hence this combination was disabled.
Inspired by this blogpost http://jsteemann.github.io/blog/2014/12/12/aql-improvements-for-24/ I prepared the solution using LET:
LET result = (
  FOR v IN 2..2 OUTBOUND "starting_node" GRAPH "some_graph"
    RETURN DISTINCT v._key
)
RETURN LENGTH(result)
It might not be the optimal solution, but it works as I expected.

Mongoose/ Mongo: why does $set auto-sort the data keys on update? [duplicate]

If I create an object like this:
var obj = {};
obj.prop1 = "Foo";
obj.prop2 = "Bar";
Will the resulting object always look like this?
{ prop1 : "Foo", prop2 : "Bar" }
That is, will the properties be in the same order that I added them?
The iteration order for objects has followed a certain set of rules since ES2015, but it does not (always) follow the insertion order. Simply put, the iteration order is a combination of insertion order for string keys and ascending order for number-like keys:
// key order: 1, foo, bar
const obj = { "foo": "foo", "1": "1", "bar": "bar" }
Using an array or a Map object can be a better way to achieve this. Map shares some similarities with Object and guarantees that keys are iterated in order of insertion, without exception:
The keys in Map are ordered while keys added to object are not. Thus, when iterating over it, a Map object returns keys in order of insertion. (Note that in the ECMAScript 2015 spec objects do preserve creation order for string and Symbol keys, so traversal of an object with ie only string keys would yield keys in order of insertion)
As a note, property order in objects wasn't guaranteed at all before ES2015. Definition of an Object from ECMAScript Third Edition (pdf):
4.3.3 Object
An object is a member of the type Object. It is an unordered collection of properties each of which contains a primitive value, object, or function. A function stored in a property of an object is called a method.
YES (but not always insertion order).
Most Browsers iterate object properties as:
Positive integer keys in ascending order (and strings like "1" that parse as ints)
String keys, in insertion order (ES2015 guarantees this and all browsers comply)
Symbol names, in insertion order (ES2015 guarantees this and all browsers comply)
Some older browsers combine categories #1 and #2, iterating all keys in insertion order. If your keys might parse as integers, it's best not to rely on any specific iteration order.
Current Language Spec (since ES2015): insertion order is preserved, except in the case of keys that parse as positive integers (e.g. "7" or "99"), where behavior varies between browsers. For example, Chrome/V8 does not respect insertion order when the keys parse as numeric.
Old Language Spec (before ES2015): Iteration order was technically undefined, but all major browsers complied with the ES2015 behavior.
Note that the ES2015 behavior was a good example of the language spec being driven by existing behavior, and not the other way round. To get a deeper sense of that backwards-compatibility mindset, see http://code.google.com/p/v8/issues/detail?id=164, a Chrome bug that covers in detail the design decisions behind Chrome's iteration order behavior.
Per one of the (rather opinionated) comments on that bug report:
Standards always follow implementations, that's where XHR came from, and Google does the same thing by implementing Gears and then embracing equivalent HTML5 functionality. The right fix is to have ECMA formally incorporate the de-facto standard behavior into the next rev of the spec.
Property order in normal Objects is a complex subject in JavaScript.
While in ES5 explicitly no order was specified, ES2015 defined an order in certain cases, and successive changes to the specification since have increasingly defined the order (even, as of ES2020, the for-in loop's order). Consider the following object:
const o = Object.create(null, {
  m: {value: function() {}, enumerable: true},
  "2": {value: "2", enumerable: true},
  "b": {value: "b", enumerable: true},
  0: {value: 0, enumerable: true},
  [Symbol()]: {value: "sym", enumerable: true},
  "1": {value: "1", enumerable: true},
  "a": {value: "a", enumerable: true},
});
This results in the following order (in certain cases):
Object {
  0: 0,
  1: "1",
  2: "2",
  b: "b",
  a: "a",
  m: function() {},
  Symbol(): "sym"
}
The order for "own" (non-inherited) properties is:
Positive integer-like keys in ascending order
String keys in insertion order
Symbols in insertion order
Thus, there are three segments, which may alter the insertion order (as happened in the example). And positive integer-like keys don't stick to the insertion order at all.
In ES2015, only certain methods followed the order:
Object.assign
Object.defineProperties
Object.getOwnPropertyNames
Object.getOwnPropertySymbols
Reflect.ownKeys
JSON.parse
JSON.stringify
As of ES2020, all others do (some in specs between ES2015 and ES2020, others in ES2020), which includes:
Object.keys, Object.entries, Object.values, ...
for..in
The most difficult to nail down was for-in because, uniquely, it includes inherited properties. That was done (in all but edge cases) in ES2020. The following list from the linked (now completed) proposal gives the conditions that must hold for the order to be specified; outside of them, the order remains unspecified:
Neither the object being iterated nor anything in its prototype chain is a proxy, typed array, module namespace object, or host exotic object.
Neither the object nor anything in its prototype chain has its prototype change during iteration.
Neither the object nor anything in its prototype chain has a property deleted during iteration.
Nothing in the object's prototype chain has a property added during iteration.
No property of the object or anything in its prototype chain has its enumerability change during iteration.
No non-enumerable property shadows an enumerable one.
Conclusion: Even in ES2015 you shouldn't rely on the property order of normal objects in JavaScript. It is prone to errors. If you need ordered named pairs, use Map instead, which purely uses insertion order. If you just need order, use an array or Set (which also uses purely insertion order).
At the time of writing, most browsers did return properties in the same order as they were inserted, but it was explicitly not guaranteed behaviour so shouldn't have been relied upon.
The ECMAScript specification used to say:
The mechanics and order of enumerating the properties ... is not specified.
However, in ES2015 and later, non-integer keys are returned in insertion order.
This whole answer is in the context of spec compliance, not what any engine does at a particular moment or historically.
Generally, no
The actual question is very vague.
will the properties be in the same order that I added them
In what context?
The answer is: it depends on a number of factors. In general, no.
Sometimes, yes
Here is where you can count on property key order for plain Objects:
ES2015 compliant engine
Own properties
Object.getOwnPropertyNames(), Reflect.ownKeys(), Object.getOwnPropertySymbols(O)
In all cases these methods include non-enumerable property keys and order keys as specified by [[OwnPropertyKeys]] (see below). They differ in the type of key values they include (String and / or Symbol). In this context String includes integer values.
Object.getOwnPropertyNames(O)
Returns O's own String-keyed properties (property names).
Reflect.ownKeys(O)
Returns O's own String- and Symbol-keyed properties.
Object.getOwnPropertySymbols(O)
Returns O's own Symbol-keyed properties.
[[OwnPropertyKeys]]
The order is essentially: integer-like Strings in ascending order, non-integer-like Strings in creation order, Symbols in creation order. Depending on which function invokes this, some of these types may not be included.
The specific language is that keys are returned in the following order:
... each own property key P of O [the object being iterated] that is an integer index, in ascending numeric index order
... each own property key P of O that is a String but is not an integer index, in property creation order
... each own property key P of O that is a Symbol, in property creation order
Map
If you're interested in ordered maps you should consider using the Map type introduced in ES2015 instead of plain Objects.
As of ES2015, property order is guaranteed for certain methods that iterate over properties, but not for others. Unfortunately, the methods which are not guaranteed to have an order are generally the most often used:
Object.keys, Object.values, Object.entries
for..in loops
JSON.stringify
But, as of ES2020, property order for these previously untrustworthy methods is guaranteed by the specification to be iterated over in the same deterministic manner as the others, due to the finished proposal: for-in mechanics.
Just like with the methods which have a guaranteed iteration order (like Reflect.ownKeys and Object.getOwnPropertyNames), the previously-unspecified methods will also iterate in the following order:
Numeric array keys, in ascending numeric order
All other non-Symbol keys, in insertion order
Symbol keys, in insertion order
This is what pretty much every implementation does already (and has done for many years), but the new proposal has made it official.
Although the current specification leaves for..in iteration order "almost totally unspecified", real engines tend to be more consistent:
The lack of specificity in ECMA-262 does not reflect reality. In discussion going back years, implementors have observed that there are some constraints on the behavior of for-in which anyone who wants to run code on the web needs to follow.
Because every implementation already iterates over properties predictably, it can be put into the specification without breaking backwards compatibility.
There are a few weird cases which implementations currently do not agree on, and in such cases the resulting order will continue to be unspecified. For property order to be guaranteed:
Neither the object being iterated nor anything in its prototype chain is a proxy, typed array, module namespace object, or host exotic object.
Neither the object nor anything in its prototype chain has its prototype change during iteration.
Neither the object nor anything in its prototype chain has a property deleted during iteration.
Nothing in the object's prototype chain has a property added during iteration.
No property of the object or anything in its prototype chain has its enumerability change during iteration.
No non-enumerable property shadows an enumerable one.
In modern browsers you can use the Map data structure instead of an object.
Developer mozilla > Map
A Map object can iterate its elements in insertion order...
In ES2015, it does, but perhaps not in the way you might think
The order of keys in an object wasn't guaranteed until ES2015. It was implementation-defined.
However, in ES2015 it was specified. Like many things in JavaScript, this was done for compatibility purposes and generally reflected an existing unofficial standard among most JS engines (with you-know-who being an exception).
The order is defined in the spec, under the abstract operation OrdinaryOwnPropertyKeys, which underpins all methods of iterating over an object's own keys. Paraphrased, the order is as follows:
All integer index keys (stuff like "1123", "55", etc) in ascending numeric order.
All string keys which are not integer indices, in order of creation (oldest-first).
All symbol keys, in order of creation (oldest-first).
It's silly to say that the order is unreliable - it is reliable, it's just probably not what you want, and modern browsers implement this order correctly.
Some exceptions include methods of enumerating inherited keys, such as the for .. in loop. The for .. in loop doesn't guarantee order according to the specification.
As others have stated, you have no guarantee as to the order when you iterate over the properties of an object. If you need an ordered list of multiple fields, I suggest creating an array of objects.
var myarr = [{somfield1: 'x', somefield2: 'y'},
{somfield1: 'a', somefield2: 'b'},
{somfield1: 'i', somefield2: 'j'}];
This way you can use a regular for loop and have the insert order. You could then use the Array sort method to sort this into a new array if needed.
Major difference between Object and Map, with an example: it's the order of iteration in a loop. A Map follows the order in which entries were set at creation, whereas an Object does not.
SEE:
OBJECT
const obj = {};
obj.prop1 = "Foo";
obj.prop2 = "Bar";
obj['1'] = "day";
console.log(obj)
OUTPUT: {1: "day", prop1: "Foo", prop2: "Bar"}
MAP
const myMap = new Map()
// setting the values
myMap.set("foo", "value associated with 'a string'")
myMap.set("Bar", 'value associated with keyObj')
myMap.set("1", 'value associated with keyFunc')
OUTPUT:
0: ["foo", "value associated with 'a string'"]
1: ["Bar", "value associated with keyObj"]
2: ["1", "value associated with keyFunc"]
Just found this out the hard way.
Using React with Redux, the state container whose keys I want to traverse in order to generate children is refreshed every time the store changes (as per Redux's immutability concepts).
Thus, instead of plain Object.keys(valueFromStore) I used Object.keys(valueFromStore).sort(), so that I at least have an alphabetical order for the keys.
For a 100% fail-safe solution you could use nested objects and do something like this:
const obj = {};
obj.prop1 = {content: "Foo", index: 0};
obj.prop2 = {content: "Bar", index: 1};

for (let i = 0; i < Object.keys(obj).length; i++) {
  for (const prop in obj) {
    if (obj[prop].index == i) {
      console.log(obj[prop].content);
      break;
    }
  }
}
From the JSON standard:
An object is an unordered collection of zero or more name/value pairs, where a name is a string and a value is a string, number, boolean, null, object, or array.
(emphasis mine).
So, no you can't guarantee the order.

What is in the reduce function arguments in CouchDB?

I understand that the reduce function is supposed to somewhat combine the results of the map function but what exactly is passed to the reduce function?
function(keys, values) {
  // what's in keys?
  // what's in values?
}
I tried to explore this in the Futon temporary view builder but all I got were reduce_overflow_errors. So I can't even print the keys or values arguments to try to understand what they look like.
Thanks for your help.
Edit:
My problem is the following. I'm using the temporary view builder of Futon.
I have a set of document representing text files (it's for a script I want to use to make translation of documents easier).
text_file:
id // the id of the text file is its path on the file system
I also have some documents that represent text fragments appearing in the said files, and their position in each file.
text_fragment:
id
file_id // corresponds to a text_file document
position
I'd like to get for each text_file, a list of the text fragments that appear in the said file.
Update
Note on JavaScript API change: Prior to Tue, 20 May 2008 (Subversion revision r658405) the function to emit a row to the map index, was named "map". It has now been changed to "emit".
That's the reason why map was used instead of emit: it was renamed. Sorry, I corrected my code to be valid in the recent version of CouchDB.
Edit
I think what you are looking for is a has-many relationship, or a join in SQL-database terms. Here is a blog article by Christopher Lenz that describes exactly what your options are for this kind of scenario in CouchDB.
In the last part there is a technique described that you can use for the list you want.
You need a map function of the following format
function(doc) {
  if (doc.type == "text_file") {
    emit([doc._id, 0], doc);
  } else if (doc.type == "text_fragment") {
    emit([doc.file_id, 1], doc);
  }
}
Now you can query the view in the following way:
my_view?startkey=["text_file_id"]&endkey=["text_file_id", 2]
This gives you a list of the form
text_file
text_fragment_1
text_fragment_2
..
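For illustration, here is a minimal Python sketch of that query against CouchDB's HTTP API (the host, database, design document and view names are hypothetical; startkey/endkey must be JSON-encoded):

import json
import requests

url = "http://localhost:5984/mydb/_design/texts/_view/files_with_fragments"
params = {
    "startkey": json.dumps(["text_file_id"]),
    "endkey": json.dumps(["text_file_id", 2]),
}
rows = requests.get(url, params=params).json()["rows"]
for row in rows:
    # row["key"] is [file_id, 0] for the file itself, [file_id, 1] for its fragments.
    print(row["key"], row["value"].get("type"))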
Old Answer
Directly from the CouchDB Wiki
function (key, values, rereduce) {
  return sum(values);
}
Reduce functions are passed three arguments in the order key, values and rereduce
Reduce functions must handle two cases:
When rereduce is false:
key will be an array whose elements are arrays of the form [key,id], where key is a key emitted by the map function and id is that of the document from which the key was generated.
values will be an array of the values emitted for the respective elements in keys
i.e. reduce([ [key1,id1], [key2,id2], [key3,id3] ], [value1,value2,value3], false)
When rereduce is true:
key will be null
values will be an array of values returned by previous calls to the reduce function
i.e. reduce(null, [intermediate1,intermediate2,intermediate3], true)
Reduce functions should return a single value, suitable for both the value field of the final view and as a member of the values array passed to the reduce function.
