With a CouchDB view, we get results ordered by key. I have been using this to get the value associated with the highest number. For example, take this result (in key: value form):
{1:'sam'}
{2:'jim'}
{4:'joan'}
{5:'jill'}
CouchDB will sort those according to the key. (It may help to think of the key as the "score".) I want to find out who has the highest or lowest score.
I have written a reduce function like so:
function(keys, values) {
  var len = values.length;
  return values[len - 1];
}
I know there's _stats and the like, but those are not possible in my application (this is a slimmed-down, hypothetical example).
Usually when I run this reduce, I will get either 'sam' or 'jill', depending on whether descending is set. This is what I want. However, on large data sets, I sometimes get someone from the middle of the list.
I suspect this is happening on rereduce. I had assumed that when rereduce is run, the order of results is preserved, but I can find no assurance that this is the case. I know that on rereduce the key is null, so by the normal sorting rules the results would not be sorted. Is this the case?
If so, any advice on how to get my highest scorer?
Yeah, I don't think sorting order is guaranteed, probably because it cannot be guaranteed in clustered environments. I suspect the way you're using map/reduce here is a little iffy, but you should post your view code if you really want a good answer here.
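In the meantime, here is one rereduce-safe pattern, sketched against the hypothetical example above (the field names doc.score and doc.name are assumptions): rather than relying on the order of values, carry the score inside the value so that every reduce and rereduce can pick the maximum explicitly.

function(doc) {
  // map: emit the score as the key, and keep it in the value too
  emit(doc.score, {score: doc.score, name: doc.name});
}

function(keys, values, rereduce) {
  // values have the same {score, name} shape on the first reduce and
  // on rereduce, so we can always select the highest-scoring entry
  var best = values[0];
  for (var i = 1; i < values.length; i++) {
    if (values[i].score > best.score) {
      best = values[i];
    }
  }
  return best;
}

Because the reduce output has the same shape as the mapped values, the result is correct no matter how CouchDB splits and recombines the computation.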
I'm trying to learn data structures.
For example, if I have n keys and m slots in a hash map, the average size of a linked list starting from a slot would be n/m. Am I correct in thinking this? Again, I'm talking about an average. Thanks in advance!
As you say, the average size of a single list will generally be the table's load factor, but this assumes that the "Simple Uniform Hashing Assumption" (SUHA) holds for your hash table (more specifically, for its hash function(s) and expected input keys). Simply put, we assume that the hash function distributes elements across the buckets uniformly, and independently of one another.
To expand a little, and in different words:
We assume that if we choose a new item randomly (imagine sampling an item from the probability distribution that characterizes our inputs), then there is an equal chance that the item we end up with will be mapped to any of the m buckets. (A chance of 1/m.)
Furthermore, we assume that this probability is unaffected by the presence (or absence) of any other elements in any of the buckets.
This is helpful because it lets us conclude that the probability of an item landing in a given bucket is always 1/m, regardless of any other circumstances. From this it directly follows that the expected (average) length of a single bucket's list will be n/m: we insert n elements into the table, and each one lands in this given list with probability 1/m.
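Written out with indicator variables (notation mine, not from the original question), linearity of expectation gives, for any fixed bucket $j$:

$$
\mathbb{E}\big[\mathrm{len}(B_j)\big]
= \mathbb{E}\Big[\sum_{i=1}^{n} \mathbf{1}\{h(k_i) = j\}\Big]
= \sum_{i=1}^{n} \Pr[h(k_i) = j]
= \sum_{i=1}^{n} \frac{1}{m}
= \frac{n}{m} = \alpha .
$$

Note that only the uniformity part of SUHA is needed for this expectation; independence matters for stronger statements, such as bounds on the variance or on the longest chain.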
To see why this matters, imagine a case in which it doesn't hold: for instance, we're facing some kind of "attack" and our inputs are engineered to all hash into the same bucket (or even just to do so with high probability). In this case SUHA no longer holds, and clearly neither does the link you've asked about between the length of a list and the load factor.
This is part of the reason it is important to choose a good hash function for your use case: without one, the assumption may not hold, which could have a harmful effect on your lookup times.
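To make the "attack" scenario concrete, here is a small illustrative sketch (all names and numbers are made up) comparing bucket occupancy under a reasonably mixing hash and a degenerate one:

function bucketCounts(n, m, hash) {
  var buckets = new Array(m).fill(0);
  for (var k = 0; k < n; k++) {
    buckets[hash(k, m)] += 1;
  }
  return buckets;
}

// a simple multiplicative (Knuth-style) hash
function mixing(k, m) {
  return ((k * 2654435761) >>> 0) % m;
}

// the "attack" case: every key collides into slot 0
function degenerate(k, m) {
  return 0;
}

var good = bucketCounts(500, 50, mixing);
var bad = bucketCounts(500, 50, degenerate);

console.log(Math.max.apply(null, good)); // longest chain stays near n/m = 10
console.log(Math.max.apply(null, bad));  // 500: one list holds everything

In the degenerate case every lookup degrades to scanning a single 500-element list, which is exactly the O(n) behaviour that SUHA rules out.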
Problem we are trying to solve:
Given a list of keys, what is the best way to get the values from an IMap when the number of entries is around 500K?
We also need to filter the values based on their fields.
Here is an example of the map we are trying to read from.
Given IMap[String, Object].
We are using protobuf to serialize the object.
The object can be, say:
message Test {
  required mac_address eth_mac = 1;
  // ... size can be around 300 bytes
}
You can use IMap.getAll(keySet) if you know the keys beforehand. It's much better than individual gets, since a bulk operation makes far fewer network trips.
For filtering, you can use predicates with IMap.values(predicate), IMap.entrySet(predicate), or IMap.keySet(predicate), depending on what you want back.
See more: http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#distributed-query
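For illustration only, here is a rough sketch of both ideas using the Hazelcast Node.js client; the map name, key list, and field name are made up, and method names can vary between client versions, so treat this as a sketch and check the linked docs:

const { Client, Predicates } = require('hazelcast-client');

async function readDevices(keys) {
  const client = await Client.newHazelcastClient();
  const map = await client.getMap('devices'); // hypothetical map name

  // bulk read: one batched operation instead of thousands of single gets
  const entries = await map.getAll(keys);

  // server-side filtering on a field of the stored object
  const matching = await map.valuesWithPredicate(
      Predicates.equal('eth_mac', '00:11:22:33:44:55')); // illustrative value

  await client.shutdown();
  return { entries, matching };
}

Note that for predicates to look inside your values, the cluster has to be able to deserialize them, which takes extra setup when the values are protobuf blobs.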
CouchDB's map functions emit key/value pairs:
function(doc) {
  emit(doc.date, 1);
}
Potentially, there could be many key/value pairs with the same key. Setting group=true while querying a view groups key/value pairs with the same key into the same reduce:
function(keys, values, rereduce) {
  return sum(values);
}
Does this mean that with group=true (or for any group_level > 0), there will be exactly one reduce per key?
Or does the grouping only guarantee that all reduces will have homogeneous keys, and that there could still be one or more rereduces?
I am working with a reduce function that is not commutative, but which will not have a large number of records per key. I was hoping that I would be able to set group=true and then control the order of operation within a single reduce. If there will be rereduces, then that plan does not make sense.
group=true roughly means "Hey, Couch! Group this map so that all the keys come out distinct, without missing any of them!" and is effectively equivalent to group_level=999 (see the docs).
Whereas with a specific group_level you might guess wrong and strip items off the key (which matters when the key is an array), group takes care of this for you, and rereduce won't be applied.
Also, your reduce function could be replaced with the built-in _sum - it's implemented in Erlang and is much faster.
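For example, a design document using the built-in reduce might look like this (all names here are illustrative):

{
  "_id": "_design/foo",
  "views": {
    "by_date": {
      "map": "function (doc) { emit(doc.date, 1); }",
      "reduce": "_sum"
    }
  }
}

Querying /db/_design/foo/_view/by_date?group=true then returns one summed row per distinct date.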
I have some documents with a "status" field of "Green", "Red", "Amber".
I'm sure it's possible to use map/reduce to produce a grouped response containing three keys (one for each status), each with a value containing an array of all the documents with that key. However, I'm struggling with how to write the (re)reduce functions.
Map function:
function(doc) {
  emit(doc.status, doc);
}
Reduce function: ???
This is not a problem that reduce is intended to solve; reduce in CouchDB is for aggregation.
If I understand you correctly, you want this:
Map:
function(doc) {
  // status is a single string ("Green", "Red" or "Amber"),
  // so emit it directly as the key
  emit(doc.status, null);
}
You can then find all docs with status Green with:
/_design/foo/_view/bar?key="Green"&include_docs=true
This will return a list of all docs with that status. If you wish to find docs of more than one status in a single query, then use HTTP POST with a body of this form:
{"keys":["Green", "Red"]}
HTH,
B.
Generally speaking, you will not use a reduce function to obtain your list of documents. A reduce is meant to take a list and reduce it to a single value. In fact, there is an upper limit to the size of a reduce value anyway, and returning entire documents will trigger a reduce_overflow error. Examples of reduces are counts, sums, averages, etc. Stick with the map query, and you will have your values collated and sorted by the status value.
On another, possibly unrelated note, I would not emit the document with your view. You can just use the include_docs view query parameter and achieve the same effect, while saving disk space in the process. The trade-off is that internally each doc will have to be retrieved one by one (but since they're already indexed by _id anyway, it's usually a negligible difference).
From http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views
The CouchDB reduce function is defined as
function (key, values, rereduce) {
  return sum(values);
}
key will be an array whose elements are arrays of the form [key, id]
values will be an array of the values emitted for the respective elements in keys
i.e. reduce([ [key1,id1], [key2,id2], [key3,id3] ], [value1,value2,value3], false)
I am having trouble understanding when/why the array of keys would contain different key values. If the array of keys does contain different key values, how would I deal with it?
As an example, assume that my database contains movements between accounts of the form.
{"amount":100, "CreditAccount":"account_number", "DebitAccount":"account_number"}
I want a view that gives the balance of an account.
My map function does:
function(doc) {
  emit(doc.CreditAccount, doc.amount);
  emit(doc.DebitAccount, -doc.amount);
}
My reduce function does:
function(keys, values, rereduce) {
  return sum(values);
}
I seem to get the expected results; however, I can't reconcile this with the possibility that my reduce function gets different key values.
Is my reduce function supposed to group key values first? What kind of result would I return in that case?
By default, Futon "groups" your results, which means you get a fresh reduce per key (in your case, per account). The group feature is for exactly this situation.
Over the raw HTTP API, you will get one total reduce across all accounts, which is probably not useful. So remember to use group=true in your own application to be sure you get summaries per account.
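For example (database, view, and account names here are made up), a grouped query such as

/bank/_design/acct/_view/balance?group=true

returns one row per account, each holding that account's summed balance:

{"rows": [
  {"key": "acct_1", "value": 250},
  {"key": "acct_2", "value": -250}
]}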