CouchDB, MapReduce: query a time slice

For monitoring an application with CouchDB I need to sum up a field of my data (for example, the time needed to execute a method that has been logged).
That's no problem for me with map/reduce, but I need to sum up only the data recorded in a specific time slice.
Example records:
{_id: 1, methodID:1, recorded: 100, timeneeded: 10},
{_id: 2, methodID:1, recorded: 200, timeneeded: 11},
{_id: 3, methodID:2, recorded: 200, timeneeded: 2},
{_id: 4, methodID:1, recorded: 300, timeneeded: 6},
{_id: 5, methodID:2, recorded: 310, timeneeded: 3},
{_id: 6, methodID:1, recorded: 400, timeneeded: 9}
Now I would like to get just the sum of timeneeded of all records that have been recorded in the range of 200 to 350 and grouped by methodID. (That would be 17 for methodID:1 and 5 for methodID:2.)
How can I do that?
I now tried it with a list function that's using WickedGrey's idea. See my functions here:
map function:
function(doc) {
  emit([doc.recorded], {methodID: doc.methodID, timeneeded: doc.timeneeded});
}
list function:
"function(head, req) {
var combined_values = {};
var row;
while (row = getRow()) {
if( row.values.methodID in combined_values) {
combined_values[ row.values.methodID] +=row.values.timeneeded;
}
else {
combined_values[ row.values.methodID] = row.values.timeneeded;
}
}
for(var methodID in combined_values){
send( toJSON({method: methodID, timeneeded:combined_values[methodID]}) );
}
}"
Now I have two problems:
1. I always get the results as a file, and Firefox asks me if I want to download it instead of viewing it in the browser like when I query a classic view.
2. As I understand it, the results are now calculated on the fly in the list function. I expect this not to be very fast with hundreds of millions of records... Any ideas how to make it faster?
Thank you for your help!
andy

In CouchDB you can't use a map key to filter by one set of criteria but group by another. However, you can filter the keys by time range and group with a reduce function. Try something like this:
function map(doc) {
  // Build the value object dynamically; an object literal like
  // {doc.methodID: doc.timeneeded} is a syntax error in JavaScript.
  var value = {};
  value[doc.methodID] = doc.timeneeded;
  emit(doc.recorded, value);
}
function reduce(key, values, rereduce) {
  var combined_values = {};
  for (var i in values) {
    var totals = values[i];
    for (var methodID in totals) {
      if (methodID in combined_values) {
        combined_values[methodID] += totals[methodID];
      }
      else {
        combined_values[methodID] = totals[methodID];
      }
    }
  }
  return combined_values;
}
That should allow you to specify a start/end key, and with group_level=0 should get you a value containing the dictionary that you're looking for.
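For example, querying the view with the time slice as the key range should return a single row whose value is that dictionary (a sketch; the design document name "designdoc" and view name "byTime" are assumptions, and reduce=true is the default):
/db/_design/designdoc/_view/byTime?startkey=200&endkey=350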
Edit: Also, this thread might be of interest:
http://couchdb-development.1959287.n2.nabble.com/reduce-limit-error-td2789734.html
It discusses an option to turn off the "reduce must shrink" message, and further down the list provides other ways of achieving the same goal: using a list function. That might be a better approach than what I've outlined here. :(

function map(doc) {
  if (doc.methodID && doc.recorded && doc.timeneeded) {
    emit([doc.methodID, doc.recorded], doc.timeneeded);
  }
}
//reduce
_sum
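Since this view's key is [methodID, recorded], you query the time slice one method at a time and let the built-in _sum reduce add up the values. A sketch, assuming the design document is named "designdoc" and the view "byMethod":
/db/_design/designdoc/_view/byMethod?startkey=[1,200]&endkey=[1,350]
For the example data above, this returns a single reduced row with value 17 for methodID 1.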

Related

Random record from a huge MongoDB collection

I am looking to get a random record from a huge collection (100 million records).
What is the fastest and most efficient way to do so?
The data is already there, and there is no field in which I can generate a random number to obtain a random row.
Starting with the 3.2 release of MongoDB, you can get N random docs from a collection using the $sample aggregation pipeline operator:
// Get one random document from the mycoll collection.
db.mycoll.aggregate([{ $sample: { size: 1 } }])
If you want to select the random document(s) from a filtered subset of the collection, prepend a $match stage to the pipeline:
// Get one random document matching {a: 10} from the mycoll collection.
db.mycoll.aggregate([
  { $match: { a: 10 } },
  { $sample: { size: 1 } }
])
As noted in the comments, when size is greater than 1, there may be duplicates in the returned document sample.
Do a count of all records, generate a random number between 0 and the count, and then do:
db.yourCollection.find().limit(-1).skip(yourRandomNumber).next()
Update for MongoDB 3.2
3.2 introduced $sample to the aggregation pipeline.
There's also a good blog post on putting it into practice.
For older versions (previous answer)
This was actually a feature request: http://jira.mongodb.org/browse/SERVER-533 but it was filed under "Won't fix."
The cookbook has a very good recipe to select a random document out of a collection: http://cookbook.mongodb.org/patterns/random-attribute/
To paraphrase the recipe, you assign random numbers to your documents:
db.docs.save( { key : 1, ..., random : Math.random() } )
Then select a random document:
rand = Math.random()
result = db.docs.findOne( { key : 2, random : { $gte : rand } } )
if ( result == null ) {
  result = db.docs.findOne( { key : 2, random : { $lte : rand } } )
}
Querying with both $gte and $lte is necessary to find the document with a random number nearest rand.
And of course you'll want to index on the random field:
db.docs.ensureIndex( { key : 1, random :1 } )
If you're already querying against an index, simply drop it, append random: 1 to it, and add it again.
You can also use MongoDB's geospatial indexing feature to select the documents 'nearest' to a random number.
First, enable geospatial indexing on a collection:
db.docs.ensureIndex( { random_point: '2d' } )
To create a bunch of documents with random points on the X-axis:
for ( i = 0; i < 10; ++i ) {
  db.docs.insert( { key: i, random_point: [Math.random(), 0] } );
}
Then you can get a random document from the collection like this:
db.docs.findOne( { random_point : { $near : [Math.random(), 0] } } )
Or you can retrieve several documents nearest to a random point:
db.docs.find( { random_point : { $near : [Math.random(), 0] } } ).limit( 4 )
This requires only one query and no null checks, plus the code is clean, simple and flexible. You could even use the Y-axis of the geopoint to add a second randomness dimension to your query.
The following recipe is a little slower than the mongo cookbook solution (add a random key on every document), but returns more evenly distributed random documents. It's a little less-evenly distributed than the skip( random ) solution, but much faster and more fail-safe in case documents are removed.
function draw(collection, query) {
  // query: mongodb query object (optional)
  var query = query || {};
  query['random'] = { $lte: Math.random() };
  // sort on the same "random" field that the query uses
  var cur = collection.find(query).sort({ random: -1 });
  if (!cur.hasNext()) {
    delete query.random;
    cur = collection.find(query).sort({ random: -1 });
  }
  var doc = cur.next();
  doc.random = Math.random();
  collection.update({ _id: doc._id }, doc);
  return doc;
}
It also requires you to add a "random" field to your documents, so don't forget to add it when you create them; you may need to initialize your collection as shown by Geoffrey:
function addRandom(collection) {
  collection.find().forEach(function (obj) {
    obj.random = Math.random();
    collection.save(obj);
  });
}
db.eval(addRandom, db.things);
Benchmark results
This method is much faster than the skip() method (of ceejayoz) and generates more uniformly random documents than the "cookbook" method reported by Michael:
For a collection with 1,000,000 elements:
This method takes less than a millisecond on my machine
the skip() method takes 180 ms on average
The cookbook method will cause large numbers of documents to never get picked because their random number does not favor them.
This method will pick all elements evenly over time.
In my benchmark it was only 30% slower than the cookbook method.
the randomness is not 100% perfect but it is very good (and it can be improved if necessary)
This recipe is not perfect - the perfect solution would be a built-in feature as others have noted.
However it should be a good compromise for many purposes.
Here is a way using the default ObjectId values for _id and a little math and logic.
// Get the "min" and "max" timestamp values from the _id in the collection and the
// diff between.
// 4-bytes from a hex string is 8 characters
var min = parseInt(db.collection.find()
        .sort({ "_id": 1 }).limit(1).toArray()[0]._id.str.substr(0,8),16)*1000,
    max = parseInt(db.collection.find()
        .sort({ "_id": -1 }).limit(1).toArray()[0]._id.str.substr(0,8),16)*1000,
    diff = max - min;
// Get a random value from diff and divide/multiply by 1000 for the "_id" precision:
var random = Math.floor(Math.floor(Math.random()*diff)/1000)*1000;
// Use "random" in the range and pad the hex string to a valid ObjectId
var _id = new ObjectId(((min + random)/1000).toString(16) + "0000000000000000")
// Then query for the single document:
var randomDoc = db.collection.find({ "_id": { "$gte": _id } })
.sort({ "_id": 1 }).limit(1).toArray()[0];
That's the general logic in shell representation and easily adaptable.
So in points:
Find the min and max primary key values in the collection
Generate a random number that falls between the timestamps of those documents.
Add the random number to the minimum value and find the first document that is greater than or equal to that value.
This uses "padding" from the timestamp value in "hex" to form a valid ObjectId value since that is what we are looking for. Using integers as the _id value is essentially simplier but the same basic idea in the points.
Now you can use the aggregation pipeline.
Example:
db.users.aggregate(
[ { $sample: { size: 3 } } ]
)
See the doc.
In Python using pymongo:
import random

def get_random_doc():
    count = collection.count()
    return collection.find()[random.randrange(count)]
Using Python (pymongo), the aggregate function also works.
collection.aggregate([{'$sample': {'size': sample_size }}])
This approach is a lot faster than running a query for a random number (e.g. collection.find()[random_int]). This is especially the case for large collections.
It is tough if there is no data there to key off of. What is the _id field? Are they MongoDB ObjectIds? If so, you could get the highest and lowest values:
lowest = db.coll.find().sort({_id:1}).limit(1).next()._id;
highest = db.coll.find().sort({_id:-1}).limit(1).next()._id;
Then, if you assume the IDs are uniformly distributed (they aren't, but at least it's a start):
unsigned long long L = first_8_bytes_of(lowest)
unsigned long long H = first_8_bytes_of(highest)
V = (H - L) * random_from_0_to_1();
N = L + V;
oid = N concat random_4_bytes();
randomobj = db.coll.find({_id:{$gte:oid}}).limit(1);
You can pick a random timestamp and search for the first object that was created afterwards.
It will only scan a single document, though it doesn't necessarily give you a uniform distribution.
var randRec = function() {
  // replace with your collection
  var coll = db.collection
  // get unixtime of first and last record
  var min = coll.find().sort({_id: 1}).limit(1)[0]._id.getTimestamp() - 0;
  var max = coll.find().sort({_id: -1}).limit(1)[0]._id.getTimestamp() - 0;
  // allow to pass additional query params
  return function(query) {
    if (typeof query === 'undefined') query = {}
    var randTime = Math.round(Math.random() * (max - min)) + min;
    var hexSeconds = Math.floor(randTime / 1000).toString(16);
    var id = ObjectId(hexSeconds + "0000000000000000");
    query._id = {$gte: id}
    return coll.find(query).limit(1)
  };
}();
My solution in PHP:
/**
 * Get random docs from Mongo
 * @param $collection
 * @param $where
 * @param $fields
 * @param $limit
 * @author happy-code
 * @url happy-code.com
 */
private function _mongodb_get_random (MongoCollection $collection, $where = array(), $fields = array(), $limit = false) {
    // Total docs
    $count = $collection->find($where, $fields)->count();
    if (!$limit) {
        // Get all docs
        $limit = $count;
    }
    $data = array();
    for ($i = 0; $i < $limit; $i++) {
        // Skip documents
        $skip = rand(0, ($count - 1));
        if ($skip !== 0) {
            $doc = $collection->find($where, $fields)->skip($skip)->limit(1)->getNext();
        } else {
            $doc = $collection->find($where, $fields)->limit(1)->getNext();
        }
        if (is_array($doc)) {
            // Catch document
            $data[ $doc['_id']->{'$id'} ] = $doc;
            // Ignore current document when making the next iteration
            $where['_id']['$nin'][] = $doc['_id'];
        }
        // Every iteration catch a document and decrease the total number of documents
        $count--;
    }
    return $data;
}
In order to get a given number of random docs without duplicates:
first get all ids
get the number of documents
loop, getting a random index and skipping duplicates
number_of_docs = 7
db.collection('preguntas').find({}, { _id: 1 }).toArray(function (err, arr) {
  count = arr.length
  idsram = []
  rans = []
  while (number_of_docs != 0) {
    var R = Math.floor(Math.random() * count);
    if (rans.indexOf(R) > -1) {
      continue
    } else {
      rans.push(R)
      idsram.push(arr[R]._id)
      number_of_docs--
    }
  }
  // fetch only the randomly selected ids
  db.collection('preguntas').find({ _id: { $in: idsram } }).toArray(function (err1, doc1) {
    if (err1) { console.log(err1); return; }
    res.send(doc1)
  });
});
The best way in Mongoose is to make an aggregation call with $sample.
However, Mongoose does not apply Mongoose documents to Aggregation - especially not if populate() is to be applied as well.
For getting a "lean" array from the database:
/*
  The Sample model should be initialized first:
  const Sample = mongoose …
*/
const samples = await Sample.aggregate([
  { $match: {} },
  { $sample: { size: 33 } },
]).exec();
console.log(samples); // a lean Array
For getting an array of mongoose documents:
const samples = (
  await Sample.aggregate([
    { $match: {} },
    { $sample: { size: 27 } },
    { $project: { _id: 1 } },
  ]).exec()
).map(v => v._id);
const mongooseSamples = await Sample.find({ _id: { $in: samples } });
console.log(mongooseSamples); // an Array of mongoose documents
I would suggest using map/reduce, where you use the map function to only emit when a random value is above a given probability.
function mapf() {
  if (Math.random() <= probability) {
    emit(1, this);
  }
}

function reducef(key, values) {
  return {"documents": values};
}

res = db.questions.mapReduce(mapf, reducef, {"out": {"inline": 1}, "scope": {"probability": 0.5}});
printjson(res.results);
The reducef function above works because only one key ('1') is emitted from the map function.
The value of "probability" is defined in the "scope" when invoking mapReduce(...).
Using mapReduce like this should also be usable on a sharded db.
If you want to select exactly n of m documents from the db, you could do it like this:
function mapf() {
  if (countSubset == 0) return;
  var prob = countSubset / countTotal;
  if (Math.random() <= prob) {
    emit(1, {"documents": [this]});
    countSubset--;
  }
  countTotal--;
}

function reducef(key, values) {
  var newArray = new Array();
  for (var i = 0; i < values.length; i++) {
    newArray = newArray.concat(values[i].documents);
  }
  return {"documents": newArray};
}

res = db.questions.mapReduce(mapf, reducef, {"out": {"inline": 1}, "scope": {"countTotal": 4, "countSubset": 2}})
printjson(res.results);
Where "countTotal" (m) is the number of documents in the db, and "countSubset" (n) is the number of documents to retrieve.
This approach might give some problems on sharded databases.
You can pick a random _id and return the corresponding object:
db.collection.count( function(err, count){
  db.collection.distinct( "_id" , function( err, result) {
    if (err)
      res.send(err)
    var randomId = result[Math.floor(Math.random() * (count-1))]
    db.collection.findOne( { _id: randomId } , function( err, result) {
      if (err)
        res.send(err)
      console.log(result)
    })
  })
})
Here you don't need to spend space on storing random numbers in the collection.
The following aggregation operation randomly selects 3 documents from the collection:
db.users.aggregate(
[ { $sample: { size: 3 } } ]
)
https://docs.mongodb.com/manual/reference/operator/aggregation/sample/
MongoDB now has $rand
To pick n non-repeating items, aggregate with { $addFields: { _f: { $rand: {} } } }, then $sort by _f and $limit n.
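A minimal sketch of that pipeline (the field name _f and the limit of 5 are arbitrary choices):
db.collection.aggregate([
  { $addFields: { _f: { $rand: {} } } }, // attach a random float in [0, 1) to every document
  { $sort: { _f: 1 } },                  // shuffle by sorting on the random field
  { $limit: 5 }                          // take n documents, with no repeats
])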
I'd suggest adding a random int field to each object. Then you can just do a
findOne({random_field: {$gte: rand()}})
to pick a random document. Just make sure you ensureIndex({random_field:1})
When I was faced with a similar situation, I backtracked and found that the business request was actually for creating some form of rotation of the inventory being presented. In that case, there are much better options, which have answers from search engines like Solr, not data stores like MongoDB.
In short, with the requirement to "intelligently rotate" content, what we should do instead of a random number across all of the documents is to include a personal q score modifier. To implement this yourself, assuming a small population of users, you can store a document per user that has the productId, impression count, click-through count, last seen date, and whatever other factors the business finds meaningful to compute a q score modifier. When retrieving the set to display, you typically request more documents from the data store than the end user asked for, apply the q score modifier, take the number of records requested by the end user, and then randomize that page of results, a tiny set, by simply sorting the documents in the application layer (in memory).
If the universe of users is too large, you can categorize users into behavior groups and index by behavior group rather than user.
If the universe of products is small enough, you can create an index per user.
I have found this technique to be much more efficient, but more importantly more effective in creating a relevant, worthwhile experience of using the software solution.
None of the solutions worked well for me, especially when there are many gaps and the set is small.
This worked very well for me (in PHP):
$count = $collection->count($search);
$skip = mt_rand(0, $count - 1);
$result = $collection->find($search)->skip($skip)->limit(1)->getNext();
My PHP/MongoDB sort/order by RANDOM solution. Hope this helps anyone.
Note: I have numeric ID's within my MongoDB collection that refer to a MySQL database record.
First I create an array with 10 randomly generated numbers
$randomNumbers = [];
for ($i = 0; $i < 10; $i++) {
    $randomNumbers[] = rand(0, 1000);
}
In my aggregation I use the $addFields pipeline operator combined with $arrayElemAt and $mod (modulus). The modulus operator will give me a number from 0 - 9, which I then use to pick a number from the array of randomly generated numbers.
$aggregate[] = [
    '$addFields' => [
        'random_sort' => [ '$arrayElemAt' => [ $randomNumbers, [ '$mod' => [ '$my_numeric_mysql_id', 10 ] ] ] ],
    ],
];
After that you can use the $sort pipeline stage.
$aggregate[] = [
    '$sort' => [
        'random_sort' => 1
    ]
];
My simplest solution to this ...
db.coll.find()
.limit(1)
.skip(Math.floor(Math.random() * 500))
.next()
This assumes you have at least 500 items in the collection.
If you have a simple id key, you could store all the id's in an array, and then pick a random id. (Ruby answer):
ids = @coll.find({}, fields: {_id: 1}).to_a
@coll.find(ids.sample).first
Using Map/Reduce, you can certainly get a random record, just not necessarily very efficiently depending on the size of the resulting filtered collection you end up working with.
I've tested this method with 50,000 documents (the filter reduces it to about 30,000), and it executes in approximately 400ms on an Intel i3 with 16GB ram and a SATA3 HDD...
db.toc_content.mapReduce(
  /* map function */
  function() { emit( 1, this._id ); },
  /* reduce function */
  function(k, v) {
    var r = Math.floor((Math.random() * v.length));
    return v[r];
  },
  /* options */
  {
    out: { inline: 1 },
    /* Filter the collection to "A"ctive documents */
    query: { status: "A" }
  }
);
The Map function simply creates an array of the id's of all documents that match the query. In my case I tested this with approximately 30,000 out of the 50,000 possible documents.
The Reduce function simply picks a random integer between 0 and the number of items (-1) in the array, and then returns that _id from the array.
400ms sounds like a long time, and it really is. If you had fifty million records instead of fifty thousand, this may increase the overhead to the point where it becomes unusable in multi-user situations.
There is an open issue for MongoDB to include this feature in the core... https://jira.mongodb.org/browse/SERVER-533
If this "random" selection was built into an index-lookup instead of collecting ids into an array and then selecting one, this would help incredibly. (go vote it up!)
This works nice, it's fast, works with multiple documents and doesn't require populating rand field, which will eventually populate itself:
add index to .rand field on your collection
use find and refresh, something like:
// Install packages:
//   npm install mongodb async
// Add index in mongo:
//   db.mycollection.ensureIndex({ rand: 1 })

var mongodb = require('mongodb')
var async = require('async')

// Find n random documents by using the "rand" field.
function findAndRefreshRand (collection, n, fields, done) {
  var result = []
  var rand = Math.random()
  // Append documents to the result based on criteria and options; if options.limit is 0, skip the call.
  var appender = function (criteria, options, done) {
    return function (done) {
      if (options.limit > 0) {
        collection.find(criteria, fields, options).toArray(
          function (err, docs) {
            if (!err && Array.isArray(docs)) {
              Array.prototype.push.apply(result, docs)
            }
            done(err)
          }
        )
      } else {
        async.nextTick(done)
      }
    }
  }
  async.series([
    // Fetch docs with uninitialized .rand.
    // NOTE: You can comment out this step if all docs have initialized .rand = Math.random()
    appender({ rand: { $exists: false } }, { limit: n - result.length }),
    // Fetch on one side of the random number.
    appender({ rand: { $gte: rand } }, { sort: { rand: 1 }, limit: n - result.length }),
    // Continue the fetch on the other side.
    appender({ rand: { $lt: rand } }, { sort: { rand: -1 }, limit: n - result.length }),
    // Refresh fetched docs, if any ($set updates only the rand field).
    function (done) {
      if (result.length > 0) {
        var batch = collection.initializeUnorderedBulkOp({ w: 0 })
        for (var i = 0; i < result.length; ++i) {
          batch.find({ _id: result[i]._id }).updateOne({ $set: { rand: Math.random() } })
        }
        batch.execute(done)
      } else {
        async.nextTick(done)
      }
    }
  ], function (err) {
    done(err, result)
  })
}

// Example usage
mongodb.MongoClient.connect('mongodb://localhost:27017/core-development', function (err, db) {
  if (!err) {
    findAndRefreshRand(db.collection('profiles'), 1024, { _id: true, rand: true }, function (err, result) {
      if (!err) {
        console.log(result)
      } else {
        console.error(err)
      }
      db.close()
    })
  } else {
    console.error(err)
  }
})
P.S. The question "How to find random records in mongodb" is marked as a duplicate of this question. The difference is that this question asks explicitly about a single record, while the other asks explicitly about getting random documents.
For me, I wanted to get the same records in a random order, so I created an empty array used to sort, then generated random numbers between one and 7 (I have seven fields). So each time I get a different value, I assign a different random sort.
It is 'layman' but it worked for me.
// generate a random number
const randomval = some random value;
// declare sort array and initialize to empty
const sort = [];
// write a conditional if/else to decide which sort to use
if (randomval == 1) {
  sort.push(...['createdAt', 1]);
}
else if (randomval == 2) {
  sort.push(...['_id', 1]);
}
....
else if (randomval == n) {
  sort.push(...['n', 1]);
}
If you're using mongoid, the document-to-object wrapper, you can do the following in Ruby (assuming your model is User):
User.all.to_a[rand(User.count)]
In my .irbrc, I have
def rando klass
  klass.all.to_a[rand(klass.count)]
end
so in rails console, I can do, for example,
rando User
rando Article
to get documents randomly from any collection.
You can also use shuffle-array after executing your query:
var shuffle = require('shuffle-array');
Accounts.find(qry, function(err, results_array){
  newIndexArr = shuffle(results_array);
});
What works efficiently and reliably is this:
Add a field called "random" to each document and assign a random value to it, add an index for the random field and proceed as follows:
Let's assume we have a collection of web links called "links" and we want a random link from it:
link = db.links.find().sort({random: 1}).limit(1)[0]
To ensure the same link won't pop up a second time, update its random field with a new random number:
db.links.update({_id: link._id}, {$set: {random: Math.random()}})

NodeJS Iterate through City in JSON, Return Cities and Users in each City

I have the below snippet from a JSON Object that has 3,500 records in it.
[
  {
    "use:firstName": "Bob",
    "use:lastName": "Smith",
    "use:categoryId": 36,
    "use:company": "BobSmith",
    "use:webExId": "Bob.Smith@email.com",
    "use:address": {
      "com:addressType": "PERSONAL",
      "com:city": "US-TX",
      "com:country": 1
    }
  },
  {
    "use:firstName": "Jane",
    "use:lastName": "Doe",
    "use:categoryId": 36,
    "use:webExId": "Jane.Doe@email.com",
    "use:address": {
      "com:addressType": "PERSONAL",
      "com:city": "US-CA",
      "com:country": "1_1"
    }
  },
  {
    "use:firstName": "Sam",
    "use:lastName": "Sneed",
    "use:categoryId": 36,
    "use:webExId": "Sam.Sneed@email.com",
    "use:address": {
      "com:addressType": "PERSONAL",
      "com:city": "US-CA",
      "com:country": "1_1"
    }
  }
]
I am using NodeJS and I have been stuck on figuring out the best way to:
1. Iterate through ['use:address']['com:city'] to map out and identify all of the cities. (In the example above, I have two: US-TX and US-CA in the three records provided.)
2. Then identify how many records match each city. (In the example above, I would have US-TX: 1 and US-CA: 2.)
The only code I have is the easy part, which is a forEach loop through the JSON data, defining a userCity variable (to make it easier for me) and then logging the results to the console (which is really unnecessary, but I did it to confirm I was looping through the JSON properly).
function test() {
  const webexSiteUserListJson = fs.readFileSync('./src/db/webexSiteUserDetail.json');
  const webexSiteUsers = JSON.parse(webexSiteUserListJson);
  webexSiteUsers.forEach((userDetails) => {
    let userCity = userDetails['use:address']['com:city'];
    console.log(userCity);
  })
};
I've been searching endlessly for help on the topic and probably not formulating my question properly. Any suggestions are appreciated on how to:
1. Iterate through ['use:address']['com:city'] to map out and identify all of the cities.
2. Then identify how many records match each city. (In the example above, I would have US-TX: 1 and US-CA: 2.)
Thank you!
You could reduce the webexSiteUsers array into an object that is keyed by city, where each value is the number of times the city occurs. Something like the below should work.
const counts = webexSiteUsers.reduce((countMemo, userDetails) => {
  let userCity = userDetails['use:address']['com:city'];
  if (countMemo[userCity]) {
    countMemo[userCity] = countMemo[userCity] + 1;
  } else {
    countMemo[userCity] = 1;
  }
  return countMemo;
}, {});
counts will then be an object that looks like this.
{
  "US-TX": 1,
  "US-CA": 2
}
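And since you also want to identify all of the cities, those are simply the keys of that object:
// the distinct cities, derived from the counts object above:
console.log(Object.keys(counts)); // ["US-TX", "US-CA"]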

Couchbase & nodejs: View query with range, order and limited results

I am new to couchbase and I'm trying to understand how filtering, ordering and limiting results in a view work together.
Couchbase version: 3.0.1
I'm using nodejs as the SDK.
I have a map function like this
function (doc, meta) {
  if (doc.type !== 'item' || !doc.category) {
    return;
  }
  emit([doc.orderId, doc.category.id, doc.number], null);
}
And an item document that looks like this
{
  "id": 1,
  "type": "item",
  "number": 1203,
  "orderId": 2,
  "category": {
    "id": 10,
    "title": "Carpet"
  }
}
I would like to filter only items with orderId = 2 and category.id = 10, all this ordered by number descending. Because I have a paginator, I would like to display 20 items per page. I have thousands of items in the database.
With the query below, I have an error because of the order call. If I comment it, I find the results, filtered, limited and ordered by default by number ascending.
var order_id = 2,
    category_id = 10,
    limit = 20,
    skip = 0,
    range = [order_id, category_id],
    // suppose we have a valid couchbase connection and a viewQuery object
    query = viewQuery.from('items', 'myView')
      .limit(limit)
      .skip(skip)
      .order(2) // 2 = DESC. This line doesn't work
      .include_docs(true)
      .range(range, range.concat([{}]), true);
bucket.query(query, function (err, docs) {
  console.log(err);
  console.log(docs);
});
The error says:
Error: query_parse_error: No rows can match your key range, reverse your start_key and end_key or set descending=false
Note that if I order ASC, the error occurs too. I have to remove the call to the .order() function to have my view behave properly.
Does anyone know why?
Thanks
When you order your query in descending order, you have to swap the order of the start and end keys as well (the parameters to the range method.)
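A sketch of the corrected query, reusing the variables from the question; the only change is that the parameters to range() are swapped so the start key is the high end of the range:
query = viewQuery.from('items', 'myView')
  .limit(limit)
  .skip(skip)
  .order(2) // 2 = DESC
  .include_docs(true)
  .range(range.concat([{}]), range, true); // start and end swapped for descending order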

Node, MongoDB (mongoose) distinct count

I have a collection with multiple documents, and every one of them has an 'eID' field that is not unique. I want to get the count for each of the distinct 'eID' values.
Example: if there are 5 documents with 'eID' = ObjectID(123) and 2 documents with 'eID' = ObjectID(321), I want to output something like:
{
  ObjectID(123): 5,
  ObjectID(321): 2
}
I don't know if that can be done in the same query, but after knowing which eIDs occur most often I want to fetch the referenced documents using the ObjectID.
Mongoose version 3.8.8
Here $status is the specific field of the collection whose distinct values I need to count.
var agg = [
  {$group: {
    _id: "$status",
    total: {$sum: 1}
  }}
];

model.Site.aggregate(agg, function(err, logs){
  if (err) { return res.json(err); }
  return res.json(logs);
});
//output
[
  {
    "_id": "plan",
    "total": 3
  },
  {
    "_id": "complete",
    "total": 4
  },
  {
    "_id": "hault",
    "total": 2
  },
  {
    "_id": "incomplete",
    "total": 4
  }
]
This answer is not in terms of how this query can be written via mongoose, but I am familiar with the nodejs MongoClient class if you have further questions regarding implementation.
The best (most optimal) way I can think of doing this is to use mapReduce or aggregation on your database. The closest thing to a single command would be the distinct command, which can be invoked on collections, but this will only give you an array of distinct values for the eID key.
See here: http://docs.mongodb.org/manual/core/map-reduce/
For your specific problem, you will want your map and reduce functions roughly as follows:
var map = function() {
  var value = 1;
  emit(this.eID, value);
};

var reduce = function(key, values) {
  var result = 0;
  for (var i = 0; i < values.length; i++) {
    result += values[i];
  }
  return result;
};
There might be an easier way to do this using the aggregation pipeline (I would post the link but I don't have enough reputation).
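For reference, a sketch of such a pipeline for the eID case, mirroring the $status example above:
db.collection.aggregate([
  { $group: { _id: "$eID", total: { $sum: 1 } } }, // count documents per distinct eID
  { $sort: { total: -1 } }                         // most frequent eIDs first
])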
I also found the mapReduce command for mongoose: http://mongoosejs.com/docs/api.html#model_Model.mapReduce

Creating a couchdb view that returns an array of unique values from a document set

I have a couchdb database filled with time-stamped documents so the format of a given document is something like this:
{ id: "uniqueid",
year: 2011,
month: 3,
day: 31,
foo: "whatever"
bar: "something else"
}
I would like to construct a set of views such that a given key will return an array of year, month or day values for which documents exist. For example given the view name Days, I would like the following view url
/db/_design/designdoc/_view/Days?key=[2011,3]
to return an array of all the days in March of 2011 for which documents exist. For example, if March 2011 had some number of documents falling on six days, it might look like:
[1, 2, 5, 15, 27, 31]
Similarly,
/db/_design/designdoc/_view/Months?key=2011
If 2011 had some number of documents falling on April, May, and September, it might look like:
[4, 5, 9]
And
/db/_design/designdoc/_view/Years
will return an array of years in the whole database. If the documents have this year and last, it might look like:
[2010, 2011]
I gather it is difficult to write a reduce function that returns an array because you end up running into reduce overflow errors as the document count increases. I know this because I wrote a reduce function that worked but then started throwing reduce overflow errors after I loaded it up with documents.
One solution I have examined is just creating a view without a reduce that creates an array key [year, month, day] and then using startkey and endkey parameters on the view to return documents. The problem with this approach is how it scales. Say my database has thousands of documents spread out over two years. Using this view, I need to iterate over the entire set of documents just to discover this.
I believe this question is trying to ask the same thing though I am not quite sure so I figured I'd add a new question. Also, the answers given on that question do not avoid reduce overflow errors for larger document sets, as far as I could tell with my limited view writing skills.
I think for this you need to construct your views not only with maps, but also with reduces.
Disregarding eventual scaling problems, there are two solutions. I will cover only Days, since the answers for Months and Years are similar.
Solution 1:
view Days:
map:
function(doc) {
  if (doc.year && doc.month && doc.day) {
    emit([doc.year, doc.month, doc.day], 1);
  }
}
reduce:
function(keys, values) {
  return sum(values);
}
list listDays:
function(head, req) {
  start({
    "headers": {
      "Content-Type": "text/plain"
    }
  });
  var row;
  var days = new Array();
  while (row = getRow()) {
    days.push(row.key[2]);
  }
  var daysString = days.join(',');
  send('[' + daysString + ']');
}
http call:
http://couch/db/_design/db/_list/listDays/Days?group=true&startkey=[2011,3]&endkey=[2011,3,{}]
Solution 2:
view Days:
map:
function(doc) {
  if (doc.year && doc.month && doc.day) {
    emit([doc.year, doc.month, doc.day], null);
  }
}
list listDays:
function(head, req) {
  start({
    "headers": {
      "Content-Type": "text/plain"
    }
  });
  var row;
  var days = new Array();
  while (row = getRow()) {
    if (days.indexOf(row.key[2]) == -1) { days.push(row.key[2]); }
  }
  var daysString = days.join(',');
  send('[' + daysString + ']');
}
}
http call:
http://couch/db/_design/db/_list/listDays/Days?startkey=[2011,3]&endkey=[2011,3,{}]
