If I were to make a GET request, I'd do something like:
https://myserver.com/sometestdb/_design/sortJob/_view/index?limit=100&reduce=false&startkey=["job_price"]&endkey=["job_price", {}]
for a map function like:
function (doc) {
  if (doc.data.type === "job") {
    emit(["job_ref", doc.data.ref], null);
    emit(["job_price", doc.data.price], null);
  }
}
How would I replicate that query using PouchDB's query()? I've tried a few things with the start and end keys, but no luck:
{
  include_docs: true,
  startkey: 'job_price',
  endkey: 'job_price,{}'
}
{
  include_docs: true,
  startkey: 'job_price',
  endkey: 'job_price\uffff'
}
Both of these return 0 results, whereas the GET request above produces the expected results.
Note: I can confirm the data is present in my PouchDB, as I've queried it using the pouchdb-find plugin; I'm trying various techniques to see which is faster.
EDIT: According to the complex keys section in the docs, I should be able to do the following:
{
  include_docs: true,
  startkey: '[\'job_price\']',
  endkey: '[\'job_price\',{}]'
}
But that results in:
No rows can match your key range, reverse your start_key and end_key
or set {descending : true}
But I should be able to get results this way, and I don't want descending: true.
OK, so it was my reading of the documentation that was off.
When building the start/end key, you need to pass an actual array, not the array serialized as a string (which I had assumed PouchDB would then eval).
This is the working query:
{
  include_docs: true,
  startkey: ['job_price'],
  endkey: ['job_price', {}]
}
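For completeness, a minimal sketch of the full call, assuming the design document and view names from the URL above ('sortJob'/'index'):

const db = new PouchDB('sometestdb');

db.query('sortJob/index', {
  include_docs: true,
  startkey: ['job_price'],
  endkey: ['job_price', {}],
  limit: 100,
  reduce: false
}).then(result => {
  // each matched document is at result.rows[i].doc
});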
Posting this answer rather than deleting the question as it might help someone else.
I have been struggling to get pagination working for a few days now. I have a database with docs and a view that uses the timestamp as key, to sort descending. But I can't seem to get the next set of docs ...
If I run it, I get the top 5 docs. When I try to use startkey or startkey_docid, I only seem to get the same rows again.
Going by the CouchDB documentation, I am not sure what I need to make it work.
CouchDB has a design document like:
{
  "_id": "_design/filters",
  "views": {
    "blog": {
      "map": "function (doc) { if (doc.published && doc.type == 'post') emit(doc.header.date); }"
    }
  }
}
... header.date is generated with +new Date()
On the Node.js side, using nano, I do something similar to:
import nano from 'nano';

let db = nano(SERVICE_URL).use('blog_main');
let lastDocId = ctx.query.lastDocId;
let lastSkip = ctx.query.lastSkip ? +ctx.query.lastSkip + 5 : null;

let query = {
  limit: 1 + 4,     // limit to 5
  descending: true, // reverse order: newest at the top
  include_docs: true,
};
if (lastDocId) { // initially off
  query.startkey = lastDocId;
}
if (lastSkip) { // other method, for tests
  query.skip = lastSkip; // ----> this results in some previous and some new items
}

let itemRows = await db.view('filters', 'blog', query);
let items = itemRows.rows;
// each doc is in items[].doc
I have seen sort by value; sorting works for me, but I can't seem to get pagination to work.
I'm uncertain regarding the statement "I get the same lines again". That is reproducible if startkey is the first rather than the last key of the prior result - and that would be the first problem.
Regardless, assuming startkey is correct, the parameters skip and startkey are conflicting. Initially skip should be 0, and afterwards it should be 1, in order to skip over startkey in successive queries.
This technique is clearly outlined in the CouchDB pagination documentation [1].
Details
Assume the complete view (where key is a unix timestamp) is
{
  "total_rows": 7,
  "offset": 0,
  "rows": [
    {"id":"821985c5140ca583e108653fb6091ac8","key":1580050872331,"value":null},
    {"id":"821985c5140ca583e108653fb6092c3b","key":1580050872332,"value":null},
    {"id":"821985c5140ca583e108653fb6093f47","key":1580050872333,"value":null},
    {"id":"821985c5140ca583e108653fb6094309","key":1580050872334,"value":null},
    {"id":"821985c5140ca583e108653fb6094463","key":1580050872335,"value":null},
    {"id":"821985c5140ca583e108653fb60945f4","key":1580050872336,"value":null},
    {"id":"821985c5140ca583e108653fb60949f3","key":1580050872339,"value":null}
  ]
}
Given the initial query conditions
{
  limit: 5,
  descending: true,
  include_docs: false // for brevity
}
the query indeed produces the expected result, 5 rows with the most recent first:
{
  "total_rows": 7,
  "offset": 0,
  "rows": [
    {"id":"821985c5140ca583e108653fb60949f3","key":1580050872339,"value":null},
    {"id":"821985c5140ca583e108653fb60945f4","key":1580050872336,"value":null},
    {"id":"821985c5140ca583e108653fb6094463","key":1580050872335,"value":null},
    {"id":"821985c5140ca583e108653fb6094309","key":1580050872334,"value":null},
    {"id":"821985c5140ca583e108653fb6093f47","key":1580050872333,"value":null}
  ]
}
Now assume the second query is as follows:
{
  limit: 5,
  descending: true,
  include_docs: false, // for brevity
  startkey: 1580050872333,
  skip: 5
}
startkey (the key of the last row of the prior result) is correct, but the skip parameter is literally skipping past the next (logical) set of rows. Specifically, with those parameters and the example view above, the query would blow past the remaining keys, resulting in an empty row set.
This is what is desired:
{
  limit: 5,
  descending: true,
  include_docs: false, // for brevity
  startkey: 1580050872333,
  skip: 1 // just skip the last doc (startkey)
}
[1] CouchDB Pagination Recipes, 3.2.5.5. Paging (Alternate Method)
Using startkey or skip, the returned results included some of the skipped rows as well, or all previous ones (strangely mixed up).
I solved it by extending the view key with a second part.
Since the key was based on a date without a time, the entries were rearranged on each request because of identical date timestamps. Adding a second part that was also sortable (I used the created timestamp) fixed it.
The key is now [datetimestamp, createdtimestamp] .. both can be sorted descending.
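A sketch of the corresponding map function; doc.created as the name of the creation-timestamp field is an assumption, since the question only shows header.date:

function (doc) {
  if (doc.published && doc.type == 'post') {
    // compound key: date timestamp first, created timestamp as tie-breaker
    emit([doc.header.date, doc.created]);
  }
}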
I have a model named Rest with lots of columns, and I only want to fetch a few of them while applying the near operator on that Rest model. This is my code:
Rest
  .select('rest_status rest_address rest_name rest_contact rest_photo rest_menu rest_avg_rating')
  .aggregate().near({
    near: [parseFloat(req.body.lng), parseFloat(req.body.lat)],
    maxDistance: 100000,
    spherical: true,
    distanceField: "dist.calculated"
  })
  .then(rests => {
    // const response = [];
    // for (const rest of rests) {
    //   console.log(rest);
    //   response.push(rest);
    // }
    res.send({rests, response_status});
  }).catch(err => res.send(err));
When I try it like this, I get an error that select is not a function. I tried changing select's position, such as below aggregate and near, but it didn't work. I'm new to Mongoose; please tell me if there is a function or workaround to fetch a limited set of columns from my model.
I forgot to mention: both near and select work fine when the other one is not used. Please also help me with shaping the data obtained from the model.
You could do something like this:

Rest.aggregate([
  {
    $geoNear: {
      near: {
        type: "Point",
        coordinates: [parseFloat(req.body.lng), parseFloat(req.body.lat)]
      },
      maxDistance: 100000,
      spherical: true,
      distanceField: "dist.calculated"
    }
  },
  {
    $project: {
      rest_status: 1,
      rest_address: 1,
      rest_name: 1,
      rest_contact: 1,
      rest_photo: 1,
      rest_menu: 1,
      rest_avg_rating: 1
    }
  }
  // you can also add $skip / $limit stages here if you need them
  // (note: $skip should come before $limit when paging)
  , { $skip: <Number> },
  { $limit: <Number> }
]
// depending on your MongoDB version you may need to set a cursor as well
, { cursor: { batchSize: <Number, or keep 0> } }
)
.then()
.catch();
Note: before executing the query, make sure you have added a 2dsphere index to the location field that $geoNear searches on (distanceField "dist.calculated" is only where the computed distance is written); a sketch of such an index declaration follows these notes.
In an aggregate, $project is used for what you wanted to do with .select.
Instead of using .near() I have used $geoNear.
If you want to use limit/skip you can follow the example; otherwise you can remove those stages.
You can also add the distanceMultiplier and includeLocs fields, depending on your requirements.
As noted above, depending on the MongoDB version you may need to use a cursor in aggregate; if not, you can go ahead without the cursor.
Hope this helps.
If you still get any errors, please comment.
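For reference, a minimal sketch of declaring that index in the Mongoose schema; the location field name here is an assumption (use whichever field holds your GeoJSON point):

const mongoose = require('mongoose');

const restSchema = new mongoose.Schema({
  location: {
    type: { type: String, enum: ['Point'], default: 'Point' },
    coordinates: [Number] // [lng, lat]
  }
  // ...the rest_* fields from the question
});

restSchema.index({ location: '2dsphere' }); // required by $geoNear

const Rest = mongoose.model('Rest', restSchema);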
Assume there's a collection with documents where field_1 is unique
[
  {
    field_1: 'abc',
    field_2: 0,
    field_3: []
  }
]
I want to add another document, but then field_1 is the same 'abc'. In that case I want to increment field_2 and append an element to field_3 while updating. And if field_1 is different, create another document.
What is the best way to approach such queries? My first thought was to search, then insert if no document is found, or is there a better way? The problem with this approach is that if I'm inserting multiple documents at once, I can't use the 'search and, if no doc found, insert' approach effectively.
Mongoose now supports this natively with findOneAndUpdate (which calls MongoDB's findAndModify).
The upsert: true option creates the object if it doesn't exist; it defaults to false.
MyModel.findOneAndUpdate(
  {foo: 'bar'}, // find a document with that filter
  modelDoc, // document to insert when nothing was found
  {upsert: true, new: true, runValidators: true}, // options
  function (err, doc) { // callback
    if (err) {
      // handle error
    } else {
      // handle document
    }
  }
);
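Applied to the shape from the question, a sketch using update operators rather than a replacement document (note: on the very first insert, $inc leaves field_2 at 1 rather than 0, because the increment is applied to a missing field):

MyModel.findOneAndUpdate(
  { field_1: 'abc' },
  { $inc: { field_2: 1 }, $push: { field_3: 'newElement' } },
  { upsert: true, new: true },
  function (err, doc) {
    // doc is the updated (or freshly inserted) document
  }
);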
If the uniqueness of field_1 is enforced by a unique index, you can use a kind of optimistic locking.
First you try to update:
db.collection.update(
  {
    field_1: 'abc'
  },
  {
    $inc: {field_2: 1},
    $push: {field_3: 'abc'}
  }
);
and check the result of the operation - if 1 document was updated, no more actions are required. Otherwise, it's the first document with field_1 == 'abc', so you try to insert it:
db.collection.insert(
  {
    field_1: 'abc',
    field_2: 0,
    field_3: []
  }
);
and catch the error. If there is no error, no more actions are required. Otherwise there was a concurrent insert, so you need to repeat the update query once more.
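Put together as a retry loop, a sketch using the Node.js driver (it assumes the unique index on field_1 is in place):

async function incrementOrInsert(collection) {
  for (;;) {
    const res = await collection.updateOne(
      { field_1: 'abc' },
      { $inc: { field_2: 1 }, $push: { field_3: 'abc' } }
    );
    if (res.matchedCount === 1) return; // existing doc updated, done

    try {
      await collection.insertOne({ field_1: 'abc', field_2: 0, field_3: [] });
      return; // first insert succeeded, done
    } catch (err) {
      if (err.code !== 11000) throw err; // 11000 = duplicate key error
      // a concurrent insert won the race; retry the update
    }
  }
}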
Let's say I have 1000 items like this:
{ "_id": "12345", "type": "item", "name": "whatever", "timestamp": 1481659373 }
And I have a view that grabs only a specific type.
View
function (doc) { emit([doc.type], doc._id); }
Parameters
startkey: ["item"]
endkey: ["item"]
include_docs: true
I know I can use offset and limit for pagination, but I'm trying to grab the most recent timestamps first, in descending order. I see that there is a descending option, but it looks like you can only set it to true, so I'm not sure how it behaves.
Does anyone have any guidance on how I could accomplish this?
If you modify your Map function to read:
function (doc) { emit([doc.type, doc.timestamp], doc._id); }
Then the keys in the view will be sorted by type and then by timestamp.
We can then query the view like so:
startkey: ["itemz"]
endkey: ["item"]
descending: true
include_docs: true
To get the most recent docs first. The descending flag indicates you want the items in reverse order, but with descending: true the start and end keys swap roles, so you also have to ensure that your startkey sorts after every key you want returned.
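An equivalent that avoids relying on the string "itemz" is to use {} as a high sentinel, since objects sort after any number or string in CouchDB's view collation:

startkey: ["item", {}]
endkey: ["item"]
descending: true
include_docs: true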
I create a search index using the function below:
function (doc) {
  if (doc.type === 'Property') {
    if (doc.Beds_Max) {
      try {
        index("Beds_Max", parseInt(doc.Beds_Max));
      } catch (err) {
        // oops
      }
    }
    if (doc.YearBuilt) {
      try {
        index("YearBuilt", parseInt(doc.YearBuilt));
      } catch (err) {
        // oops
      }
    }
  }
}
using the Cloudant dashboard (Design Documents -> New Search Index), and after the index is built I can issue queries like:
"YearBuilt": [2010 TO Infinity]
But if I try to query the same index using Cloudant Query, I see weird behavior. If I go to Cloudant Dashboard -> Query and pass something like
{"limit": 5,
"selector": {
"_id": {
"$gt": null
},
"Beds_Max": {"$gte": 7}
},
fields: ["_id"]}
I see a huge spike in data transmission: it keeps receiving huge amounts of data even for the most unusual queries, which are only supposed to return one or two results, and then it hangs my computer, which most probably is not right. When I use the pouchdb-find npm module, which has support for Cloudant 2.0 Query, and issue the same selector as above, I see inconsistent behavior: sometimes it returns 0 rows and sometimes it gives an ETIMEOUTERROR. If I change the index to exclude parseInt, I can query using the same pouchdb-find, and even Cloudant Dashboard -> Query, and get the results, but in that case I lose the ability to use inequality operators, which is a no-go for me.
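For reference, a sketch of the pouchdb-find call described above (the remote database URL is a placeholder):

const PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));

const db = new PouchDB('https://<account>.cloudant.com/<db>');

db.find({
  limit: 5,
  selector: {
    _id: { $gt: null },
    Beds_Max: { $gte: 7 }
  },
  fields: ['_id']
}).then(result => {
  // result.docs holds the matching documents
}).catch(err => {
  // sometimes 0 rows, sometimes a timeout, per the above
});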
I'm open to work-arounds and even altogether different features to achieve the desired result.