FourSquare explore API doesn't work with offset - foursquare

I am trying to hit the endpoint 'https://api.foursquare.com/v2/venues/explore' with the initial parameters
{v = '20180323',
 near = 'SFO',
 radius = 100000,
 section = 'topPicks',
 limit = 20}
and later on by adding offset = 20 to the above dictionary. I am expecting results 21-40 in the second response, but I am getting the same response as my first query.
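For reference, a minimal sketch of what the second request might look like (userless client_id/client_secret authentication is assumed here; the credential values and the response handling are placeholders):

const base = 'https://api.foursquare.com/v2/venues/explore';
const params = new URLSearchParams({
  client_id: 'CLIENT_ID',         // placeholder
  client_secret: 'CLIENT_SECRET', // placeholder
  v: '20180323',
  near: 'SFO',
  radius: '100000',
  section: 'topPicks',
  limit: '20',
  offset: '20' // omit (or set to 0) on the first request
});
fetch(`${base}?${params}`)
  .then(res => res.json())
  .then(json => console.log(json.response.groups[0].items));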

Related

Node.js compute gets slow after querying a big list from MongoDB

I am using Mongoose to query a really big list from MongoDB:
const chat_list = await chat_model.find({}).sort({uuid: 1}); // uuid is an index
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}); // create_time is an index of the message collection; time: t1
// chat_list length is around 2,000, msg_list length is around 90,000
compute(chat_list, msg_list); // time: t2
function compute(chat_list, msg_list) {
  for (let i = 0, len = chat_list.length; i < len; i++) {
    msg_list.filter(msg => msg.uuid === chat_list[i].uuid);
    // consistent handling for every message
  }
}
For the above code, t1 is about 46s and t2 is about 150s.
t2 is really too big, which is weird.
Then I cached these lists to local JSON files:
const chat_list = require('./chat-list.json');
const msg_list = require('./msg-list.json');
compute(chat_list, msg_list); // time: t2
this time, t2 is around 10s.
So here comes the question: 150 seconds vs 10 seconds, why? What happened?
I tried using a worker to do the compute step after the Mongo query, but the time was still much larger than 10s.
The MongoDB query returns a FindCursor that includes array-ish methods like .filter(), but the result is not an Array.
Use .toArray() on the cursor before filtering to process the MongoDB result set like for like. That might not make the overall process any faster, as the result set still needs to be fetched from MongoDB, but the compute step will then be comparable.
const chat_list = await chat_model
  .find({})
  .sort({uuid: 1})
  .toArray();
const msg_list = await message_model
  .find({}, {content: 1, xxx})
  .sort({create_time: 1})
  .toArray();
Matt typed faster than I did, so some of what was suggested aligns with part of this answer.
I think you are measuring and comparing something different than what you are expecting and implying.
Your expectation is that the compute() function takes around 10 seconds once all of the data is loaded by the application. This is (mostly) demonstrated by your second test, apart from the fact that that test includes the time it takes to load the data from the local files. But you're seeing that there is a difference of 104 seconds (150 - 46) between the completion of message_model.find() and compute() hence leading to the question.
The key thing is that successfully advancing from the find against message_model is not the same thing as retrieving all of the results. As #Matt notes, the find() will return with a cursor object once the initial batch of results are ready. That is very different than retrieving all of the results. So there is more work (apparently ~94 seconds worth) left to do from the two find() operations to further iterate the cursors and retrieve the rest of the results. This additional time is getting reported inside of t2.
As suggested by #Matt, calling .toArray() should shift that time back into t1, as you are expecting. It also sounds like it may be more correct, given the ambiguity between the cursor's and Array's .filter() functions.
There are two other things that catch my attention. The first is: why are you retrieving all of this data client-side to do the filtering there? Perhaps you would like to do this uuid matching inside of the database via $lookup?
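A rough sketch of what that could look like with Mongoose's aggregate (the 'messages' collection name is an assumption; adjust to your schema):

const chats = await chat_model.aggregate([
  { $sort: { uuid: 1 } },
  { $lookup: {
      from: 'messages',      // assumed collection name behind message_model
      localField: 'uuid',
      foreignField: 'uuid',
      as: 'messages'
  } }
]);
// each chat document now arrives with its matching messages attached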
Secondly, this comment isn't clear to me:
// create_time is an index of the message collection; time: t1
create_time itself is a field here, whether or not it exists, that you are requesting an ascending sort against.
You are taking data from 2 collections, then with a for loop you are comparing IDs using the filter function. What happens is that your loop executes 2,000 times, and so does the filter function, each pass scanning the 90,000 records.
Take the worst-case scenario: suppose none of the 2,000 uuids are found in msg_list; you would still execute the loop 2000 * 90000 times even though you get no data back.
It won't take more than 10 to 15 seconds if you use the code below.
//This will generate array of uuid present in message_model
const msg_list = await message_model.find({}, {content: 1, xxx}).sort({create_time: 1}).distinct("uuid");
// Below query will match all uuid present in msg_list array with chat_list UUID
const chat_list = await chat_model.find({uuid:{$in:msg_list}}).sort({uuid: 1});
The above does the same as your code with the filter function and loop, but it is the proper and fastest way to retrieve the data you require.
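As an aside, if every message really is needed client-side, the quadratic scan itself can also be avoided by indexing the messages by uuid once; a minimal sketch:

// Build a uuid -> messages map in one pass over msg_list (O(m)),
// so each chat lookup is O(1) instead of a filter over 90,000 records
const byUuid = new Map();
for (const msg of msg_list) {
  if (!byUuid.has(msg.uuid)) byUuid.set(msg.uuid, []);
  byUuid.get(msg.uuid).push(msg);
}
for (const chat of chat_list) {
  const msgs = byUuid.get(chat.uuid) || [];
  // consistent handling for every message
}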

How to power a windowed virtual list with cursor-based pagination?

Take a windowed virtual list with the capability of loading an arbitrary range of rows at any point in the list, such as in this following example.
The virtual list provides a callback that is called anytime the user scrolls to some rows that have not been fetched from the backend yet, and provides the start and stop indexes, so that, in an offset based pagination endpoint, I can fetch the required items without fetching any unnecessary data.
const loadMoreItems = (startIndex, stopIndex) => {
  fetch(`/items?offset=${startIndex}&limit=${stopIndex - startIndex}`);
};
I'd like to replace my offset based pagination with a cursor based one, but I can't figure out how to reproduce the above logic with it.
The main issue is that I feel like I will need to download all the items before startIndex in order to receive the cursor needed to fetch the items between startIndex and stopIndex.
What's the correct way to approach this?
After some investigation I found what seems to be the way MongoDB approaches the problem:
https://docs.mongodb.com/manual/reference/method/cursor.skip/#mongodb-method-cursor.skip
Obviously the same approach can be adopted by any other backend implementation.
They provide a skip method that allows skipping an arbitrary number of items after the provided cursor.
This means my sample endpoint would look like the following:
/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}
I then need to figure out the cursor and the skip values.
The following code could work to find the closest available cursor, given I store them together with the items:
// Limit our search only to items before startIndex
const fragment = items.slice(0, startIndex);
// Find the closest cursor index
const cursorIndex = fragment.length - 1 - fragment.reverse().findIndex(item => item.cursor != null);
// Get the cursor
const cursor = items[cursorIndex].cursor;
And of course, I also have a way to know the skip value:
const skip = startIndex - 1 - cursorIndex;
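Putting the pieces together, the callback might look like this (a sketch against the hypothetical /items endpoint above, assuming fetched items carry a cursor property):

const loadMoreItems = (startIndex, stopIndex) => {
  const fragment = items.slice(0, startIndex);
  const reversedIndex = fragment.reverse().findIndex(item => item.cursor != null);
  if (reversedIndex === -1) {
    // No known cursor before startIndex: fall back to skipping from the start
    fetch(`/items?skip=${startIndex}&limit=${stopIndex - startIndex}`);
    return;
  }
  const cursorIndex = startIndex - 1 - reversedIndex;
  const cursor = items[cursorIndex].cursor;
  const skip = startIndex - 1 - cursorIndex; // items to skip past, after the cursor
  fetch(`/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}`);
};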

How to limit .once('value') in firebase-admin node.js

How do I limit .once('value') in firebase-admin?
Code:
async function GetStuff(limit, page){
  const data = await ref.limitToFirst(parseInt(limit)).once('value');
  return data.val();
}
I want to create a page system, where it sends a request for a limited amount of data, and the user can change the page to get different data, but for some reason I can't get it to work.
The code above only gets the first 20 (when limit is 20); how can I make it start at 20, so I can build this page feature?
I thought:
Code:
async function GetStuff(limit, page){
  const data = await ref.startAt(limit * page).limitToFirst(parseInt(limit)).once('value');
  return data.val();
}
You might want to review the relevant documentation. It looks like you're trying to pass the offset of a child to startAt, but that's not how startAt works. It accepts the actual value of the child to start at. Pagination by offset index is not supported.
The way you use startAt is typically to pass the last sorted value retrieved by the prior query (or, if you don't want to retrieve that value again, 1 + that value, or a string that is lexically greater than the last string received). As such, some data sets might actually be difficult to paginate if they have the same sorted value repeated many times.
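A minimal sketch of that pattern, ordering by key (the getPage helper and the extra-item trick are illustrative, not a prescribed API):

async function getPage(limit, lastKey) {
  let query = ref.orderByKey();
  if (lastKey) {
    // startAt is inclusive, so request one extra child and drop the duplicate
    query = query.startAt(lastKey).limitToFirst(limit + 1);
  } else {
    query = query.limitToFirst(limit);
  }
  const snap = await query.once('value');
  const entries = Object.entries(snap.val() || {});
  return lastKey ? entries.slice(1) : entries;
}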

Extract the last n results from a Paged Search

I am attempting to get the last 200 results from a paged search in SuiteScript 2.0. When I run the simple code I get the error
"name":"INVALID_PAGE_RANGE","message":"Invalid page range: fetch."
What exactly am I doing wrong?
The below code was run in the NS debugger (I have removed some code for brevity):
function(ui, email, runtime, search, file, config, format, record, log) {
  var mySearch = search.load({
    id: 'customsearch_mht_lrf_export_to_lab'
  });
  // 200 results per page (am I correct here?)
  var results = mySearch.runPaged({pageSize: 200});
  var count = results.count; // = 264
  // Obtain the last 200 results. From the documentation:
  // index is 'The index of the page range that bounds the desired data.'
  // Error occurs on the next line
  var data = results.fetch({index: results.count});
  var x = 0;
});
(I've already answered this on the Slack group, but I'll copy my answer here in case someone one day has this question and comes across the post).
The index parameter that you pass to results.fetch is the index of the "page" of data that you want. In your example above, where you have 264 results and your page size is 200, there would be 2 pages of results. Results 1 - 200 would be on the first page (index = 0), and 201 - 264 on the second page.
In order to get the last 200 results, you will always need to retrieve the last 2 pages (unless the result count is an exact multiple of 200), and then just look at the final 200 of those results.
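For example, a sketch building on the search loaded above:

var pagedData = mySearch.runPaged({pageSize: 200});
var lastIndex = pagedData.pageRanges.length - 1;
var results = [];
// Fetch the last two pages (or just the one, if only one page exists)
for (var i = Math.max(0, lastIndex - 1); i <= lastIndex; i++) {
  results = results.concat(pagedData.fetch({index: i}).data);
}
// Keep only the final 200 results
var last200 = results.slice(-200);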

Dgrid OnDemandGrid virtual scrolling

I am using the dgrid OnDemandGrid with a JsonRest store. On scrolling, I am fetching 40 records at a time from the database.
var grid = new OnDemandGrid({
  store: jsonstore,
  columns: Layout,
  minRowsPerPage: 40,
  maxRowsPerPage: 40,
  loadingMessage: "Loading data...",
  noDataMessage: "No results found."
}, "grid");
The first time, I get the response header
Content-Range: items=0-39/132
On further scrolling, the response header is
Content-Range: items=38-78/132
instead of 40-79/132. Can someone tell me how to get the responses as 40-79/132, 80-119/132, etc.?
Add queryRowsOverlap: 0 to the object you're passing to the grid constructor.
queryRowsOverlap defaults to 1, and is the reason the queries overlap. This property is intended to counteract issues with dojo/store/Observable "dropping" items at page boundaries, though it isn't a perfect solution.
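Applied to the constructor from the question, that looks like:

var grid = new OnDemandGrid({
  store: jsonstore,
  columns: Layout,
  minRowsPerPage: 40,
  maxRowsPerPage: 40,
  queryRowsOverlap: 0, // stop adjacent range requests from overlapping by one row
  loadingMessage: "Loading data...",
  noDataMessage: "No results found."
}, "grid");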
