I've written a batch job in Node.js (version 0.10.33) to fetch the feed items of a particular Facebook user id; the code uses the node module fb, version 0.7.0 (https://www.npmjs.com/package/fb).
The batch request runs every 5 minutes.
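For reference, the batch boils down to something like this (a minimal sketch; the access token, user id, and setInterval scheduling stand in for my real setup):

var FB = require('fb'); // node module fb 0.7.0

setInterval(function () {
  // GET /<user-id>/feed returns the most recent feed items;
  // the Graph API calls the timestamps created_time/updated_time
  // (referred to as created_at/updated_at below)
  FB.api(USER_ID + '/feed', { access_token: ACCESS_TOKEN }, function (res) {
    if (!res || res.error) {
      console.error(res ? res.error : 'no response');
      return;
    }
    res.data.forEach(function (item) {
      console.log(item.id, item.created_time, item.updated_time);
    });
  });
}, 5 * 60 * 1000); // every 5 minutes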
The Facebook API response usually returns many feed items, with a delay of a few minutes between the request time and the items' created_at/updated_at values.
However, some items that are visible on the user's page at request time are sometimes missing from the response.
After many requests and many hours I finally get those missing items, but the strange thing is that their created_at/updated_at values are the same as their publishing time.
Any ideas?
I finally found the cause: the admin of a Facebook page can decide to post feed items on any date (past or future) he wants, via the "RESCHEDULE" or "PROGRAMMED" functions.
So even if an item is written now (call it time1), it appears on the user's page at the chosen date (call it time2); the creation date on the page is time1, and the Facebook API returns created_at/updated_at = time1 for that item.
When my batch runs at time2, it gets back created_at/updated_at = time1 for that item: this is the real reason I often see a big delay between an item's publishing date and the batch's request time.
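Given that, timestamps are unreliable for spotting new items; here is a sketch of the workaround, deduplicating by post id instead (seenIds stands in for whatever persistent store the batch uses):

var seenIds = {}; // stand-in for a persistent store of already-processed post ids

function handleFeed(items) {
  items.forEach(function (item) {
    if (seenIds[item.id]) return; // already processed, regardless of timestamps
    seenIds[item.id] = true;
    // a rescheduled post shows up here at time2 even though created_time is time1
    process(item); // hypothetical downstream handler
  });
}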
I have prepared a simple demo with react-router-dom 6 and react query.
I have a couple of routes and a fetch call that takes place on the first route (Home).
What I want to achieve is to navigate to the About page (or any other page) without performing another request for a certain time period (maybe never again), but if I refresh the page I want to be able to re-trigger the request to get the data.
I have tried using staleTime, but if I refresh the page I get no results, just a blank page. refreshInterval works on refresh but does not keep the data when I change routes.
I have also tried the pattern in this article, but it still doesn't get the job done.
It's probably something I don't understand, but the question is: how do I avoid making the same request over and over, performing it only once, while still being able to get the data on a page refresh when navigating between different routes?
Demo
The solution eventually came from one of the maintainers on the official GitHub repo: add placeholderData as an empty array instead of initialData, and set staleTime to Infinity (in my case, since I only want to perform the request once).
Setting placeholderData normally gives you the opportunity to show some dummy data until you fetch the real thing, but in my case it does the job. More to read regarding this at this source
// React Query v4+ imports from "@tanstack/react-query" (v3 used "react-query")
import { useQuery } from "@tanstack/react-query";

const { isFetching, data: catImages } = useQuery({
  queryKey: ["catImages"],
  queryFn: getCatImages,   // the fetch function defined elsewhere in the demo
  placeholderData: [],     // render an empty list instead of undefined on first mount
  staleTime: Infinity      // never mark the data stale, so it is fetched only once per page load
});
How can you manage an endless scroll using a real-time server connection, instead of polling each time the user comes to the end of the list, with Node.js and React?
Thanks
Use pagination on the server and manage it on the frontend: when the user reaches the end of the list, fetch the next page and append that data to your reactive/state variable; it will then be rendered on the FE.
Fetching everything at once isn't feasible when the number of records is in the 5-6K+ range. A single fetch could take more than a minute, and a user who has to wait that long on the same page may reload it or leave. You need to call the API page by page for better UI/UX.
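A minimal React sketch of that approach (the /api/items?page= endpoint, the page size, and the record shape are assumptions; a real implementation would also guard against firing while a fetch is still in flight):

import { useEffect, useState } from "react";

function ItemList() {
  const [items, setItems] = useState([]);
  const [page, setPage] = useState(1);

  // fetch one page whenever `page` changes and append it to the list
  useEffect(() => {
    fetch(`/api/items?page=${page}`)   // hypothetical paginated endpoint
      .then((res) => res.json())
      .then((data) => setItems((prev) => prev.concat(data)));
  }, [page]);

  // when the user scrolls near the bottom, ask for the next page
  useEffect(() => {
    const onScroll = () => {
      const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 100;
      if (nearBottom) setPage((p) => p + 1);
    };
    window.addEventListener("scroll", onScroll);
    return () => window.removeEventListener("scroll", onScroll);
  }, []);

  return <ul>{items.map((it) => <li key={it.id}>{it.name}</li>)}</ul>;
}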
You can use directives like this.
Just listen for scroll events; when the user reaches the end of the list, simply show something like this (use opacity to draw more focus).
If your API returns zero new data, then just show something like this.
Edit after comment:
No, you don't need to keep an open connection to the server; just call an API when you reach the end of the list, i.e. call only when you want to fetch more data from the server.
If you fetch live data and think duplicates might be present, you can filter on the backend and remove them by matching on a unique key. If that's not feasible, you can also remove them on the FE, since the FE holds far less data than the BE.
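For the FE-side dedup, a one-function sketch (assuming each record carries a unique id):

function mergePage(existing, page) {
  // drop any incoming records whose id we already hold
  var seen = new Set(existing.map(function (r) { return r.id; }));
  return existing.concat(page.filter(function (r) { return !seen.has(r.id); }));
}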
Similarly, if you're calling a third-party API to get the data, first fetch all the records (if you can) and store them in your own DB, then call an API from your FE that reads from your own DB; that makes for simple and fast rendering.
I am using SuiteScript 1.0 and a Suitelet to load a list of associated assembly items; the user then checks which ones to modify. I modify them, but I am running up against the script usage (governance) limit. So I offloaded this to a RESTlet and set up user auth for the Authorization header (username/password), and now I am getting an
"INVALID_HOST", "message" : "Invalid host debugger.sandbox.netsuite.c" message in the response from nlapiRequestURL. I am calling the script from a Suitelet; how can the host be invalid? Any help would be great, thanks
I'd need more info to solve the Invalid Host issue; it could be caused by something in the code, as I noticed the URL is incomplete. If you paste it I may be able to help you further.
Taking a step back to what you are actually trying to achieve, you can proceed in two ways:
1: Process the user input immediately. (The user would have to wait until the process ends)
2: Schedule a batch job. (This is the recommended option for large operations)
If you decide to go with option 1 (Process Immediately), then I suggest setting up a client-side function attached to a button on the form. Additionally, your Suitelet would need to accept POST requests to receive the data and process it. The client-side function would loop over the items and pass each one to the Suitelet via ajax calls (don't use nlapiRequestURL, as it consumes governance points). If you want to get fancy, you can even add a callback to your ajax call and display a progress bar, so that every time the Suitelet processes a record the bar advances, showing "Complete" at 100%.
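A rough sketch of that client-side loop (the script/deploy ids and the progress element are placeholders, and I'm assuming jQuery is available on the form; nlapiResolveURL is the SS 1.0 call for building the Suitelet URL):

function processCheckedItems(itemIds) {
  var url = nlapiResolveURL('SUITELET', 'customscript_my_suitelet', 'customdeploy_my_suitelet');
  var done = 0;
  itemIds.forEach(function (id) {
    // each ajax POST hands one item to the Suitelet for processing
    jQuery.post(url, { itemId: id }, function () {
      done++;
      var pct = Math.round((done / itemIds.length) * 100);
      jQuery('#progress').text(pct === 100 ? 'Complete' : pct + '%');
    });
  });
}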
If you go for option 2 (Schedule a batch job), then you can pass the data as a parameter to a Scheduled Script using nlapiScheduleScript(scriptId, deployId, params), process the data in a loop, and have it send an email to the user at the end. Preferably you would use a Map/Reduce script, but that's on SS 2.0.
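A sketch of kicking off option 2 (the script/deploy ids, the parameter name, and modItem are placeholders):

// in the Suitelet: hand the checked ids to the Scheduled Script
var params = { custscript_item_ids: JSON.stringify(itemIds) };
nlapiScheduleScript('customscript_mod_items', 'customdeploy_mod_items', params);

// in the Scheduled Script: read the parameter back and loop over it
var ids = JSON.parse(nlapiGetContext().getSetting('SCRIPT', 'custscript_item_ids'));
ids.forEach(function (id) {
  modItem(id); // hypothetical: whatever modification the Suitelet was doing
});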
I'm working on an Instagram scraper and I'm trying to figure out whether it's possible to get all photos for a tag that have an id or timestamp later than the last one I have.
The Instagram API docs are useless here, in that they don't have any real info on pagination (which I presume I'll have to abuse).
Does anyone have any ideas?
I've been slogging through the Instagram API for the last couple of days, so here's my 2 cents' worth:
As far as I can see, if you call the API with /tags/tag-name/media/recent, it only returns a list of items. If the amount exceeds about 25, you have to make another request with the pagination value returned in the previous request.
In order to gain some control, I initially iterate through all images and store the results (just the URL, not the actual image) in a database. Now I can manipulate them however I want. When I feel like updating (I'm doing it manually now, but it could be a cron job or use the real-time API), I re-read all the images, compare them to what I have in my DB, and add any new ones. My app then reads the URL and info from my DB (which, by the way, is a heck of a lot faster than going through the Instagram API, which only returns about 25 images per request, regardless of any 'count' parameter value you put in the request URL) and displays it.
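That paging loop boils down to something like this (a sketch against the old API; fetchJson stands in for whatever HTTP client you use, and saveUrl for the DB write):

async function collectTagMedia(tag, accessToken) {
  var url = 'https://api.instagram.com/v1/tags/' + tag +
            '/media/recent?access_token=' + accessToken;
  while (url) {
    var body = await fetchJson(url); // hypothetical JSON-fetching helper
    body.data.forEach(function (item) {
      saveUrl(item.id, item.images.standard_resolution.url); // store the URL, not the image
    });
    url = body.pagination && body.pagination.next_url; // undefined on the last page
  }
}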
I am developing this for a client who is afraid of people posting NSFW or similar pics using their dedicated hashtag (for a contest); with the above setup I can offer them an interface where they can check and mark images that are then displayed in the app.
One thing to watch out for is when a user deletes his picture; you will have to find a way to check for this. Currently (since I'm lazy) I load all images and use jQuery to check for an error loading the image. If there is one, I delete the image from the DB (via ajax).
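That lazy deletion check is roughly (deleteFromDb being the ajax call to my backend, and row a record from my DB):

// try to load each stored URL; if the image errors, the user has deleted it
$('<img>')
  .on('error', function () { deleteFromDb(row.id); }) // hypothetical ajax DB delete
  .attr('src', row.url);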
I'm not sure the pagination is going to help you: as far as I can see, the pagination response has no relation to the ids of the actual image objects on each page, so theoretically a pagination id that jumps to a certain page (i.e. date) might not work tomorrow if enough images have been deleted in the meantime.
To get all images instead of the latest 20, just append &count=-1 to your API call - it's that simple.
In either case, there is a timestamp on each JSON object - or, if you prefer, you can use max_tag_id.
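So an incremental fetch can keep the newest timestamp already stored and filter each page against it; a small sketch (created_time in the old API was a unix-timestamp string, and lastSeenTimestamp is whatever you persisted):

// keep only items newer than the last one we've already stored
var fresh = body.data.filter(function (item) {
  return Number(item.created_time) > lastSeenTimestamp;
});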
Check out my post here: Is there any way to show more than 20 photos of the Instagram API?
* Update April 2014: count=-1 is no longer available.
Some web design questions.
Combine POST with GET?
When the user clicks a button, I need to send one POST to submit a form, then GET a JSON object to replace some DOM fields. Should I combine them into one request to save one round trip?
Multiple GET JSON requests?
When the user clicks a button, I need to GET 3 or 4 JSON objects. Should I send 3 or 4 GET requests, or just one, combining the JSON into one large object on the back end?
So basically, I'm not sure which is heavier: round-trip time vs. slightly more complex back-end and front-end logic.
Any comments are welcome!
If I understood your question right, you have a dependency between your GET requests - that is, the second GET depends on the first. If that is the case, you obviously need to perform the GETs consecutively. However, if that is not the case - if you know the order of the GET requests and the responses won't be affected by local conditions - then I suggest you do the POST/GET operations on the server side: trigger the first one and let the server handle the rest and return the result.
Of course, you wouldn't want users making multiple GET requests for one simple operation.
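A minimal Express sketch of that server-side combination (saveForm and loadDomData are hypothetical helpers):

const express = require('express');
const app = express();
app.use(express.json());

// handle the form POST and return the JSON the page needs in the same response
app.post('/submit', async (req, res) => {
  await saveForm(req.body);         // the work the original POST did
  const data = await loadDomData(); // what the follow-up GET would have fetched
  res.json(data);                   // one round trip instead of two
});

app.listen(3000);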