How can I do pagination? - node.js

Which is better: making one request that fetches all articles for every page, or making a separate request for each page as it's needed? Right now I am using the second approach (a request per page). Which one is preferable?
P.S. After the request, all data will be written to Redux

It's usually better to paginate your results; otherwise you load a significant amount of data for nothing, which can be slow if the user has limited bandwidth. Loading very large quantities of data in a web browser can also slow down the browser itself in some cases.
If the call to fetch a single page takes too long when the user browses through multiple pages, you can load two pages at once: have your UI display the second page immediately when the user clicks 'next', while the request for the third page runs in the background. That way you keep a responsive UI while only loading what's necessary.
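For illustration, here is a minimal sketch of that prefetching pattern, assuming a hypothetical /api/articles?page=N endpoint and a Redux store with a made-up articles/pageLoaded action; the state shape (state.articles.pages) is also an assumption, so adapt the names to your own API and reducers.

// Hypothetical sketch: show the requested page, then prefetch the next one in the background.
const PAGE_SIZE = 20;

async function fetchPage(page) {
  const res = await fetch(`/api/articles?page=${page}&limit=${PAGE_SIZE}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

async function showPage(store, page) {
  // Use the prefetched copy if the store already has it, otherwise fetch it now.
  const cached = store.getState().articles.pages[page]; // assumed state shape
  const articles = cached || (await fetchPage(page));
  store.dispatch({ type: 'articles/pageLoaded', payload: { page, articles } });

  // Prefetch the following page so the next click feels instant.
  fetchPage(page + 1)
    .then((next) =>
      store.dispatch({ type: 'articles/pageLoaded', payload: { page: page + 1, articles: next } })
    )
    .catch(() => { /* ignore prefetch failures; that page will simply be fetched on demand */ });
}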

Related

How does a website serve a list of data to the user for page/slider content?

I'm specifically talking about what happens when a user goes to a web page with a list of data (articles, as an example). Does the website send every article (maybe in an array or something) and the client then filters the data to only show the articles for the page the client is currently viewing? I'm referring to back/next buttons that display different "pages" of data, not different actual web pages (changes in URL). Or, if it doesn't work like that, how does it work?
I am assuming your question is basically: how does the data on the page change without the URL changing, and how does that take place?
What happens behind the scenes is that for every action performed by the user a request is sent, and the corresponding data is retrieved and displayed, irrespective of the URL.
For example:
When the user lands on the page, the list of all articles is retrieved and displayed.
Now, if they apply a filter, another request is sent and the data received is displayed.
So, as you can see, this process is independent of changes in the URL.
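As a rough sketch of that request-per-action flow (the /api/articles?page=N endpoint, the #next/#back button ids and the renderArticles helper are all hypothetical):

// Hypothetical sketch: page through data without changing the URL.
let currentPage = 1;

async function loadPage(page) {
  const res = await fetch(`/api/articles?page=${page}`);
  const articles = await res.json();
  renderArticles(articles);   // redraw the list in place; the URL never changes
  currentPage = page;
}

document.querySelector('#next').addEventListener('click', () => loadPage(currentPage + 1));
document.querySelector('#back').addEventListener('click', () => loadPage(Math.max(1, currentPage - 1)));

loadPage(1);   // initial load when the user lands on the page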

Getting very slow ajax responses when requesting product page

I'm building a Chrome extension for the Steam community, and this extension has some functions to get the user's games list and scrape each game's product page to get some information about it.
All of the requests the extension makes to the API are almost instant, but when I try to load an app page (for example http://store.steampowered.com/app/440/ ), the response is delayed by up to 40s.
In the end the page loads and the ajax request succeeds, but the waiting times are just too high.
I have noticed that this behaviour is not consistent: on other computers with the exact same extension code and internet connection the response is instant, and on the same computer with other browsers the delay isn't there either.
Why could this happen? I'm not starting multiple ajax calls, and the extension waits for each call to finish before making the next one.
Thanks

How do you prevent crawling from your web site?

I am running a website on IIS with more than 1000 paginated page links, and I want to prevent others from crawling/stealing these pages by running a crawler script to get the info page by page.
Is there any way to tell whether a request comes from a real user or from a script? Or maybe some filter for this at a higher level, before it reaches the request handling?
You can't prevent automated crawling.
You can make it harder to crawl your content automatically, but if you allow users to see the content, it can be automated (automating browser navigation is not hard, and a computer generally doesn't mind waiting a long time between requests).
One option is to require a single "user" (authenticated or not) to wait a minimal delay between requests (e.g. 1-5 seconds). That way generic crawling stops being useful (it would require some "user id" in the request plus a delay between requests), and someone would have to write custom crawling code for your site, which is clearly more time-intensive.
Note that writing a special crawler for your site may be seen as a "noble" challenge and significantly increase the incentive to create one (see, for example, the "how to make Google Maps available offline" questions).
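For illustration, here is a minimal sketch of that per-user delay, assuming an Express server; identifying clients by IP, the 2-second delay, the route name, and the in-memory map are all assumptions, and a real site would key on a session or auth token and use a shared store instead.

// Hypothetical sketch: enforce a minimal delay between requests per client.
const express = require('express');
const app = express();

const MIN_DELAY_MS = 2000;           // somewhere in the 1-5 second range suggested above
const lastRequestAt = new Map();     // client id -> timestamp of last request

app.use((req, res, next) => {
  // Here the client is identified by IP; a real site would use a session or auth token instead.
  const clientId = req.ip;
  const now = Date.now();
  const last = lastRequestAt.get(clientId) || 0;

  if (now - last < MIN_DELAY_MS) {
    return res.status(429).send('Too many requests, slow down.');
  }

  lastRequestAt.set(clientId, now);
  next();
});

app.get('/articles', (req, res) => {
  res.send('paginated content here');
});

app.listen(3000);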

How would one technically describe this desired website functionality(involving timeout, log-in)

On websites like eBay, if your session times out while, say, you were looking at a shoe, then when you come back to the page (after your sleep) you still see the shoe, but you're logged out.
However, once you log back in, you get directed to that shoe immediately.
I am thinking of describing it as: "After a timeout from the website, upon re-login go back directly to the page where the timeout happened."
But how is this functionality described technically (as in, what technologies would we use)? Also, is it something that needs a lot of resources?
Quite often it's done by having a "return to" URL passed along to the login page.
So... (logged in)
Visit the shoe page
Session times out
Either via JS on timeout or on the next page refresh, the user sees the logged-out shoe page
The login link on that page includes the URL of the shoe page, e.g. Login.php?ReturnTo=ShoePage.php
Note that this applies to websites. You've also added a web service tag, which is completely different - web services have no concept of a "current page".
If you decide instead to store the last page for the user in the DB, what happens if the last page visited is no longer valid? You'd also be adding one DB operation per page load (to update the last visited page); that's no real performance concern, but worth knowing. It's also slightly non-standard behaviour, so you'd need to make sure the user knows why they've been redirected.
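To make the "return to" idea concrete, here is a minimal sketch assuming an Express app with express-session; the /shoe/:id and /login routes are made up for illustration, and real code should validate the returnTo value against your own URLs to avoid open redirects.

// Hypothetical sketch: pass a "return to" URL through the login flow.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

// Protected page: if the session has expired, send the user to login with a returnTo parameter.
app.get('/shoe/:id', (req, res) => {
  if (!req.session.userId) {
    return res.redirect('/login?returnTo=' + encodeURIComponent(req.originalUrl));
  }
  res.send('Shoe page for item ' + req.params.id);
});

// After a successful login, go straight back to the page the user came from.
app.post('/login', (req, res) => {
  req.session.userId = req.body.username;      // real credential checks omitted
  const returnTo = req.query.returnTo || '/';  // validate this in production
  res.redirect(returnTo);
});

app.listen(3000);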

PageLoad not called in Sharepoint web part

I have a number of pages with custom SharePoint visual web parts. In the Page_Load of these web parts, I am doing some logic which I need to trigger every time the page loads. The problem is that when I use the browser back button, or JavaScript, to redirect the user to the previous page, Page_Load is not invoked; it seems like the page is being retrieved from a cache. Can this be disabled easily? Is there any other workaround to ensure that the code fires every time the page is rendered?
Using the back button will load the page from cache; you are correct about that.
To disable caching, you can set an "expires" = -1 meta tag in the head section of the page, but that seems a bit drastic just to fire some logic for a page.
I'd suggest using the jQuery document-ready approach rather than Page_Load. This will fire regardless of where the page information is loaded from.
$(document).ready(function() {
// Insert code here
});
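As a rough sketch of what that could look like, assuming the per-visit logic can run client-side; the /_layouts/MyWebPart/Refresh endpoint, the #myWebPart element, and the refreshWebPart() name are made up for illustration.

// Hypothetical sketch: move the per-visit logic into a client-side call made from document ready.
function refreshWebPart() {
  $.get('/_layouts/MyWebPart/Refresh', function (data) {
    $('#myWebPart').html(data);   // update the web part with fresh, uncached data
  });
}

$(document).ready(function () {
  refreshWebPart();
});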
