I need to scroll blog posts / latest news infinitely in a browser.
The way it should work: I get the first 20 posts from the server as a list and render the first one in the browser. When the user scrolls to within some height x of the end of the page, the next post from the list should load. While loading the next post I need to make calls to analytics and advertisements, and also change the browser URL to the new post's title. Once I reach the 20th post, I need to call the server for the next 20 posts, and this continues.
My question is: what libraries are available to me for building a POC of this?
How do I compare them, and which one should I choose?
I need to build this project in Node.js, and I am new to Node.js. Any available demos would help too.
Since you are interested in serving the latest data, this can be achieved with server-side pagination: query the latest blog posts limited to 20, and keep track of a page cursor (i.e. the point from which the next query will fetch the next 20 blog posts). You are building in Node.js, so I assume your database is MongoDB (assuming the MEAN stack). You could write your own pagination logic, but why reinvent the wheel? Ready-made solutions such as mongoose-paginate are available. This completes the back-end part.
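Here is a minimal sketch of that back end, assuming Express, Mongoose and the mongoose-paginate plugin; the Post schema and its fields are hypothetical placeholders:

const express = require('express');
const mongoose = require('mongoose');
const mongoosePaginate = require('mongoose-paginate');

// Hypothetical blog-post schema; plugging in mongoose-paginate adds Post.paginate()
const postSchema = new mongoose.Schema({ title: String, body: String, createdAt: Date });
postSchema.plugin(mongoosePaginate);
const Post = mongoose.model('Post', postSchema);

const app = express();

// GET /posts?page=2 -> posts 21-40, newest first
app.get('/posts', function (req, res) {
    const page = parseInt(req.query.page, 10) || 1;
    Post.paginate({}, { page: page, limit: 20, sort: { createdAt: -1 } })
        .then(function (result) {
            res.json(result.docs); // result also carries total, page and pages
        })
        .catch(function (err) {
            res.status(500).json({ error: err.message });
        });
});

app.listen(3000);

The front end only ever asks for one page at a time, so the page number doubles as the cursor.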
On front-end there are various plug-ins available for various frameworks such as:
1 - If you don't want to use any plugin:
$(window).scroll(function () {
    // Fire when the viewport bottom is within 10px of the end of the document
    if ($(window).scrollTop() >= $(document).height() - $(window).height() - 10) {
        // Append the newly fetched posts at the end of the page
        // (see the URL/analytics sketch below the list)
    }
});
2 - In Angular use angular-ui pagination or ngInfiniteScroll
3 - In jQuery use infinite-scroll or jScroll
Here is a tuts+ tutorial: How to Create Infinite Scroll Pagination.
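Whichever option you pick, the URL/analytics part of your workflow is framework independent. A minimal sketch of the bookkeeping when a post becomes the active one (the tracker call is a hypothetical placeholder):

function activatePost(post) {
    document.title = post.title;
    // Change the address bar without reloading the page
    history.replaceState({ id: post.id }, post.title, '/posts/' + post.slug);
    // Fire your analytics/advertisement calls here, e.g. a hypothetical tracker:
    // tracker.pageview('/posts/' + post.slug);
}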
Happy Helping!
Related
I want to create a page with articles. I do not want to load all the articles at once though (because there are a lot of them and they have images). I want something like Facebook or 9gag have: as you scroll, items are automatically appended.
Can anyone point me in the right direction on how to approach this?
Should I request all the articles' JSON at once (from the server), or should I request them as I scroll?
You should load results as they are needed; the mechanism is generally called infinite scroll.
For Angular 4 you can look at https://github.com/orizens/ngx-infinite-scroll (I haven't tried it myself, but it looks like it will fit your needs).
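If you want to try the mechanism without a library first, here is a minimal framework-free sketch using the browser's IntersectionObserver API; the /articles endpoint and renderArticle() are hypothetical placeholders:

let page = 1;
const sentinel = document.querySelector('#sentinel'); // an empty div placed after the article list

const observer = new IntersectionObserver(function (entries) {
    if (entries[0].isIntersecting) {
        fetch('/articles?page=' + page)
            .then(function (res) { return res.json(); })
            .then(function (articles) {
                articles.forEach(renderArticle); // append each article above the sentinel
                page += 1;
            });
    }
});
observer.observe(sentinel);

The observer fires whenever the sentinel scrolls into view, so each batch of appended articles pushes it back down and re-arms the trigger.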
I'm writing a quick python app to get stats on my public GitHub project.
When I call (https://api.github.com/repos/user/project/pulls), I get back some JSON, but because my project has more than 30 outstanding PRs, I get a Link response header with the next and last URLs to call to get all the PRs.
However, when I perform a parallel query for issues with a certain label (https://api.github.com/repos/user/project/issues?labels=label&status=opened), I only get 30 back (the pagination limit), but my response header doesn't have a next Link in it for me to follow. I know my project has more than 30 issues that match that label.
Is this a bug in the GitHub API, or in what I'm doing? Alternatively, I don't actually care about the issues themselves, just the count of issues with that label, so is there another way to just query for the count?
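For what it's worth, here is a sketch of following the Link header by hand (in JavaScript rather than Python, with deliberately naive header parsing):

async function fetchAllPages(url, results) {
    results = results || [];
    const res = await fetch(url, { headers: { 'User-Agent': 'stats-script' } }); // GitHub requires a User-Agent
    const items = await res.json();
    results.push(...items);
    // Link: <https://api.github.com/...&page=2>; rel="next", <...>; rel="last"
    const link = res.headers.get('Link') || '';
    const next = link.split(',').find(function (part) { return part.indexOf('rel="next"') !== -1; });
    if (next) {
        const nextUrl = next.slice(next.indexOf('<') + 1, next.indexOf('>'));
        return fetchAllPages(nextUrl, results);
    }
    return results;
}

If you only need the count, the Search API may be the cheaper route: a query like https://api.github.com/search/issues?q=repo:user/project+label:label+state:open returns a total_count field alongside the first page of items.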
I'm trying to do a search on the v3 api using this url:
https://www.googleapis.com/youtube/v3/search?part=id,snippet&channelId=UCtVd0c0tGXuTSbU5d8cSBUg&maxResults=10&order=date&q=game&key=[API_KEY]
but this returns only one playlist to me.
When I do the same search on the YouTube site directly, it returns more results:
https://www.youtube.com/user/YouTubeDev/search?query=game
Why does this happen? Is there something wrong with what I'm doing?
We ran into a similar issue when we tried to search for large amounts of content. It is especially evident if you set the time range you're looking for (using publishedAfter and publishedBefore) to a very small window, say one hour. Even with very small result sets (when we tried it you could only paginate about 20 times through the API using pageToken, so this was when totalResults was under 1,000), we were actually finding as few as 540 items.
We reached out to YouTube, and our contacts there confirmed that totalResults is just an estimate and is not actually accurate. You may get up to the number of items specified, but there is no guarantee that you will get exactly that many. Your best bet is to capture as much as you can and scan for data using a different time range.
Source: Reddit
In the first one you are using the search->list method, which searches within the channel.
In the second one you are doing a playlist search inside the channel.
You can do the same via the API with playlists->list.
(Or, if you want the videos inside the channel directly, use videos->list.)
Might be a bug. If so and not yet filed, you can file it here: https://code.google.com/p/gdata-issues/issues/list?q=label%3aAPI-YouTube
The problem seems to be caused by the parameter order=date.
Adding order to the "YouTube query" (using the channel) makes no difference: https://www.youtube.com/channel/UCtVd0c0tGXuTSbU5d8cSBUg/search?query=game&order=date. However, omitting order from the "API request" gives the same result (6 items): https://www.googleapis.com/youtube/v3/search?part=id,snippet&channelId=UCtVd0c0tGXuTSbU5d8cSBUg&maxResults=10&q=game&key=YOUR-API-KEY-HERE
Note that with order=date in the API request only 1 item is shown, while the same response shows totalResults: 6 (which seems to be right). I did not try all the values, but using order=relevance does not give this problem.
I've started to develop a Chrome extension to navigate and perform actions on a website. So far the extension is able to receive a couple of parameters, check a set of radio buttons, fill in a few inputs of a form and then submit it.
What I want to do now is repeat the process, but I'm stuck when the page is reloaded: I don't know how to make the script react to the request finishing.
The workflow I want to achieve is the following (it is for automatically copying a certain object):
Popup side
Enter the number of the Master object to copy
Enter the base name of the copies (for example Mod, so that I can iterate and add mod1, mod2, ... modn)
Enter the number of copies
Background side
Select master
Select standard options
Fill in inputs
Submit form
Wait for the page to complete the request, then continue with the next copy (this is where I need help).
The problem is the repetition; the rest is taken care of. I assume there must be a way of dealing with requests. Any ideas?
By the way, I'm doing it all with Google Chrome's extension and tabs APIs, plus JavaScript and jQuery.
OK, I'm going to answer the question myself, based on Matthew Getner's comment. chrome.webRequest.onCompleted was the solution to the problem. With this method I was able to wait for the request to complete and start over with the process, and with the messaging methods I achieved the communication between the background and the extension itself. So I was finally able to fill in a form, send it, and repeat. This way I've made a kind of robot to help a co-worker with a tedious repetitive task on an aged web platform.
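For anyone landing here, a rough sketch of that pattern in the background script; the host URL, the message shape and the startNextCopy() helper are hypothetical, and the webRequest permission plus host permissions must be declared in the manifest:

let copiesLeft = 0;

// The popup sends a message to kick off the batch of copies
chrome.runtime.onMessage.addListener(function (msg) {
    if (msg.type === 'startCopies') {
        copiesLeft = msg.count;
        startNextCopy(); // hypothetical helper: selects the master, fills in the form, submits it
    }
});

// Fires when the page load triggered by the form submission has finished
chrome.webRequest.onCompleted.addListener(function (details) {
    if (copiesLeft > 0) {
        copiesLeft -= 1;
        startNextCopy();
    }
}, { urls: ['https://legacy-platform.example/*'], types: ['main_frame'] });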
I need to count the page impressions of every page on a TYPO3 site and store them in the database.
So I think I need an extension which is called on every page impression and increments an 'impressions' column for that specific page in the database.
I'm new to TYPO3 and new to extension development as well. Is there a way to include an Extbase extension on every page so that some PHP script gets called?
(Update)
I want to add more information:
I don't need a counter which counts all PIs. The counter needs to be page-related, so it makes sense to extend the pages table in TYPO3. Another requirement is that the extension should be built with Extbase.
I'm new to TYPO3 and new to extension development as well. Is there a way to include an Extbase extension on every page so that some PHP script gets called?
Once your plugin is configured you can include it on any page with page.1234 < plugin.tx_yourextension_pi1, where 1234 determines the position on your page.
The script should be a USER_INT object so that it is not cached (mind you, this will cost loads of performance, as previously stated by #norwebian).
As you don't want to output anything, make sure the controller stays empty as well.
Did you do a quick search in the extension repository? Trying a search for "page counter" reveals four relevant extensions.
"Sys_stat" is the closest thing to an "official" solution, it is really just enabling a few settings already existent. It has been reported to fill up the database with too much data, though.
"Generic Visitor Counter" would be my favourite, I believe (if I was going for a page counter at all), it is recently updated and seems simple enough.
You should really consider a proper stats extension, though. Both ics_awstats and ke_stats have been in my toolset.
YMMV. Be aware that if your site is popular, stats gathering quickly gets out of hand. On the other hand, if you go for a simple counter, including uncached extensions will cost performance.
I am not sure I really understood what you want and need. After all, page impressions are not the same as page views, though I couldn't say offhand how to tell them apart on the page itself. So am I right in assuming that you mean page views?
If yes: I would take the following approach:
A separate, autonomous extension with a JavaScript file for asynchronously calling an API, and a table for storing page views / page impressions.
Each page globally includes a JavaScript file that initializes itself.
Once the DOM is ready, it sends a call to an AJAX API endpoint with the URL of the page as a parameter.
The endpoint takes only the URL.
For each unique URL, a record including a counter is created or updated (see the sketch after this list).
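A minimal sketch of that script; the /pageview-api endpoint path is a hypothetical placeholder:

document.addEventListener('DOMContentLoaded', function () {
    // Report this page view asynchronously; the endpoint creates or
    // increments the counter record for the given URL
    fetch('/pageview-api', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ url: location.pathname })
    });
});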
Extending the pages table doesn't make sense to me. What would you do with a website that consists of news overviews, news details, press and blog sections, a dealer search and a store with product pages? Many of those URLs are rendered by a single page record, so a per-page counter couldn't tell them apart.
I would keep the statistics table standalone.
If you expand the table a bit and add date and time (rather than a simple increment of a hit counter), you can even identify the hottest pages of the week, the month, etc.
--
My approach won't increase/delay page load time much, if at all, and will have little noticeable impact even on heavily requested websites.
With the AJAX endpoint, it's then up to you how you deploy it and how much of the CMS framework you want to load.