Load items while scrolling - Angular 4 and Node.js

I want to create a page with articles. I do not want to load all the articles at once, though (because there are a lot of them and they have images). I want something like Facebook or 9gag has: as you scroll, new items are automatically appended.
Can anyone point me in the right direction on how to approach this?
Should I request all the articles' JSON at once (from the server), or should I request them as I scroll?

You should load results as they are needed; the mechanism is generally called infinite scroll.
For Angular 4 you can look at https://github.com/orizens/ngx-infinite-scroll (I haven't tried it myself, but it looks like it will fit your needs).
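A minimal sketch of how this could look with that directive, assuming Angular ≥ 4.3's HttpClient and a hypothetical paged /api/articles endpoint on the Node.js server (the endpoint name and paging parameters are placeholders, not a real API):

    // Requires InfiniteScrollModule from ngx-infinite-scroll in the app's NgModule imports.
    import { Component } from '@angular/core';
    import { HttpClient } from '@angular/common/http';

    @Component({
      selector: 'app-articles',
      template: `
        <div infiniteScroll [infiniteScrollDistance]="2" [infiniteScrollThrottle]="300" (scrolled)="onScroll()">
          <article *ngFor="let article of articles">{{ article.title }}</article>
        </div>`,
    })
    export class ArticlesComponent {
      articles: any[] = [];
      private offset = 0;
      private readonly limit = 20;

      constructor(private http: HttpClient) {
        this.loadMore(); // load the first batch on startup
      }

      // Fired by ngx-infinite-scroll when the user scrolls near the bottom.
      onScroll(): void {
        this.loadMore();
      }

      private loadMore(): void {
        // Hypothetical paged endpoint; the server returns only the next slice of articles.
        this.http
          .get<any[]>(`/api/articles?offset=${this.offset}&limit=${this.limit}`)
          .subscribe(batch => {
            this.articles = this.articles.concat(batch);
            this.offset += batch.length;
          });
      }
    }

On the server side the query would then only select that slice (e.g. OFFSET/LIMIT in SQL), so neither the JSON nor the images for later articles are transferred until they are actually needed.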

Related

Obfuscate Images in EaselJS

Is there any way to protect your sprites on EaselJS?
Currently it is too easy to download the sprites.
In Chrome you can just open the developer tools and go to Resources to grab them.
I did some research before writing this and found this topic.
That could work nicely. Also, we wouldn't need to save the slices in a JSON file as he suggests if we use a shuffle seed.
However, I couldn't find anything in Node.js (back-end) to do this image shuffling.
I tried Node GM, but it looks too complicated to composite one image on top of another with (w, h, x, y, offsetX, offsetY).
I know there will always be a way to "hack" the resources, but I'd like to at least make it somewhat difficult.
One of the simple approaches is to encode the images to base64, store them as part of the JavaScript, and decode them at runtime. See:
Convert and insert Base64 data to Canvas in Javascript
But obviously this will increase download size.
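As a rough illustration of the idea (not EaselJS-specific), a sketch with a placeholder base64 string baked into the bundle and drawn onto a canvas at runtime:

    // Hypothetical base64 payload embedded in the script (placeholder, truncated).
    const SPRITE_B64 = 'iVBORw0KGgoAAAANSUhEUgAA...';

    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d')!;

    const img = new Image();
    img.onload = () => {
      canvas.width = img.width;
      canvas.height = img.height;
      ctx.drawImage(img, 0, 0); // decoded in memory; no separate .png request appears in the Resources panel
    };
    img.src = 'data:image/png;base64,' + SPRITE_B64;

The image data is still recoverable from the source or with a DevTools breakpoint, so this only raises the bar slightly.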
Personally, I would not go this route for "normal" applications or games, unless it is really justified or put on me as an external requirement. For example, one can easily extract assets from the android APK, but this does not seem an area of concern for most of the developers.
The user's browser downloads those images whether you want it to or not; otherwise it wouldn't be able to display them.
At any given time, any user can simply right-click any image on the site and choose Save As. You can't stop it, and you shouldn't try.
If you don't want people downloading your work, don't put it on the public facing internet.

Unable to extract data using Import.io from Amazon web page where data is loaded into the page via Ajax

Anyone know how to extract data from a webpage using Import.io where the data is loaded into the page via Ajax?
I am unable to extract data from the pages mentioned below.
There is no issue with extracting data from the first page, but how do I move on to extract data from the second page?
The URL is given below.
<http://www.amazon.com/gp/aag/main?ie=UTF8&asin=&isAmazonFulfilled=&isCBA=&marketplaceID=ATVPDKIKX0DER&orderID=&seller=A13JB7253Q5S1B>
The data on that page is deployed using an interesting mix of technologies; it relies heavily on server side code and Javascript. That type of page can be a challenge, however, there are always methods to get the data. For example, some sellers have a page like this:
http://www.amazon.co.uk/gp/node/index.html?ie=UTF8&marketplaceID=ATVPDKIKX0DER&me=A2WO1PQ2OIOIGM&merchant=A2WO1PQ2OIOIGM
Which is very easy to extract data from, even using the magic algorithm - https://magic.import.io/?site=http:%2F%2Fwww.amazon.co.uk%2Fgp%2Fnode%2Findex.html%3Fie%3DUTF8%26marketplaceID%3DA1F83G8C2ARO7P%26me%3DA2WO1PQ2OIOIGM%26merchant%3DA2WO1PQ2OIOIGM
I had to take off the redirect=true from the URLs before it would work - just an FYI.
Other times stores don't have such a URL; it's a bit of a pain, and their URLs can be tough to figure out.
We do help some of our enterprise customers build bespoke APIs when the data is very important to them, so do feel free to get in touch. I imagine a larger-scale workaround would be to create a dataset/API based on the categories you are interested in and then filter that larger dataset down (Python or CSV style) by seller name. That would probably work!
I managed to get a static dataset but no API. You can find that dataset at the following GUID: c7c63f1c-7081-4d4a-ad91-afe9789a6620
Thanks

Given a URL retrieve the largest image on that page with Node

I'm looking to build a feature into an Angular.js web app that allows a user to paste a url to an eCommerce site like Amazon or Zappos and retrieve the main product image from that page. My plan is to post the url to my express API and handle the image retrieval on the server.
My initial plan was to download the raw HTML, parse it with htmlparser, select all the image elements with soupselect, and retrieve their src attributes. Ideally I would like a solution that works across any site, not one that hardcodes values for a particular retailer's site (such as specific known CSS class names). One of the assumptions I made was that the largest image on the page would likely be the main product image, so I decided to try sorting the images by file size. My idea was to make an HTTP HEAD request to the src URL of each image and determine its size from the Content-Length header. So far this approach has worked well, but I would really like to avoid making so many HTTP requests, even if they are only HEAD requests.
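A rough sketch of that HEAD-request approach in Node, written here with cheerio and node-fetch instead of htmlparser/soupselect purely for brevity (the helper name and library choices are illustrative, not my actual implementation):

    import fetch from 'node-fetch';
    import * as cheerio from 'cheerio';

    // Fetch the page, collect <img> src values, and rank them by Content-Length.
    async function largestImageBySize(pageUrl: string): Promise<string | undefined> {
      const html = await (await fetch(pageUrl)).text();
      const $ = cheerio.load(html);

      // Resolve each src against the page URL so relative paths work.
      const srcs = $('img')
        .map((_, el) => $(el).attr('src'))
        .get()
        .filter((s): s is string => Boolean(s))
        .map(src => new URL(src, pageUrl).href);

      // One HEAD request per image; servers that omit Content-Length count as 0 here.
      const sized = await Promise.all(
        srcs.map(async src => {
          const res = await fetch(src, { method: 'HEAD' });
          return { src, length: Number(res.headers.get('content-length') ?? 0) };
        })
      );

      sized.sort((a, b) => b.length - a.length);
      return sized.length ? sized[0].src : undefined;
    }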
I feel there is a better way of doing this, would it be easier to use something like PhantomJS to load the entire page and parse it that way? I was trying to make this work as quick as possible and thus avoiding downloading all of the images. Does anyone have any suggestions?
I would think the best image to use isn't the one with the largest file size, but the image that is displayed largest on the page. PhantomJS might be able to help you determine that. Load the page, but instruct PhantomJS not to load images. Then pick the image element whose calculated dimensions are biggest. This will only work if the page uses CSS or width and height attributes on the img to give it dimension.
Alternatively, you could send the image URLs back to the client, and have the client fetch the images and figure out which is biggest. That limits the number of requests your server has to make, and it allows the user to quickly pick a different image if the largest isn't the best.
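A sketch of that client-side variant, assuming the server sends back the candidate image URLs: load each one into an Image object and compare natural dimensions.

    // Resolve each URL to its natural pixel area; failed loads count as zero.
    function measure(url: string): Promise<{ url: string; area: number }> {
      return new Promise(resolve => {
        const img = new Image();
        img.onload = () => resolve({ url, area: img.naturalWidth * img.naturalHeight });
        img.onerror = () => resolve({ url, area: 0 });
        img.src = url;
      });
    }

    async function pickLargest(urls: string[]): Promise<string | undefined> {
      const measured = await Promise.all(urls.map(measure));
      measured.sort((a, b) => b.area - a.area);
      return measured.length ? measured[0].url : undefined;
    }

This also gives you the runner-up URLs for free, which makes the "let the user pick a different image" fallback straightforward.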

Can you have "variables" in text in Google Sites?

Sorry, this is a bad question. I don't even know what the title should be. I'm a total noob at making websites so this might be easy to find but I just don't know the terminology to search for. I cannot find anything about how to do this...
What I want to do is have something like references/variables that I can use in a block of text, which would automatically get replaced with whatever value should be there. The best way I can think of to describe it: if I were using the site as a design doc for a game or something, I would be able to type [Title] or something similar on any page, and when the page loads that text would be replaced with whatever my Title is. That way, if I ever change titles, names, classes, races, places, items, etc., they would only have to be changed in one place and the change would be reflected everywhere.
I notice if I add a link to a page it will automatically use the Title of that page as the text of the link. That is almost exactly what I want. Except when I change the Title of the other page the text of the link remains as the original text. It doesn't get updated to the new Title and that is not at all what I want.
Also, I want to do this in Google Sites and as simply as possible. I don't really want to use a database. I was hoping Google Sites would have some kind of functionality for this.
I don't believe this is possible (on Google Sites) and likely you need to consider a hosted solution.
Quoting the answer from this relevant post:
You should consider hosting your solution using Google's App Engine instead of Google Sites. You can set it up so it uses PHP (see link below), you can configure it to use your domain name, and you get enough CPU, disk and bandwidth allowance to serve around five million page views for free each month; if you are serving more than that, their prices are extremely competitive.
Google App Engine: http://code.google.com/appengine/docs/whatisgoogleappengine.html
How to set up PHP using Google App Engine: http://blog.caucho.com/?p=187
Also I'm not sure how your PHP skills are but if you're unfamiliar with it then this should help to get you started.

TYPO3: How to count page impressions on every page with an extension

I need to count the page impressions of every page on a TYPO3 site and store them in the database.
So I think I need an extension that is called on every page impression and increments an 'impressions' column in the database row of the specific page.
I'm new to TYPO3 and new to extension development as well. Is there a way to include an Extbase extension on every page so that some PHP script gets called?
(Update)
I want to add more information:
I don't need a counter that counts all page impressions globally; the counter needs to be page-related. So it makes sense to extend TYPO3's pages table. Another requirement is that the extension should be built with Extbase.
I'm new to TYPO3 and new to extension development as well. Is there a way to include an Extbase extension on every page so that some PHP script gets called?
Once your plugin is configured you can include it with page.1234 < plugin.tx_yourextension_pi1 on any page. 1234 determines the position on your page.
The script should be USER_INT, so it is not cached (mind you, this will cost a lot of performance, as previously stated by #norwebian).
As you don't want to output anything, make sure the controller stays empty as well.
Did you do a quick search in the extension repository? Trying a search for "page counter" reveals four relevant extensions.
"Sys_stat" is the closest thing to an "official" solution, it is really just enabling a few settings already existent. It has been reported to fill up the database with too much data, though.
"Generic Visitor Counter" would be my favourite, I believe (if I was going for a page counter at all), it is recently updated and seems simple enough.
You should really consider a proper stats extension, though. Both ics_awstats and ke_stats have been in my toolset.
YMMV. Be aware that if your site is popular, stats gathering quickly gets out of hand. On the other hand, if you go for a simple counter, including uncached extensions will cost performance.
I am not sure I really understood what you want and need. After all, page impressions are not the same as page views, though off the top of my head I couldn't say how you'd tell them apart on the page itself. So am I right in assuming that you mean page views?
If yes: I would take the following approach:
A separate, self-contained extension with a JavaScript file for asynchronously calling an API, and a table for storing page views / page impressions.
Every page globally includes a JavaScript file that initializes itself.
Once the DOM is ready, it sends a call to an AJAX API endpoint with the URL of the page as a parameter.
The endpoint takes only the URL.
For each unique URL, a record with a counter is created or updated.
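A minimal sketch of the page-side part of that idea; the endpoint path is a placeholder for whatever route the extension would actually register:

    // Runs once the DOM is ready and reports the current URL to a counting endpoint.
    document.addEventListener('DOMContentLoaded', () => {
      const payload = new URLSearchParams({ url: window.location.pathname });

      // Hypothetical endpoint exposed by the counting extension.
      fetch('/api/pageview', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: payload,
        keepalive: true, // let the request finish even if the user navigates away
      }).catch(() => {
        /* a failed count should never break the page */
      });
    });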
Extending the pages table doesn't make sense to me. What would you do with a website that consists of news overviews, news details, press and blog sections, a dealer search and a store with product pages?
I would keep the statistics table standalone.
If you expand the table a bit and add date and time - rather than a simple increment of hits - you can even identify the hottest pages of the week, the month, etc.
--
My approach won't increase/delay page load time much, if at all, and will have little noticeable impact even on heavily requested websites.
With the AJAX endpoint, it's then up to you how you deploy it and how much of the CMS framework you want to load.
