Best Approach to Scrape Paginated Results using import.io - pagination

There are several websites within the cruise industry that I would like to scrape.
Examples:
http://www.silversea.com/cruise/cruise-results/?page_num=1
http://www.seabourn.com/find-luxury-cruise-vacation/FindCruises.action?cfVer=2&destCode=&durationCode=&dateCode=&shipCodeSearch=&portCode=
In some scenarios, like the first one shown, the results pages follow a pattern - ?page_num=1...17. However, the number of results will vary over time.
In the second scenario, the URL does not change with pagination.
At the end of the day, what I'd like to do is to get the results for each website into a single file.
Q1: Is there any alternative to setting up 17 scrapers for scenario 1 and then actively watching as the results grow/shrink over time?
Q2: I'm completely stumped about how to scrape content from the second scenario.

Q1 - The free tool from import.io does not have the ability to actively watch the data change over time. What you could do is have the data Bulk Extracted by the Extractor (with 17 pages this would be really fast) and added to a database. After each entry to the database, the entries could be de-duped or marked as unique. You could do this manually in Excel or programmatically.
Their Enterprise (data as a service) offering could do this for you.
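Outside of import.io, "programmatically" can be as little as de-duplicating the extracted rows by a key field before they go into the database. A minimal sketch, assuming each extracted row carries a hypothetical url field that uniquely identifies a result (the field names are made up, not import.io output):

```typescript
// De-duplicate extracted rows by a unique key (here a hypothetical `url` field).
interface CruiseResult {
  url: string;
  title: string;
}

function dedupeByUrl(rows: CruiseResult[]): CruiseResult[] {
  const seen = new Set<string>();
  return rows.filter((row) => {
    if (seen.has(row.url)) return false; // already stored once, skip it
    seen.add(row.url);
    return true;
  });
}
```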
Q2 - If there is not a unique URL for each page, the only import.io tool that will paginate the pages for you is the Connector.

I would recommend building an extractor to get the pagination. The result of this extractor will be a list of links, each link corresponding to a page.
This way, every time you run your application and the number of pages changes, you will always get all the pages.
After that, make a call for each page to get the data you want.
Extractor 1: Get pages -- Input: The first URL
Extractor 2: Get items (data) -- Input: The result from Extractor 1
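import.io chains those two steps through its own UI, but the pattern is easy to see outside the tool as well. Purely as an illustration of "get pages, then get items per page" (the CSS selectors and field names below are made up, not taken from the sites above), a Node/TypeScript sketch might look like this:

```typescript
import * as cheerio from 'cheerio'; // npm install cheerio; fetch is built into Node 18+

// Extractor 1: get the list of page URLs from the first results page.
async function getPageUrls(firstPageUrl: string): Promise<string[]> {
  const html = await (await fetch(firstPageUrl)).text();
  const $ = cheerio.load(html);
  // Hypothetical selector for the pagination links; adjust per site.
  return $('a.pagination-link')
    .map((_, el) => new URL($(el).attr('href') ?? '', firstPageUrl).href)
    .get();
}

// Extractor 2: get the items (data) from a single results page.
async function getItems(pageUrl: string): Promise<{ title: string }[]> {
  const html = await (await fetch(pageUrl)).text();
  const $ = cheerio.load(html);
  // Hypothetical selector for one result row; adjust per site.
  return $('.cruise-result')
    .map((_, el) => ({ title: $(el).find('.title').text().trim() }))
    .get();
}

// Run both stages so a changing page count is picked up on every run,
// and collect everything into one list (one file).
async function scrapeAll(firstPageUrl: string) {
  const pages = new Set([firstPageUrl, ...(await getPageUrls(firstPageUrl))]);
  const results: { title: string }[] = [];
  for (const page of pages) {
    results.push(...(await getItems(page)));
  }
  return results;
}
```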

Related

NodeJS PDF Generator Data Across Multiple pages

We have a NodeJS API that is used to generate PDFs. We use the Handlebars API server-side to bind data to HTML templates (i.e. header, footer, content), then we use Puppeteer to generate the PDFs from the bound template. This approach works great when the data fits on one page.
However, we are trying to determine the best approach when the data spans multiple pages. We are not quite sure how to do this with the current technology.
With the data sorted, the layout is a four-column table: the first column fills with the sorted data from top to bottom, then once that column is full, the data flows into the next column, and the next, until all columns are full. Once all the columns are full, it goes to the next page.
Here is an example of what the flow of the data needs to look like across multiple pages:
https://lucid.app/lucidchart/83cf06f7-7e8a-4eb3-8059-fafd598318a6/view?page=0_0#?folder_id=home&browser=icon
Once the pages have been created they will be sent to the consumer as a single PDF.
Will Puppeteer do this paging for us? If so, we have not had much success figuring out how to tell Puppeteer to do this. Do we have to create PDFs for each page and somehow stitch them together into a single document?
We are open to ideas on how to do this with the current technology (NodeJS, puppeteer, handlebars) or using a different NodeJS technology. The only requirement is it has to be server side and run in NodeJS.
Again, we are open to any ideas on how to do this.
Examples are encouraged.
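One workable route, hinted at in the question itself, is to render each page's worth of data to its own PDF and then merge the results into a single document. A minimal sketch, assuming the data has already been chunked into pages of four columns; pdf-lib as the merge library and the inline template are assumptions, not part of the original stack:

```typescript
import puppeteer from 'puppeteer';
import Handlebars from 'handlebars';
import { PDFDocument } from 'pdf-lib'; // assumed merge library: npm install pdf-lib

// Placeholder Handlebars template for one page of the four-column layout.
const pageTemplate = Handlebars.compile(
  '<html><body>{{#each columns}}<div class="col">{{#each this}}<p>{{this}}</p>{{/each}}</div>{{/each}}</body></html>'
);

// `pages` is an array of pages, each page being an array of four columns of strings.
async function renderPagesToSinglePdf(pages: string[][][]): Promise<Uint8Array> {
  const browser = await puppeteer.launch();
  const merged = await PDFDocument.create();

  for (const columns of pages) {
    // Render one page's worth of data to its own PDF buffer.
    const tab = await browser.newPage();
    await tab.setContent(pageTemplate({ columns }), { waitUntil: 'networkidle0' });
    const pdfBytes = await tab.pdf({ format: 'a4' });
    await tab.close();

    // Append its pages to the merged document.
    const single = await PDFDocument.load(pdfBytes);
    const copied = await merged.copyPages(single, single.getPageIndices());
    copied.forEach((p) => merged.addPage(p));
  }

  await browser.close();
  return merged.save(); // one PDF containing every generated page
}
```

The part this sketch leaves out is the chunking itself, i.e. deciding how many items fit in a column; that has to be computed (or measured) server-side before the templates are bound.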

Unable to extract data using Import.io from Amazon web page where data is loaded into the page via Ajax

Anyone know how to extract data from a webpage using Import.io where the data is loaded into the page via Ajax?
I am unable to extract data from the pages mentioned below.
There is no issue extracting data from the first page, but how do I move on to extract data from the second page?
URL is given below.
<http://www.amazon.com/gp/aag/main?ie=UTF8&asin=&isAmazonFulfilled=&isCBA=&marketplaceID=ATVPDKIKX0DER&orderID=&seller=A13JB7253Q5S1B>
The data on that page is deployed using an interesting mix of technologies; it relies heavily on server-side code and JavaScript. That type of page can be a challenge; however, there are always methods to get the data. For example, some sellers have a page like this:
http://www.amazon.co.uk/gp/node/index.html?ie=UTF8&marketplaceID=ATVPDKIKX0DER&me=A2WO1PQ2OIOIGM&merchant=A2WO1PQ2OIOIGM
That kind of page is very easy to extract data from, even using the magic algorithm - https://magic.import.io/?site=http:%2F%2Fwww.amazon.co.uk%2Fgp%2Fnode%2Findex.html%3Fie%3DUTF8%26marketplaceID%3DA1F83G8C2ARO7P%26me%3DA2WO1PQ2OIOIGM%26merchant%3DA2WO1PQ2OIOIGM
I had to take off the redirect=true from the URLs before it would work - just an FYI.
Other times some stores don't have such a URL, which is a bit of a pain, and their URLs can be tough to figure out.
We do help some of our enterprise customers build bespoke APIs when the data is very important to them, so do feel free to get in touch. I imagine a larger scale workaround would be to create a dataset/API based on the categories you are interested in and then to filter that larger dataset down (Python or CSV style) by seller name. That would probably work!
I managed to get a static dataset but no API. You can find that dataset at the following GUID: c7c63f1c-7081-4d4a-ad91-afe9789a6620
Thanks

Pulling two different sets of data from the same document library in a single page SharePoint 2013

I have a document library set up with multiple different categories of document, and I'm using a metadata column to differentiate between them.
I want to be able to display two different document library web parts on a page, side by side, for different categories of file. This is simple for one category: I just set up a list view filtered by the metadata column. But when I add a second web part alongside the first, it breaks the first one.
I have no idea why this is happening, but it seems like SharePoint isn't happy with pulling two sets of data from the same document library.
When I am editing the web parts, I can get them to both display the documents I want, but then when I click save, the first web part empties.
Not sure what other information would be useful for diagnosing or helping with the problem, so if I haven't given enough detail let me know. I am familiar with SPD as well as developing through the web interface, so if this needs a more complex solution that's fine with me!
Having spent some more time playing around with this, it struck me that I could probably achieve what I wanted using something other than a Document web part, and I was right.
Instead of using the somewhat inflexible document web part, I created a content query web part which only searched within the document library from my site, and filtered by the metadata column.
This way I can create as many queries as I like and they don't interact with each other in weird ways. It also has the advantage of being significantly easier to customise the output without needing to resort to SharePoint Designer.
Content Queries are the answer!

TYPO3: How to count page impressions on every page with an extension

I need to count the page impressions of every page on a TYPO3 site into the db.
So I think I need an extension which is called on every page impression and increases an 'impressions' column in the db for the specific page.
I'm new to TYPO3 and new to extension development as well. Is there a way to include an Extbase extension on every page so that some PHP script gets called?
(Update)
I want to add more information:
I don't need a counter which counts all PIs. The counter needs to be page-related, so it makes sense to extend the pages table of TYPO3. Another requirement is that the extension should be built with Extbase.
Once your plugin is configured, you can include it with page.1234 < plugin.tx_yourextension_pi1 on any page. The 1234 determines the position on your page.
The script should be USER_INT so it is not cached (mind you, this will cost a lot of performance, as previously stated by #norwebian).
As you don't want to output anything, make sure the controller stays empty as well.
Did you do a quick search in the extension repository? Trying a search for "page counter" reveals four relevant extensions.
"Sys_stat" is the closest thing to an "official" solution, it is really just enabling a few settings already existent. It has been reported to fill up the database with too much data, though.
"Generic Visitor Counter" would be my favourite, I believe (if I was going for a page counter at all), it is recently updated and seems simple enough.
You should really consider a proper stats extension, though. Both ics_awstats and ke_stats have been in my toolset.
YMMV. Be aware that if your site is popular, stats gathering quickly gets out of hand. On the other hand, if you go for a simple counter, including uncached extensions will cost performance.
I am not sure I really understood what you want and need. After all, page impressions are not the same as page views, though I couldn't say offhand how that difference would show up on the page itself. So am I right in assuming that you mean page views?
If yes: I would take the following approach:
A separate, autonomous extension with a JavaScript for asynchronous calling of an API and a table for storing page views / page impressions.
Each page globally binds a JavaScript that initializes itself.
Once the DOM is ready, it sends a call to an AJAX API endpoint with the URL of the page as a parameter.
The endpoint takes only the URL.
For each unique URL, a record including counter is created or updated.
Extending the pages table doesn't make sense to me. What would you do with a website that consists of news overviews, news details, press and blog sections, a dealer search and a store with product pages, where many different URLs are rendered through the same few page records?
I would keep the statistics table standalone.
If you expand the table a bit and add date and time - not just a simple increment of hits - you can even identify the hottest pages of the week, the month, etc.
--
My approach won't increase/delay page load time much, if at all, and will have little noticeable impact even on heavily requested websites.
With the AJAX endpoint, it's then up to you how you deploy it and how much of the CMS framework you want to load.
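The page-side half of that is just a tiny script bound globally. A minimal sketch of the beacon (the /api/pageview endpoint path is an assumption, standing in for whatever TYPO3 endpoint you register):

```typescript
// Client-side beacon: once the DOM is ready, report the current URL to a
// counting endpoint. The endpoint path below is hypothetical.
document.addEventListener('DOMContentLoaded', () => {
  const url = window.location.pathname + window.location.search;
  fetch('/api/pageview?url=' + encodeURIComponent(url), {
    method: 'POST',
    keepalive: true, // let the request finish even if the user navigates away
  }).catch(() => {
    // A failed count should never break the page.
  });
});
```

Because the counting happens in a separate asynchronous request, the page itself can stay fully cached; only the endpoint does uncached work.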

How would I best make this SEO-able?

I have a search engine that searches albums.
For each music album, I have a page.
So, the workflow goes like this:
People search for music titles
The search engine displays a list of albums.
People click on an album to go to a details page.
I want Google to index my front page and the details pages. I want the details pages to be highly ranked. How can I build a sitemap for this?
By the way, I have about 5 million albums (but I want the top 1000 to be highly ranked on Google).
You would not use a sitemap for that many results. You would want each album to appear as a page with a unique URI to reference that page. That way the search engine can crawl your site by crawling links since search bots cannot submit form data. Each of those URIs should be simple, meaning limited to this part of the URI syntax:
scheme://authority_segment/path
Program your web application to remove and throw away any extraneous data, such as query strings or parameters. If you do this, you have to be sure that you are watching for URI poisoning or SQL injection, even through means of character encoding.
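As a small illustration of throwing the extraneous data away, an incoming URL can be reduced to just scheme, authority, and path before it is used anywhere (the example URL is hypothetical):

```typescript
// Reduce a request URL to scheme://authority/path, dropping query strings,
// fragments, and anything else that could vary between requests.
function canonicalUri(rawUrl: string): string {
  const parsed = new URL(rawUrl);
  return `${parsed.protocol}//${parsed.host}${parsed.pathname}`;
}

// canonicalUri('https://example.com/albums/some-album?ref=search#track3')
// -> 'https://example.com/albums/some-album'
```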
How can I build a sitemap for this?
By pulling the addresses out of your database and creating an XML file with a high priority for some selected pages. Somehow I think that isn't your real question …
If I wanted to automate building a site map for a site like this, I'd employ Python. I'd pretty much write everything from the ground up (except the data store access). The format is quite simple.
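The answer above suggests Python; purely to show the format (and in Node/TypeScript for consistency with the other sketches in this thread), generating the XML by hand comes down to a urlset of url entries, each with a loc and an optional priority. The album fields are hypothetical:

```typescript
interface Album {
  url: string;
  topThousand: boolean; // flag the ~1000 albums you want ranked highly
}

// Build a sitemap.xml string from album records pulled out of the database.
function buildSitemap(albums: Album[]): string {
  const entries = albums
    .map(
      (a) =>
        `  <url>\n    <loc>${a.url}</loc>\n    <priority>${a.topThousand ? '1.0' : '0.5'}</priority>\n  </url>`
    )
    .join('\n');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    entries +
    '\n</urlset>'
  );
}
```

Keep in mind that a single sitemap file is limited to 50,000 URLs, so with millions of albums you would be generating a sitemap index pointing at many such files anyway.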
I'm not sure I quite understand your question...
