Download images for a Google Chrome extension

I'm trying to build a batch image downloader in Chrome. Basically, I will overlay a small download square on each image on the page, and the user clicks it to download that image. Or the user can click once to download all images on a page. I'm currently stuck on figuring out how to download the images. The best I can come up with is to use XHR to send each image to another server, where the user can then retrieve it.
If anyone has a solution for me, it would be much appreciated!
Jason
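
The overlay half of this, independent of how the bytes actually get saved, can be a plain content script. A minimal sketch (the button styling and the DOWNLOAD_IMAGE message name are illustrative, not from any of the answers below):

```
// Content-script sketch: overlay a small download button on every image.
// The styling and the DOWNLOAD_IMAGE message name are illustrative.
document.querySelectorAll('img').forEach((img) => {
  const rect = img.getBoundingClientRect();
  const btn = document.createElement('button');
  btn.textContent = '⬇';
  btn.style.position = 'absolute';
  btn.style.left = `${rect.left + window.scrollX}px`;
  btn.style.top = `${rect.top + window.scrollY}px`;
  btn.addEventListener('click', () => {
    // hand the actual download off to the extension's background page
    chrome.runtime.sendMessage({ type: 'DOWNLOAD_IMAGE', src: img.src });
  });
  document.body.appendChild(btn);
});
```

A "download all" button would just loop the same sendMessage over every img.src on the page.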

I believe you can XHR the images and, using the File API, store them locally.
Take a look at http://www.html5rocks.com/features/file; the right-hand column lists additional resources with detailed examples and tutorials, such as http://www.html5rocks.com/tutorials/file/filesystem/
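
As a minimal sketch of that approach, assuming Chrome's webkit-prefixed sandboxed FileSystem API (the saveImage name and the 5 MB quota are illustrative):

```
// Sketch of the XHR + File API approach (webkitRequestFileSystem was
// Chrome's prefixed entry point). Files land in the extension's
// sandboxed filesystem, not the user's Downloads folder.
function saveImage(url: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.responseType = 'blob'; // receive the raw image bytes
  xhr.onload = () => {
    (window as any).webkitRequestFileSystem(
      (window as any).TEMPORARY,
      5 * 1024 * 1024, // request a 5 MB quota
      (fs: any) => {
        const name = url.split('/').pop() || 'image.jpg';
        fs.root.getFile(name, { create: true }, (entry: any) => {
          entry.createWriter((writer: any) => {
            writer.write(xhr.response); // write the Blob into the sandbox
          });
        });
      }
    );
  };
  xhr.send();
}
```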
Mohamed Mansour

This code will do the trick for you: https://gist.github.com/1049553
It's a very simple use of a 'feature' in Chrome that kicks in when you open an image in a new tab.
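
Without vouching for the gist's exact contents, the background-page half of that trick presumably looks something like this (a sketch pairing with the content script above; the DOWNLOAD_IMAGE message name is made up):

```
// Sketch (not the gist's exact code): the background page listens for the
// DOWNLOAD_IMAGE message from the content script and opens the image URL
// in an unfocused tab, where Chrome's image-in-a-tab handling takes over.
chrome.runtime.onMessage.addListener((msg: { type: string; src: string }) => {
  if (msg.type === 'DOWNLOAD_IMAGE') {
    chrome.tabs.create({ url: msg.src, active: false });
  }
});
```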

Related

How to download a file directly to AWS storage using Lambda

I'm working on integrating Zoom into my application and I'm stuck at a certain point.
Basically, I want to get a user's Zoom recordings and download them into my AWS S3 bucket.
Using the Zoom API to get recordings gives you two links: a play link, which leads into the Zoom UI with the video of the recording, and a download link. I want the user's Zoom recordings to play on my website using my own UI, so I need the download link. My plan is to have my back-end server (Node) take the download link, download the file, and then upload the file to AWS.
Is that in any way possible? Or is there another way I could go about this problem? I am in dire need of help.
Thanks.
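
One way this can work is to stream the download straight into S3 without buffering it on disk. A minimal sketch in TypeScript/Node, assuming node-fetch and the AWS SDK v3 (the bucket name, key scheme, region, and token handling are placeholders):

```
// Sketch: stream a Zoom recording's download_url straight into S3.
import fetch from 'node-fetch';
import { Readable } from 'stream';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3 = new S3Client({ region: 'us-east-1' }); // assumption: your region

async function archiveRecording(downloadUrl: string, accessToken: string) {
  // Zoom's download links need the OAuth access token unless pre-signed
  const res = await fetch(downloadUrl, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok || !res.body) throw new Error(`download failed: ${res.status}`);

  // lib-storage's Upload runs a multipart upload straight from the stream
  const upload = new Upload({
    client: s3,
    params: {
      Bucket: 'my-recordings-bucket', // assumption: your bucket
      Key: `zoom/${Date.now()}.mp4`,  // assumption: naming scheme
      Body: res.body as Readable,
      ContentType: 'video/mp4',
    },
  });
  await upload.done();
}
```

The resulting S3 object (or a CDN URL in front of it) can then feed the player in your own UI.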

Google Page Insights Image Links

It looks like I can't get the image links to optimize my sites like I could in the previous version. Is there a way to get these links?
Thanks!
Hey Ben, I am having the same issue. What he is talking about is that before PageSpeed Insights ran on Lighthouse, as it does now, we were given the exact resources needed, already in the form Google requested them to be. So if we had an image that was 4 MB and 2000x2000, but the viewport where the image sat was, say, 300x300, Google would provide that picture in a zip folder along with all the other photos in the same boat. If JavaScript or CSS needed minification, it also provided those files for us. I do not see that option at all any longer, and it's really disappointing, as it saved me two steps of optimization for page speed. Hoping we can get it back!

OpenStreetCam: extract image and GPS location

I would like to use Python to download the image and sequences of images found at this location on OpenStreetCam:
http://openstreetcam.com/details/8552/422
I figured out the image is saved under
http://api.openstreetcam.org/files/photo/2016/6/30/lth/8552_2fbf0_57756eba868e9.jpg?v=1518090956232
however, there is no official API to use. How would one extract the image and GPS data?
Edit: The GPS data can be found in the url by clicking Edit OSM id.
Ideally one would use some sort of web scraper; however, the .jpg is not found in the website's source code.
OSC's endpoints are a little hidden.
I invite you to check out https://github.com/Streets-Data-Collaborative/osc-tools, where I've written some scripts to extract track data and the underlying metadata for each track.
Feel free to open an issue on the repo if something's not working.
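
For the image itself, the photo URL shown in the question can be fetched directly. A minimal sketch (in TypeScript to match the other snippets here; in Python it's a single requests.get). The output filename is arbitrary:

```
// Sketch: fetch one OpenStreetCam photo from the URL pattern shown in the
// question and stream it to disk. A redirect from http to https is not
// handled here.
import { get } from 'http';
import { createWriteStream } from 'fs';

const url =
  'http://api.openstreetcam.org/files/photo/2016/6/30/lth/8552_2fbf0_57756eba868e9.jpg?v=1518090956232';

get(url, (res) => {
  res.pipe(createWriteStream('8552_422.jpg')); // stream the bytes to a file
});
```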

Need to catalog a large web application

We have a web application with over 560 pages. I would like a way to catalog the site somehow so that I can review the pages (without having to find each one in the menu or enter its URL). I'd be very glad for ideas on the best way to go about this.
I'd be happy to end up with 560 image files or PDFs, or one large PDF, or whatever. I can easily put together a script with all the URLs, but how to pull those up, take a snapshot of some sort, and save it to a file or files is where I need help.
The site is written in Java (server) and JavaScript (client).
I found a great plugin for Firefox that made this relatively painless. The plugin is called Screenshot Pimp (hate the name, love what it does). It takes a snapshot of your browser contents and immediately saves it to a file on your hard drive.
So I wrote a script that would pull each page up in an iframe with the URL shown above it, and took snapshots of each page. It took a couple of hours to cycle through the whole set of 560+ pages, but it worked great, and now I have a catalog of all the pages.
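
For anyone doing this today, the same loop can be driven by a headless browser instead of a screenshot plugin. A minimal sketch using Puppeteer, assuming a urls.json file holding the URL list (file names are placeholders):

```
// Sketch: loop a URL list through headless Chrome and save one full-page
// PNG per page.
import { readFileSync } from 'fs';
import puppeteer from 'puppeteer';

(async () => {
  const urls: string[] = JSON.parse(readFileSync('urls.json', 'utf8'));
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  for (const [i, url] of urls.entries()) {
    await page.goto(url, { waitUntil: 'networkidle2' }); // let the page settle
    await page.screenshot({
      path: `page-${String(i).padStart(3, '0')}.png`,
      fullPage: true,
    });
  }
  await browser.close();
})();
```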

How to manage detailed descriptions and links in Google?

A client would like his website to show up like the following one on Google:
https://www.google.ch/search?q=mlzd&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:fr:official&client=firefox-a
With:
a list of links to the different sections of the website.
the Google feature with the arrow (on rollover of the links) that shows previews of the website's pages.
So, here are my questions:
Do you know how to implement these links? Is it done with tags?
Do I then have to do something to get the Google image previews, or is it automatic?
Will the Google image previews work with a Flash website? If not, will the preview be a screenshot of the website with no Flash enabled?
Thanks a lot for your answers!
David
The links are called Sitelinks; they are automatic. You can't do anything to specifically enable them.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=47334
The preview is automatic too, but you do need to 1) allow Google to cache the URL (i.e. avoid "nocache") and 2) not block the previews in any way: https://sites.google.com/site/webmasterhelpforum/en/faq-instant-previews
See the link in the previous answer.
