I would like to use Python to download the image, and sequences of images, found at this location on OpenStreetCam:
http://openstreetcam.com/details/8552/422
I figured out that the image is saved under
http://api.openstreetcam.org/files/photo/2016/6/30/lth/8552_2fbf0_57756eba868e9.jpg?v=1518090956232
However, there is no official API to use. How would one extract the image and GPS data?
Edit: the GPS data can be found in the URL after clicking the edit OSM ID link.
Ideally one would use some sort of web scraper, but the .jpg is not found in the website's source code.
OSC's endpoints are a little hidden.
I invite you to check out https://github.com/Streets-Data-Collaborative/osc-tools, where I've written some scripts to extract track data and the underlying metadata for each track.
Feel free to open an issue on the repo if something's not working.
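For just grabbing the single photo you already located, here is a minimal Python sketch (it assumes the requests library is installed; the URL is the one from the question and the output filename is arbitrary):

import requests

# Direct photo URL taken from the details page (from the question above).
photo_url = (
    "http://api.openstreetcam.org/files/photo/2016/6/30/lth/"
    "8552_2fbf0_57756eba868e9.jpg?v=1518090956232"
)

# Stream the image to disk so large files are not held in memory.
response = requests.get(photo_url, stream=True, timeout=30)
response.raise_for_status()

with open("8552_422.jpg", "wb") as fh:
    for chunk in response.iter_content(chunk_size=8192):
        fh.write(chunk)

The track metadata that the repo's scripts pull down should also cover the GPS side, so that is a better starting point than scraping the page HTML.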
I'm working on integrating Zoom into my application and I'm stuck at a certain point.
So basically I want to get a user's Zoom recordings and download them into my AWS S3 bucket.
Using the Zoom API to get recordings gives you two links: a play link, which leads into the Zoom UI with the video of the recording, and a download link. I want the user's Zoom recordings to play on my website using my own UI, so I need to use the download link. Therefore I want to use my back-end server (Node) to fetch the download link, download the file, and then upload the file to AWS.
Is that in any way possible, or is there another way I could go about this problem? I am in dire need of this help.
Thanks.
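Yes, this is possible in principle. Here is a rough sketch of the two steps in Python (requests + boto3), purely for illustration since your back end is Node; the download URL, token, bucket, and key are placeholders, and how you authenticate the download depends on your Zoom app type:

import requests
import boto3

# Placeholders: the download link returned by Zoom's recordings endpoint,
# an access token for your Zoom app, and your S3 bucket/key.
download_url = "https://zoom.us/rec/download/..."
access_token = "YOUR_ZOOM_ACCESS_TOKEN"
bucket = "my-recordings-bucket"
key = "zoom/recording.mp4"

# 1. Stream the recording from Zoom instead of loading it all into memory.
resp = requests.get(
    download_url,
    headers={"Authorization": f"Bearer {access_token}"},
    stream=True,
    timeout=60,
)
resp.raise_for_status()

# 2. Pipe the streamed body straight into S3 without writing it to disk.
s3 = boto3.client("s3")
s3.upload_fileobj(resp.raw, bucket, key)

In Node the same two steps map to streaming the HTTP response for the download link into the AWS SDK's upload call, so the file never needs to touch your server's disk.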
It looks like I can't get the image links to optimize my sites like I could in the previous version of PageSpeed Insights. Is there a way to get these links?
Thanks!
Hey Ben, I am having the same issue. What he is talking about is that before PageSpeed Insights used Lighthouse as it does now, we were given the exact resources needed, in the form Google requested them to be. So if we had an image that was 4 MB and 2000x2000 but the viewport where the image appeared was, say, 300x300, Google would provide that picture in a zip folder along with all the other photos in the same boat. Likewise, if JavaScript or CSS needed minification, it provided those files for us. I don't see that option at all any longer, which is really disappointing, as it saved me two steps of optimization with regard to page speed. I'm hoping we can get it back!
I've been trying to get a VR View set up on my page following the examples at https://developers.google.com/vr/concepts/vrview. The image I'm using is a Cardboard Camera 'photo' copied from my device, but I've also tried a regular JPG version just to be sure.
No matter what I try, when the widget loads it only ever shows the error message:
Render: Unable to load Texture from image.jpg
I've also noticed a bunch of tutorial and example sites having the same issue, which I assume they didn't have when they first posted the pages.
Does anyone have a clue why it's doing this and how to fix it?
The image produced by Cardboard Camera is not in the correct format. VR View requires an equirectangular-panoramic image, and stereo images need to be stacked. See https://developers.google.com/vr/concepts/vrview#supported_formats for reference.
There is a link to convert Cardboard Camera images to the correct format:
https://storage.googleapis.com/cardboard-camera-converter/index.html
There are also a couple codelabs that walk through using VR view, including converting the image to the correct format:
https://codelabs.developers.google.com/?cat=Virtual+Reality
The primary reason for the error you mentioned is CORS (Cross-Origin Resource Sharing).
Your image is not accessible to the calling iframe script, which is hosted on Google's servers.
http://enable-cors.org/
Once you enable CORS, it will work. The reason it started working once you cloned it locally, as you mentioned, is the same: now the VR script and the image have the same origin :)
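If you want to verify that CORS is the problem, here is a minimal sketch of serving the image yourself with the header set, using Python's built-in http.server (the wildcard origin is for testing only; lock it down for production):

from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Allow any origin to fetch files from this server (testing only).
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

# Serve the current directory (put image.jpg here) on port 8000.
HTTPServer(("", 8000), CORSRequestHandler).serve_forever()

Point the widget at an image served this way and the texture should load once the header is present.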
This is sort of a statistics question. I am looking for a website analyser, though not quite like Google Analytics. I want the analyser to crawl the website itself and record all the data on a page: images, image sizes, and so on.
Even if it is just a library, that's a start for me.
Thanks
You could try wget to download all the images on a site, though I doubt it's the best way to do this. Chrome's Inspect Element feature has information on the sizes of all the images on a page, if that's more what you're looking for.
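If a library is enough to get started, here is a minimal Python sketch of the "record every image and its size" part for a single page (standard-library HTML parser plus requests; the page URL is a placeholder and there is no recursive crawl):

from html.parser import HTMLParser
from urllib.parse import urljoin

import requests

class ImgCollector(HTMLParser):
    """Collect the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

page_url = "https://example.com/"  # page to analyse (placeholder)
html = requests.get(page_url, timeout=30).text

parser = ImgCollector()
parser.feed(html)

for src in parser.srcs:
    img_url = urljoin(page_url, src)
    # A HEAD request reads Content-Length without downloading the image.
    head = requests.head(img_url, allow_redirects=True, timeout=30)
    print(img_url, head.headers.get("Content-Length", "unknown"), "bytes")

From there you can follow the page's links to turn it into a crawl, or swap the parser for something like BeautifulSoup if you want less boilerplate.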
I'm trying to build a batch image downloader in Chrome. Basically, I will overlay a small download square on each image on the page and the user clicks on it to download, or the user can click to download all the images on a page. I'm currently stuck on figuring out how to download the images. The best I can come up with is to use XHR to send the image to another server, where the user can then retrieve it.
If anyone has a solution for me, it would be much appreciated!
Jason
I believe you can XHR the images and, using the File API, store them locally.
Take a look at http://www.html5rocks.com/features/file; there are additional resources in the right column with detailed examples and tutorials, such as http://www.html5rocks.com/tutorials/file/filesystem/
Mohamed Mansour
This code will do the trick for you: https://gist.github.com/1049553
It's a very simple use of a 'feature' in Chrome when you open an image in a new tab.