How to download images from "wikimedia search result" using wget? - linux

I need to mirror every image that appears on this page:
http://commons.wikimedia.org/w/index.php?title=Special:Search&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1&redirs=0&search=buitenzorg&limit=900&offset=0
The mirror should give us the full-size images, not the thumbnails.
What is the best way to do this with wget?
UPDATE:
I've updated the solution below.

Regex is your friend, my friend!
Using grep and wget you'll get this task done pretty fast.
Download the search results page with wget, then run:
grep -oP 'class="searchResultImage".*?href="\K[^"]+?\.jpg' DownloadedSearchResults.html
That should give you http://commons.wikimedia.org/-based links to each image's web page. Now, for each of those results, download the page and run:
grep -oP 'class="fullImageLink".*?href="\K[^"]+?\.jpg' DownloadedImagePage.html
That should give you a direct link to the highest resolution available for that image.
I'm hoping your bash knowledge will do the rest. Good luck.
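Putting the pieces together, here is a minimal sketch of the whole pipeline. It assumes GNU grep with PCRE support (-P), that the class names above still appear in the page HTML and that each result's markup sits on a single line (grep is line-oriented), that the search-result hrefs are site-relative, and that the full-image href may be protocol-relative; all file names are placeholders.

#!/bin/bash
# Sketch: fetch the search results, then the description page of each hit,
# then the full-resolution file it links to.
SEARCH_URL='http://commons.wikimedia.org/w/index.php?title=Special:Search&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1&redirs=0&search=buitenzorg&limit=900&offset=0'

wget -q "$SEARCH_URL" -O SearchResults.html

grep -oP 'class="searchResultImage".*?href="\K[^"]+?\.jpg' SearchResults.html |
while read -r page; do
    wget -q "http://commons.wikimedia.org$page" -O ImagePage.html
    full=$(grep -oP 'class="fullImageLink".*?href="\K[^"]+?\.jpg' ImagePage.html)
    case "$full" in
        //*) full="http:$full" ;;   # protocol-relative link to upload.wikimedia.org
    esac
    wget "$full"
done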

Came here with the same problem and found this: http://meta.wikimedia.org/wiki/Wikix
I don't have access to a Linux machine right now, so I haven't tried it yet.

It is quite difficult to write the whole script in the Stack Overflow editor, so you can find it at the address below. The script only downloads the images on the first page; you can modify it to automate downloading the other pages as well.
http://pastebin.com/xuPaqxKW
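If it helps, one hedged way to walk the remaining result pages is to step through the offset parameter of the search URL from the question (limit=900 and offset=0 come from that URL; the number of pages to fetch here is a guess), and then feed each saved page to the same extraction logic:

#!/bin/bash
# Sketch: download successive search-results pages by bumping the offset.
BASE='http://commons.wikimedia.org/w/index.php?title=Special:Search&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1&redirs=0&search=buitenzorg&limit=900'

for offset in 0 900 1800 2700; do
    wget -q "${BASE}&offset=${offset}" -O "SearchResults_${offset}.html"
done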

Related

Is os.system() the best way to wget a group of files within a Python script?

I'd like to download a bunch of password-protected files hosted at a URL into a directory, from within a Python script. The vision is that I'd one day be able to use joblib or something to download each file in parallel, but for now I'm just focusing on the wget command.
Right now, I can download a single file using:
import os
os.system("wget --user myUser --password myPassword --no-parent -nH --recursive -A gz,pdf,bam,vcf,csv,txt,zip,html https://url/to/file")
However, there are some issues with this. For example, there isn't a record of how the download is proceeding; I only know it is working because I can see the file appear in my directory.
Does anyone have suggestions for how I can improve this, especially in light of the fact that I'd one day like to download many files in parallel, and then go back to see which ones failed?
Thanks for your help!
There are some good libraries to download files via HTTP natively in Python, rather than launching external programs. A very popular one which is powerful yet easy to use is called Requests: https://requests.readthedocs.io/en/master/
You'll have to implement certain features like --recursive yourself if you need them (though your example is confusing because you use --recursive but say you're downloading one file). See, for example, recursive image download with requests.
If you need a progress bar you can use another library called tqdm in conjunction with Requests. See Python progress bar and downloads.
If the files you're downloading are large, here is an answer I wrote showing how to get the best performance (as fast as wget): https://stackoverflow.com/a/39217788/4323 .
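As a hedged sketch of how those pieces fit together (the URL, credentials, and output filename are placeholders taken from the question, and the chunk size is arbitrary), you can stream the download through Requests while tqdm reports progress:

import requests
from tqdm import tqdm

url = "https://url/to/file"          # placeholder, as in the question
auth = ("myUser", "myPassword")      # HTTP basic auth, like wget --user/--password

with requests.get(url, auth=auth, stream=True) as r:
    r.raise_for_status()             # raises on HTTP errors instead of failing silently
    total = int(r.headers.get("content-length", 0))
    with open("file.gz", "wb") as f, tqdm(total=total, unit="B", unit_scale=True) as bar:
        for chunk in r.iter_content(chunk_size=1 << 20):   # 1 MiB chunks
            f.write(chunk)
            bar.update(len(chunk))

Wrapping that in a function also makes it easy to fan out over many URLs later (e.g. with joblib) and to collect which downloads failed.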

Is there a Linux command line utility for getting random data to work with from the web?

I am a Linux newbie and I often find myself working with a bunch of random data.
For example, I might want a sample text file to try out some regular expressions, or some sample CSV data to read into gnuplot.
I normally do this by copying and pasting passages from the internet, but I was wondering if there is some combination of commands that would let me do this without having to leave the terminal. I was thinking about using something like the curl command, but I don't exactly know how it works...
To my knowledge there are websites that host content; I would simply like to access them and store the data on my computer.
As a concrete example, how would I copy a random passage off a website and store it in a file on my system using only the command line? Maybe you can point me in the right direction. Thanks.
You could redirect the output of a curl command into a file e.g.
curl https://run.mocky.io/v3/5f03b1ef-783f-439d-b8c5-bc5ad906cb14 > data-output
Note that I've mocked the data in Mocky, which is a nice website for quickly mocking an API.
I normally use "Project Gutenberg" which has 60,000+ books freely downloadable online.
So, if I want the full text of "Peter Pan and Wendy" by J.M. Barrie, I'd do:
curl "http://www.gutenberg.org/files/16/16-0.txt" > PeterPan.txt
If you look at the page for that book, you can see how to get it as HTML, plain text, ePUB or UTF-8.

How to grab different sorts of images from their src using wget?

Below is an example image src. I want to save this image using wget. How do I do that?
http://lp.hm.com/hmprod?set=key[source],value[/environment/2013/2BV_0002_007R.jpg]&set=key[rotate],value[-0.1]&set=key[width],value[3694]&set=key[height],value[4319]&set=key[x],value[248]&set=key[y],value[354]&set=key[type],value[FASHION_FRONT]&hmver=0&call=url[file:/product/large]
wget -L "http://lp.hm.com/hmprod?set=key[source],value[/environment/2013/2BV_0002_007R.jpg]&set=key[rotate],value[-0.1]&set=key[width],value[3694]&set=key[height],value[4319]&set=key[x],value[248]&set=key[y],value[354]&set=key[type],value[FASHION_FRONT]&hmver=0&call=url[file:/product/large]" -O zz.jpg
Quoting the link you want to download is essential. This link in particular contains many special characters (such as &, [ and ]) that the shell would otherwise interpret, breaking the command.

Trying to extract field from browser page

I'm trying to extract one field from an online form in Firefox to my local Ubuntu 12.04 PC and Mac OS X 10.7.4.
I can manually save the page locally as a text document and then search for the text with a Unix script, but this seems rather cumbersome, and I need it to be automated. Is there a more efficient method?
My background is on Macs, but the company is trialling Linux PCs, so please be tolerant of my Ubuntu ignorance.
If you mean to program something, try:
the WWW::Mechanize library, which has Python and Perl bindings;
one of the several mouse-scripting engines on Linux (e.g. Actionaz);
a test automation tool that works with Firefox (Selenium).
You can do it with a simple BASH script.
Take a look at some useful stuff like:
wget
sed
grep
and then nothing will be cumbersome and everything can be automated.
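For instance, here is a hedged sketch of that combination (the URL, the field name, and the HTML structure are placeholders; adjust the sed pattern to your form's actual markup):

# Fetch the form page and print the value of a hypothetical <input> field.
wget -qO- http://example.com/onlineform.html |
    sed -n 's/.*name="myField" value="\([^"]*\)".*/\1/p'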
If you want to go with the method that you mentioned, you can use curl to automate the saving of the form. Your BASH script would then look something like this:
curl http://locationofonlineform.com -o tempfile
valueOfField=$(grep patternToFindField tempfile)
# Do stuff
echo "$valueOfField"
If you want to get rid of the temporary file, you can directly feed the result of curl into the grep command.
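For instance (same placeholder URL and pattern as above):

valueOfField=$(curl -s http://locationofonlineform.com | grep patternToFindField)
echo "$valueOfField"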

Generate image (e.g. jpg) of a web page?

I want to create an image of what a web page looks like,
e.g. a small thumbnail of the HTML + images.
It does not have to be perfect (e.g. Flash/JavaScript rendering).
I will use the code on Linux; ideally it would be some Java library, but a command-line tool would be cool as well.
Any ideas?
Try CutyCapt, a command-line utility. It uses Webkit for rendering and outputs in various formats (SVG, PNG, etc.).
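A typical invocation looks like the following (assuming the standard --url/--out flags from the CutyCapt homepage; the binary may be installed as CutyCapt or cutycapt depending on packaging, and the URL and output name are placeholders):

# Render the page with WebKit and write the result as a PNG.
cutycapt --url=http://example.com/ --out=example.png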
You can get it nearly perfect, and cross-platform too, by using a browser plugin:
FireShot or ScreenGrab for Firefox.
6 Google Chrome Screenshot Webpage Capture Extensions
BrowserShots is an open source project that may have some code you can use.
also see:
Command line program to create website screenshots (on Linux)
Convert web page to image
How to take screenshot of whole web page, rather than what shows on the screen
What is the best way to create a web page thumbnail?
Convert HTML to an image
To take a screenshot in the terminal with ImageMagick, type the following line into a terminal and then click-and-drag the mouse over a section of the screen:
import MyScreenshot.png
To capture the entire screen after a delay and resize it, use the following command:
import -window root -resize 400x300 -delay 200 screenshot.png
You may use a mixture of xwininfo and import to retrieve the window id of the browser and make a screenshot of that window. A bash script to automate this process would be something like this:
#!/bin/bash
# Find the window id of the first window whose tree entry mentions Mozilla,
# then screenshot that window and scale it down to 100x100.
window_id=$(xwininfo -tree -root | grep Mozilla | head -n 1 | awk '{print $1}')
import -window "$window_id" -resize 100x100 tumb.png
This script will create a 100x100 screenshot of Firefox in the current directory under the name tumb.png.
Several sources show how to run a bash script from inside a Java application; Google can help you with that. If you are in a hurry, check this and this.
After reading this page, I thought: let me fire up the Midori browser (http://midori-browser.org/). When I tried the -h option, I saw:
-s, --snapshot Take a snapshot of the specified URI
CutyCapt is difficult to compile and has many dependencies; Midori has fewer. It outputs a PNG of the website into the TMP folder. One can get the file with:
midori -s http://www.rcdwealth.com new.png 2>/dev/null | awk '{ print $4}'
After that, the file can be converted to thumbnail size by using ImageMagick's convert program.
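For example (treat the file names as placeholders and point convert at wherever the snapshot actually landed):

# Scale the snapshot down to a bounded thumbnail, preserving aspect ratio.
convert new.png -thumbnail 200x150 thumb.png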
If you're interested in Java, maybe you could look at browser automation using Selenium-RC http://seleniumhq.com
It's a little Java server that you can install on the box, and the program itself will execute remote commands in a web browser.
The steps look roughly like this (I write my Selenium in PHP and can't recall 100% of the specifics off the top of my head):
Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://foo.com");
selenium.start();
selenium.open("/folder/sub/bar.html");
selenium.captureScreenshot("/tmp/" + this.getClass().getName() + "."
        + testMethodName + ".png");
Actually, I just did a quick web search for the exact syntax on that last one... and this guy has a blog with what might actually be working code in Java :)
https://dev.youdevise.com/YDBlog/index.php?title=capture_screenshots_of_selenium_browser_&more=1&c=1&tb=1&pb=1
There are also a number of websites that provide this service "cross-browser and OS"; I just can't recall what they are. Basically, they've got a cloud of every operating system and browser combination; they log on with each machine, take a screenshot, and store it on their site for you to come back to in a few hours when they're done.
Ahh... another web search and it's yours :) http://browsershots.org/
