Python + Mechanize - Emulate Javascript button click using POST? - python-3.x

I'm trying to automate filling out a car insurance quote form on a site
(it follows the same format as the site URL; let's call it "https://secure.examplesite.com/css/car/step1#noBack").
I'm stuck on the rego section: once the rego has been entered, a button needs to be clicked to perform the search, and this is handled by heavy JavaScript, which I know mechanize can't deal with. I'm not versed in JavaScript, but I can see that when the button is clicked, a POST request is made to this URL: "https://secure.examplesite.com/css/car/step1/searchVehicleByRegNo". Please see the image as well.
How can I emulate this POST request in mechanize so that I can see the response and interact with it? Or is this not possible? Should I consider bs4/requests/RoboBrowser instead? I'm only ~4 months into learning! Thanks
# Mechanize test
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)    # ignore robots.txt
br.set_handle_refresh(False)   # can sometimes hang without this

res = br.open("https://secure.examplesite.com/css/car/step1#noBack")
br.select_form(id="quoteCollectForm")
br.set_all_readonly(False)     # allow everything to be written to

# List all form controls
controlDict = {}
for control in br.form.controls:
    controlDict[control.name] = control.value
    print("type = %s, name = %s, value = %s" % (control.type, control.name, control.value))

# Enter rego, e.g. "example"
br.form["vehicle.searchRegNo"] = "example"
# Now the control name = vehicle.searchRegNo has value = example
# BUT how do I click the button?? Simulate the POST? The POST URL is formatted like:
# https://secure.examplesite.com/css/car/step1/searchVehicleByRegNo
[Screenshot: JavaScript POST request shown in the browser dev tools]

Solved my own problem. Steps:
open the dev tools in your browser
go to the Network tab and clear it
interact with the form element (in my case the car rego finder)
click on the event that the interaction triggers
copy the exact URL, the request header data, and the payload
I used Postman to quickly test the request, confirmed the responses were correct / the same as the web form, and found the relevant headers
in Postman, convert the request to Python requests code
Now I can interact completely with the form
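For anyone following the same steps without Postman, here is a minimal sketch of what the resulting requests call might look like. Only the form field vehicle.searchRegNo and the two URLs are taken from the question above; the header names and values are assumptions, so copy the exact ones shown in the dev tools / Postman request.
import requests

session = requests.Session()
# load the form page first so the session picks up any cookies the site sets
session.get("https://secure.examplesite.com/css/car/step1#noBack")

# hypothetical headers -- replace with the ones captured in dev tools / Postman
headers = {
    "X-Requested-With": "XMLHttpRequest",
    "Referer": "https://secure.examplesite.com/css/car/step1",
}
# field name taken from the form control listed in the question
payload = {"vehicle.searchRegNo": "example"}

resp = session.post(
    "https://secure.examplesite.com/css/car/step1/searchVehicleByRegNo",
    data=payload,
    headers=headers,
)
print(resp.status_code)
print(resp.text)  # inspect the JSON/HTML the endpoint returns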

Related

How to scrape image/file from web page in Python?

I'm trying to use Python 3.7.4 to back up pictures from a blog site, e.g.
http://s2.sinaimg.cn/mw690/001H6t4Fzy7zgC0WLXb01&690
If I enter the above address in the Firefox address bar, the file is shown correctly.
If I use the following code to download the picture, the server always redirects to a default picture:
from requests import get  # just to try different methods
from urllib.request import urlopen
from urllib.parse import urlsplit, urlunsplit, quote

# hard-coded address is randomly selected for debug purposes
origPict = 'http://s2.sinaimg.cn/mw690/001H6t4Fzy7zgC0WLXb01&690'
p = urlsplit(origPict)
newP = quote(p.path)
origPict = urlunsplit([p.scheme, p.netloc, newP, p.query, p.fragment])
try:
    #url_file = urlopen(origPict)
    #u = url_file.geturl()
    url_file = get(origPict)
    u = url_file.url
    if u != origPict:
        raise Exception('Failed to get picture ' + origPict)
    ...
Any clue why requests.get or urllib.request.urlopen don't like the '&' in the URL?
Update: thanks to Artur's comments, I realize the question is not about the request itself but about the site's protection mechanism: JavaScript, cookies, or something else in the web page feeds something back to the server that lets it judge whether a request comes from a scraper. So now the question becomes how to scrape an image from a web page, which is more complex than simply downloading the image from a URL.
It's not about the & symbol, but about redirection. Try adding the parameter allow_redirects=False to get; it should be okay.
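A minimal sketch of that suggestion, assuming the redirect is what swaps in the default picture. The Referer header is an extra assumption (image hosts like this often check it before serving the real file), so adjust or drop it as needed:
import requests

origPict = 'http://s2.sinaimg.cn/mw690/001H6t4Fzy7zgC0WLXb01&690'
# the Referer value is an assumption -- use the blog page the image is embedded in
headers = {'Referer': 'http://blog.sina.com.cn/'}

r = requests.get(origPict, headers=headers, allow_redirects=False)
print(r.status_code)  # 200 means the image itself came back, 3xx means a redirect
if r.status_code == 200:
    with open('picture.jpg', 'wb') as f:
        f.write(r.content)
else:
    print('Redirected to', r.headers.get('Location'))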

Python - Web scraping using HTML tags

I am trying to scrape a web page to list the jobs posted at the URL: https://careers.microsoft.com/us/en/search-results?rk=l-hyderabad
Refer to the image for details of the web-page inspect. [Screenshot: web inspect]
The following is observed through the web-page inspect:
Each job listed is in an HTML li with class="jobs-list-item". The li contains the following HTML tags & data in a parent div within the li:
data-ph-at-job-title-text="Software Engineer II",
data-ph-at-job-category-text="Engineering",
data-ph-at-job-post-date-text="2018-03-19T16:33:00".
The 1st child div within the parent div, with class="information", has the HTML with the URL
href="https://careers.microsoft.com/us/en/job/406138/Software-Engineer-II"
The 3rd child div, with class="description au-target", within the parent div has the short job description.
My requirement is to extract the below information for each job:
Job Title
Job Category
Job Post Date
Job Post Time
Job URL
Job Short Description
I have tried the following Python code to scrape the webpage, but am unable to extract the required information.
import requests
from bs4 import BeautifulSoup

def ms_jobs():
    url = 'https://careers.microsoft.com/us/en/search-results?rk=l-hyderabad'
    resp = requests.get(url)
    if resp.status_code == 200:
        print("Successfully opened the web page")
        soup = BeautifulSoup(resp.text, 'html.parser')
        print(soup)
    else:
        print("Error")

ms_jobs()
If you want to do this via requests you need to reverse engineer the site. Open the dev tools in Chrome, select the Network tab and fill out the form.
This will show you how the site loads the data. If you dig into the site you'll see that it grabs the data by doing a POST to this endpoint: https://careers.microsoft.com/widgets. It also shows you the payload that the site uses. The site uses cookies, so all you have to do is create a session that keeps the cookie, get one, and copy/paste the payload.
This way you'll be able to extract the same JSON data that the JavaScript fetches to populate the site dynamically.
Below is a working example of what that would look like. All that's left is to parse out the JSON as you see fit.
import requests
from pprint import pprint

# create a session to grab a cookie from the site
session = requests.Session()
r = session.get("https://careers.microsoft.com/us/en/")

# these params are the ones that the dev tools show the site sets when using the website form
payload = {
    "lang": "en_us",
    "deviceType": "desktop",
    "country": "us",
    "ddoKey": "refineSearch",
    "sortBy": "",
    "subsearch": "",
    "from": 0,
    "jobs": "true",
    "counts": "true",
    "all_fields": ["country", "state", "city", "category", "employmentType", "requisitionRoleType", "educationLevel"],
    "pageName": "search-results",
    "size": 20,
    "keywords": "",
    "global": "true",
    "selected_fields": {"city": ["Hyderabad"], "country": ["India"]},
    "sort": "null",
    "locationData": {}
}

# this is the endpoint the site uses to fetch the json
url = "https://careers.microsoft.com/widgets"
r = session.post(url, json=payload)
data = r.json()

job_list = data['refineSearch']['data']['jobs']
# job_list will hold 20 jobs (you can set the "size" parameter in the payload to a higher number if you please - I tested 100 and that returned 100 jobs)
job = job_list[0]
pprint(job)
Cheers.
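As a follow-up to "parse out the JSON as you see fit", here is a rough sketch of pulling the fields the question asks for out of each job dict. The key names below (title, category, postedDate, applyUrl, descriptionTeaser) are assumptions about what this widget endpoint returns; check the pprint(job) output and swap in whatever keys actually appear.
# key names are hypothetical -- confirm them against pprint(job) before relying on this
for job in job_list:
    print("Job Title:         ", job.get("title"))
    print("Job Category:      ", job.get("category"))
    print("Job Post Date/Time:", job.get("postedDate"))
    print("Job URL:           ", job.get("applyUrl"))
    print("Short Description: ", job.get("descriptionTeaser"))
    print("-" * 40)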

How to make multiple API calls from multiple pages in single URL

So the title is a little confusing, I guess.
I have a script that I've been writing that displays some random data and other non-essentials when I open my shell. I'm using grequests to make my API calls since I'm using more than one URL. For my weather data I use Weather Underground's API, since it offers active alerts. The alerts and conditions data are on separate pages. What I can't figure out is how to insert the appropriate page name into the grequests object when it is making requests. Here is the code that I have:
URLS = ['http://api.wunderground.com/api/'+api_id+'/conditions/q/autoip.json',
        'http://www.ourmanna.com/verses/api/get/?format=json',
        'http://quotes.rest/qod.json',
        'http://httpbin.org/ip']

requests = (grequests.get(url) for url in URLS)
responses = grequests.map(requests)
data = [response.json() for response in responses]
#json parsing from here
For the URL 'http://api.wunderground.com/api/'+api_id+'/conditions/q/autoip.json' I need to make API requests to both conditions and alerts to retrieve the data I need. How do I do this without writing out a fourth URL string?
I've tried
pages = ['conditions', 'alerts']
URL = ['http://api.wunderground.com/api/'+api_id+([p for p in pages])/q/autoip.json']
but, as I'm sure some of you more seasoned programmers know, that threw an exception. So how can I iterate through these pages, or will I have to write out both complete URLs?
Thanks!
OK, I was actually able to figure out how to call each individual page within the grequests object by using a simple for loop. Here is the code that I used to produce the expected results:
import grequests

pages = ['conditions', 'alerts']
api_id = 'myapikeyhere'

for p in pages:
    URLS = ['http://api.wunderground.com/api/'+api_id+'/'+p+'/q/autoip.json',
            'http://www.ourmanna.com/verses/api/get/?format=json',
            'http://quotes.rest/qod.json',
            'http://httpbin.org/ip']
    #create grequests object and retrieve results
    requests = (grequests.get(url) for url in URLS)
    responses = grequests.map(requests)
    data = [response.json() for response in responses]
    #json parsing from here
I'm still not sure why I couldn't figure this out before.
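If you'd rather fire everything in a single grequests.map call instead of looping (the loop above also re-requests the three unrelated URLs once per page), a variant along these lines should work; it builds one Weather Underground URL per page and keeps the other endpoints in the same list:
import grequests

pages = ['conditions', 'alerts']
api_id = 'myapikeyhere'

# one Weather Underground URL per page, plus the other endpoints once each
URLS = ['http://api.wunderground.com/api/' + api_id + '/' + p + '/q/autoip.json' for p in pages]
URLS += ['http://www.ourmanna.com/verses/api/get/?format=json',
         'http://quotes.rest/qod.json',
         'http://httpbin.org/ip']

requests = (grequests.get(url) for url in URLS)
responses = grequests.map(requests)
data = [response.json() for response in responses]
# data[0] is conditions, data[1] is alerts, the rest follow in URL order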
Documentation for the grequests library here

Python Requests Refresh

I'm trying to use Python's requests library to log in to a website. It's a pretty simple piece of code, and you can really get the gist of requests just by going to its website. However, I want to check whether I'm successfully logged in via the URL. The problem I've encountered is that when I make the POST request and assign it to the variable p, print(p.url) still shows the same URL whether the HTML has changed or not. Is there any way for me to refresh the browser or update the URL to whatever it's currently set at?
(I can add a line for checking the URL against itself later; for now I just want to get the correct URL.)
#!/usr/bin/env python3
import requests

payload = {'login': 'USERNAME',
           'password': 'PASSWORD'}

with requests.Session() as s:
    p = s.post('WEBSITE', data=payload)
    #print(p.text)
    print(p.url)
The usage of python-requests may not be as complex as you think. It will automatically handle the redirects of your post (or session.get()).
Here, the session.post() method returns a response object:
r = s.post('website', data=payload)
which means r.url is the current URL you are looking for.
If you still want to refresh the current page, just use:
s.get(r.url)
To verify whether you have logged in successfully, one solution is to first do the login in your browser.
Based on the title or content of the web page returned (i.e., using the content in r.text), you can judge whether you have made it.
BTW, python-requests is a great library; enjoy it.
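Putting those two points together, here is a minimal sketch of checking the login with a session. WEBSITE, USERNAME and PASSWORD are the placeholders from the question, and the 'Log out' marker is a hypothetical example; use whatever text or URL you only see in your browser after a successful login.
import requests

payload = {'login': 'USERNAME', 'password': 'PASSWORD'}

with requests.Session() as s:
    p = s.post('WEBSITE', data=payload)
    # p.url is the final URL after any redirects the login triggered
    print(p.url)
    # 'Log out' is a hypothetical marker -- pick something that only appears when logged in
    if 'Log out' in p.text:
        print('Logged in successfully')
    else:
        print('Login appears to have failed')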

Python Requests: Use * as wildcard part of URL

Let's say I want to get a zip file (and then extract it) from a specific URL.
I want to be able to use a wildcard character in the URL like this:
https://www.urlgoeshere.domain/*-one.zip
instead of this:
https://www.urlgoeshere.domain/two-one.zip
Here's an example of the code I'm using (URL is contrived):
import requests, zipfile, io
year='2016'
monthnum='01'
month='Jan'
zip_file_url='https://www.urlgoeshere.domain/two-one.zip'
r = requests.get(zip_file_url, stream=True)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall()
Thanks in advance!
HTTP does not work that way. You must use the exact URL in order to request a page from the server.
I'm not sure if this helps you, but Flask has a feature that works similarly to what you require. Here's a working example:
@app.route('/categories/<int:category_id>')
def categoryDisplay(category_id):
    ''' Display a category's items
    '''
    # Get the category and its items via queries on the session
    category = session.query(Category).filter_by(id=category_id).one()
    items = session.query(Item).filter_by(category_id=category.id)
    # Display items using Flask HTML templates
    return render_template('category.html', category=category, items=items,
                           editItem=editItem, deleteItem=deleteItem, logged_in=check_logged_in())
The route decorator tells the web server to call that method when a URL like /categories/(1/2/3/4/232...) is accessed. I'm not sure, but I think you could do the same with the name of your zip as a string. See here (project.py) for more details.
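A rough sketch of what that could look like for the zip name in this question; the app setup and route pattern here are assumptions for illustration, not taken from the linked project:
from flask import Flask

app = Flask(__name__)

# matches e.g. /two-one.zip or /anything-one.zip -- the "-one.zip" suffix mirrors the question's URL pattern
@app.route('/<string:prefix>-one.zip')
def serve_zip(prefix):
    # prefix holds whatever stood in place of the wildcard, e.g. "two"
    return 'You asked for %s-one.zip' % prefix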
