Unable to use the site search function - python-3.x

I am trying to use the built-in search function from the site but I keep getting results from the main page. Not sure what I am doing wrong.
import requests
from bs4 import BeautifulSoup
body = {'input':'ferris'}  # <-- also have tried 'query'
con = requests.post('http://www.collegedata.com/', data=body)
soup = BeautifulSoup(con.content, 'html.parser')
products = soup.findAll('div', {'class': 'schoolCityCol'})
print(soup)
print (products)

You have two issues in your code:
The POST URL is incorrect. It should point at the search endpoint, not the main page:
con = requests.post('http://www.collegedata.com/cs/search/college/college_search_tmpl.jhtml', data=body)
Your POST data is incorrect too:
body = {'method':'submit', 'collegeName':'ferris', 'searchType':'1'}
You can use the Developer Tools in any browser (preferably Chrome) to check the POST URL and data on the Network tab.
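Putting both fixes together, a quick way to double-check what will actually be sent, without hammering the live site, is requests' prepared-request machinery (the endpoint and field names are the ones from the answer above):

```python
import requests

# Corrected endpoint and form fields, as found on the Network tab.
url = 'http://www.collegedata.com/cs/search/college/college_search_tmpl.jhtml'
body = {'method': 'submit', 'collegeName': 'ferris', 'searchType': '1'}

# Build the request without sending it, so you can inspect exactly
# what would be posted before hitting the live site.
prepared = requests.Request('POST', url, data=body).prepare()
print(prepared.url)
print(prepared.body)  # the form-encoded fields
```

Once the printed URL and body match what the browser's Network tab shows, sending it with requests.post (or a Session) should return the real search results.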


Extract a particular link present in each of the considered web pages

I'm having trouble extracting a particular link from each of the web pages I'm considering.
In particular, considering for example the following websites:
https://lefooding.com/en/restaurants/ezkia
https://lefooding.com/en/restaurants/tekes
I would like to know if there is a unique way to extract the field WEBSITE (above the map) shown in the table on the left of the page.
For the reported cases, I would like to extract the links:
https://www.ezkia-restaurant.fr/
https://www.tekesrestaurant.com/
There are no unique tags to refer to and this makes extraction difficult.
I've tried a solution using a CSS selector, but it doesn't seem to work. For the first link I have:
from bs4 import BeautifulSoup
import requests
url = "https://lefooding.com/en/restaurants/ezkia"
res = requests.get(url)
soup = BeautifulSoup(res.text, 'html.parser')
data = soup.find("div", {"class": "e-rowContent"})
print(data)
but there is no trace of the link I need here. Does anyone know of a possible solution?
Try this:
import requests
from bs4 import BeautifulSoup
urls = [
"https://lefooding.com/en/restaurants/ezkia",
"https://lefooding.com/en/restaurants/tekes",
]
with requests.Session() as s:
    for url in urls:
        soup = [
            link.strip() for link
            in BeautifulSoup(
                s.get(url).text, "lxml"
            ).select(".pageGuide__infos a")[-1]
        ]
        print(soup)
Output:
['https://www.ezkia-restaurant.fr']
['https://www.tekesrestaurant.com/']
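If you want the href attribute rather than the anchor's text (the two can differ, as the trailing slash in the output above shows), the same selector works; a self-contained sketch, where the .pageGuide__infos class comes from the answer above but the markup itself is invented for illustration:

```python
from bs4 import BeautifulSoup

# Minimal stand-in for the relevant part of a restaurant page.
html = """
<div class="pageGuide__infos">
  <a href="tel:+33123456789">+33 1 23 45 67 89</a>
  <a href="https://www.ezkia-restaurant.fr/">https://www.ezkia-restaurant.fr</a>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
# The website link is the last anchor inside the info block; reading its
# href attribute avoids any whitespace issues in the link text.
website = soup.select(".pageGuide__infos a")[-1]["href"]
print(website)
```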

Request.get not rendering all 'hrefs' in HTML Python

I am trying to fetch the "Contact Us" page of multiple websites. It works for some of the websites, but for some, the text returned by requests.get does not contain all the 'href' links. When I inspect the page in the browser, the links are visible, but they do not come through in requests.
I tried to look for a solution, but with no luck. Below is the code and the web page I am trying to scrape, https://portcullis.co/ :
import requests
from bs4 import BeautifulSoup

headers = {"Accept-Language": "en-US, en;q=0.5"}

def page_contact(url):
    r = requests.get(url, headers=headers)
    txt = BeautifulSoup(r.text, 'html.parser')
    links = []
    for link in txt.findAll('a'):
        links.append(link.get('href'))
    return r, links
The output generated is:
<Response [200]> []
Since it is working fine for some other websites, I would prefer to fix it in a way that doesn't just cater to this website, but works for all websites.
Any help is highly appreciated !!
Thanks !!!
This is another way to solve this, using only Selenium and not BeautifulSoup:
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

browser = webdriver.Chrome(r'path to your chrome exe')
browser.get(url)
browser.set_page_load_timeout(100)
time.sleep(3)
WebDriverWait(browser, 20).until(lambda d: d.find_element_by_tag_name("a"))
time.sleep(20)
elements = browser.find_elements_by_xpath("//a[contains(translate(., 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'contact')]")
final_link = []
for el in elements:
    final_link.append(el.get_attribute("href"))
This would fetch you the page source, and you can find the relevant links by passing it to BeautifulSoup:
from selenium import webdriver
from bs4 import BeautifulSoup
import time

browser = webdriver.Chrome(r'path to your chrome exe')
browser.get('Your url')
time.sleep(5)
htmlSource = browser.page_source
txt = BeautifulSoup(htmlSource, 'html.parser')
browser.close()
links = []
for link in txt.findAll('a'):
    links.append(link.get('href'))
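Whichever way the links list is produced, filtering it down to the contact page afterwards is a plain string check; a minimal sketch with invented hrefs (the None entry mimics anchors that have no href attribute):

```python
# Sample hrefs, made up for illustration; link.get('href') returns None
# for anchors without an href, so that case has to be guarded against.
links = ["/about", "/contact-us", "https://example.com/blog", None]

# Keep only non-empty hrefs that mention "contact", case-insensitively.
contact_links = [l for l in links if l and "contact" in l.lower()]
print(contact_links)
```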

How to fix "businessObject not defined"

I am a newbie to Python and web scraping. To practice, I am just trying to pull some business names from some HTML tags on a website. However, the code is not running and is throwing an 'object is not defined' error.
from bs4 import BeautifulSoup
import requests
url = 'https://marketplace.akc.org/groomers/?location=Michigan&page=1'
response = requests.get(url, timeout = 5)
content = BeautifulSoup(response.content, "html.parser")
for business in content.find_all('div', attrs={"class": "groomer-salon-card__details"}):
    businessObject = {
        "BusinessName": business.find('h4', attrs={"class": "groomer-salon-card__name"}).text.encode('utf-8')
    }
    print(businessObject)
Expected: I am trying to retrieve the business names from this web page.
Result:
NameError: name 'businessObject' is not defined
When you did
content.find_all('div', attrs={"class": "groomer-salon-card__details"})
you actually got an empty list, as there was no match.
So, when you did
for business in content.find_all('div', attrs={"class": "groomer-salon-card__details"}):
the loop body never ran and
businessObject
was never created. As mentioned in the comments, that led to your error.
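The failure mode is easy to reproduce offline with a snippet of markup that deliberately lacks the class being searched for (the markup is made up):

```python
from bs4 import BeautifulSoup

# No div carries the class the code searches for, mirroring what
# requests actually received from the server.
soup = BeautifulSoup("<div class='other'>hi</div>", "html.parser")
matches = soup.find_all('div', attrs={"class": "groomer-salon-card__details"})
print(matches)  # an empty result

# With nothing to iterate over, the loop body never runs, so any name
# assigned inside it is never created.
ran = False
for business in matches:
    ran = True
print(ran)
```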
Content is dynamically loaded from elsewhere in the DOM using javascript (as well as other DOM modifications). You can still regex out the javascript object which contains the content used to update the DOM as you saw it in the browser. You can then parse it with a json parser as follows:
import requests, re, json
url = 'https://marketplace.akc.org/groomers/?location=Michigan&page=1'
response = requests.get(url, timeout = 5)
p = re.compile(r'state: (.*?)\n', re.DOTALL)
data = json.loads(p.findall(response.text)[0])
for listing in data['content']['search_results']['pages']['data']:
    print(listing['organization_name'])
If you view page source on webpage you will see that the DOM is essentially dynamically populated from top to bottom with mutation observers monitoring progress.
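The same regex-then-json trick can be checked offline against a toy page; everything in this snippet other than the state: pattern from the answer above is invented for illustration:

```python
import re
import json

# Simplified stand-in for a page that embeds its data in a javascript
# object; the HTML and the payload are made up.
html = """
<script>
  app.setup({ state: {"content": {"search_results": {"pages": {"data": [{"organization_name": "Happy Paws Grooming"}]}}}}
  });
</script>
"""
p = re.compile(r'state: (.*?)\n', re.DOTALL)
data = json.loads(p.findall(html)[0])
names = [listing['organization_name']
         for listing in data['content']['search_results']['pages']['data']]
print(names)
```

The capture group grabs everything between "state: " and the end of that line, which must therefore be one syntactically complete JSON object; if the site reformats its script, the pattern needs adjusting.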

how to read links from a list with beautifulsoup?

I have a list with lots of links and I want to scrape them with beautifulsoup in Python 3
links is my list and it contains hundreds of urls. I have tried this code to scrape them all, but it's not working for some reason
links= ['http://www.nuforc.org/webreports/ndxe201904.html',
'http://www.nuforc.org/webreports/ndxe201903.html',
'http://www.nuforc.org/webreports/ndxe201902.html',
'http://www.nuforc.org/webreports/ndxe201901.html',
'http://www.nuforc.org/webreports/ndxe201812.html',
'http://www.nuforc.org/webreports/ndxe201811.html',...]
raw = urlopen(i in links).read()
ufos_doc = BeautifulSoup(raw, "html.parser")
raw should be a list containing the data of each web-page. For each entry in raw, parse it and create a soup object. You can store each soup object in a list (I called it soups):
links= ['http://www.nuforc.org/webreports/ndxe201904.html',
'http://www.nuforc.org/webreports/ndxe201903.html',
'http://www.nuforc.org/webreports/ndxe201902.html',
'http://www.nuforc.org/webreports/ndxe201901.html',
'http://www.nuforc.org/webreports/ndxe201812.html',
'http://www.nuforc.org/webreports/ndxe201811.html']
from urllib.request import urlopen

raw = [urlopen(i).read() for i in links]
soups = []
for page in raw:
    soups.append(BeautifulSoup(page, 'html.parser'))
You can then access eg. the soup object for the first link with soups[0].
Also, for fetching the response of each URL, consider using the requests module instead of urllib. See this post.
You need a loop over the list links. If you have a lot of these to do, then, as mentioned in the other answer, consider requests. With requests you can create a Session object, which will allow you to re-use the underlying connection and thereby scrape more efficiently:
import requests
from bs4 import BeautifulSoup as bs
links= ['http://www.nuforc.org/webreports/ndxe201904.html',
'http://www.nuforc.org/webreports/ndxe201903.html',
'http://www.nuforc.org/webreports/ndxe201902.html',
'http://www.nuforc.org/webreports/ndxe201901.html',
'http://www.nuforc.org/webreports/ndxe201812.html',
'http://www.nuforc.org/webreports/ndxe201811.html']
with requests.Session() as s:
    for link in links:
        r = s.get(link)
        soup = bs(r.content, 'lxml')
        # do something

Web Crawler keeps saying no attribute even though it really has

I have been developing a web-crawler for this website (http://www.bobaedream.co.kr/cyber/CyberCar.php?gubun=I&page=1). But I am having trouble crawling the title of each stock item. I am pretty sure that there is an attribute for carinfo_title = carinfo.find_all('a', class_='title').
Please check out the attached code and website code, and then give me any advice.
Thanks.
(Website Code)
https://drive.google.com/open?id=0BxKswko3bYpuRV9seTZZT3REak0
(My code)
from bs4 import BeautifulSoup
import urllib.request
target_url = "http://www.bobaedream.co.kr/cyber/CyberCar.php?gubun=I&page=1"
def fetch_post_list():
    URL = target_url
    res = urllib.request.urlopen(URL)
    html = res.read()
    soup = BeautifulSoup(html, 'html.parser')
    table = soup.find('table', class_='cyber')

    # Car Info and Link
    carinfo = table.find_all('td', class_='carinfo')
    carinfo_title = carinfo.find_all('a', class_='title')
    print(carinfo_title)
    return carinfo_title

fetch_post_list()
You have multiple elements with the carinfo class; table.find_all('td', class_='carinfo') returns a list of them, and a list has no find_all() attribute, which is what raises the error. For every "carinfo" you need to get to the car title individually. Loop over the result of table.find_all('td', class_='carinfo'):
for carinfo in table.find_all('td', class_='carinfo'):
    carinfo_title = carinfo.find('a', class_='title')
    print(carinfo_title.get_text())
Would print:
미니 쿠퍼 S JCW
지프 랭글러 3.8 애니버서리 70주년 에디션
...
벤츠 뉴 SLK200 블루이피션시
포르쉐 뉴 카이엔 4.8 GTS
마쯔다 MPV 2.3
Note that if you need only car titles, you can simplify it down to a single line:
print([elm.get_text() for elm in soup.select('table.cyber td.carinfo a.title')])
where the string inside the .select() method is a CSS selector.
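The selector version can be sanity-checked offline on toy markup that mirrors the site's table/cell/anchor classes (the rows themselves are invented):

```python
from bs4 import BeautifulSoup

# Toy markup using the same cyber/carinfo/title classes as the site.
html = """
<table class="cyber">
  <tr><td class="carinfo"><a class="title" href="#1">Mini Cooper S JCW</a></td></tr>
  <tr><td class="carinfo"><a class="title" href="#2">Jeep Wrangler 3.8</a></td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
# One CSS selector walks table -> cell -> anchor in a single pass.
titles = [elm.get_text() for elm in soup.select('table.cyber td.carinfo a.title')]
print(titles)
```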
