Get geolocation from webpage by scraping using Selenium - python-3.x

I am trying to scrape this page: https://www.coordonnees-gps.fr/
The goal is to collect the latitude and the longitude. However, I see that the HTML content doesn't change after submitting anything in the "Adresse" field, and I don't know whether that is the cause of my empty lists.
My script:
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
url = "https://www.coordonnees-gps.fr/"
chrome_path = r"C:\Users\jbourbon\Desktop\chromedriver_win32 (1)\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.maximize_window()
driver.get(url)
r = requests.get(url)
soup = BeautifulSoup(r.content, "html.parser")
g_data = soup.findAll("div", {"class": "col-md-9"})
latitude = []
longitude = []
for item in g_data:
    latitude.append(item.contents[0].findAll("input", {"id": "latitude"}))
    longitude.append(item.contents[0].findAll("input", {"id": "longitude"}))
print(latitude)
print(longitude)
And here is what I get for my lists: they come back empty.
Thank you :)

There's nothing wrong with what you're doing; the problem is that when you open the link using Selenium or requests, the geolocation is not available instantly. It becomes available after a few seconds (aaf9ec0.js adds it to the HTML dynamically, so requests won't work anyway). Also, input#latitude doesn't seem to be giving the values; you can get them from div#info_window instead.
I've modified the code a bit; it should get you the latitude and longitude. It works for me:
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
import time
import re
url = "https://www.coordonnees-gps.fr/"
driver = webdriver.Chrome()
driver.maximize_window()
driver.get(url)
time.sleep(2)  # wait for the geolocation to become ready
# no need to fetch it again using requests, we've already done it using selenium above
#r = requests.get(url)
soup = BeautifulSoup(driver.page_source, "html.parser")
g_data = soup.findAll("div", {"id": "info_window"})[0]
# extract latitude, longitude
latitude, longitude = re.findall(r'Latitude :</strong> ([\.\d]*) \| <strong>Longitude :</strong> ([\.\d]*)<br/>', str(g_data))[0]
print(latitude)
print(longitude)
Output
25.594095
85.137565
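As an aside, the fixed time.sleep(2) can be flaky on a slow connection. A more robust sketch polls until the coordinates actually appear; this assumes, as above, that div#info_window ends up containing the text "Latitude":
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://www.coordonnees-gps.fr/")
# Poll for up to 10 seconds until the info window contains a latitude;
# WebDriverWait ignores NoSuchElementException while polling.
WebDriverWait(driver, 10).until(
    lambda d: "Latitude" in d.find_element(By.ID, "info_window").text
)
print(driver.find_element(By.ID, "info_window").text)
driver.quit()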

My best guess is that you have to enable GeoLocation for the chromedriver instance. See this answer for a lead on how to do that.
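For what it's worth, a minimal sketch of what enabling it might look like via ChromeOptions; the pref name is Chrome's standard geolocation content setting (1 = allow, 2 = block), but treat this as an assumption rather than a verified fix for this site:
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
# Assumed fix: auto-allow geolocation prompts (1 = allow, 2 = block)
chrome_options.add_experimental_option(
    "prefs", {"profile.default_content_setting_values.geolocation": 1}
)
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get("https://www.coordonnees-gps.fr/")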

Related

How to extract name and links from a given website - python

For the website mentioned below, I am trying to find each name and its corresponding link, but I am not able to get the data at all.
Using BeautifulSoup
from bs4 import BeautifulSoup
import requests
source = requests.get('https://mommypoppins.com/events/115/los-angeles/all/tag/all/age/all/all/deals/0/near/0/0')
soup = BeautifulSoup(source.text, 'html.parser')
mains = soup.find_all("div", {"class": "list-container-wrapper"})
name = []
lnks = []
for main in mains:
    name.append(main.find("a").text)
    lnks.append(main.find("a").get('href'))
Using Selenium webdriver
from selenium import webdriver
driver = webdriver.Chrome(executable_path=r"chromedriver_win32\chromedriver.exe")
driver.get("https://mommypoppins.com/events/115/los-angeles/all/tag/all/age/all/all/deals/0/near/0/0")
lnks = []
name = []
for a in driver.find_elements_by_class_name('ng-star-inserted'):
    link = a.get_attribute('href')
    lnks.append(link)
    nm = driver.find_element_by_css_selector("#list-item-0 > div > h2 > a").text
    name.append(nm)
I have tried both of the above methods.
Expected output, for example:
name = ['Friday Night Flicks Drive-In at the Roadium', 'Open: Butterfly Pavilion and Nature Gardens']
lnks = ['https://mommypoppins.com/los-angeles-kids/event/in-person/friday-night-flicks-drive-in-at-the-roadium','https://mommypoppins.com/los-angeles-kids/event/in-person/open-butterfly-pavilion-and-nature-gardens']
Here's a solution using the webdriver:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('https://mommypoppins.com/events/115/los-angeles/all/tag/all/age/all/all/deals/0/near/0/0')
time.sleep(3)
elements = driver.find_elements(By.XPATH, "//a[@angularticsaction='expanded-detail']")
attributes = [{el.text: el.get_attribute('href')} for el in elements]
print(attributes)
print(len(attributes))
driver.quit()
Here's a solution with webdriver and bs4:
import time
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
driver.get('https://mommypoppins.com/events/115/los-angeles/all/tag/all/age/all/all/deals/0/near/0/0')
time.sleep(3)
soup = BeautifulSoup(driver.page_source, 'html.parser')
mains = soup.find_all("a", {"angularticsaction": "expanded-detail"})
attributes = [{el.text: el.get('href')} for el in mains]
print(attributes)
print(len(attributes))
driver.quit()
Here's a solution with requests:
import requests
url = "https://mommypoppins.com"
response = requests.get(f"{url}/contentasjson/custom_data/events_ng-block_1x/0/115/all/all/all/all/all").json()
attributes = [{r.get('node_title'): f"{url}{r['node'][r['nid']]['node_url']}"} for r in response['results']]
print(attributes)
print(len(attributes))
cheers!
The website is loaded dynamically, so requests alone won't capture the rendered content. However, the data is available in JSON format via a GET request to:
https://mommypoppins.com/contentasjson/custom_data/events_ng-block_1x/0/115/all/all/all/all/all
There's no need for BeautifulSoup or Selenium; requests alone will do, which makes your code much faster.
import requests
URL = "https://mommypoppins.com/contentasjson/custom_data/events_ng-block_1x/0/115/all/all/all/all/all"
BASE_URL = "https://mommypoppins.com"
response = requests.get(URL).json()
names = []
links = []
for json_data in response["results"]:
    data = json_data["node"][json_data["nid"]]
    names.append(data["title"])
    links.append(BASE_URL + data["node_url"])
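If you then want the two lists paired up, a small usage example:
for name, link in zip(names, links):
    print(name, link)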

How to scrape links from faceit

I am trying to scrape a demo link from a FACEIT room. This is what I've tried, but it doesn't work.
Any help is much appreciated!
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.faceit.com/en/csgo/room/1-8d6729b5-cfeb-4059-8894-3b07e04e76b2')
soup = BeautifulSoup(r.content, 'html.parser')
extracted_link = soup.find_all('href', class_='list-unstyled')
print(extracted_link)
Example link: https://www.faceit.com/en/csgo/room/1-8d6729b5-cfeb-4059-8894-3b07e04e76b2
Example link, extracted: https://demos-europe-west2.faceit-cdn.net/csgo/f9eadb47-aea5-4672-9499-4f457c7d28bd.dem.gz
Example screenshot: https://paste.pics/AQBQY
All of the content of the page is loaded dynamically, which means BeautifulSoup won't see it in the HTML that requests fetches. So you might be better off using Selenium with a webdriver in headless mode.
For example:
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)
url = "https://www.faceit.com/en/csgo/room/1-8d6729b5-cfeb-4059-8894-3b07e04e76b2"
driver.get(url)
time.sleep(2)
element = driver.find_element_by_css_selector('.match-vs .btn-default')
print(element.get_attribute("href"))
Output:
https://demos-europe-west2.faceit-cdn.net/csgo/f9eadb47-aea5-4672-9499-4f457c7d28bd.dem.gz
You can also do it using only requests:
import requests as r
room_id = '1-8d6729b5-cfeb-4059-8894-3b07e04e76b2'
link = 'https://api.faceit.com/match/v2/match/'+room_id
res = r.get(link)
data = res.json()
extracted_links = data['payload']['demoURLs']
print(extracted_links)
The code queries their API to get all the data at once as JSON, then extracts just the needed links.
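A slightly more defensive sketch of the same idea; raise_for_status guards against a non-200 response, and the loop assumes demoURLs is a list, as the plural name suggests:
import requests

room_id = '1-8d6729b5-cfeb-4059-8894-3b07e04e76b2'
res = requests.get('https://api.faceit.com/match/v2/match/' + room_id)
res.raise_for_status()  # fail early if the API rejects the request
# Assumption: demoURLs is a list of demo download links
for demo_url in res.json()['payload']['demoURLs']:
    print(demo_url)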

I am trying to extract text inside span_id, but getting blank output using python beautifulsoup

I am trying to extract the text inside a span tag by its id, but I am getting a blank output. I have also tried using the parent div element's text, but failed to extract it. Can anyone please help me?
Below is my code.
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.paperplatemakingmachines.com/')
soup = BeautifulSoup(r.text,'lxml')
mob = soup.find('span',{"id":"tollfree"})
print(mob.text)
I want the text inside that span, which is the mobile number.
You'll have to use Selenium, as that text is not present in the initial request, or at least not without searching through <script> tags.
from bs4 import BeautifulSoup
from selenium import webdriver
import time
driver = webdriver.Chrome(r'C:\chromedriver_win32\chromedriver.exe')
url = 'https://www.paperplatemakingmachines.com/'
driver.get(url)
# It's better to use Selenium's WebDriverWait, but I'm still learning how to use that correctly
time.sleep(5)
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.close()
mob = soup.find('span',{"id":"tollfree"})
print(mob.text)
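For reference, the WebDriverWait version that the comment alludes to might look something like this (a sketch assuming the span is attached to the DOM once the page's script has run):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome(r'C:\chromedriver_win32\chromedriver.exe')
driver.get('https://www.paperplatemakingmachines.com/')
# Wait up to 10 seconds for the span to be present instead of sleeping blindly
span = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'tollfree'))
)
print(span.text)
driver.quit()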
The data is actually rendered dynamically through a script. What you need to do is parse the data out of that script:
import requests
import re
from bs4 import BeautifulSoup
r = requests.get('https://www.paperplatemakingmachines.com/')
soup = BeautifulSoup(r.text,'lxml')
script= soup.find('script')
mob = re.search("(?<=pns_no = \")(.*)(?=\";)", script.text).group()
print(mob)
Another way of using a regex to find the number:
import requests
import re
from bs4 import BeautifulSoup as bs
r = requests.get('https://www.paperplatemakingmachines.com/')
soup = bs(r.content, 'lxml')
r = re.compile(r'var pns_no = "(\d+)"')
data = soup.find('script', text=r).text
script = r.findall(data)[0]
print('+91-' + script)

Scrape image's metadata from Facebook public posts

This is a follow-up question in my quest to get some data from Facebook public posts. I'm trying to collect image metadata this time (the image's URL). Link posts work fine, but some posts return empty data. I used the same approach suggested in the answers to my previous question, but it doesn't work for the example below. I will appreciate suggestions!
import requests
from bs4 import BeautifulSoup

link = "https://www.facebook.com/228735667216/posts/10151653129902217"
res = requests.get(link,headers={'User-Agent':'Mozilla/5.0'})
comment = res.text.replace("-->", "").replace("<!--", "")
soup = BeautifulSoup(comment, "lxml")
image = soup.find("div", class_="uiScaledImageContainer _517g")
img = image.find("img", class_="scaledImageFitWidth img")
href = img["src"]
print(href)
Logging in using requests is not that easy, so I intentionally skipped that library. You can try using Selenium alone, or Selenium in combination with BeautifulSoup, to do the job.
from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.common.keys import Keys
url = "https://www.facebook.com/228735667216/posts/10156284868312217"
chrome_options = webdriver.ChromeOptions()
# This is how you can make the browser headless
chrome_options.add_argument("--headless")
# The following line suppresses the notification popup right after login
prefs = {"profile.default_content_setting_values.notifications": 2}
chrome_options.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get(url)
driver.find_element_by_id("email").send_keys("your_username")
driver.find_element_by_id("pass").send_keys("your_password",Keys.RETURN)
driver.get(url)
soup = BeautifulSoup(driver.page_source, "lxml")
for img in soup.find_all(class_="scaledImageFitWidth"):
    print(img.get("src"))
driver.quit()
The output looks like this (partial):
https://external.fdac17-1.fna.fbcdn.net/safe_image.php?d=AQBjBuP0TBYabtnO&w=540&h=282&url=https%3A%2F%2Fs3.amazonaws.com%2Fprod-cust-photo-posts-jfaikqealaka%2F3065-6e4c325b07b921fdefed4dd727881f8d.jpg&cfs=1&upscale=1&fallback=news_d_placeholder_publisher&_nc_hash=AQCVKXMSqvNiHZik
https://external.fdac17-1.fna.fbcdn.net/safe_image.php?d=AQCJ6RFOF4dY2xTn&w=100&h=100&url=https%3A%2F%2Fcdn.images.express.co.uk%2Fimg%2Fdynamic%2F106%2F750x445%2F1046936.jpg&cfs=1&upscale=1&fallback=news_d_placeholder_publisher_square&_nc_hash=AQAyFxRaZTGV47Se

Not able to use BeautifulSoup to get span content of Nasdaq100 future

from bs4 import BeautifulSoup
import re
import requests
url = 'www.barchart.com/futures/quotes/NQU18'
r = requests.get("https://" + url)
data = r.text
soup = BeautifulSoup(data)
price = soup.find('span', {'class': 'last-change', 'data-ng-class': "highlightValue('priceChange')"}).text
print(price)
Result:
[[ item.priceChange ]]
That is not the span content; the result should be the price. Where am I going wrong?
A screenshot of the page's span tag was attached (not reproduced here). From the second screenshot: how can I get the time?
Use price = soup.find('span', {'class': 'up'}).text instead to get the +X.XX value:
from bs4 import BeautifulSoup
import requests
url = 'www.barchart.com/futures/quotes/NQU18'
r = requests.get("https://" + url)
data = r.text
soup = BeautifulSoup(data, "lxml")
price = soup.find('span', {'class': 'up'}).text
print(price)
Output currently is:
+74.75
The tradeTime you seek doesn't seem to be present in the page_source, since it's dynamically generated through JavaScript. You can, however, find it elsewhere if you're a little clever, and use the json library to parse the JSON data out of a certain script element:
import json
# "soup" here is the BeautifulSoup object created in the previous snippet
trade_time = soup.find('script', {"id": 'barchart-www-inline-data'}).text
json_data = json.loads(trade_time)
print(json_data["NQU18"]["quote"]["tradeTime"])
This outputs:
2018-06-14T18:14:05
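If you need that as an actual timestamp rather than a string, Python's datetime can parse the ISO format directly (fromisoformat requires Python 3.7+):
from datetime import datetime

trade_time = datetime.fromisoformat("2018-06-14T18:14:05")
print(trade_time.time())  # 18:14:05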
If these don't solve your problem, then you will have to resort to something like Selenium, which can run JavaScript, to get what you're looking for:
from selenium import webdriver
driver = webdriver.Chrome()
url = "https://www.barchart.com/futures/quotes/NQU18"
driver.get(url)
result = driver.find_element_by_xpath('//*[@id="main-content-column"]/div/div[1]/div[2]/span[2]/span[1]')
print(result.text)
Currently the output is:
-13.00
