This is my first question, so it may be quite basic.
I've managed to identify and select the element, but I cannot extract specific values like "IDinmobiliarias" from it.
data = soup.select('#PropJSON')
print(data)
When I do this, I get this output:
[<input id="PropJSON" type="hidden" value='{"id":"186226916","IDinmobiliarias":"108","IDoperaciones":"1","tipoPropiedad":"2","IDdepartamentos":"10","IDzonas":"13","IDpais":"1","refered":1,"particular":"0","temporario":0,"proyecto":0,"destaque":1,"IDmoneda":"1","monto":"1595000","precio_en_usd":1595000,"monedaISO":"USD"}'/>]
How can I extract the "108" for example?
I've tried different things without success.
select returns a list. You can iterate over that list and read the value attribute by accessing each tag like a dictionary. Once you have that string, parse it with json; then you can pick out any element you like.
from bs4 import BeautifulSoup
import json
html = """<input id="PropJSON" type="hidden" value='{"id":"186226916","IDinmobiliarias":"108","IDoperaciones":"1","tipoPropiedad":"2","IDdepartamentos":"10","IDzonas":"13","IDpais":"1","refered":1,"particular":"0","temporario":0,"proyecto":0,"destaque":1,"IDmoneda":"1","monto":"1595000","precio_en_usd":1595000,"monedaISO":"USD"}'/>"""
soup = BeautifulSoup(html, features="lxml")
data = soup.select('#PropJSON')
for input_tag in data:
    # The value attribute holds a JSON string; parse it and pick the key you need
    json_string = json.loads(input_tag['value'])
    print(json_string['IDinmobiliarias'])
OUTPUT
108
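As a side note, if you only expect a single matching element, select_one returns the tag directly (or None), so the loop is not strictly needed. A minimal sketch, reusing the soup from above:
tag = soup.select_one('#PropJSON')
if tag is not None:
    prop = json.loads(tag['value'])
    print(prop['IDinmobiliarias'])  # -> 108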
I am using BeautifulSoup to parse a webpage. Now I would like to read the Index value 31811.75 from the span:
<span>Underlying Index: <b style="font-size:1.2em;">BANKNIFTY 31811.75</b> </span>
Unfortunately the span lacks any other identifiers, such as a class. I followed the solutions mentioned in a similar question, but I don't seem to get the whole text:
>>> print(soup.body(text=re.compile('Underlying')))
['Underlying Index: ']
I would like to use the keyword Underlying to extract the text present in the span. How can I do this?
I created a synthetic HTML document that also contains a span we don't want to find, then extract the decimal from the matched text using re.findall().
from bs4 import BeautifulSoup
import re
html = """
<html><body>
<span>unwanted</span>
<span>Underlying Index: <b style="font-size:1.2em;">BANKNIFTY 31811.75</b> </span>
</body></html>
"""
soup = BeautifulSoup(html, features="lxml")
index = re.findall(r"\d+\.\d+", soup.find(lambda tag: tag.name == "span" and "Underlying" in tag.text).text)
index[0] if len(index)==1 else None # re.findall() returns a list, take first located decimal. Could default to 0.0 instead of None
output
'31811.75'
I am trying to print out different things from a Norwegian weather site with beautifulsoup.
I manage to print out everything I want except one thing, which mentions how the weather will be for the next hour.
This contains the text I want to get:
<span class="nowcast-description" data-reactid="59">har opphold nå, det holder seg tørt den neste timen</span>
And I am trying to print it with this:
cond = soup.find(class_='nowcast-description').get_text()
Inspected elements from storm.no/ski
Here is a picture of some of the elements on the site.
When printing these:
soup = bs4.BeautifulSoup(html, "html.parser")
loc = soup.find(class_='info-text').get_text()
cond = soup.find(class_='nowcast-description').get_text()
temp = soup.find(class_='temperature').get_text()
wind = soup.find(class_='indicator wind').get_text()
also tested with this line:
cond = soup.select("span.nowcast-description")
but that gives me everything except what I want from the line.
Site link: https://www.storm.no/ski
I get:
Ski Akershus, 131 moh.
""
2°
3 m/s
It is retrieved dynamically from a script tag. You can regex out the object containing all the forecasts and handle it with the hjson library, since the keys are unquoted. You need to install hjson, then do the following:
import requests, hjson, re
headers = {'User-Agent':'Mozilla/5.0'}
r = requests.get('https://www.storm.no/ski', headers=headers)
p = re.compile(r'window\.__dehydratedState = (.*?);', re.DOTALL)
data = hjson.loads(p.findall(r.text)[0])
print(data['app-container']['current']['forecast']['nowcastDescription'])
You could regex the value out directly with re as well, but using hjson means you have access to all the other data.
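For example, a minimal sketch of that direct approach (it assumes the field appears in the script as nowcastDescription followed by a quoted string; the exact quoting on the page may differ):
import requests, re

r = requests.get('https://www.storm.no/ski', headers={'User-Agent': 'Mozilla/5.0'})
# Pull the field value straight out of the page source
m = re.search(r'nowcastDescription\s*:\s*[\'"]([^\'"]*)[\'"]', r.text)
if m:
    print(m.group(1))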
It's because the text under nowcast-description is generated dynamically. If you dump the loaded page:
print(soup.prettify())
you will only find this:
<span class="nowcast-description" data-reactid="59">
</span>
On rough analysis, it seems that the content of this span is loaded from the field nowcastDescription, which is part of window.__dehydratedState.
Because the field is simple JSON, you can try to extract the value from it.
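A minimal sketch of that idea, locating the script tag with BeautifulSoup and parsing the object with hjson (the field names are taken from the analysis above and are assumptions about the page):
import re
import requests
import hjson
from bs4 import BeautifulSoup

r = requests.get('https://www.storm.no/ski')
soup = BeautifulSoup(r.text, 'html.parser')
# Find the script tag that carries window.__dehydratedState
script = soup.find('script', string=re.compile('__dehydratedState'))
if script is not None:
    m = re.search(r'window\.__dehydratedState = (.*?);', script.string, re.DOTALL)
    state = hjson.loads(m.group(1))
    print(state['app-container']['current']['forecast']['nowcastDescription'])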
I use this code:
import urllib.request
fp = urllib.request.urlopen("https://english-thai-dictionary.com/dictionary/?sa=all")
mybytes = fp.read()
mystr = mybytes.decode("utf8")
fp.close()
print(mystr)
x = 'alt'
for item in mystr.split():
    if x in item:
        print(item.strip())
I get the Thai words from this code, but I don't know how to get the English words. Thanks.
If you want to get words from the table you should use a parsing library like BeautifulSoup4. Here is an example of how you can parse this (I'm using requests to fetch the page and BeautifulSoup to parse the data):
First, using the dev tools in your browser, identify the table with the content you want to parse. The table with translations has the servicesT class attribute, which occurs only once in the whole document:
import requests
from bs4 import BeautifulSoup
url = 'https://english-thai-dictionary.com/dictionary/?sa=all;ftlang=then'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'lxml')
# Get table with translations
table = soup.find('table', {'class':'servicesT'})
After that you need to get all the rows that contain translations for Thai words. If you look at the page's source you will notice that the first few <tr> rows contain only headers, so we will omit them. Then we will get all <td> elements from each row (in this table there are always 3 <td> elements) and fetch the words from them (the words are actually nested inside <span> and <a> tags).
table_rows = table.findAll('tr')
# We will skip the first 3 rows because those do not
# contain the information we need
for tr in table_rows[3:]:
    # Finding all <td> elements
    row_columns = tr.findAll('td')
    if len(row_columns) >= 2:
        # Get tag with Thai word
        thai_word_tag = row_columns[0].select_one('span > a')
        # Get tag with English word
        english_word_tag = row_columns[1].find('span')
        if thai_word_tag:
            thai_word = thai_word_tag.text
        if english_word_tag:
            english_word = english_word_tag.text
        # Printing our fetched words
        print((thai_word, english_word))
Of course, this is a very basic example of what I managed to parse from the page, and you should decide for yourself what you want to scrape. I've also noticed that the data inside the table does not always have translations, so keep that in mind when scraping. You can also use the Requests-HTML library to parse the data (it supports pagination, which is present in the table on the page you want to scrape).
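If you want to try that Requests-HTML route, here is a rough sketch (untested; it assumes the same servicesT table and row layout as above):
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://english-thai-dictionary.com/dictionary/?sa=all;ftlang=then')
# Grab the translations table by its class and walk its rows
table = r.html.find('table.servicesT', first=True)
for row in table.find('tr')[3:]:
    cells = row.find('td')
    if len(cells) >= 2:
        print((cells[0].text, cells[1].text))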
I've written a script to parse data from the first table of a website. I've used XPath to parse the table. Btw, I didn't use the "tr" tag because without it I could still see the results in the console when printed. When I run my script, the data gets scraped but is printed on a single line in the csv file. I can't find the mistake I'm making. Any input on this will be highly appreciated. Here is what I've tried:
import csv
import requests
from lxml import html
url="https://fantasy.premierleague.com/player-list/"
response = requests.get(url).text
outfile=open('Data_tab.csv','w', newline='')
writer=csv.writer(outfile)
writer.writerow(["Player","Team","Points","Cost"])
tree = html.fromstring(response)
for titles in tree.xpath("//table[@class='ism-table']")[0]:
    # tab_r = titles.xpath('.//tr/text()')
    tab_d = titles.xpath('.//td/text()')
    writer.writerow(tab_d)
You might want to add a level of looping, examining each table row in turn.
Try this:
for titles in tree.xpath("//table[@class='ism-table']")[0]:
    for row in titles.xpath('./tr'):
        tab_d = row.xpath('./td/text()')
        writer.writerow(tab_d)
Or, perhaps this:
table = tree.xpath("//table[@class='ism-table']")[0]
for row in table.xpath('.//tr'):
    items = row.xpath('./td/text()')
    writer.writerow(items)
Or you could have the first XPath expression find the rows for you:
rows = tree.xpath("(.//table[@class='ism-table'])[1]//tr")
for row in rows:
    items = row.xpath('./td/text()')
    writer.writerow(items)
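Whichever variant you pick, also note that the file opened with open() is never closed in the original script; a with block takes care of that. A minimal sketch of the whole flow under the same assumptions about the table markup:
import csv
import requests
from lxml import html

url = "https://fantasy.premierleague.com/player-list/"
tree = html.fromstring(requests.get(url).text)

with open('Data_tab.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["Player", "Team", "Points", "Cost"])
    for row in tree.xpath("(//table[@class='ism-table'])[1]//tr"):
        items = row.xpath('./td/text()')
        if items:  # skip header rows, which contain only <th> cells
            writer.writerow(items)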
I parse a webpage using beautifulsoup:
import requests
from bs4 import BeautifulSoup
page = requests.get("webpage url")
soup = BeautifulSoup(page.content, 'html.parser')
I find the table and print the text
Ear_yield= soup.find(text="Earnings Yield").parent
print(Ear_yield.parent.text)
And then I get the output of a single row in a table
Earnings Yield
0.01
-0.59
-0.33
-1.23
-0.11
I would like this output to be stored in a list so that I can write it to xls and operate on the elements (for example, if Earnings_Yield[0] > Earnings_Yield[1]).
So I write:
import html2text
text1 = Ear_yield.parent.text
Ear_yield_text = html2text.html2text(text1)
list_Ear_yield = []
for i in Ear_yield_text:
    list_Ear_yield.append(i)
Thinking that my web data has gone into the list, I print the fourth item to check:
print(list_Ear_yield[3])
I expect the output to be -0.33, but I get
n
That means the list takes in individual characters and not the full words.
Please let me know where I am going wrong.
That is because your Ear_yield_text is a string rather than a list. Assuming the text contains newlines, you can do this directly:
list_Ear_yield = Ear_yield_text.split('\n')
Now if you print list_Ear_yield you will get this result:
['Earnings Yield', '0.01', '-0.59', '-0.33', '-1.23', '-0.11']
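From there, if you want to compare entries numerically (as in your Earnings Yield[0] > Earnings Yield[1] example), you can convert everything after the header to float. A minimal sketch, assuming the list looks like the one above:
header, *values = list_Ear_yield
numbers = [float(v) for v in values if v.strip()]
print(numbers)                  # [0.01, -0.59, -0.33, -1.23, -0.11]
print(numbers[0] > numbers[1])  # True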