Extracting div attributes based on text - python-3.x

I am using Python 3.6 with bs4 to implement this task.
My div tags look like this:
<div class="Portfolio" portfolio_no="345">VBHIKE324</div>
<div class="Portfolio" portfolio_no="567">SCHF54TYS</div>
I need to extract portfolio_no, i.e. 345. It is a dynamic value that changes across the div tags, whereas the text remains the same.
for data in soup.find_all('div', class_='Portfolio', text='VBHIKE324'):
    print(data)
It outputs None, whereas I'm looking for output like 345.

Here you go
for data in soup.find_all('div', {'class': 'Portfolio'}):
    print(data['portfolio_no'])
If you want the portfolio_no for the one with text VBHIKE324, then you can do something like this:
for data in soup.find_all('div', {'class': 'Portfolio'}):
    if data.text == 'VBHIKE324':
        print(data['portfolio_no'])

Related

Find if text exist inside a nested Div, if yes print out the whole string, Selenium Python

I'm very new to Selenium (3.141.0) and Python 3, and I have a problem I can't figure out.
The HTML looks similar to this:
<div class='a'>
  <div>
    <p><b>ABC</b></p>
    <p><b>ABC#123</b></p>
    <p><b>XYZ</b></p>
  </div>
</div>
I want Selenium to check whether # exists inside that div. (I can't target only the paragraph elements, because sometimes the text I want to extract is inside a different element, but it's always inside that <div class='a'>.) If # exists, print the whole <p><b>ABC#123</b></p> (or sometimes <div>ABC#123</div>).
To find an element with contained text, you must use an XPath. From what you are describing, it looks like you want the locator
//div[@class='a']//*[contains(text(),'#')]
^ a DIV with class 'a'
            ^ that has a descendant element that contains the text '#' within itself or a descendant
The code would look something like:
for e in driver.find_elements(By.XPATH, "//div[@class='a']//*[contains(text(),'#')]"):
    print(e.get_attribute('outerHTML'))
and it will print all instances of <b>ABC#123</b>, <div>ABC#123</div>, or <p>ABC#123</p>, whichever exists
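For completeness, a minimal runnable sketch with the required imports; the driver setup and URL are placeholders, assuming chromedriver is available locally:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
driver.get('https://example.com')  # placeholder URL for the page in question

# all elements under <div class='a'> whose text contains '#'
for e in driver.find_elements(By.XPATH, "//div[@class='a']//*[contains(text(),'#')]"):
    print(e.get_attribute('outerHTML'))

driver.quit()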

Scrapy parse is returning an empty array, regardless of yield

I am brand new to Scrapy, and I could use a hint here. I realize that there are quite a few similar questions, but none of them seem to fix my problem. I have the following code written for a simple web scraper:
import scrapy
from ScriptScraper.items import ScriptItem

class ScriptScraper(scrapy.Spider):
    name = "script_scraper"
    allowed_domains = ["https://proplay.ws"]
    start_urls = ["https://proplay.ws/dramas/"]

    def parse(self, response):
        for column in response.xpath('//div[@class="content-column one_fourth"]'):
            text = column.xpath('//p/b/text()').extract()
            item = ScriptItem()
            item['url'] = "test"
            item['title'] = text
            yield item
I will want to do some more involved scraping later, but right now, I'm just trying to get the scraper to return anything at all. The HTML for the site I'm trying to scrape looks like this:
<div class="content-column one_fourth">
  ::before
  <p>
    <b>
      All dramas
      <br>
      (in alphabetical
      <br>
      order):
    </b>
  </p>
  ...
</div>
and I am running the following command in the Terminal:
scrapy parse --spider=script_scraper -c parse_ITEM -d 2 https://proplay.ws/dramas/
According to my understanding of Scrapy, the code I have written should be yielding the text "All dramas"; however, it is yielding an empty array instead. Can anyone give me a hint as to why this is not producing the expected yield? Again, I apologize for the repetitive question.
Your XPath expressions are not quite right for the data you want to extract. If you want the first column's first-row item, your XPath expression should be:
item = {}
item['text'] = response.xpath('//div[@class="content-column one_fourth"][1]/p[1]/b/text()').extract()[0]
The extract() function returns all the matches for the expression as an array. If you want only the first match, use extract()[0] or extract_first().
Go through https://devhints.io/xpath to learn more about XPath.
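Putting it together, a hedged sketch of how the spider's parse() might look. Note the leading "." in the inner XPath: as written in the question, column.xpath('//p/b/text()') searches the whole document rather than just that column:

def parse(self, response):
    for column in response.xpath('//div[@class="content-column one_fourth"]'):
        item = ScriptItem()
        item['url'] = "test"  # placeholder value, as in the question
        # './/' keeps this query relative to the current column
        item['title'] = column.xpath('.//p/b/text()').extract_first()
        yield item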

How to parse only the second span tag in an HTML document using python bs4

I want to parse only one span tag in my HTML document. There are three sibling span tags without any class or id. I am targeting only the second one using BeautifulSoup 4.
Given the following HTML document:
<div class="adress">
<span>35456 street</span>
<span>city, state</span>
<span>zipcode</span>
</div>
I tried:
for spn in soup.findAll('span'):
    data = spn[1].text
but it didn't work. The expected result is the text of the second span stored in a variable:
data = "city, state"
I'd also like to know how to get both the first and second spans concatenated into one variable.
You are trying to slice an individual span (a Tag instance). Get rid of the for loop and slice the findAll response instead, i.e.
>>> soup.findAll('span')[1]
<span>city, state</span>
You can get the first and second tags together using:
>>> soup.findAll('span')[:2]
[<span>35456 street</span>, <span>city, state</span>]
or, as a string:
>>> "".join([str(tag) for tag in soup.findAll('span')[:2]])
'<span>35456 street</span><span>city, state</span>'
Another option:
data = soup.select_one('div > span:nth-of-type(2)').get_text(strip=True)
print(data)
Output:
city, state
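If what you want in the variable is the visible text rather than the markup, a small self-contained sketch using the question's HTML:

from bs4 import BeautifulSoup

html = '''
<div class="adress">
<span>35456 street</span>
<span>city, state</span>
<span>zipcode</span>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
spans = soup.find_all('span')

data = spans[1].get_text(strip=True)  # 'city, state'
both = ', '.join(s.get_text(strip=True) for s in spans[:2])  # '35456 street, city, state'
print(data)
print(both)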

Querying <div class="name"> in Python

I am trying to follow the guide posted here: https://medium.freecodecamp.org/how-to-scrape-websites-with-python-and-beautifulsoup-5946935d93fe
I am at this point, where I am supposed to get the name of presumably the stock.
Take out the div of name and get its value
name_box = soup.find('h1', attrs={'class': 'name'})
I suspect I will also have trouble when querying the price. Do I have to replace 'price' with 'priceText__1853e8a5' as found in the html?
get the index price
price_box = soup.find('div', attrs={'class': 'price'})
Thanks, this would be a massive help.
If you replace price with priceText__1853e8a5 you will get your result, but I suspect that the class name changes dynamically/is dynamically generated (note the number at the end). So to get your result you need something more robust.
You can target tags in BeautifulSoup with CSS selectors (via the select()/select_one() methods). This example will target all <span> tags whose class attribute begins with priceText (the ^= operator; more info about CSS selectors here).
from bs4 import BeautifulSoup
import requests
r = requests.get('https://www.bloomberg.com/quote/SPX:IND')
soup = BeautifulSoup(r.text, 'lxml')
print(soup.select_one('span[class^="priceText"]').text)
This prints:
2,813.36
You have several options to do that.
Getting the value by an appropriate XPath:
//span[contains(@class, 'priceText__')]
Writing a regex to find the exact element:
price_tag = soup.find_all('span', {'class': re.compile(r'priceText__.*?')})
I am not sure about the regex pattern, as I am bad at it. Edits are welcome.
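A self-contained version of the regex approach; anchoring the pattern with ^ is enough here, since you only need to match the class prefix:

import re
from bs4 import BeautifulSoup

html = '<span class="priceText__1853e8a5">2,813.36</span>'
soup = BeautifulSoup(html, 'html.parser')

# bs4 tests the regex against each class in the tag's class list
for tag in soup.find_all('span', {'class': re.compile(r'^priceText')}):
    print(tag.text)  # prints: 2,813.36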

Groovy XmlParser / XmlSlurper: node.localText() position?

I have a follow-up question for this question: Groovy XmlSlurper get value of the node without children.
It explains that in order to get the local inner text of a (HTML) node without recursively get the nested text of potential inner child nodes as well, one has to use #localText() instead of #text().
For instance, a slightly enhanced example from the original question:
<html>
  <body>
    <div>
      Text I would like to get1.
      <a href='http://example.com'>extra stuff</a>
      Text I would like to get2.
      <a href='http://example.com'>link to example</a>
      Text I would like to get3.
    </div>
    <span>
      <a href='http://example.com'>extra stuff</a>
      Text I would like to get2.
      <a href='http://example.com'>link to example</a>
      Text I would like to get3.
    </span>
  </body>
</html>
with the solution applied:
def tagsoupParser = new org.ccil.cowan.tagsoup.Parser()
def slurper = new XmlSlurper(tagsoupParser)
def htmlParsed = slurper.parseText(stringToParse)
println htmlParsed.body.div[0].localText()
would return:
[Text I would like to get1., Text I would like to get2., Text I would like to get3.]
However, when parsing the <span> part in this example
println htmlParsed.body.span[0].localText()
the output is
[Text I would like to get2., Text I would like to get3.]
The problem I am facing now is that it's apparently not possible to pinpoint the location ("between which child nodes") of the texts. I would have expected the second invocation to yield
[, Text I would like to get2., Text I would like to get3.]
This would have made it clear: Position 0 (before child 0) is empty, position 1 (between child 0 and 1) is "Text I would like to get2.", and position 2 (between child 1 and 2) is "Text I would like to get3." But given the API works as it does, there is apparently no way to determine whether the text returned at index 0 is actually positioned at index 0 or at any other index, and the same is true for all the other indices.
I have tried it with both XmlSlurper and XmlParser, yielding the same results.
If I'm not mistaken, it is consequently also impossible to completely recreate the original HTML document from the parser's information, because this "text index" information is lost.
My question is: Is there any way to find out those text positions? An answer requiring me to change the parser would also be acceptable.
UPDATE / SOLUTION:
For further reference, here's Will P's answer, applied to the original code:
def tagsoupParser = new org.ccil.cowan.tagsoup.Parser()
def slurper = new XmlParser(tagsoupParser)
def htmlParsed = slurper.parseText(stringToParse)
println htmlParsed.body.div[0].children().collect {it in String ? it : null}
This yields:
[Text I would like to get1., null, Text I would like to get2., null, Text I would like to get3.]
One has to use XmlParser instead of XmlSlurper with node.children().
I don't know TagSoup, and I hope it is not interfering with the solution, but with a pure XmlParser you can get a list from children() which contains the raw strings:
html = '''<html>
<body>
<div>
Text I would like to get1.
<a href='http://example.com'>extra stuff</a>
Text I would like to get2.
<a href='http://example.com'>link to example</a>
Text I would like to get3.
</div>
<span>
<a href='http://example.com'>extra stuff</a>
Text I would like to get2.
<a href='http://example.com'>link to example</a>
Text I would like to get3.
</span>
</body>
</html>'''
def root = new XmlParser().parseText(html)
root.body.div[0].children().with {
    assert get(0).trim() == 'Text I would like to get1.'
    assert get(0).getClass() == String
    assert get(1).name() == 'a'
    assert get(1).getClass() == Node
    assert get(2) == '''
Text I would like to get2.
'''
}
