Find if text exists inside a nested div and, if so, print the whole string (Selenium, Python 3)

I'm very new to Selenium (3.141.0) and Python 3, and I have a problem that I couldn't figure out.
The HTML looks similar to this:
<div class='a'>
  <div>
    <p><b>ABC</b></p>
    <p><b>ABC#123</b></p>
    <p><b>XYZ</b></p>
  </div>
</div>
I want Selenium to find whether # exists inside that div. (I can't target only the paragraph element, because sometimes the text I want to extract is inside a different element, but it's always inside that <div class='a'>.) If # exists, print the whole <p><b>ABC#123</b></p> (or sometimes <div>ABC#123</div>).

To find an element by its contained text, you must use an XPath. From what you are describing, it looks like you want the locator
//div[@class='a']//*[contains(text(),'#')]
i.e. a div with class 'a' that has a descendant element whose own text contains '#'.
The code would look something like
from selenium.webdriver.common.by import By

for e in driver.find_elements(By.XPATH, "//div[@class='a']//*[contains(text(),'#')]"):
    print(e.get_attribute('outerHTML'))
and it will print all instances of <b>ABC#123</b>, <div>ABC#123</div>, or <p>ABC#123</p>, whichever exists
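Putting it together, a minimal self-contained sketch (the data: URL trick and the Chrome driver are illustrative assumptions here, not part of the original question; note that '#' must be URL-encoded as %23 inside a data: URL):
from selenium import webdriver
from selenium.webdriver.common.by import By

# Sample markup embedded as a data: URL purely for illustration;
# %23 is the URL-encoded '#', which would otherwise start a URL fragment.
page = ("data:text/html,"
        "<div class='a'><div>"
        "<p><b>ABC</b></p><p><b>ABC%23123</b></p><p><b>XYZ</b></p>"
        "</div></div>")

driver = webdriver.Chrome()  # assumes chromedriver is on PATH
try:
    driver.get(page)
    # Any element under div.a whose own text contains '#'
    for e in driver.find_elements(By.XPATH, "//div[@class='a']//*[contains(text(),'#')]"):
        print(e.get_attribute('outerHTML'))  # e.g. <b>ABC#123</b>
finally:
    driver.quit()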

Related

Scrapy parse is returning an empty array, regardless of yield

I am brand new to Scrapy, and I could use a hint here. I realize that there are quite a few similar questions, but none of them seem to fix my problem. I have the following code written for a simple web scraper:
import scrapy
from ScriptScraper.items import ScriptItem

class ScriptScraper(scrapy.Spider):
    name = "script_scraper"
    allowed_domains = ["https://proplay.ws"]
    start_urls = ["https://proplay.ws/dramas/"]

    def parse(self, response):
        for column in response.xpath('//div[@class="content-column one_fourth"]'):
            text = column.xpath('//p/b/text()').extract()
            item = ScriptItem()
            item['url'] = "test"
            item['title'] = text
            yield item
I will want to do some more involved scraping later, but right now, I'm just trying to get the scraper to return anything at all. The HTML for the site I'm trying to scrape looks like this:
<div class="content-column one_fourth">
::before
<p>
<b>
All dramas
<br>
(in alphabetical
<br>
order):
</b>
</p>
...
</div>
and I am running the following command in the Terminal:
scrapy parse --spider=script_scraper -c parse_ITEM -d 2 https://proplay.ws/dramas/
According to my understanding of Scrapy, the code I have written should be yielding the text "All dramas"; however, it is yielding an empty array instead. Can anyone give me a hint as to why this is not producing the expected yield? Again, I apologize for the repetitive question.
Your XPath expression does not match the data you want to extract. If you want the first column's first-row item, your expression should be:
item = {}
item['text'] = response.xpath('//div[@class="content-column one_fourth"][1]/p[1]/b/text()').extract()[0]
extract() returns all matches for the expression as a list. If you only want the first one, use extract()[0] or extract_first().
Go through https://devhints.io/xpath for more on XPath.
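Applied to the spider from the question, the suggestion would look roughly like this (a sketch only; the ScriptItem fields are taken from the question, and extract_first() is used in place of extract()[0]):
import scrapy
from ScriptScraper.items import ScriptItem

class ScriptScraper(scrapy.Spider):
    name = "script_scraper"
    allowed_domains = ["proplay.ws"]  # note: domain names, not full URLs
    start_urls = ["https://proplay.ws/dramas/"]

    def parse(self, response):
        # extract_first() returns the first match (or None), not a list
        text = response.xpath(
            '//div[@class="content-column one_fourth"][1]/p[1]/b/text()'
        ).extract_first()
        item = ScriptItem()
        item['url'] = "test"
        item['title'] = text
        yield item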

How to use siblings() and closest() in Geb test

I am trying to write a script to click an icon which is a part of the table header. Each column in the table has this icon in it (ascending order and descending order sorting icons). I am using Geb to do this. Here is how I am trying to do it:
In my SortingSpec.groovy file:
header.closest("div.customSortDownLabel").click()
I also tried
header.siblings('div.customSortDownLabel').first().click()
In the SortingPage.groovy file:
header {
    grid.$(class: 'div.customHeaderLabel', text: 'Country')
}
In my html:
<div>
  <div class="customHeaderLabel">{{params.displayName}}</div>
  <div *ngIf="params.enableSorting" (click)="onSortRequested('asc', $event)" [ngClass]="ascSort" class="customSortDownLabel">
    <i class="fa fa-long-arrow-alt-down"></i>
  </div>
  <div *ngIf="params.enableSorting" (click)="onSortRequested('desc', $event)" [ngClass]="descSort" class="customSortUpLabel">
  </div>
</div>
Neither of them worked for me; Geb is not able to find the selector. Any suggestions are appreciated.
Error I see is:
geb.error.RequiredPageContentNotPresent: The required page content 'header - SimplePageContent (owner: SortingGrid, args: [], value: null)' is not present
That error looks like header isn't matching. Assuming that grid matches, and you're using some Javascript framework like Angular to substitute 'Country' for params.displayName, I would guess that maybe Geb is failing to find header before 'Country' is substituted. So, I would try making header wait for it:
header(wait: true) { grid.$(class: 'div.customHeaderLabel', text: 'Country') }
By the way, closest() goes in the wrong direction, to an ancestor, but siblings() looks good.
siblings() didn't work for me, but next() did. next() grabs the next sibling elements of the current context elements.
Example:
1. header.next().click() clicks the very next sibling.
2. header.next("div.customSortDownLabel").click() looks for the very next sibling matching the selector 'div.customSortDownLabel' and then clicks it.

Extracting div attributes based on text

I am using Python 3.6 with bs4 to implement this task.
My div tags look like this:
<div class="Portfolio" portfolio_no="345">VBHIKE324</div>
<div class="Portfolio" portfolio_no="567">SCHF54TYS</div>
I need to extract portfolio_no, i.e. 345. It is a dynamic value that changes across the div tags, whereas the text stays the same.
for data in soup.find_all('div', class_='Portfolio', text='VBHIKE324'):
    print(data)
It outputs None, whereas I'm looking for output like 345.
Here you go
for data in soup.find_all('div', {'class': 'Portfolio'}):
    print(data['portfolio_no'])
If you want the portfolio_no for the one with text VBHIKE324, then you can do something like this:
for data in soup.find_all('div', {'class': 'Portfolio'}):
    if data.text == 'VBHIKE324':
        print(data['portfolio_no'])
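As a self-contained sketch (embedding the markup as a string and using html.parser are assumptions made purely for illustration):
from bs4 import BeautifulSoup

html = '''
<div class="Portfolio" portfolio_no="345">VBHIKE324</div>
<div class="Portfolio" portfolio_no="567">SCHF54TYS</div>
'''
soup = BeautifulSoup(html, 'html.parser')

# Tag attributes are accessed dict-style
for data in soup.find_all('div', {'class': 'Portfolio'}):
    if data.text == 'VBHIKE324':
        print(data['portfolio_no'])  # prints 345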

Groovy XmlParser / XmlSlurper: node.localText() position?

I have a follow-up question for this question: Groovy XmlSlurper get value of the node without children.
It explains that in order to get the local inner text of a (HTML) node without recursively get the nested text of potential inner child nodes as well, one has to use #localText() instead of #text().
For instance, a slightly enhanced example from the original question:
<html>
  <body>
    <div>
      Text I would like to get1.
      <a href="#">extra stuff</a>
      Text I would like to get2.
      <a href="#">link to example</a>
      Text I would like to get3.
    </div>
    <span>
      <a href="#">extra stuff</a>
      Text I would like to get2.
      <a href="#">link to example</a>
      Text I would like to get3.
    </span>
  </body>
</html>
with the solution applied:
def tagsoupParser = new org.ccil.cowan.tagsoup.Parser()
def slurper = new XmlSlurper(tagsoupParser)
def htmlParsed = slurper.parseText(stringToParse)
println htmlParsed.body.div[0].localText()
would return:
[Text I would like to get1., Text I would like to get2., Text I would like to get3.]
However, when parsing the <span> part in this example
println htmlParsed.body.span[0].localText()
the output is
[Text I would like to get2., Text I would like to get3.]
The problem I am facing now is that it's apparently not possible to pinpoint the location ("between which child nodes") of the texts. I would have expected the second invocation to yield
[, Text I would like to get2., Text I would like to get3.]
This would have made it clear: Position 0 (before child 0) is empty, position 1 (between child 0 and 1) is "Text I would like to get2.", and position 2 (between child 1 and 2) is "Text I would like to get3." But given the API works as it does, there is apparently no way to determine whether the text returned at index 0 is actually positioned at index 0 or at any other index, and the same is true for all the other indices.
I have tried it with both XmlSlurper and XmlParser, yielding the same results.
If I'm not mistaken, it is consequently also impossible to completely recreate an original HTML document from the information the parser provides, because this "text index" information is lost.
My question is: Is there any way to find out those text positions? An answer requiring me to change the parser would also be acceptable.
UPDATE / SOLUTION:
For further reference, here's Will P's answer, applied to the original code:
def tagsoupParser = new org.ccil.cowan.tagsoup.Parser()
def slurper = new XmlParser(tagsoupParser)
def htmlParsed = slurper.parseText(stringToParse)
println htmlParsed.body.div[0].children().collect {it in String ? it : null}
This yields:
[Text I would like to get1., null, Text I would like to get2., null, Text I would like to get3.]
One has to use XmlParser instead of XmlSlurper with node.children().
I don't know TagSoup, and I hope it is not interfering with the solution, but with a pure XmlParser you can get an array of children() which contains the raw strings:
html = '''<html>
<body>
<div>
Text I would like to get1.
<a href='#'>extra stuff</a>
Text I would like to get2.
<a href='#'>link to example</a>
Text I would like to get3.
</div>
<span>
<a href='#'>extra stuff</a>
Text I would like to get2.
<a href='#'>link to example</a>
Text I would like to get3.
</span>
</body>
</html>'''
def root = new XmlParser().parseText(html)
root.body.div[0].children().with {
    assert get(0).trim() == 'Text I would like to get1.'
    assert get(0).getClass() == String
    assert get(1).name() == 'a'
    assert get(1).getClass() == Node
    assert get(2) == '''
Text I would like to get2.
'''
}

Why does d3.select() return array of array?

I recently started using d3.js to write some scripts to manipulate SVGs. Most of the time I refer to the d3 documentation and find the solution. However, I cannot understand why the d3.select function returns an array of arrays. For example, let's say I have an SVG element; if I do d3.select("svg"), it returns [[svg]], so I have to do d3.select("svg")[0]. The documentation says
One nuance is that selections are grouped: rather than a one-dimensional array, each
selection is an array of arrays of elements. This preserves the
hierarchical structure of subselections
Then it says we can ignore it most of the time.
Why does it return an array of arrays?
What does
This preserves the hierarchical structure of subselections
mean?
Thanks in advance.
You shouldn't need to know or care how the object d3.select returns is structured internally. All you need to know is which methods are accessible in that object, which is what the documentation describes.
Say you have this document:
<div>
  <span>1</span>
  <span>2</span>
</div>
<div>
  <span>3</span>
  <span>4</span>
</div>
If you select all <div> elements with d3.selectAll
var div = d3.selectAll("div");
the div is a d3 selection object of size 2, one for each <div> element in the document.
But if you now generate a subselection from this selection object
var span = div.selectAll("span");
a search is made for matching elements within each element in the div selection, and the structure is preserved -- i.e., the span selection will contain the same number of elements as the div selection it was based on, and each of these will consist of a selection of elements found in that element.
So in this case, span will contain two selections (one for the first <div> and one for the second), each of which will contain two elements (1 and 2 in the first, 3 and 4 in the second).
As for select, it is the same as selectAll except it stops after finding one match; its return is structured exactly the same way, however.
