I made this simple WhatsApp bot using Python and Selenium.
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://web.whatsapp.com/')
target = "Someone"
msg = "Something"
input('Press Enter after scanning QR Code...')
TargetXML = driver.find_element_by_xpath('//span[@title="{}"]'.format(target))
TargetXML.click()
MsgBox = driver.find_elements_by_class_name('_1Plpp')
MsgBox[0].send_keys(msg)
SendButton = driver.find_elements_by_class_name('_35EW6')
SendButton[0].click()
At first, I had MsgBox.send_keys(msg) and SendButton.click() instead of what you see in the script, which gave the errors AttributeError: 'list' object has no attribute 'send_keys' and AttributeError: 'list' object has no attribute 'click'.
I changed them to index 0, which solved the error, and the script worked perfectly fine, but I couldn't really understand why it only worked with the element at index 0. I tried to print the element and got this output: <selenium.webdriver.remote.webelement.WebElement (session="bd86fe53729956ba1fc3b16cf841a1a8", element="0.5125252686493715-2")>. I am still not convinced and have that question in mind. Any help would be appreciated! Thank you!
The method find_elements_by_class_name returns a list of all elements that match the class name passed as the parameter. The list consists of WebElement objects; when you select the 0th element, you get a single WebElement, on which you can call the methods send_keys() and click().
For more information on Selenium and WebElement object refer to this documentation.
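Here is a plain-Python sketch of why the index is required; no browser is needed, and FakeElement is a hypothetical stand-in for Selenium's WebElement:

```python
# FakeElement is a stand-in for selenium's WebElement, for illustration only.
class FakeElement:
    def __init__(self):
        self.sent = []

    def send_keys(self, text):
        self.sent.append(text)


def find_elements():
    # Like driver.find_elements_by_class_name: always returns a list,
    # even when only a single element matches.
    return [FakeElement()]


matches = find_elements()
# matches.send_keys("hi")   # AttributeError: 'list' object has no attribute 'send_keys'
matches[0].send_keys("hi")  # index into the list first, then call the method
print(matches[0].sent)      # ['hi']
```

The list itself has no send_keys method, which is exactly the AttributeError from the question; only the WebElement inside the list does.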
Related
I am trying to press the Replay button in the Spotify Web Player with Python, but I get this error. How can I press buttons in a web player?
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
replay.click()
Error:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
TypeError: 'WebElement' object is not subscriptable
This error message...
TypeError 'WebElement' object is not subscriptable
...implies that you have applied an index to a WebElement, which is not supported.
Analysis
Only lists (and other sequences) can be indexed. However, in this line of code:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""") always returns a single WebElement. Hence you can't access it through an index such as [0] or [1], as an index can only be applied to a list.
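The error is easy to reproduce without a browser; a minimal sketch with a hypothetical stand-in class:

```python
class FakeWebElement:
    """Stand-in for selenium's WebElement: like WebElement, it defines
    no __getitem__, so indexing an instance raises TypeError."""


try:
    FakeWebElement()[0]
except TypeError as e:
    print(e)  # 'FakeWebElement' object is not subscriptable
```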
Solution
There are two approaches to solve the issue.
In the first approach, you can remove the index [0]; in that case replay will be assigned the first matching element identified by the locator strategy, as follows:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")
In the other approach, instead of using find_element_by_xpath() you can build a list with find_elements_by_xpath() and access the very first element of the list with the index [0], as follows:
replay = driver.find_elements_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
Reference
You can find a couple of relevant discussions in:
Exception has occurred: TypeError 'WebElement' object is not subscriptable
find_element_by_xpath returns the first found element (not a list):
find_element_by_xpath(...).click()
or
find_elements_by_xpath(...)[0].click()
For everybody who gets this error: you might have confused driver.find_element() with driver.find_elements(), and you are trying to index a single WebElement object as if it were a list. Check it in your code.
driver.find_elements() returns a list of appropriate WebElements
There is also another function, driver.find_element(), and you can say that:
driver.find_element() = driver.find_elements()[0]
In this example, Python is trying to get the first item from the result of driver.find_element(), which returns a WebElement object, not a list:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
As @KunduK commented, remove the [0].
You are using an absolute XPath, which is not recommended; try using a relative XPath instead.
If there are a few buttons and you need the first, put the index inside the XPath itself. Note that XPath positions start at 1, not 0, so the first button is button[1]:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button[1]""")
replay.click()
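The 1-based indexing of XPath positional predicates is easy to verify with the XPath subset in the standard library's ElementTree, no browser required:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<div><button>first</button><button>second</button></div>")

# XPath positional predicates count from 1, not 0.
print(root.find("./button[1]").text)  # first
print(root.find("./button[2]").text)  # second
```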
I'm trying to use Selenium to web-scrape the Webull screener. However, when I print the element found by the XPath, it just comes up with <selenium.webdriver.remote.webelement.WebElement (session="d9d2a95d8fde18ce1e914b8ca867a370", element="869bb784-0838-498e-8420-f56e67f42e68")>, and if I try executing it with execute_script it says it must be a str.
Here's a copy of my code
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(options=options)
driver.get('https://app.webull.com/screener')
driver.implicitly_wait(15)
peice = driver.find_element_by_xpath('/html/body/div/div/div[3]/div/div[1]/div/div[2]/div/div[2]/div/div/div/div[3]/table/tbody/tr[2]/td[2]')
print(driver.execute_script(peice))
All I'm trying to do is print the actual symbol in the web screener.
Thank you
peice contains a WebElement object, so if you print it you will get the object's representation printed.
Use peice.text or peice.get_attribute("textContent"); either will return the text from the element so it prints.
If you simply pass a web element to execute_script, you are not executing anything; you should execute a script and return something, e.g.:
print(driver.execute_script("return arguments[0].textContent", peice))
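The printing behaviour itself has nothing to do with Selenium: printing any object without a custom __str__ shows its repr, while its attributes hold the useful data. A sketch with a hypothetical FakeElement standing in for WebElement:

```python
class FakeElement:
    """Hypothetical stand-in for selenium's WebElement."""

    def __init__(self, text):
        self.text = text  # WebElement exposes the element's visible text as .text

    def __repr__(self):
        return '<FakeElement (session="...", element="...")>'


peice = FakeElement("AAPL")
print(peice)       # <FakeElement (session="...", element="...")>
print(peice.text)  # AAPL
```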
I am trying web scraping using Python and Selenium. I am getting the following error: Message: element not interactable (Session info: chrome=78.0.3904.108). I am trying to access the option element by id, value, or text, and it gives me this error. I am using Python 3. Can someone help explain where I am going wrong? I selected the select tag using XPath and also tried css_selector; the select tag is found and I can print it. Here is my code for a better understanding:
Code-1:-
from selenium import webdriver

path = r'D:\chromedriver_win32\chromedriver.exe'
browser = webdriver.Chrome(executable_path = path)
website = browser.get("https://publicplansdata.org/resources/download-avs-cafrs/")
el = browser.find_element_by_xpath('//*[@id="ppd-download-state"]/select')
for option in el.find_elements_by_tag_name('option'):
    if option.text is not None:
        option.click()
        break
Code-2:-
from selenium.webdriver.support.ui import Select

select_element = Select(browser.find_element_by_xpath('//*[@id="ppd-download-state"]/select'))
# this will print out the values available for selection on select_element
print([o.get_attribute('value') for o in select_element.options])
select_element.select_by_value('AK')
Both codes give the same error. How can I select values from the drop-down on this website?
Same as question:
Python selenium select element from drop down. Element Not Visible Exception
But the error is different, and I tried the methods in the comments.
State, Plan, and Year:
browser.find_element_by_xpath("//span[text()='State']").click()
browser.find_element_by_xpath("//a[text()='West Virginia']").click()
time.sleep(2) # wait for Plan list to be populated
browser.find_element_by_xpath("//span[text()='Plan']").click()
browser.find_element_by_xpath("//a[text()='West Virginia PERS']").click()
time.sleep(2) # wait for Year list to be populated
browser.find_element_by_xpath("//span[text()='Year']").click()
browser.find_element_by_xpath("//a[text()='2007']").click()
Don't forget to import time
Hi, I am scraping a website using Selenium with Python. The code is below:
import time

from selenium import webdriver

url = 'https://www.hkexnews.hk/'
options = webdriver.ChromeOptions()
browser = webdriver.Chrome(chrome_options=options, executable_path=r'chromedriver.exe')
browser.get(url)
tier1 = browser.find_element_by_id('tier1-select')
tier1.click()
tier12 = browser.find_element_by_xpath('//*[#data-value="rbAfter2006"]')
tier12.click()
time.sleep(1)
tier2 = browser.find_element_by_id('rbAfter2006')
tier2.click()
tier22 = browser.find_element_by_xpath("//*[#id='rbAfter2006']//*[#class='droplist-item droplist-item-level-1']//*[text()='Circulars']")
tier22.click()
tier23 = browser.find_element_by_xpath("//*[#id='rbAfter2006']//*[#class='droplist-item droplist-item-level-2']//*[text()='Securities/Share Capital']")
tier23.click()
tier24 = browser.find_element_by_xpath("//*[#id='rbAfter2006']//*[#class='droplist-group droplist-submenu level3']//*[text()='Issue of Shares']")
tier24.click()
It stops at tier23, showing ElementNotVisibleException. I have tried different classes, yet it seems not to work. Thank you for your help.
There are two elements that can be selected by your XPath, and the first one is hidden. Try the below to select the required element:
tier23 = browser.find_element_by_xpath("//*[#id='rbAfter2006']//li[#aria-expanded='true']//*[#class='droplist-item droplist-item-level-2']//*[text()='Securities/Share Capital']")
or shorter
tier23 = browser.find_element_by_xpath("//li[#aria-expanded='true']//a[.='Securities/Share Capital']")
tier23.location_once_scrolled_into_view
tier23.click()
P.S. Note that the option will still not be visible, because you need to scroll the list down first; I used tier23.location_once_scrolled_into_view for this purpose.
Also, it's better to use Selenium's built-in Waits instead of time.sleep.
I have been trying to delete the first instance of an element using BeautifulSoup, and I am sure I am missing something. I did not use find_all since I need to target only the first instance, which is always a header (div) with the class HubHeader. The class is used in other places in combination with a div tag. Unfortunately I can't change the setup of the base HTML.
I also tried select_one outside of a loop, and it still did not work.
from bs4 import BeautifulSoup

def delete_header(filename):
    html_docs = open(filename, 'r')
    soup = BeautifulSoup(html_docs, "html.parser")

    print(soup.select_one(".HubHeader"))  # testing
    for div in soup.select_one(".HubHeader"):
        div.decompose()
    print(soup.select_one(".HubHeader"))  # testing

    html_docs.close()

delete_header("my_file")
The most recent error is this:
AttributeError: 'NavigableString' object has no attribute 'decompose'
I am using select_one() and decompose().
Short answer: replace
for div in soup.select_one(".HubHeader"):
div.decompose()
With one line:
soup.select_one(".HubHeader").decompose()
Longer answer: your code iterates over a bs4.element.Tag object, and iterating over a Tag yields its children, which include NavigableString objects that have no .decompose() method, hence the AttributeError. The function .select_one() returns a single Tag, while .select() returns a list; if you were using .select(), your loop would work, but it would remove all occurrences of elements with the selected class.
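A runnable sketch of the difference, using an inline HTML string instead of the asker's file (assuming bs4 is installed):

```python
from bs4 import BeautifulSoup

html = """
<div class="HubHeader">first header</div>
<div class="HubHeader">second header</div>
<p>body text</p>
"""
soup = BeautifulSoup(html, "html.parser")

# .select_one() returns a single Tag; decompose it directly.
soup.select_one(".HubHeader").decompose()

# Only one .HubHeader remains; .select() returns a list of all matches.
remaining = soup.select(".HubHeader")
print(len(remaining))           # 1
print(remaining[0].get_text())  # second header
```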