I am trying to press the Replay button in the Spotify Web Player with Python, but I get this error. How can I press buttons in a web player?
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
replay.click()
Error:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
TypeError: 'WebElement' object is not subscriptable
This error message...
TypeError 'WebElement' object is not subscriptable
...implies that you have attached an index to a WebElement, which is not supported.
Analysis
Only lists can be indexed. However, in this line of code:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""") will always return a single WebElement. Hence you can't access the element through an index such as [0] or [1], because an index can only be applied to a list.
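To make the difference visible, here is a minimal sketch; the generic //button locator is only for illustration:
single = driver.find_element_by_xpath("//button")
print(type(single))   # <class 'selenium.webdriver.remote.webelement.WebElement'>

many = driver.find_elements_by_xpath("//button")
print(type(many))     # <class 'list'> of WebElements, so indexing is allowed
many[0].click()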
Solution
There are two approaches to solve the issue.
In the first approach, you can remove the index, i.e. [0], and in that case replay will be assigned the first matched element identified through the locator strategy, as follows:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")
In the other approach, instead of using find_element_by_xpath() you can create a list using find_elements_by_xpath() and access the very first element of the list using the index [0], as follows:
replay = driver.find_elements_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
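If the button may not be present at all, a slightly more defensive sketch of the second approach avoids an IndexError on an empty list:
buttons = driver.find_elements_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")
if buttons:  # an empty list means no match, so skip the click
    buttons[0].click()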
Reference
You can find a couple of relevant discussions in:
Exception has occurred: TypeError 'WebElement' object is not subscriptable
find_element_by_xpath returns the first found element (not an array), so use either:
find_element_by_xpath(...).click()
or
find_elements_by_xpath(...)[0].click()
For everybody who gets this error: you may have confused driver.find_element() with driver.find_elements() and be trying to take a single item out of a WebElement object rather than out of a list. Check for this in your code.
driver.find_elements() returns a list of appropriate WebElements
There is also another function, driver.find_element(), and you can say that:
driver.find_element() = driver.find_elements()[0]
In this example, Python is trying to get the first item from the result of driver.find_element(), which returns a WebElement object, not a list:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button""")[0]
As @KunduK commented, remove the [0].
You are using an absolute XPath; this is not recommended.
Try using a relative XPath instead...
If there are several matching buttons and you need a specific one, use a positional index inside the XPath. Note that XPath indices start at 1, so the first button is button[1], like so:
replay = driver.find_element_by_xpath("""/html/body/div[2]/div/div[4]/div[3]/footer/div/div[2]/div/div[1]/div[5]/button[1]""")
replay.click()
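For reference, a relative XPath could look something like the sketch below; the aria-label value is an assumption for illustration, not taken from the actual Spotify markup, so adjust it to whatever the real button exposes:
# hypothetical relative XPath; the aria-label value is an assumption
replay = driver.find_element_by_xpath('//footer//button[@aria-label="Enable repeat"]')
replay.click()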
Related
I have this code that, if the element exists, should print the innerHTML value:
def display_hotel(self):
    for hotel in self.hotel_data:
        if hotel.find_element(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]'):
            hotel_original_price = hotel.find_element(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]')
            hotel_original_price = hotel_original_price.get_attribute('innerHTML').strip().replace(' ', '')
            print(f"Original:\t\t\t{hotel_original_price}")
When I proceed and run the program, I get an error of
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"span[class="_a11e76d75 _6b0bd403c"]"}
I was hoping that if the element span[class="_a11e76d75 _6b0bd403c"] does not exist, it would just be skipped altogether. Why does it still try to run the code even inside an if block? Am I missing anything here?
In case the element is missing, the Selenium driver throws an exception.
To make your code work, you should use the find_elements method.
It returns a list of elements matching the passed locator.
So, if there are matches, the list will contain web elements, while if there are no matches it will be an empty list; Python treats a non-empty list as a Boolean True and an empty list as a Boolean False.
So your code could be as following:
def display_hotel(self):
    for hotel in self.hotel_data:
        if hotel.find_elements(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]'):
            hotel_original_price = hotel.find_element(By.CSS_SELECTOR, 'span[class="_a11e76d75 _6b0bd403c"]')
            hotel_original_price = hotel_original_price.get_attribute('innerHTML').strip().replace(' ', '')
            print(f"Original:\t\t\t{hotel_original_price}")
My code accesses a light sensor via a python request:
address = 'https://api.particle.io/v1/devices/my_device_id/analogvalue'
headers = {'Authorization':'Bearer {0}'.format(access_token)}
vals = requests.get(address, headers=headers)
The code returns the following values:
{"cmd":"VarReturn","name":"analogvalue","result":171,"coreInfo":{"last_app":"","last_heard":"2019-06-13T21:55:57.387Z","connected":true,"last_handshake_at":"2019-06-13T20:51:02.691Z","deviceID":"my_device_id","product_id":6}}
Python tells me that this is a 'requests.models.Response' class and not a dictionary like I thought.
When I try to access the 'result' value, I get error messages. Here are the various ways I have tried along with their error messages.
print(vals[2])
TypeError: 'Response' object does not support indexing
print(vals['result'])
TypeError: 'Response' object is not subscriptable
print(vals[2].json())
TypeError: 'Response' object does not support indexing
print(vals['result'].json())
TypeError: 'Response' object is not subscriptable
I got the last two approaches (.json) from an answer here on Stack Overflow.
Can anyone tell me how to access this result value, or am I going to be forced to use regular expressions?
EDIT: With help from Sebastien D I added the following and was able to get the result I was looking for.
import json
new_vals = json.loads(vals.content)
print(new_vals['result'])
Just do:
import json
### your code ###
json.loads(vals.content)
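Alternatively, requests can parse the JSON body for you with Response.json(), which here is equivalent to json.loads(vals.content):
new_vals = vals.json()     # parses the JSON body into a dict
print(new_vals['result'])  # 171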
I made this simple WhatsApp bot using Python and Selenium.
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://web.whatsapp.com/')
target = "Someone"
msg = "Something"
input('Press Enter after scanning QR Code...')
TargetXML = driver.find_element_by_xpath('//span[@title="{}"]'.format(target))
TargetXML.click()
MsgBox = driver.find_elements_by_class_name('_1Plpp')
MsgBox[0].send_keys(msg)
SendButton = driver.find_elements_by_class_name('_35EW6')
SendButton[0].click()
At first run, I had MsgBox.send_keys(msg) and SendButton.click() instead of what you see in the script, which gave the errors AttributeError: 'list' object has no attribute 'send_keys' and AttributeError: 'list' object has no attribute 'click'.
I changed them to index 0, which solved the error, and the script worked perfectly fine, but I couldn't really understand why it worked with the element at index 0. So I tried to print the element and got the output <selenium.webdriver.remote.webelement.WebElement (session="bd86fe53729956ba1fc3b16cf841a1a8", element="0.5125252686493715-2")>. I am still not fully convinced and have that question in mind. Any help would be appreciated! Thank you!
The method find_elements_by_class_name returns a list of elements that satisfy the class name passed as a parameter. This list consists of WebElement objects; when you select the 0th element, you get a single WebElement, on which you can apply the methods send_keys() and click().
For more information on Selenium and WebElement object refer to this documentation.
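A quick sketch to make the types visible; it reuses the '_1Plpp' class name from the question, which may well have changed on WhatsApp Web since then:
boxes = driver.find_elements_by_class_name('_1Plpp')
print(type(boxes))     # <class 'list'>
print(type(boxes[0]))  # <class 'selenium.webdriver.remote.webelement.WebElement'>

box = driver.find_element_by_class_name('_1Plpp')
print(type(box))       # a single WebElement, roughly the same as boxes[0]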
I have been trying to delete the first instance of an element using BeautifulSoup and I am sure I am missing something. I did not use find_all since I need to target the first instance, which is always a header (div) with the class HubHeader. The class is used in other places in combination with a div tag. Unfortunately I can't change the setup of the base HTML.
I also tried select_one outside of a loop and it still did not work.
def delete_header(filename):
    html_docs = open(filename, 'r')
    soup = BeautifulSoup(html_docs, "html.parser")

    print(soup.select_one(".HubHeader"))  # testing

    for div in soup.select_one(".HubHeader"):
        div.decompose()

    print(soup.select_one(".HubHeader"))  # testing
    html_docs.close()
delete_header("my_file")
The most recent error is this:
AttributeError: 'NavigableString' object has no attribute 'decompose'
I am using select_one() and decompose().
Short answer: replace
for div in soup.select_one(".HubHeader"):
div.decompose()
With one line:
soup.select_one(".HubHeader").decompose()
Longer answer: your code iterates over a bs4.element.Tag object. The function .select_one() returns a single object, while .select() returns a list. If you were using .select(), your code would work, but it would remove all occurrences of elements with the selected class.
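For completeness, the .select() version mentioned above would look like this and would decompose every element with that class, not just the first:
for div in soup.select(".HubHeader"):
    div.decompose()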
I am searching for the elements in my list (one by one) by entering them into the search bar of a website, and I want to get the Apple product names that appear in the search results and print them. However, I am getting the following exception:
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
I know it's because the element changes very quickly, so I need to add a wait like
wait(driver, 10).until(EC.visibility_of_element_located((By.ID, "submitbutton")))
or explicitly.
Q1. But I don't understand where I should add it. Here is my code. Please help!
Q2. I want to go to all the next pages using the following, but that's not working:
driver.find_element_by_xpath('//div[@class="no-hover"]/a').click()
Earlier the exception was raised on the submit button and now at the if statement.
That's not what an implicit wait is for. Since the page changes regularly, you can't be sure whether the current object inside the variable is still valid.
My suggestion is to run the above code in a loop using try/except. Something like the following:
from selenium.common.exceptions import StaleElementReferenceException

for element in mylist:
    while True:
        try:
            do_something_useful_while_the_page_can_change(element)
        except StaleElementReferenceException:
            # the page changed underneath us: retry the same element
            continue
        else:
            # success: go to the next element
            break
Where:
def do_something_useful_while_the_page_can_change(element):
    searchElement = driver.find_element_by_id("searchbar")
    searchElement.send_keys(element)
    driver.find_element_by_id("searchbutton").click()

    items_count = 0
    items = driver.find_elements_by_class_name('searchresult')
    for i, item in enumerate(items):
        if 'apple' in item.text:
            print(item.text)
    items_count += len(items)
I think what you had was doing too much and can be simplified. You basically need to loop through a list of search terms, mylist. Inside that loop, you send the search term to the search box and click search. Still inside that loop, you grab all the elements on the page that are search results, class='search-result-product-url', whose text also contains 'apple'. The XPath locator I provided does both, so everything in the returned collection is something you want to print... so print each. End of loop... back to the next search term.
for element in mylist:
    driver.find_element_by_id("search-input").send_keys(element)
    driver.find_element_by_id("button-search").click()
    # may need a wait here?
    for item in driver.find_elements_by_xpath("//a[@class='search-result-product-url'][contains(., 'apple')]"):
        print(item.text)
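If the results do need a moment to load (the commented wait above), an explicit wait could be dropped in. This is only a sketch, assuming the same element IDs and class name as in the code above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

for element in mylist:
    search_box = driver.find_element_by_id("search-input")
    search_box.clear()
    search_box.send_keys(element)
    driver.find_element_by_id("button-search").click()

    # wait until at least one result link is present before reading the results
    WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.CLASS_NAME, "search-result-product-url"))
    )
    for item in driver.find_elements_by_xpath("//a[@class='search-result-product-url'][contains(., 'apple')]"):
        print(item.text)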