Scrape Product Image with BeautifulSoup (Error) - python-3.x

I need your help. I'm working on a Telegram bot that sends me all the sales from Amazon.
It mostly works, but this function fails reliably with the same error, which blocks the script:
imgs_str = img_div.img.get('data-a-dynamic-image') # a string in Json format
AttributeError: 'NoneType' object has no attribute 'img'
def take_image(soup):
    img_div = soup.find(id="imgTagWrapperId")
    imgs_str = img_div.img.get('data-a-dynamic-image')  # a string in JSON format
    # convert to a dictionary
    imgs_dict = json.loads(imgs_str)
    # each key in the dictionary is a link to an image, and the value shows the size (print the whole dictionary to inspect)
    num_element = 0
    first_link = list(imgs_dict.keys())[num_element]
    return first_link
I still don't understand how to solve this issue.
Thanks to all!

From the looks of the error, soup.find didn't find the element and returned None.
Have you tried using images = soup.findAll("img", {"id": "imgTagWrapperId"})?
This will return a list.
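Either way, it is worth guarding against find returning None so that one unexpected page doesn't crash the whole bot; a minimal defensive sketch of the same function from the question:
import json

def take_image(soup):
    img_div = soup.find(id="imgTagWrapperId")
    # guard against the element being missing (blocked request, changed layout, etc.)
    if img_div is None or img_div.img is None:
        return None
    imgs_str = img_div.img.get('data-a-dynamic-image')  # a string in JSON format
    if imgs_str is None:
        return None
    imgs_dict = json.loads(imgs_str)  # keys are image URLs, values are sizes
    return list(imgs_dict.keys())[0]  # first image URL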

Images are not embedded in the HTML page; they are linked from it, so you need to wait until they have loaded. Here I will give you two options:
1) (not recommended, since there is a margin of error) simply wait until the image has loaded, for example with time.sleep().
2) (recommended) use Selenium WebDriver. You still have to wait with Selenium, but the good thing is that it has a purpose-built mechanism for this job.
I will show how to do it with Selenium:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException

browser = webdriver.Chrome()
browser.get("url")
delay = 3  # seconds
try:
    myElem = WebDriverWait(browser, delay).until(
        EC.presence_of_element_located((By.ID, 'imgTagWrapperId')))  # locate whatever element you are waiting for
    print("Page is ready!")
except TimeoutException:
    print("Loading took too much time!")
More documentation
Code example for option 1
Q&A for option 2

Related

Scrape Google Maps result website URLs with Selenium

I am trying to search via Google Maps with Python, and I want to get the URLs from the results.
These are the steps I take:
open Google
accept cookies
search for something (in this example "pediatrician in Aargau")
switch to Google Maps
This is where I get the error: I wait for the results to load, but I always get a timeout, even though I can see in the window that opens that the results are fully loaded.
Is there anything wrong with my code? I would like to extract the website URL of each result.
Here is the code that I have so far:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Start the browser
driver = webdriver.Chrome()
# Open google.de and accept cookies
driver.get("https://www.google.de/")
wait = WebDriverWait(driver, 25)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#L2AGLb > div"))).click()
# Search for "Kinderarzt Kanton Aargau"
search_box = driver.find_element(By.NAME, "q")
search_box.send_keys("Kinderarzt Kanton Aargau")
search_box.submit()
# Switch to Maps tab
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[contains(text(), 'Maps')]"))).click()
# Wait for links and extract
results = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div[aria-label^='Results for'] > div > div > a")))
for result in results:
    link = result.get_attribute("href")
    print(link)
# Close the browser
driver.quit()
PS: I have tried increasing the timeout for the WebDriverWait, but that doesn't help. I think it cannot find the object, and there must be another way to identify the objects.
First, you can skip several steps by just building the URL for google maps with the desired search string. Second, your "Wait for results to load" locator was not on my page. My guess is that the class you are using is changing regularly. I used a different CSS selector and found it just fine.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Start the browser
driver = webdriver.Chrome()
# Declare string to search for and encode it
search_string = "Kinderarzt Kanton Aargau"
partial_url = search_string.replace(" ", "+")
# Open google.de and accept cookies
driver.get(f"https://www.google.de/maps/search/{partial_url}/")
wait = WebDriverWait(driver, 25)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#L2AGLb > div"))).click()
# Wait for links and extract
results = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div[aria-label^='Results for'] > div > div > a")))
for result in results:
    link = result.get_attribute("href")
    print(link)
# Close the browser
driver.quit()
The result is
https://www.google.de/maps/place/Dr.+med.+Helena+Gerritsma+Schirlo/data=!4m7!3m6!1s0x47903be8d0d4a09d:0xc97d85a6fa076207!8m2!3d47.3906733!4d8.0443884!16s%2Fg%2F1tghc1gd!19sChIJnaDU0Og7kEcRB2IH-qaFfck?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Kinderarztpraxis+Dr.+med.+Armin+B%C3%BChler+%26+Thomas+Justen/data=!4m7!3m6!1s0x479069d7b30c674b:0xd04693e64cbc42b0!8m2!3d47.5804824!4d8.2163541!16s%2Fg%2F1ptw0srs4!19sChIJS2cMs9dpkEcRsEK8TOaTRtA?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Kinderarztpraxis+Lenzburg/data=!4m7!3m6!1s0x4790160e650976b1:0x5352d33510a53d99!8m2!3d47.3855278!4d8.1753395!16s%2Fg%2F11hz17jwcy!19sChIJsXYJZQ4WkEcRmT2lEDXTUlM?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Kinderarzthaus+-+Kinderarztpraxis/data=!4m7!3m6!1s0x47903bf002633251:0xf029086640b016ee!8m2!3d47.391928!4d8.051698!16s%2Fg%2F11cfdn2j8!19sChIJUTJjAvA7kEcR7hawQGYIKfA?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Dr.+med.+Nils+Hammerich/data=!4m7!3m6!1s0x4790160e650976b1:0x7116ed2cc14996ea!8m2!3d47.3856086!4d8.1753854!16s%2Fg%2F1tl0w7qv!19sChIJsXYJZQ4WkEcR6pZJwSztFnE?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Kinderarzt+Berikon/data=!4m7!3m6!1s0x47900e152314a493:0x72ca7fe58b7b3a5f!8m2!3d47.3612625!4d8.3674472!16s%2Fg%2F11c311g_px!19sChIJk6QUIxUOkEcRXzp7i-V_ynI?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Dr.+med.+Hana+Balent+Ilitsch/data=!4m7!3m6!1s0x4790697f95fe3a73:0xaff715a22ab56e78!8m2!3d47.5883105!4d8.2882387!16s%2Fg%2F11hyjwg_32!19sChIJczr-lX9pkEcReG61KqIV968?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Dr.+med.+Belzer+Heierling+Tanja/data=!4m7!3m6!1s0x47906d2a4e9698fd:0x6865ac23234b8dc9!8m2!3d47.4637622!4d8.3284463!16s%2Fg%2F1tksm8d9!19sChIJ_ZiWTiptkEcRyY1LIyOsZWg?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Praxis+f%C3%BCr+Kinder-+und+Jugendmedizin+Dr.+Dirk+Bock/data=!4m7!3m6!1s0x47906b5c9071d861:0x516c763f7642c9ff!8m2!3d47.4731839!4d8.1959905!16s%2Fg%2F11mpc9wm91!19sChIJYdhxkFxrkEcR_8lCdj92bFE?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Alleviamed+Kinderarztpraxis+Meisterschwanden/data=!4m7!3m6!1s0x4790193bdf03b5f1:0xfef98e265772814a!8m2!3d47.2956342!4d8.2279202!16s%2Fg%2F11gr2z_z2f!19sChIJ8bUD3zsZkEcRSoFyVyaO-f4?authuser=0&hl=en&rclk=1
https://www.google.de/maps/place/Kinderarztpraxis+Suhrepark+AG/data=!4m7!3m6!1s0x47903c69ae471281:0xcb34880030319dd7!8m2!3d47.3727496!4d8.0809937!16s%2Fg%2F1v3kl_4v!19sChIJgRJHrmk8kEcR150xMACINMs?authuser=0&hl=en&rclk=1
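One small hardening note: replacing spaces with + works for this query, but urllib.parse.quote_plus also handles umlauts and other special characters in search strings; a sketch:
from urllib.parse import quote_plus

search_string = "Kinderarzt Kanton Aargau"
driver.get(f"https://www.google.de/maps/search/{quote_plus(search_string)}/")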

Selenium (Python) not finding dynamically loaded JavaScript table after automated login occurs

I'm using Selenium with Python 3 on a ServiceNow website.
The process is as follows: Selenium loads the ServiceNow URL, then I use send_keys to automate typing in the username and password, and then the page loads with a table of incidents I need to extract. Unfortunately I have to log in every single time because of the group policy I'm under.
This works up until I have to find the dynamically rendered JavaScript table with the data, and I can't for the life of me find it. I even tried putting in a 15-second sleep to allow it to load.
I also double-checked the XPaths and the ID / class names, and they match up. When I print query.page_source I don't see anything rendered by JS.
I've used Beautiful Soup too, but that also doesn't work.
Any ideas?
from time import sleep
from collections import deque
from selenium import webdriver
from selenium.webdriver.support.ui import Select # for <SELECT> HTML form
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
query = webdriver.Firefox()
get_query = query.get("SERVICENOW URL")
query.implicitly_wait(10)
login_username = query.find_element_by_id('username')
login_password = query.find_element_by_id('password')
login_button = query.find_element_by_id('signOnButton')
username = "myUsername"
password = "myPassword"
login_username.send_keys(username)
login_password.send_keys(password)
login_button.click()
sleep(10)
incidentTableData = []
print(query.page_source)
# *** THESE ALL FAIL AND RETURN NONE ***
print(query.find_elements())
tableById = query.find_element_by_id('service-now-table-id')
tableByXPath = query.find_element_by_xpath('service-now-xpath')
tableByClass = query.find_element_by_id('service-now-table-class')
Since it's a dynamically rendered JavaScript table, I would suggest implementing explicit waits in your code.
So instead of this:
tableById = query.find_element_by_id('service-now-table-id')
tableByXPath = query.find_element_by_xpath('service-now-xpath')
tableByClass = query.find_element_by_id('service-now-table-class')
rewrite these lines like this:
wait = WebDriverWait(query, 10)
service_now_with_id = wait.until(EC.element_to_be_clickable((By.ID, "service-now-table-id")))
service_now_with_xpath = wait.until(EC.element_to_be_clickable((By.XPATH, "service-now-xpath")))
service_now_with_class = wait.until(EC.element_to_be_clickable((By.ID, "service-now-table-class")))
You are going to need the imports below:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
PS: service_now_with_id, service_now_with_xpath, and service_now_with_class are web elements returned by the explicit waits. You may then want to interact with them as your requirements dictate, e.g. clicking on them or sending keys.
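For example, once the wait returns the table, you can read its rows directly. A rough sketch with the placeholder locator from the question (note: some ServiceNow views render the incident list inside an iframe, in which case you would need driver.switch_to.frame(...) first):
wait = WebDriverWait(query, 10)
table = wait.until(EC.presence_of_element_located((By.ID, "service-now-table-id")))  # placeholder id from the question
for row in table.find_elements_by_tag_name("tr"):
    incidentTableData.append(row.text)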

Clicking links on the website to get contents in the bubbles with selenium

I'm trying to get the course information on http://bulletin.iit.edu/graduate/colleges/science/applied-mathematics/master-data-science/#programrequirementstext.
In my code, I tried to first click on each course, then get the description in the bubble, and then close the bubble, as it may overlay other course links.
My problem is that I couldn't get the description in the bubble, and some course links were still skipped even though I tried to avoid that by closing the bubble.
Any idea about how to do this? Thanks in advance!
info = []
driver = webdriver.Chrome()
driver.get('http://bulletin.iit.edu/graduate/colleges/science/applied-mathematics/master-data-science/#programrequirementstext')
for i in range(1, 3):
    for j in range(2, 46):
        try:
            driver.find_element_by_xpath('//*[@id="programrequirementstextcontainer"]/table[' + str(i) + ']/tbody/tr[' + str(j) + ']/td[1]/a').click()
            info.append(driver.find_elements_by_xpath('/html/body/div[8]/div[3]/div/div')[0].text)
            driver.find_element_by_xpath('//*[@id="lfjsbubbleclose"]').click()
            time.sleep(3)
        except: pass
Not sure why you have put a static range in the for loop, especially as many combinations of the i and j indexes in your XPath don't match any element on the page.
I would suggest instead finding all the elements on your webpage with a single locator and looping through them to get the descriptions from the bubbles.
Use the code below:
course_list = driver.find_elements_by_css_selector("table.sc_courselist a.bubblelink.code")
wait = WebDriverWait(driver, 20)
for course in course_list:
    try:
        print("grabbing info of course : ", course.text)
        course.click()
        wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.courseblockdesc")))
        info.append(driver.find_element_by_css_selector('div.courseblockdesc>p').text)
        wait.until(EC.visibility_of_element_located((By.ID, "lfjsbubbleclose")))
        driver.find_element_by_id('lfjsbubbleclose').click()
    except:
        print("error while grabbing info")
print(info)
As it takes some time to load the content in the bubble, you should introduce an explicit wait in your script until the bubble content is completely visible, and only then grab it.
Import the packages below to use the waits in the code above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
Please note, this code grabs all the course descriptions from the bubbles. Let me know if you are looking for something specific rather than all of them.
To load the bubble, the website makes an AJAX call, which you can reproduce directly:
import requests
from bs4 import BeautifulSoup

def course(course_code):
    params = {"page": "getcourse.rjs", "code": course_code}
    res = requests.get("http://bulletin.iit.edu/ribbit/index.cgi", params=params)
    soup = BeautifulSoup(res.text, "lxml")
    result = {}
    result["description"] = soup.find("div", class_="courseblockdesc").text.strip()
    result["title"] = soup.find("div", class_="coursetitle").text.strip()
    return result
Output for course("CS 522")
{'description': 'Continued exploration of data mining algorithms. More sophisticated algorithms such as support vector machines will be studied in detail. Students will continuously study new contributions to the field. A large project will be required that encourages students to push the limits of existing data mining techniques.',
'title': 'Advanced Data Mining'}
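To cover every course on the program page, you could first collect the codes with requests and BeautifulSoup and then feed each one to course(); a rough sketch, reusing the selector from the Selenium answer above:
res = requests.get("http://bulletin.iit.edu/graduate/colleges/science/applied-mathematics/master-data-science/")
soup = BeautifulSoup(res.text, "lxml")
# course codes on the page may contain a non-breaking space, e.g. "CS\xa0522"
codes = [a.text.replace("\xa0", " ") for a in soup.select("table.sc_courselist a.bubblelink.code")]
info = [course(code) for code in codes]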

How to get all comments on YouTube with Selenium?

The webpage shows that there are 702 comments.
target YouTube sample
I wrote a function get_total_youtube_comments(url), with much of the code copied from a project on GitHub.
project on GitHub
def get_total_youtube_comments(url):
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.common.exceptions import TimeoutException
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.by import By
    import time

    options = webdriver.ChromeOptions()
    options.add_argument('--no-sandbox')
    options.add_argument('--disable-dev-shm-usage')
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options, executable_path='/usr/bin/chromedriver')
    wait = WebDriverWait(driver, 60)
    driver.get(url)

    SCROLL_PAUSE_TIME = 2
    CYCLES = 7
    html = driver.find_element_by_tag_name('html')
    html.send_keys(Keys.PAGE_DOWN)
    html.send_keys(Keys.PAGE_DOWN)
    time.sleep(SCROLL_PAUSE_TIME * 3)
    for i in range(CYCLES):
        html.send_keys(Keys.END)
        time.sleep(SCROLL_PAUSE_TIME)
    comment_elems = driver.find_elements_by_xpath('//*[@id="content-text"]')
    all_comments = [elem.text for elem in comment_elems]
    return all_comments
I try to parse all the comments on a sample webpage, https://www.youtube.com/watch?v=N0lxfilGfak.
url='https://www.youtube.com/watch?v=N0lxfilGfak'
list = get_total_youtube_comments(url)
It gets some comments, but only a small part of all of them.
len(list)
60
60 is much less than 702. How can I get all the comments on YouTube with Selenium?
@supputuri, I can extract all comments with your code.
comments_list = driver.find_elements_by_xpath("//*[@id='content-text']")
len(comments_list)
709
print(driver.find_element_by_xpath("//h2[@id='count']").text)
717 Comments
comments_list[-1].text
'mistake at 23:11 \nin NOT it should return false if x is true.'
comments_list[0].text
'Got a question on the topic? Please share it in the comment section below and our experts will answer it for you. For Edureka Python Course curriculum, Visit our Website: Use code "YOUTUBE20" to get Flat 20% off on this training.'
Why is the comment count 709 instead of the 717 shown on the page?
You are getting a limited number of comments because YouTube loads them as you keep scrolling down. There are around 394 comments left unloaded on that video: you first have to make sure all the comments are loaded, and then also expand all the View Replies buttons, so that you reach the maximum comment count.
Note: I was able to get 700 comments using the lines of code below.
# get the last comment
lastEle = driver.find_element_by_xpath("(//*[@id='content-text'])[last()]")
# scroll to the last comment currently loaded
lastEle.location_once_scrolled_into_view
# wait until the comments loading is done
WebDriverWait(driver, 30).until(EC.invisibility_of_element((By.CSS_SELECTOR, "div.active.style-scope.paper-spinner")))
# load all comments
while lastEle != driver.find_element_by_xpath("(//*[@id='content-text'])[last()]"):
    lastEle = driver.find_element_by_xpath("(//*[@id='content-text'])[last()]")
    driver.find_element_by_xpath("(//*[@id='content-text'])[last()]").location_once_scrolled_into_view
    time.sleep(2)
    WebDriverWait(driver, 30).until(EC.invisibility_of_element((By.CSS_SELECTOR, "div.active.style-scope.paper-spinner")))
# open all replies
for reply in driver.find_elements_by_xpath("//*[@id='replies']//paper-button[@class='style-scope ytd-button-renderer'][contains(.,'View')]"):
    reply.location_once_scrolled_into_view
    driver.execute_script("arguments[0].click()", reply)
    time.sleep(5)
    WebDriverWait(driver, 30).until(
        EC.invisibility_of_element((By.CSS_SELECTOR, "div.active.style-scope.paper-spinner")))
# print the total number of comments
print(len(driver.find_elements_by_xpath("//*[@id='content-text']")))
There are a couple of things:
The WebElements within https://www.youtube.com/ are dynamic, so the comments are dynamically rendered too.
Within the page https://www.youtube.com/watch?v=N0lxfilGfak, the comments don't render until the user scrolls them into the viewport.
The comments are within:
<!--css-build:shady-->
which means Polymer CSS Builder is used to apply Polymer's CSS mixin shim and ShadyDOM scoping, so some runtime work is still done to convert CSS selectors under the default settings.
Considering the factors mentioned above, here's a solution to retrieve all the comments:
Code Block:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException, NoSuchElementException, ElementClickInterceptedException, WebDriverException
import time

options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get('https://www.youtube.com/watch?v=N0lxfilGfak')
driver.execute_script("return scrollBy(0, 400);")
subscribe = WebDriverWait(driver, 60).until(EC.visibility_of_element_located((By.XPATH, "//yt-formatted-string[text()='Subscribe']")))
driver.execute_script("arguments[0].scrollIntoView(true);", subscribe)
comments = []
my_length = len(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//yt-formatted-string[@class='style-scope ytd-comment-renderer' and @id='content-text'][@slot='content']"))))
while True:
    try:
        driver.execute_script("window.scrollBy(0,800)")
        time.sleep(5)
        comments.append([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//yt-formatted-string[@class='style-scope ytd-comment-renderer' and @id='content-text'][@slot='content']")))])
    except TimeoutException:
        driver.quit()
        break
print(comments)
If you don't have to use Selenium, I would recommend looking at the Google/YouTube API.
https://developers.google.com/youtube/v3/getting-started
Example:
https://www.googleapis.com/youtube/v3/commentThreads?key=YourAPIKey&textFormat=plainText&part=snippet&videoId=N0lxfilGfak&maxResults=100
This gives you the first 100 results plus a page token that you can pass with the next request to get the next 100 results.
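A rough Python sketch of that paging loop (YourAPIKey is a placeholder; the commentThreads endpoint keeps returning a nextPageToken until the last page):
import requests

def fetch_all_comments(video_id, api_key):
    url = "https://www.googleapis.com/youtube/v3/commentThreads"
    params = {"key": api_key, "textFormat": "plainText",
              "part": "snippet", "videoId": video_id, "maxResults": 100}
    comments = []
    while True:
        data = requests.get(url, params=params).json()
        for item in data.get("items", []):
            comments.append(item["snippet"]["topLevelComment"]["snippet"]["textDisplay"])
        if "nextPageToken" not in data:
            return comments
        params["pageToken"] = data["nextPageToken"]  # request the next page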
I'm not familiar with Python, but I'll describe the steps I would take to get all the comments.
First of all, in your code I think the main issue is with CYCLES = 7: you only scroll 7 times, pausing 2 seconds each time. Since you are successfully grabbing 60 comments, fixing that condition will solve your issue.
I assume you don't have any issue finding elements on a website using locators.
You need to get the total comment count into a variable as an int (in your case, let's say COMMENTS = 715).
Define another variable called VISIBLECOUNTS = 0.
Then use a while loop to keep scrolling while COMMENTS > VISIBLECOUNTS.
The code might look like this (really sorry if there are syntax issues):
# python/selenium commands to load all comments
COMMENTS = 715  # just a sample value; set it from the total comment count
VISIBLECOUNTS = 0
SCROLL_PAUSE_TIME = 2
while VISIBLECOUNTS < COMMENTS:
    html.send_keys(Keys.END)
    time.sleep(SCROLL_PAUSE_TIME)
    VISIBLECOUNTS = len(driver.find_elements_by_xpath('//ytm-comment-thread-renderer'))
With this, you will keep scrolling down until VISIBLECOUNTS equals COMMENTS. Then you can grab all the comments, as they all share the same element tag, ytm-comment-thread-renderer.
Since I'm not familiar with Python, I'll show the commands to get the comment counts in JS; you can try them in your browser console and convert them into Python/Selenium calls.
Run the queries below in your console and check.
To get the total comment count:
var comments = document.querySelector(".comment-section-header-text").innerText.split(" ")
// We get the text value "Comments • 715", split it by spaces, and take the last value
Number(comments[comments.length - 1])
// This converts the string "715" to an int; you just need to do the same in Python/Selenium
To get active comments count
$x("//ytm-comment-thread-renderer").length
Note: if it's hard to extract the values, you can still use the Selenium JS executor and do the scrolling with JS until all the comments are visible. But I guess it's not hard to do in Python, since the logic is the same.
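In Python that might look like the sketch below, run through execute_script and reusing html, Keys, and SCROLL_PAUSE_TIME from the question's function (assuming the same selectors exist on the page being loaded):
comments_text = driver.execute_script(
    'return document.querySelector(".comment-section-header-text").innerText')
total = int(comments_text.split(" ")[-1])  # "Comments • 715" -> 715
visible = len(driver.find_elements_by_xpath("//ytm-comment-thread-renderer"))
while visible < total:
    html.send_keys(Keys.END)
    time.sleep(SCROLL_PAUSE_TIME)
    visible = len(driver.find_elements_by_xpath("//ytm-comment-thread-renderer"))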
I'm really sorry I can't add the solution in Python itself, but I hope this helped.
Cheers.
The first thing you need to do is scroll down the video page to load all the comments (this example uses the PHP WebDriver bindings, but the logic is the same in Python):
$actualHeight = 0;
$nextHeight = 0;
while (true) {
    try {
        $nextHeight += 10;
        $actualHeight = $this->driver->executeScript('return document.documentElement.scrollHeight;');
        if ($nextHeight >= ($actualHeight - 50)) break;
        $this->driver->executeScript("window.scrollTo(0, $nextHeight);");
        $this->driver->manage()->timeouts()->implicitlyWait = 10;
    } catch (Exception $e) {
        break;
    }
}

How do I use "wait until" for an element before a certain page is displayed? (using Python 3.7)

First of all, I am a newbie at testing apps with Appium (Python 3.7). Here I am testing an app where I have to wait right after the login process completes. I have done this using an implicit wait, but now, to make the testing process more dynamic, I want to wait until the next page is displayed.
Note: I have seen and tried several issues on this forum, but they could not help me.
Here's the code:
from appium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
desired_cap = {
    "platformName": "Android",
    "deviceName": "QDG9X18426W11577",
    "newCommandTimeout": "240",
    "app": "C:\\Users\\tahmina\\Downloads\\test-v3.10.apk",
    "appPackage": "com.cloudapper.android",
    "appActivity": "com.cloudapper.android.SplashActivity"
}
#Making connection with Appium server
driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_cap)
#Here I have used an implicit wait to load the login page
driver.implicitly_wait(20)
#Login to the app
search_element = driver.find_element_by_id('Username').send_keys('test@yandex.com')
search_element = driver.find_element_by_id('Password').send_keys('1155qQQ')
search_element = driver.find_element_by_xpath(
    '/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout[2]/android.webkit.WebView/android.webkit.WebView/android.view.View/android.view.View[2]/android.widget.Button').click()
wait = WebDriverWait(driver, 10)
#Waiting until the next process comes up
if webdriver.wait.until(driver.find_element_by_id('com.cloudapper.android:id/item_bg').is_displayed()):
    print('Run the next process')
elif webdriver.wait.until(not driver.find_element_by_id('com.cloudapper.android:id/item_bg')):
    print('Something went wrong!')
#Searching employee by using ID
search_element = driver.find_element_by_id('com.cloudapper.android:id/edtSearch').send_keys('1018')
driver.execute_script('mobile:performEditorAction', {'action': 'search'})
Please guide me if any of you have a solution for this.
With Python, you can use WebDriverWait and ExpectedConditions to solve your problem.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
#Login to the app
search_element = driver.find_element_by_id('Username').send_keys('test@yandex.com')
search_element = driver.find_element_by_id('Password').send_keys('1155qQQ')
search_element = driver.find_element_by_xpath(
    '/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout[2]/android.webkit.WebView/android.webkit.WebView/android.view.View/android.view.View[2]/android.widget.Button').click()
# Waiting until the next process comes up
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "com.cloudapper.android:id/item_bg")))
If you want to implement the WebDriverWait in a try / except block, you can handle the case where your desired element does not appear on the page:
from selenium.common.exceptions import TimeoutException
# Waiting until the next process comes up
try:
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "com.cloudapper.android:id/item_bg")))
except TimeoutException as ex:
    # handle the exception here
    print("Exception has been thrown. " + str(ex))
Because you are using the explicit wait.until keyword, you should not also set the driver's implicit wait. It is bad practice to mix implicit and explicit waits in your automation tests, and doing so can yield unexpected wait times.
On another note: I noticed you are using absolute XPath in some of your selectors. XPath is a great method for selecting elements, but absolute selectors like that make your code very brittle. I recommend using relative selectors instead. You can replace this:
search_element = driver.find_element_by_xpath(
'/hierarchy/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout/android.widget.LinearLayout/android.widget.FrameLayout[2]/android.webkit.WebView/android.webkit.WebView/android.view.View/android.view.View[2]/android.widget.Button').click()
with this:
search_element = driver.find_element_by_xpath('//android.widget.Button').click()
You may have to query on additional attributes such as text:
search_element = driver.find_element_by_xpath("//android.widget.Button[@text='Log in']").click()
Overall, this method is much more efficient.
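Putting the pieces together, the login flow from the question plus the explicit wait might look like the sketch below (the text attribute value 'Log in' is an assumption about the button's label, as noted above):
driver.find_element_by_id('Username').send_keys('test@yandex.com')
driver.find_element_by_id('Password').send_keys('1155qQQ')
driver.find_element_by_xpath("//android.widget.Button[@text='Log in']").click()  # assumed label
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "com.cloudapper.android:id/item_bg")))
driver.find_element_by_id('com.cloudapper.android:id/edtSearch').send_keys('1018')
driver.execute_script('mobile:performEditorAction', {'action': 'search'})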
Have you tried something like this:
import time
time.sleep(0.5)
This is for anyone who faces the same problem:
Import WebDriverWait and ExpectedConditions.
Use this as an explicit wait in your code:
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "com.cloudapper.android:id/item_bg")))
