Retrieving elements with custom HTML attributes - python-3.x

I have the following website: https://www.kvk.nl/handelsregister/publicaties/, where I would like to retrieve the login link with Selenium, Scrapy and Python. So for the relevant function, I have the following code:
def start_requests(self):
    self.driver = webdriver.Chrome(executable_path=os.path.join(os.getcwd(), "Drivers", "chromedriver.exe"))
    self.driver.get(self.initial_url)
    # access_page_wait is a WebDriverWait defined elsewhere (15-second timeout)
    test = access_page_wait.until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, 'a[data-ui-test-class="linkCard_toegangscode"]')))
    if test.is_displayed():
        print("+1")
    else:
        print("-1")
However, this does not seem to work: it just waits 15 seconds and then stops, never reaching +1 or -1.
My question is: how can I point Selenium to the correct element? It also does not work with XPath, find_elements_by_xpath("//a[@data-ui-test-class='linkCard_toegangscode']").
Should I use another selection approach, and if so, which one?

There is an iframe blocking you from accessing the element. Switch to the iframe first, then access the element.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
import os

driver = webdriver.Chrome(executable_path=os.path.join(os.getcwd(), "Drivers", "chromedriver.exe"))
driver.get("https://www.kvk.nl/handelsregister/publicaties/")
driver.switch_to.frame(0)
test = WebDriverWait(driver, 10).until(expected_conditions.visibility_of_element_located((By.CSS_SELECTOR, 'a[data-ui-test-class="linkCard_toegangscode"]')))
if test.is_displayed():
    print("+1")
else:
    print("-1")
Try the code above. It should print the +1 you are looking for.
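One follow-up worth noting, not shown in the snippet above: if you later need to locate elements outside the iframe, switch back to the main document first.
# Return to the top-level document after working inside the iframe.
driver.switch_to.default_content()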

Related

Python Selenium Locate Element within P tags by XPATH

On the following website (https://play2048.co/) is a game called 2048. After I use Selenium to play it, it eventually gets to a point where it says Game Over!
When I inspect the element (see picture) I see the words Game Over! inside p tags. I wish to capture this text and store it in a Python variable as below:
TextV = wait.until(EC.visibility_of_element_located((By.XPATH, '/html/body/div[2]/div[3]/div[1]/p'))).text
I got the XPath by simply right-clicking the text in the p tags (inspect element) and selecting Copy full XPath, but it seems Selenium cannot find this text element when I run the whole code.
Thanks
Personally I use CSS selectors, because they tend to be more robust and more likely to succeed than a copied full XPath.
So I would recommend something along these lines:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Your Code Here
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR,'the css selector'))).text
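For this specific page, a minimal sketch might look like the following; the .game-message.game-over p selector is an assumption based on the open-source 2048 markup, so verify it in the inspector before relying on it.
# Wait until the game-over banner is shown, then read its text.
# Selector assumed from the open-source 2048 markup, verify before use.
TextV = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, ".game-message.game-over p"))
).text
print(TextV)  # e.g. "Game over!"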

How to find a specific element of a page in Selenium Python, because I can't find it

I have tried every way I know, but I can't find the "login" and "password" elements. Could someone help me, please? I am tired...
link: https://trading.finam.ru/
from selenium import webdriver
import traceback
from time import sleep

url = 'https://trading.finam.ru/'
driver = webdriver.Chrome(executable_path=r"C:\Users\Idensas\PycharmProjects\pythonProject\selenium\chromedriver.exe")
driver.get(url)
sleep(2)
try:
    x = driver.find_element_by_xpath('//label[contains(text(),"Логин")]')
    x.send_keys('123')
except Exception:
    traceback.print_exc()
finally:
    driver.close()
    driver.quit()
The picture of that site
The following XPaths will work:
//input[@name='login']
//input[@name='password']
I see the site loads slowly, so you must add some delay, preferably an explicit wait, like:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.XPATH, "//input[@name='login']")))
before trying to insert text there.
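Putting the wait and the XPaths together, a minimal sketch of the corrected flow could look like this (the 10-second timeout and the test values are arbitrary choices):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path=r"C:\Users\Idensas\PycharmProjects\pythonProject\selenium\chromedriver.exe")
driver.get('https://trading.finam.ru/')

# Wait until the login field is visible, then fill in both fields.
wait = WebDriverWait(driver, 10)
login = wait.until(EC.visibility_of_element_located((By.XPATH, "//input[@name='login']")))
login.send_keys('123')
password = driver.find_element_by_xpath("//input[@name='password']")
password.send_keys('123')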
Any website takes some time to load completely before all the elements are visible to the user.
To check the time taken by a website in different regions you can run a check on https://gtmetrix.com/
You can use a simple sleep for a few seconds until the page is loaded:
from selenium import webdriver
import time

driver = webdriver.Chrome("chromedriver_path_in_your_local")
driver.get("https://trading.finam.ru/")
time.sleep(5)
login = driver.find_element_by_xpath("//input[@name='login']")
password = driver.find_element_by_xpath("//input[@name='password']")

Web scraping of a betting site with selenium. Incomplete event list

I wrote a program to get some odds from the "Eurobet.it" site, using Selenium to open the page containing the "ChanceMix". It seems to work, but when the games take place on more than one date the list I get is incomplete: it contains only the games on the first/second date, not the later ones. I tried various solutions but without results. Can someone help me?
This is my code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import ui
from selenium.webdriver.support import expected_conditions as EC
link = 'https://www.eurobet.it/it/scommesse/#!/calcio/eu-champions-league/'
driver = webdriver.Chrome()
driver.get(link)
# Find the "ChanceMix" button by xpath and store it as a webdriver object
element = driver.find_element_by_xpath("//a[contains(text(), 'ChanceMix')]")
# Click on the "ChanceMix" button to open the relevant page
element.click()
# Find the list of games
Games = ui.WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div.box-sport")))
# Print the list of games
for game in Games:
    print(game.text)
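A possibility worth checking, offered only as an assumption: the events for later dates may be lazy-loaded and rendered only after the page has been scrolled. A sketch that scrolls to the bottom before re-collecting the elements:
import time

# Assumption: later dates are rendered only after the page is scrolled.
# Scroll to the bottom repeatedly to trigger any lazy loading.
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

# Re-collect the games after scrolling.
Games = driver.find_elements_by_css_selector("div.box-sport")
for game in Games:
    print(game.text)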

How to press Enter in Selenium Python?

I want to search for a specific keyword on Instagram. For example, I want to search for the word "fast food". I can send this text to the search box, but when I use the submit method in Selenium with Python 3, it doesn't work and gives me an error. This is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.alert import Alert
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support import expected_conditions as EC
import time
driver = webdriver.Firefox()
url="https://www.instagram.com/p/pTPI-kyX7g/?tagged=resurant"
driver.get(url)
#login_button=driver.find_element_by_xpath("""/html/body/span/section/main/article/div[2]/div[2]/p/a""")
#login_button.click()
driver.implicitly_wait(5)
search_button=driver.find_element_by_xpath("""/html/body/span/section/nav/div[2]/div/div/div[2]/input""")
search_button.send_keys("fast food")
search_button.submit()
This gives the following error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: ./ancestor-or-self::form
Could you help me?
Instead of .submit(), try .send_keys(u'\ue007') (the Enter key).
See: http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.keys
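Since the question already imports selenium.webdriver.common.keys, the same key press can be written more readably with Keys.ENTER, which is defined as u'\ue007':
from selenium.webdriver.common.keys import Keys

search_button.send_keys("fast food")
search_button.send_keys(Keys.ENTER)  # same as send_keys(u'\ue007')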
It needs more key presses:
search_button = driver.find_element_by_xpath("""/html/body/span/section/nav/div[2]/div/div/div[2]/input""")
search_button.send_keys("fast")
while True:
    search_button.send_keys(u'\ue007')
That is very interesting, but another solution is better. It is important to know that you must press Enter twice in the Instagram search box. So, to eliminate the while loop, you can use this code:
search_button.send_keys("#fast")
time.sleep(5)
search_button.send_keys(u'\ue007')
search_button.send_keys(u'\ue007')

Selenium WebDriverWait returns an error on Python while web scraping

I am creating a script for Facebook scraping.
I am using Selenium 2.53.6 and geckodriver 0.11.1 with Firefox 50 under Ubuntu 14.04.5 LTS. I have also tried it under Windows 10 using geckodriver as well as chromedriver, with the same results (which I describe below) :(
The following code fragment, which I copied from the original documentation in the section on explicit waits, is:
import datetime, time, sys, argparse
from time import strftime
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support.ui import WebDriverWait # available since 2.4.0
from selenium.webdriver.support import expected_conditions as EC # available since 2.26.0
...
...
def main(usrEmail, pwd, numberOfScrolls, secondsWait, searchItem, outFilename):
    # Open Firefox browser
    binary = FirefoxBinary('/usr/bin/firefox')
    browser = webdriver.Firefox(firefox_binary=binary)
    # Put the browser at a specific position
    browser.set_window_position(400, 0)
    # Go to Facebook
    browser.get("http://www.facebook.com")
    #waiting( 5 )
    try:
        element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "u_0_p")))
    finally:
        logging("logging in...")
The problem is that I get this error:
File "fb.py", line 86, in main
element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "u_0_p")))
NameError: name 'By' is not defined
As a workaround I am just using delays of the form time.sleep(delay) and the program works quite well, but it would be better if I could use the WebDriverWait feature.
Am I missing something obvious, or is this some kind of Selenium bug or browser quirk?
Any help / clarification will be greatly appreciated!
Thanks for your time!!
It could be that you are missing the import that defines By:
from selenium.webdriver.common.by import By
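With that import added next to the other selenium imports, the line from the traceback resolves; a minimal sketch of the relevant part:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# By.ID is now defined, so the documented explicit wait works.
element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "u_0_p")))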
