Unable to fetch web elements in pytest-bdd / behave in Python 3.x

So, I'm in the middle of creating a test-automation framework using pytest-bdd and behave in Python 3.10.
I have some code, but I'm not able to fetch the web element from the portal, and the error doesn't say anything useful about why. Here is the error I'm getting in the console.
Let me share the code here too for better understanding.
demo.feature
Feature: Simple first feature

  @test1
  Scenario: Very first test scenario
    Given Launch Chrome browser
    When open my website
    Then verify that the logo is present
    And close the browser
test_demo.py
from behave import *
from selenium import webdriver
from selenium.webdriver.common.by import By
# from pytest_bdd import scenario, given, when, then
import time
import variables
import xpaths
from pages.functions import *
import chromedriver_autoinstaller


# @scenario('../features/demo.feature', 'Very first test scenario')
# def test_eventLog():
#     pass


@given('Launch Chrome browser')
def given_launchBrowser(context):
    launchWebDriver(context)
    print("DEBUG >> Browser launches successfully.")


@when('Open my website')
def when_openSite(context):
    context.driver.get(variables.link)
    # context.driver.get(variables.nitsanon)
    print("DEBUG >> Portal opened successfully.")


@then('verify that the logo is present')
def then_verifyLogo(context):
    time.sleep(5)
    status = context.driver.find_element(By.XPATH, xpaths.logo_xpath)
    # status = findElement(context, xpaths.logo_xpath)
    print('\n\n\n\n', status, '\n\n\n\n')
    assert status is True, 'No logo present'
    print("DEBUG >> Logo validated successfully.")


@then('close the browser')
def then_closeBrowser(context):
    context.driver.close()
variables.py
link = 'https://nitin-kr.onrender.com/'
xpaths.py
logo_xpath = "//*[@id='logo']/div"
requirements.txt
behave~=1.2.6
selenium~=4.4.3
pytest~=7.1.3
pytest-bdd~=6.0.1
Let me know if you need any more information. I'm keen to build this automation testing framework without using any OOP concepts.
The core problem is that I'm not able to fetch web elements at all, so Selenium calls like find_element(By.XPATH, XPATH).send_keys(VALUE) never work.
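For example, this is the kind of step I expect to be able to write (a minimal sketch only; the step text and locator below are made up for illustration, not from my real page):

@when('Type into the search box')
def when_typeSearch(context):
    # hypothetical locator, just to show the find_element(...).send_keys(...) pattern I need
    context.driver.find_element(By.XPATH, "//input[@id='search']").send_keys("some value")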

Related

How to use Device Farm desktop browser session with Python

I'm trying to run a Selenium test in Python using a Device Farm desktop browser session, but between the lack of resources (official or otherwise) and my own lack of knowledge, I can't figure it out.
I used this documentation:
https://docs.aws.amazon.com/devicefarm/latest/testgrid/getting-started-migration.html
https://selenium-python.readthedocs.io/getting-started.html#simple-usage
I installed GeckoDriver and ran the following code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
I saw a web browser appear for about a second.
I then decided to use Device Farm. I set up my AWS env vars, tested the connectivity, and ran the following code:
import boto3
import pytest
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
class test_url:
    def setup_method(self, method):
        devicefarm_client = boto3.client("devicefarm", region_name="eu-west-1")
        testgrid_url_response = devicefarm_client.create_test_grid_url(
            projectArn="arn:aws:devicefarm:us-west-2:1234567890:testgrid-project:some-id-string",
            expiresInSeconds=300)
        self.driver = webdriver.Remote(
            "http://www.python.org", webdriver.DesiredCapabilities.FIREFOX)

    # later, make sure to end your WebDriver session:
    def teardown_method(self, method):
        self.driver.quit()
Here's the result:
$ pytest -s
====================================================================================== test session starts =======================================================================================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/eric/nuage/devicefarm-poc
collected 0 items
===================================================================================== no tests ran in 0.07s ======================================================================================
I saw nothing happen in the AWS Management Console.
Why did no test run? Shouldn't this code perform a URL test? Shouldn't something happen in the AWS Management Console when I run it?
There appear to be a few issues with your code.
According to the pytest documentation, your tests need to live in a file whose name starts with test, and the test methods themselves also need names starting with test (for class-based tests, the class name must start with Test). This is why none of your code is executing.
The line driver = webdriver.Firefox() tries to create a local Firefox driver. What you want is a remote driver using the URL that AWS Device Farm provides (which you attempt at the line self.driver = webdriver.Remote("http://www.python.org", webdriver.DesiredCapabilities.FIREFOX)).
The line self.driver = webdriver.Remote("http://www.python.org", webdriver.DesiredCapabilities.FIREFOX) is incorrect. The first argument is supposed to be the URL of the remote endpoint used to execute your tests; in this case, that is AWS Device Farm's endpoint, returned in the CreateTestGridUrl API response. Selenium is basically just a REST service: it performs actions via REST calls to an endpoint, which tells the driver which actions to perform.
AWS Device Farm is currently only available in us-west-2.
I suggest you go through the pytest, Selenium, and AWS docs again to understand how it all works together. It's not too complex, but it can get confusing if you don't know how all the moving parts interact with each other.
Here's a "minimal" example with pytest to get you started.
import logging

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.keys import Keys

import boto3
import pytest

PROJECT_ARN = "<your project ARN>"

# Currently, AWS Device Farm is only in us-west-2
devicefarm = boto3.client('devicefarm', region_name='us-west-2')
remote_url = devicefarm.create_test_grid_url(
    projectArn=PROJECT_ARN,
    expiresInSeconds=600  # 10 minutes. Increase if you need longer
)['url']


@pytest.fixture(scope="module")  # Specify "module" to reuse the same session
def firefox_driver(request):
    # Start fixture setup
    logging.info("Creating a new session with remote URL: " + remote_url)
    remote_web_driver = webdriver.Remote(command_executor=remote_url,
                                         desired_capabilities=DesiredCapabilities.FIREFOX)
    logging.info("Created the remote webdriver session: " + remote_web_driver.session_id)
    yield remote_web_driver  # Returns the driver fixture and waits for tests to run
    logging.info("Tearing down the remote webdriver session: " + remote_web_driver.session_id)
    remote_web_driver.quit()
    logging.info("Done tearing down")


@pytest.mark.usefixtures("firefox_driver")
def test_search_in_python_org(firefox_driver):
    driver = firefox_driver
    driver.get("http://www.python.org")
    assert "Python" in driver.title
    elem = driver.find_element_by_name("q")
    elem.clear()
    elem.send_keys("pycon")
    elem.send_keys(Keys.RETURN)
    assert "No results found." not in driver.page_source
    # driver.close() is done in the fixture instead of here now


@pytest.mark.usefixtures("firefox_driver")
def test_aws_console_title(firefox_driver):
    driver = firefox_driver
    driver.get("https://aws.amazon.com/")
    assert "Amazon Web Services" in driver.title


if __name__ == '__main__':
    pytest.main([__file__])
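To run it, invoke pytest the same way as before; the module name here is just an example:

pytest -s test_device_farm.py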

How can I retrieve data from a web page with a loading screen?

I am using the requests library to retrieve data from nitrotype.com/racer/insert_name_here about a user's progress using the following code:
import requests
base_url = 'https://www.nitrotype.com/racer/'
name = 'test'
url = base_url + name
page = requests.get(url)
print(page.text)
However, my problem is that this retrieves data from the loading screen; I want the data that appears after the loading screen finishes.
Is it possible to do this, and if so, how?
This is most likely because of dynamic loading, and it can easily be worked around by using selenium or pyppeteer.
In my example I have used pyppeteer to spawn a browser and run the page's JavaScript so that I can obtain the required information.
Example:
import pyppeteer
import asyncio


async def main():
    # launches a chromium browser; can use chrome instead of chromium as well.
    browser = await pyppeteer.launch(headless=False)
    # creates a blank page
    page = await browser.newPage()
    # navigates to the requested page and runs the dynamic code on the site.
    await page.goto('https://www.nitrotype.com/racer/tupac')
    # returns the html content of the page
    cont = await page.content()
    return cont


# prints the html code of the user profile: tupac
print(asyncio.get_event_loop().run_until_complete(main()))
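For comparison, a rough Selenium equivalent of the same idea (a sketch only; it assumes chromedriver is installed and on your PATH, and uses a crude sleep instead of a proper explicit wait):

import time

from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.nitrotype.com/racer/tupac')
time.sleep(5)  # give the client-side JavaScript time to render the profile
print(driver.page_source)  # the HTML after the scripts have run, not the loading screen
driver.quit()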

Why doesn't my Selenium session stay logged in?

I'm working on using Selenium to sign into GitHub and create a repository. A similar project I had found went to "https://github.com/new" after login to reach the repo creation page. However, when I try to do that, it returns to an empty login page with the following url: "https://github.com/login?return_to=https%3A%2F%2Fgithub.com%2Fnew".
I'm pretty lost and haven't found a good reason yet for why this would be happening.
PS. This is connected to a shell script, but the shell script is doing what it's supposed to, so I didn't attach that code along with the Python that's being troublesome.
import sys

from selenium import webdriver

browser = webdriver.Safari()


def createProj():
    folder_name = str(sys.argv[0])
    browser.get("https://github.com/login")
    try:
        login_button = browser.find_elements_by_xpath("//*[@id='login_field']")[0]
        login_button.click()
        login_button.send_keys("Insert email here")
        pass_button = browser.find_elements_by_xpath("//*[@id='password']")[0]
        pass_button.click()
        pass_button.send_keys("Insert password here")
        submit_button = browser.find_elements_by_xpath("//*[@id='login']/form/div[4]/input[9]")[0]
        submit_button.click()
    except:
        print("You're already signed in, no need to log in.")
    browser.get("https://github.com/new")


if __name__ == "__main__":
    createProj()

trouble getting the current url on selenium

I want to get the current url when I am running Selenium.
I looked at this stackoverflow page: How do I get current URL in Selenium Webdriver 2 Python?
and tried the things posted but it's not working. I am attaching my code below:
from selenium import webdriver

# launch firefox
driver = webdriver.Firefox()

url1 = 'https://poshmark.com/search?'
# search in a window
driver.get(url1)

xpath = '//input[@id="user-search-box"]'
searchBox = driver.find_element_by_xpath(xpath)

brand = "freepeople"
style = "top"
searchBox.send_keys(' '.join([brand, "sequin", style]))

from selenium.webdriver.common.keys import Keys
# equivalent of hitting the enter key
searchBox.send_keys(Keys.ENTER)

print(driver.current_url)
My code prints https://poshmark.com/search?, but it should print https://poshmark.com/search?query=freepeople+sequin+top&type=listings&department=Women, because that is the page Selenium actually navigates to.
The issue is that there is no time lag between your searchBox.send_keys(Keys.ENTER) and print(driver.current_url).
There needs to be some delay so that the statement can pick up the url change; if your code fires before the url has actually changed, it gives you the old url.
The workaround would be to add time.sleep(1) to wait for 1 second, but a hard-coded sleep is not a good option. You should do one of the following instead:
Keep polling the url and wait for the change to happen (see the sketch below)
Wait for an object that you know will appear when the new page loads
Instead of using Keys.ENTER, simulate the operation with a .click() on the search button if one is available
Usually when you use the click method in Selenium, it takes care of the page change, so you don't see such issues. Here you press a key using Selenium, which doesn't do any kind of waiting for the page load; that is why you see the issue in the first place.
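For the polling option, Selenium's built-in expected_conditions module already has a url_changes condition that does exactly this; a minimal sketch of using it (the 10-second timeout is arbitrary):

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

old_url = driver.current_url
searchBox.send_keys(Keys.ENTER)
# blocks until the url differs from old_url, or raises TimeoutException after 10 seconds
WebDriverWait(driver, 10).until(EC.url_changes(old_url))
print(driver.current_url)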
I had the same issue and came up with a solution that uses an explicit wait (see how explicit waits work in the documentation).
Here is my solution:
from contextlib import contextmanager

from selenium.webdriver.support.ui import WebDriverWait


class UrlHasChanged:
    def __init__(self, old_url):
        self.old_url = old_url

    def __call__(self, driver):
        return driver.current_url != self.old_url


@contextmanager
def url_change(driver):
    current_url = driver.current_url
    yield
    WebDriverWait(driver, 10).until(UrlHasChanged(current_url))
Explanation:
First, I created my own wait condition (see here) that takes old_url as a parameter (the url from before the action was made) and checks whether the old url is still the same as driver.current_url after the action. It returns False when both urls are the same and True otherwise.
Then, I created a context manager to wrap the action I want to make: it saves the url before the action, and after the action it calls WebDriverWait with the wait condition created above.
Thanks to that solution I can now reuse this function with any action that changes the url, and wait for the change like this:
with url_change(driver):
    login_panel.login_user(normal_user['username'], new_password)

assert driver.current_url == dashboard.url
It is safe because WebDriverWait(driver, 10).until(UrlHasChanged(current_url)) waits until the current url changes, and after 10 seconds it stops waiting and raises a TimeoutException.
What do you think about this?
I fixed this problem by reading the button's href and then calling driver.get(hreflink) instead of clicking it. click() was not working for me!
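A minimal sketch of that workaround, assuming the element you would normally click actually exposes an href attribute (the locator is hypothetical):

link = driver.find_element_by_xpath('//a[@class="search-button"]')  # hypothetical locator
hreflink = link.get_attribute("href")
driver.get(hreflink)  # navigate directly instead of clicking
print(driver.current_url)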

Chrome alert pop up not detected in Selenium

I am having issues handling an authentication pop-up in Chrome via Selenium.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Chrome()
driver.get('URL')
time.sleep(5)
alert = driver.switch_to_alert()
alert.send_keys('Username')
alert.send_keys(Keys.TAB)
alert.send_keys('Password')
This returns an error:
"selenium.common.exceptions.NoAlertPresentException: Message: no alert open"
Alternatively, I also tried the following code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Chrome()
driver.get('https://Username:Password#URL')
The second code works partially:
In Chrome the user is logged in, but the page does not load; only a blank page is displayed. Once the blank page has loaded, I pass only the URL (without user credentials) and it works fine.
In Firefox, the webpage loads perfectly.
Basically, the issue is with Chrome.
Any help is appreciated.
Thanks!
The second code you have tried is correct, with a slight mistake. In Python it should look like this:

def login_to_system(username, password, url):
    driver = webdriver.Chrome()
    driver.get("https://" + username + ":" + password + "@" + url)
First, try waiting for the alert to be present:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
wait.until(EC.alert_is_present())

Only move on to the next step once the alert is present.
Also, can you check a screenshot to confirm whether the alert is actually present when the test case is failing?
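Putting the wait together with the original alert-handling code, a minimal sketch (the URL is still a placeholder):

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('URL')

wait = WebDriverWait(driver, 10)
alert = wait.until(EC.alert_is_present())  # returns the alert once it appears, or raises TimeoutException

alert.send_keys('Username')
alert.send_keys(Keys.TAB)
alert.send_keys('Password')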
