How to select the first entry in Jira using Python Selenium - python-3.x

I'm new to Python. I want to select the first entry of my query list in Jira using Python Selenium.
from selenium import webdriver

chrome_path = r"C:\Python37-32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get('https://jira.xxxx.com/issues/?filter=24005')
driver.find_element_by_xpath('//*[@id="user-options"]/a').click()
driver.find_element_by_xpath('//*[@id="login-form-username"]').send_keys('xxxx')
driver.find_element_by_xpath('//*[@id="login-form-password"]').send_keys('xxxx')
driver.find_element_by_xpath('//*[@id="login-form-submit"]').click()
first_entry = driver.find_element_by_xpath('//*[@id="content"]/div[1]/div[3]/div/div/div/div/div/div/div[1]/div[1]/span/span[3]').text
I also need to select every entry in the query list.
Kindly help.
Thanks in advance.

If you're able to, I would recommend using the Jira API for something like this (are you using Jira Cloud?): https://jira.readthedocs.io/en/master/installation.html
You would reference your filter (https://domain.atlassian.net/issues/?filter=24005) and get all of its issues in a single API request.
Example:
from jira import JIRA

# Connection details are placeholders -- fill in your own server and credentials.
jira = JIRA(server="https://domain.atlassian.net", basic_auth=("user", "api_token"))
issue_list = jira.search_issues("filter=24005")
for issue in issue_list:
    print(issue.key)  # do something with each issue
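Filled out a little further, with a small helper that formats each issue; the server URL and credentials are placeholders, not values from the question:

```python
def summarize(issues):
    # Pure helper: format each issue object as "KEY: summary".
    return [f"{i.key}: {i.fields.summary}" for i in issues]

if __name__ == "__main__":
    from jira import JIRA  # pip install jira; imported here so the helper stays importable

    # Placeholder server and credentials -- substitute your own.
    jira = JIRA(server="https://domain.atlassian.net",
                basic_auth=("user@example.com", "api_token"))
    for line in summarize(jira.search_issues("filter=24005")):
        print(line)
```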

Related

How to find an element with Selenium in Python?

import os
import selenium
from selenium import webdriver
import time
browser = webdriver.Chrome()
browser.get('https://www.skysports.com/champions-league-fixtures')
time.sleep(7) #So page loads completely
teamnames = browser.find_element_by_tag("span")
print(teamnames.text)
It seems the find_element attribute has changed in Selenium. :/
I also want to find all <img> elements (their image URLs) on another local website; I'd appreciate help with that too.
Replace teamnames = browser.find_element_by_tag("span")
with teamnames = browser.find_element_by_tag_name("span")
Try find_elements instead of find_element, because tags in the DOM are usually present multiple times.
Example:
browser.find_elements_by_tag_name('span')
Also, note that it returns a list of elements, which you need to traverse to access the individual properties.
It seems Selenium made some changes in the new version:
from selenium.webdriver.common.by import By
browser = webdriver.Firefox()
browser.get('url')
browser.find_element(by=By.CSS_SELECTOR, value='')
You can also use: By.ID, By.NAME, By.XPATH, By.LINK_TEXT, By.PARTIAL_LINK_TEXT, By.TAG_NAME, By.CLASS_NAME, or By.CSS_SELECTOR.
I used these with Python 3.10 and now it's working just fine.

Automating web based Qlikview using selenium python

I am pretty new to this selenium module in python.
I am trying to automate web-based QlikView with Selenium and have somewhat managed to do it, but I am facing one problem.
The problem: I am able to change tabs, apply filters, and export data from QlikView by typing the commands into the PyCharm console, but when I put the commands together and run them as a script, the code does not execute in the desired manner.
Can somebody help me with this?
from selenium import webdriver
driver = webdriver.Chrome()
link = 'some link'
driver.get(link)
driver.implicitly_wait(15)
click_1 = driver.find_element_by_xpath("path").click()
driver.implicitly_wait(2)
click_2 = driver.find_element_by_xpath('path').click()
Any suggestions would be very much appreciated.

get 50 search results per search in selenium python

When I open a Chrome window from Selenium and make a GET request, Google displays 10 results.
Is there a way to get 50 results per page?
Or is there a way to modify the query string so that it fetches 50 results?
Below is my code:
from selenium import webdriver
chrome_path = r"C:\Users\Desktop\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
url = 'https://www.google.com/search?q=some random string'
driver.get(url)
Now, when I execute the above code, it opens Chrome and fetches 10 results.
Is there a way to get 50 results for the search string?
If not with Selenium, is there any other way to achieve it?
Please help me.
Thanks in advance.
Here it is: the num query parameter controls how many results Google returns per page (200 here; use num=50 for fifty):
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.google.com/search?q=some random string&num=200&start=0&sourceid=chrome&ie=UTF-8")
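The same idea without hand-assembling the query string: urllib.parse.urlencode handles the spaces, and num/start are the (undocumented, so subject to change) Google parameters for page size and offset:

```python
from urllib.parse import urlencode

def google_search_url(query, per_page=50, start=0):
    # Build a search URL asking Google for per_page results at offset start.
    params = {"q": query, "num": per_page, "start": start}
    return "https://www.google.com/search?" + urlencode(params)

# e.g. google_search_url("some random string")
# -> https://www.google.com/search?q=some+random+string&num=50&start=0
```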

Unable to locate the referenced ID using selenium and python

I am trying to locate a search box with the id search2 on a website. I have been able to log in to the website successfully using the code below.
import requests
from tqdm import tqdm
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
options = webdriver.ChromeOptions()
tgt = "C:\\mypath"
profile = {"plugins.plugins_list": [{"enabled": False, "name": "Chrome PDF Viewer"}],
           "download.default_directory": tgt}
# The prefs must be set before the driver is created, or they are ignored.
options.add_experimental_option("prefs", profile)
print(options)
driver = webdriver.Chrome(executable_path=r'C:\chromedriver_win32\chromedriver.exe', chrome_options=options)
driver.implicitly_wait(30)
driver.get("http://mylink.com/")
user=driver.find_element_by_id("username")
passw=driver.find_element_by_id("password")
user.send_keys("abc@xyz.com")
passw.send_keys("Pwd")
driver.find_element_by_xpath('/html/body/div[2]/div/div/div[2]/form/div[3]/button').click()
page=driver.find_element_by_id("search2")
print(page)
The code works fine up to this point, but the moment I add the line below, I get an error:
page.send_keys("abc")
The error that I get is as below.
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
What I am trying to do here is login to the website and search for some items and download the results. I have already tried using the implicitly wait options as mentioned in the code. Any help would be highly appreciated.
Adding the snippet below did the trick. The current thread has to sleep while the page finishes loading before the program runs the next steps.
import time
time.sleep(5)
Thanks everyone.

I want to extract the latitude, longitude and accuracy from https://mycurrentlocation.net/ using the Python 3 BeautifulSoup and requests modules

My basic idea is to write a simple script that gets geographic coordinates from this site: https://mycurrentlocation.net/.
When I run my script, the attribute column is empty and the program doesn't return the list correctly.
Please help me! :)
import requests
from bs4 import BeautifulSoup as bs

site = "https://mycurrentlocation.net/"

def track_the_location():
    page = requests.get(site)
    soup = bs(page.text, "html.parser")
    # The ids are unique, so use find rather than find_all (a ResultSet has no .text).
    latitude = soup.find("td", {"id": "latitude"})
    longitude = soup.find("td", {"id": "longitude"})
    accuracy = soup.find("td", {"id": "accuracy"})
    return [latitude.text, longitude.text, accuracy.text]

print(track_the_location())
I think the problem is that you are running it as a local script. That behaves differently from a browser, since it doesn't provide any location information to the website. The values are filled in at runtime by JavaScript, so you actually need to simulate a browser session to get the data, as long as the site doesn't offer an API where you could pass the information yourself.
A possible solution for that is Selenium, since it helps you simulate a browser session. Here's the Selenium documentation for further reading.
Hope I could help you. Have a nice day.
