Been trying to get this to work for a couple of hours now.
It's my first real Python project, so I would love some help.
HTML
<input type="button" id="lyca_cart_newsim_button1" value="FORTSÆT" class="et_pb_more_button et_pb_button lyca_cart_topup_summary" onclick="nc_newsim_open_tab2('payment','sid','tid')">
xpath
//*[@id='lyca_cart_newsim_button1']
This produces error element "not interactable"
driver.find_element_by_xpath("//*[@id='lyca_cart_newsim_button1']").click()
This produces no errors but does not click the button
element = driver.find_element_by_xpath("//*[@id='lyca_cart_newsim_button1']")
driver.execute_script("arguments[0].click();", element)
This times out
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//*[@id='lyca_cart_newsim_button1']"))).click()
So the first one gives "element not interactable", and the second one gives no errors.
I am using this at an earlier point in the code and it's working fine there:
element = driver.find_element_by_xpath("//*[@id='lyca_cart_newsim_button1']")
driver.execute_script("arguments[0].click();", element)
I've never had good luck with Selenium's expected conditions, especially waiting for an element to be clickable. What I've done instead (whether or not it's best practice, it has worked for me) is to have a loop attempt the click and keep retrying within a certain amount of time. Here is what I use in C#:
int timeToTryMilliseconds = 5000;
bool timeNotExpired = true;
Stopwatch sw = new Stopwatch();
sw.Start();
while (timeNotExpired)
{
    try
    {
        driver.FindElement(By.XPath("//*[@id='lyca_cart_newsim_button1']")).Click();
        break;
    }
    catch
    {
        // Half second wait, so it's not polling constantly
        System.Threading.Thread.Sleep(500);
        timeNotExpired = timeToTryMilliseconds > sw.ElapsedMilliseconds;
    }
}
If there is a better way, I'd love to use it.
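For reference, a rough Python translation of the same retry loop (a minimal sketch only, reusing the driver and XPath from the question and the legacy find_element_by_xpath API) would be:

import time
from selenium.common.exceptions import WebDriverException

# Keep retrying the click for up to 5 seconds, polling every half second.
timeout_seconds = 5
end_time = time.time() + timeout_seconds

while True:
    try:
        driver.find_element_by_xpath("//*[@id='lyca_cart_newsim_button1']").click()
        break
    except WebDriverException:
        if time.time() > end_time:
            raise  # give up once the time budget is spent
        time.sleep(0.5)  # half-second wait, so it's not polling constantly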
Please confirm whether the button is inside an iframe. If it is in an iframe, you must switch to that iframe first.
If it is not in an iframe, then try the code below; it might work:
driver.execute_script("$('#lyca_cart_newsim_button1').click()")
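If the button does turn out to live inside an iframe, the switch in Python would look roughly like this (a sketch only; the iframe locator below is a hypothetical placeholder, since the frame's actual id or name isn't shown in the question):

# Switch into the frame that holds the button, click it, then switch back out.
frame = driver.find_element_by_css_selector("iframe")  # adjust to the real frame locator
driver.switch_to.frame(frame)
driver.find_element_by_xpath("//*[@id='lyca_cart_newsim_button1']").click()
driver.switch_to.default_content()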
Related
I am trying to loop through all the pages of a website, but I am getting a "stale element reference: element is not attached to the page document" error. This happens when the script tries to click the third page; the error is raised at page.click(). Any suggestions?
while driver.find_element_by_id('jsGrid_vgAllCases').find_elements_by_tag_name('a')[-1].text == '...':
    links = driver.find_element_by_id('jsGrid_vgAllCases').find_elements_by_tag_name('a')
    for link in links:
        if (link.text != '...') and (link.text != 'ADD DOCUMENTS'):
            print('Page Number: ' + link.text)
            print('Page Position: ' + str(links.index(link)))
            position = links.index(link)
            page = driver.find_element_by_id('jsGrid_vgAllCases').find_elements_by_tag_name('a')[position]
            page.click()
            time.sleep(5)
    driver.find_element_by_id('jsGrid_vgAllCases').find_elements_by_tag_name('a')[-1].click()
You can locate the link element again each time by its index, instead of reusing the elements you found initially.
Something like this:
amount = len(driver.find_element_by_id('jsGrid_vgAllCases').find_elements_by_tag_name('a'))
for i in range(1, amount + 1):
    link = driver.find_element_by_xpath("(//*[@id='jsGrid_vgAllCases']//a)[" + str(i) + "]")
From there you can continue within your for loop with this link, like this:
amount = len(driver.find_element_by_id('jsGrid_vgAllCases').find_elements_by_tag_name('a'))
for i in range(1, amount + 1):
    link = driver.find_element_by_xpath("(//*[@id='jsGrid_vgAllCases']//a)[" + str(i) + "]")
    if (link.text != '...') and (link.text != 'ADD DOCUMENTS'):
        print('Page Number: ' + link.text)
        print('Page Position: ' + str(i - 1))
        position = i - 1
        page = driver.find_element_by_id('jsGrid_vgAllCases').find_elements_by_tag_name('a')[position]
        page.click()
        time.sleep(5)
(I'm not sure about the correctness of the rest of your code; I just copy-pasted it.)
I'm running into an issue with the stale element exception too. Interestingly, with Firefox there is no problem, while Chrome and Edge both fail randomly. In general I have two generic find methods with retry logic; these find methods look like:
// Yes C# but should be relevant for any WebDriver...
public static IWebElement( this IWebDriver driver, By locator)
public static IWebElement( this IWebElement element, By locator)
The WebDriver variant seems to work fine for my other fetches, as the search is always "fresh"... but the WebElement search is the one causing grief. Unfortunately the app forces me to need the WebElement version. The page HTML will be something like:
<node id='Best closest ID Possible'>
<span>
<div>text i want</div>
<div>meh ignore this </div>
<div>More text i want</div>
</span>
<span>
<!-- same pattern ... -->
So the code gets the closest element possible by id, and its child spans, i.e. "//*[@id='...']/span", give all the nodes of interest. This is where I run into issues: while enumerating the elements, I do two XPath selects, i.e. "./div[1]" and "./div[3]", to pull out the text I want. It is only when fetching these text nodes under the elements that a StaleElementException is randomly thrown. Sometimes the very first XPath fails; sometimes I'll get through a few pages. The site might have 10,000s of pages or more, and while the structure is the same, I spot-check random pages since they all share the same format. At most I've gotten through 20 consecutive pages with Chrome (ver 92.0.4515.107) or Edge (ver 94.0.986), both of which seem to be the latest as of now.
One solution that should work: get all the span elements first, i.e. "//*[@id='x']/span", to build my list, then query from the driver like this:
var nodeList = driver.FindElements(By.XPath("//*[@id='x']/span"));
for (int idx = 0; idx < nodeList.Count; idx++)
{
    string str1 = driver.FindElement(By.XPath($"//*[@id='x']/span[{idx + 1}]/div[1]")).GetAttribute("innerText");
    string str2 = driver.FindElement(By.XPath($"//*[@id='x']/span[{idx + 1}]/div[3]")).GetAttribute("innerText");
}
I think it would work, but yuck! This is somewhat simplified, and being able to run an XPath from the respective ID-located node would be preferable.
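For the Python readers, the same driver-rooted, indexed-XPath idea would look roughly like this (a sketch only, keeping the placeholder id 'x' from the snippet above):

# Re-locate every cell from the driver itself, so a re-rendered DOM never
# leaves us holding a stale span reference.
span_count = len(driver.find_elements_by_xpath("//*[@id='x']/span"))
for idx in range(span_count):
    text1 = driver.find_element_by_xpath("//*[@id='x']/span[%d]/div[1]" % (idx + 1)).get_attribute("innerText")
    text2 = driver.find_element_by_xpath("//*[@id='x']/span[%d]/div[3]" % (idx + 1)).get_attribute("innerText")
    print(text1, text2)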
I know there have been several questions asked regarding stale elements, but I can't seem to resolve these.
My site is private so unfortunately I can't share it, but the error always seems to be thrown somewhere within the for-loop below. This loop is meant to get the text of each row in a table (the number of rows varies). I've added WebDriverWait commands, and I have a very similar for-loop earlier in my code that does the same thing on another table on the site, which works perfectly. I've also tried including the link click command and the table, body, and tableText definitions inside the loop to redefine them at every iteration.
Once the code stops and the error message displays (stale element reference: element is not attached to the page document (Session info: chrome=89.0.4389.128)), if I manually run everything line-by-line, it all seems to work and correctly grabs the text.
Any ideas? Thanks!
link = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.LINK_TEXT, "*link address*")))
link.click()
table = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, "TableId")))
body = table.find_element(By.CLASS_NAME, "*table body class*")
tableText = body.find_elements(By.TAG_NAME, "tr")
rows = len(tableText)
approvedSigs = [None]*rows
for i in range(1, rows+1):
    approvedSigs[i-1] = (tableText[i-1].text)
    approvedSigs[i-1] = approvedSigs[i-1].lstrip()
    approvedSigs[i-1] = approvedSigs[i-1][9:]
    approvedSigs[i-1] = approvedSigs[i-1].replace("\n"," ")
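One pattern worth sketching here (under the assumption that the table keeps its "TableId" id while it re-renders, and reusing the imports and names from the snippet above) is to re-locate each row by index on every iteration instead of iterating over the originally fetched list:

table = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, "TableId")))
rows = len(table.find_elements(By.TAG_NAME, "tr"))

approvedSigs = []
for i in range(rows):
    # Re-find the row each time so a re-rendered table can't hand back a stale element.
    row = driver.find_element(By.ID, "TableId").find_elements(By.TAG_NAME, "tr")[i]
    approvedSigs.append(row.text.lstrip()[9:].replace("\n", " "))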
I want to make a Chrome Bookmarklet which open a new tab with specific action.
To be more exact I want to have a fixed URL like "https://www.blablabla.com/search=" inside the bookmarklet and when I press it, I want a popup window to appear with an input field.
When I type something in the input field and press enter or OK/submit it should "run" the whole link plus my query.
For example, I press the bookmarklet, the input field appears and input the word "test" (without the quotes).
When I press submit the query, a new tab will open with the address of https://www.blablabla.com/search=test as the URL.
How do I do that?
I tried with the prompt function but I can't get it to work...
My question is a bit similar to How do I get JavaScript code in a bookmarklet to execute after I open a new webpage in a new tab?.
Although it remains unclear what exact issue you encounter, try the following bookmarklet:
javascript:(function() {
    var targetUrl = "http://www.blablabla.com/search=";
    new Promise(
        (setQuery) => { var input = window.prompt("ENTER YOUR QUERY:"); if (input) setQuery(input); }
    )
    .then(
        (query) => window.open(targetUrl + query)
    );
})();
If it doesn't work, you should provide the problem description in more detail.
@Shugar's answer is mostly correct, but you don't need the promise.
javascript:(function() {
    var targetUrl = "http://www.blablabla.com/search=";
    var input = window.prompt("ENTER YOUR QUERY:");
    if (input)
        window.open(targetUrl + input);
})();
javascript:void(window.open('http://www.URL.com/' + prompt('Enter your Query:')));
I hope this helps. It works for me and is much simpler than the code above. (As long as we reach the end result, that is all that matters, right?)
I'm trying to read a dynamic table, which is updated 1-3 times per second. I'm using Selenium, in Python 3.x, but if you have a solution for other languages I can work it out as well.
My question is: what is the best practice for reading frequently updated tables?
What I've tried:
driver.wait.until along with expected_conditions
re-read the table with a call to find_elements if a stale exception is thrown
Neither of them is working, due to the high refresh rate. I can successfully retrieve the table for a moment, but when I try to access its rows a moment later, I get a stale element exception. It's worth saying that when I run the same code on the same table while updates are less frequent, everything works fine.
I'm not posting any code for the moment, as I'd be interested in knowing what more experienced people do in this case.
My naive thinking: being no expert (but keen to learn) in web scraping or in any web-related languages, I'd say that if this were a problem with dynamic data, I'd take a pointer or a reference to the actual table (and then loop dynamically over the rows). Is that possible in this framework?
We usually get a stale element exception when the WebElement has changed in the DOM compared to its state at the time the WebElement reference was created.
Let's say the intent is to print the second data cell in a table every second; our code looks like this (sorry for giving the code in Java):
//This will work if the page is static
WebElement element = driver.findElement(By.xpath("//td[2]"));
for (int i = 0; i < 10; i++)
{
    System.out.println(element.getText());
    Thread.sleep(1000);
}
To make this work for dynamically loading / refreshing tables, we need to re-initialize the WebElement before each iteration, something like this:
//This will work for dynamic content
WebElement element = null;
for (int i = 0; i < 10; i++)
{
    element = driver.findElement(By.xpath("//td[2]"));
    System.out.println(element.getText());
    Thread.sleep(1000);
}
In case you need to get the i-th cell value in a table, we can parameterize the value inside the XPath, such as:
//In this case we need the fifth cell value
int j = 5;
WebElement element = null;
for (int i = 0; i < 10; i++)
{
    element = driver.findElement(By.xpath("//td[" + j + "]"));
    System.out.println(element.getText());
    Thread.sleep(1000);
}
In case you need all five cell values:
WebElement element = null;
for (int i = 1; i <= 5; i++)
{
    element = driver.findElement(By.xpath("//td[" + i + "]"));
    System.out.println(element.getText());
    Thread.sleep(1000);
}
Just construct a loop accordingly.
Hope this helps you. Thanks.
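Since the question is about Python 3.x, here is a rough Python equivalent of the dynamic-content loop above (a sketch only; //td[2] is just the example locator carried over from the Java snippets):

import time

# Re-find the cell on every iteration, so a refreshed table never leaves us
# holding a stale reference.
for _ in range(10):
    element = driver.find_element_by_xpath("//td[2]")
    print(element.text)
    time.sleep(1)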
You can see Modernizr issue #1030 for more background, but essentially Firefox OS returns type: text on <input type="time"> elements.
It seems like it is the result of this bug, but the cause of the bug hasn't been found as of the time of this post.
Is there any way to properly detect input type="time"?
A tip has been given in another Stack Overflow post:
function isDateSupported() {
    var i = document.createElement("input");
    i.setAttribute("type", "date");
    return i.type !== "text";
}
You can try:
yourDOMNode.getAttribute('type') === 'time'
It works for me in the browser on Firefox OS 1.1 (LG Fireweb).
Here is a very ugly example I've assembled quickly to test (typed on my iPad keyboard, sorry): http://jsfiddle.net/Bh2pw/7/
The first button returns text while the second returns time.