Intermittent Selenium element not found, even after reporting element present - object

I am having a recurring issue throughout my test suite where Selenium reports elements as not present even though they clearly are, and isElementPresent() even confirms that they are.
For example, I have a page that loads with a text field. After selenium.waitForPageToLoad() completes, Selenium intermittently reports the text field as missing. To find out whether this was a timing issue, I added a Thread.sleep(5000) after the page loads, plus a check that the
element is present:
logger.debug("Element present status: " + selenium.isElementPresent(elements.get("File Path Text Field")));
The strange thing is, every time I run the script the page loads with no problems. During the
5-second sleep I can clearly see the text field. EVERY TIME, the logger statement above reports
"true" for the element being present. The very next line of code is
selenium.type("something");
and it's a crapshoot whether or not it finds the element. Has anybody else had problems like this and know how to resolve them?
Thanks in advance.

If it is an intermittent issue and Selenium is finding the element more often than not, then it is most likely a timing issue. Here are the two steps I would try:
Change isElementPresent to isVisible. Quite often I have seen isElementPresent pass where isVisible fails (see the sketch after these steps).
Assuming you are using an IDE to write the tests, set a breakpoint before the selenium.type command and execute the command from there. If Selenium can ALWAYS find the element during this breakpoint execution, then rest assured that you have a timing problem.
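For illustration only, here is a minimal sketch of that wait-until-visible-then-type pattern. It is written against the Python Selenium RC client rather than the Java client used in the question, and the locator argument, the 30-second cap and the 0.5-second poll interval are all assumptions:
import time

def type_when_visible(sel, locator, value, timeout=30):
    # Poll isVisible instead of sleeping a fixed 5 seconds; isVisible is the
    # stricter check, since isElementPresent can pass where isVisible fails.
    end = time.time() + timeout
    while time.time() < end:
        if sel.is_visible(locator):
            sel.type(locator, value)
            return
        time.sleep(0.5)
    raise AssertionError("Timed out waiting for %s to become visible" % locator)
The same loop translates directly back to Java with isVisible() and type().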

Try selenium.type(elements.get("File Path Text Field"), "something");
The selenium.type method expects two parameters: a locator and a value.

JMETER - WebDriver Sampler - Groovy - Dynamic Name (3 Level)

Thank you for your prompt responses. I have tried the codes below, but it looks like they are not picking up the values because there are 3 levels of variables. Can you please advise? Thanks.
1st level: xpath=(//input[@type='text'])[7]
2nd level:
it doesn't work: //li[contains(@id, 'cascader-menu')]/span
or
it doesn't work: //li[contains(@id,'cascader-menu')]/span[1]
I don't think anyone is capable of coming up with a proper unique locator by looking at a screenshot, without access to the full DOM or the application.
From what I can see so far you need the following element:
//input[@placeholder='Select...']
However, it may or may not work depending on:
whether there is another input element matching this query; if there is more than one, the action goes to the first match
whether it is visible
whether it can be interacted with (i.e. not disabled or covered by a modal window)
the phase of the moon
You can test your expressions using your browser's developer tools; if you get a match there, Selenium will also be able to find the element and hopefully work with it (see the sketch below).
If you are going to continue ignoring my suggestions to get familiar with DOM and XPath concepts, I can only suggest recording the scenario using the Selenium IDE or the JMeter Chrome Extension and hoping that it can be replayed without modifications.
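As a rough illustration of that sanity check, here is a sketch in plain Python Selenium rather than the JMeter WebDriver Sampler's Groovy; the browser choice and the application URL are placeholders:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://your-application.example/")  # placeholder URL

# Count the matches first: if there is more than one, the action goes to the
# first match, which may not be the field you want.
matches = driver.find_elements(By.XPATH, "//input[@placeholder='Select...']")
print("matches found:", len(matches))
if matches and matches[0].is_displayed() and matches[0].is_enabled():
    matches[0].send_keys("some value")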

How to troubleshoot "We couldn’t find a run python"?

I'm working on a pre-existing Python Code by Zapier zap. The trigger is "Code By Zapier; Run Python". I've made some changes to the contained Python script, and now when I go to test that step I run into the following error message:
We couldn’t find a run python
Create a new run python in your Code by Zapier account and test your trigger again.
Is there any way of figuring out what went wrong?
I'm guessing a little bit, but I think this issue stems from repeatedly testing an existing trigger without returning a new ID.
When you run a test (or click the "load more" button), Zapier runs the trigger and looks through the returned array for any new items it hasn't seen before. It bases "newness" on whether it recognizes the id field in each returned object.
So if you're testing code that has changed but is still returning objects with previously seen ids, the editor will throw an error saying that it can't find any new objects (the "couldn't find a run python" wording is a quirk of the way that text is generated; think of it as "can't find objects that we haven't seen before").
The best way to fix this depends on whether you're returning an id and whether you need it for something. A few options:
Your code can return a random id (see the sketch after this list). This means every returned item will trigger the Zap every time, which may or may not be the intended behavior.
You can probably copy your code, change the trigger app (to basically anything else), run a successful test (which will overwrite your old test data), and then change it back to Code by Zapier and paste your code back in. Then you should get a "fresh" test. Due to changes in the way sample data is stored, I'm not positive this still works.
Duplicate the zap from the "My Zaps" page. The new one won't have any existing sample data, so you should be able to test normally.
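As a minimal sketch of the first option, assuming a Code by Zapier (Python) step whose output is a list of dicts, attaching a random id could look like this; the "payload" field is a placeholder for whatever your script already returns:
import uuid

output = [{
    "id": str(uuid.uuid4()),          # random id, so the item always looks new
    "payload": "your existing data",  # placeholder for your script's real fields
}]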

Python selenium "Timeout Exception" error

I am trying to click on the Financials link of the following URL using Selenium and Python.
https://www.marketscreener.com/DOLLAR-GENERAL-CORPORATIO-5699818/
Initially I used the following code:
link = driver.find_element_by_link_text('Financials')
link.click()
Sometimes this works and sometimes it doesn't, and I get an "Element is not clickable at point (X, Y)" error. I have added code to maximise the window in case the link was being overlapped by something.
It seems the error occurs because the webpage doesn't always load in time. To overcome this I have been trying to use Selenium's expected conditions and wait support.
I came up with the following, but it isn't working; I just get a TimeoutException.
link = wait(driver, 60).until(EC.element_to_be_clickable((By.XPATH,'//*[@id="zbCenter"]/div/span/table[3]/tbody/tr/td/table/tbody/tr/td[8]/nobr/a/b')))
link.click()
I think XPath is probably the best choice here, or perhaps a class name, since there is no ID. I'm not sure whether it's failing because the link is inside a table, but it seems odd to me that sometimes it works without having to wait at all.
I have tried Jacob's approach. The problem is that I want it to be dynamic so it will work for other companies. Also, when I first land on the summary page the URL has other things at the end, so I can't just append /financials to the URL.
This is the URL it gives me: https://www.marketscreener.com/DOLLAR-GENERAL-CORPORATIO-5699818/?type_recherche=rapide&mots=DG
I might have found a way around this:
link = driver.current_url.split('?')[0]
How do I then take this string and append 'financials/' to it?
I was looking for a solution when I noticed that clicking on the Financials tab takes you to a new URL. In this case I think the simplest solution is just to use the .get() method with that URL,
i.e.
driver.get('https://www.marketscreener.com/DOLLAR-GENERAL-CORPORATIO-5699818/financials/')
This will always take you directly to the financials page! Hope this helps.
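To keep it dynamic across companies, a short sketch that combines this answer with the split() idea from the question could look like the following; it assumes the driver is already on a company summary page:
# Strip the query string; the trailing slash of the company page is kept.
base_url = driver.current_url.split('?')[0]
# Then navigate straight to that company's financials page.
driver.get(base_url + 'financials/')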

Blue Prism: Object not found when run from the Control Room, but runs without issue through Process Studio

During my process there is a drop-down HTML element that I have spied and configured correctly. It runs through Object Studio and Process Studio without any issues. Once the same process is run through the Control Room, the element throws an error saying it cannot be found.
I have tried multiple different configurations without any luck, and the element is still found without issue when run manually. I even checked by signing into the VM after it errored, and on the second retry, with the VM session open, the element was found without issue. It seems to happen only when the bot is running and the screen is not up. No other elements give this issue, and the next step uses the same dropdown, but for a stop time rather than a start time.
Any help would be appreciated!
I have added pictures of the START and END spied configurations, the Navigation stage, and the process running correctly in Object Studio.
Thank you!
For the person who down-voted this item: the configurations shown here are the result of two days of config changes and research that I tried on my own before asking for help. This is my 5th automation that I have put into deployment without help, so thank you for down-voting someone who is trying to get help where they are stuck.
Your problem is most likely connected with the fact that processes run from the Control Room execute much faster than in Studio, so your webpage might not be loaded in time (which would explain why it works after a retry).
The best-practice approach would be to add a dynamic wait stage after the attach and use the "Parent Document Loaded" condition on the element you want to interact with. It will wait for the page to load and then check whether the element exists. I would also suggest splitting your action into two: the first to set the start date and the second for the end date.

Scrapy not extracting data from a certain xpath

I'm trying to extract some data from an Amazon product page.
What I'm looking for is the product images. For example:
https://www.amazon.com/gp/product/B072L7PVNQ?pf_rd_p=1581d9f4-062f-453c-b69e-0f3e00ba2652&pf_rd_r=48QP07X56PTH002QVCPM&th=1&psc=1
By using the XPath
//script[contains(., "ImageBlockATF")]/text()
I get the part of the source code that contains the URLs, but 2 matches pop up in the Chrome XPath Helper.
By trying things out with XPaths I ended up using this:
//*[contains(@type, "text/javascript") and contains(.,"ImageBlockATF") and not(contains(.,"jQuery"))]
Which gives me exclusively the data I need.
The problem I'm having is that for certain products (it can happen between 2 pairs of different shoes), sometimes I can extract the data and other times nothing comes out. I extract by doing:
imagenesString = response.xpath('//*[contains(@type, "text/javascript") and contains(.,"ImageBlockATF") and not(contains(.,"jQuery"))]').extract()
If I use the Chrome XPath Helper, the data always appears with the XPath above, but in the program itself it sometimes appears and sometimes doesn't. I know the script the console reads is sometimes different from the one that appears on the site, but I'm struggling with this one because it only fails intermittently. Any ideas on what could be going on?
I think I found your problem: it's a captcha.
Follow these steps to reproduce:
1. Run scrapy shell:
scrapy shell "https://www.amazon.com/gp/product/B072L7PVNQ?pf_rd_p=1581d9f4-062f-453c-b69e-0f3e00ba2652&pf_rd_r=48QP07X56PTH002QVCPM&th=1&psc=1"
2. View the response the way Scrapy sees it:
view(response)
When executing this I sometimes got a captcha.
Hope this points you in the right direction.
Cheers
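If a captcha is indeed the cause, one hedged way of handling it inside the spider is to detect the interstitial page and re-queue the request; this sketch assumes the captcha page can be recognised by "Robot Check" in its title, which may differ in practice:
import scrapy

class ProductImagesSpider(scrapy.Spider):  # hypothetical spider name
    name = 'product_images'
    start_urls = ['https://www.amazon.com/gp/product/B072L7PVNQ']

    def parse(self, response):
        # Detect Amazon's captcha/interstitial page and retry the same URL,
        # bypassing the duplicate filter so the request is actually re-sent.
        title = response.xpath('//title/text()').get() or ''
        if 'Robot Check' in title:
            yield response.request.replace(dont_filter=True)
            return
        # Otherwise pull the script block with the image URLs, as in the question.
        imagenes = response.xpath(
            '//*[contains(@type, "text/javascript") and '
            'contains(., "ImageBlockATF") and not(contains(., "jQuery"))]'
        ).extract()
        yield {'scripts': imagenes}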
