So our QA guy came by today to get me to put IDs on items in our HTML so he could automate stuff using Watir.
I don't know much about it, so I tried to see if we could use the class names instead, but that's a total crapshow.
I was just wondering why something like
link(:item, :id => 'save-btn')
works when you set it up in watir, but you can't do something like
links(:item, :class => 'save-btn')[0]
I also tried using the browser.links calls, but we would consistently get "element not visible" errors.
I was just wondering why this is so difficult, to the point where putting IDs on everything seems to be the recommended way to go. Is there a way to use class names with Watir, or is that just how things are done?
CSS class attributes are a perfectly normal way of locating elements with Watir, as is id.
For class attributes, however, you usually have to specify the context you're searching within, because there may be more than one element with the same class; an id attribute, by contrast, has to be unique for the whole page.
I'm not sure what framework you're using in your examples, but this is how I would do it in plain old Watir if the save button is in some container element:
browser.div(id: "container").span(class: "save-btn")
The code above will find the first span with class "save-btn" inside the container element, as expected.
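If you need something other than the first match, plain Watir also lets you index into an element collection. A minimal sketch against the same hypothetical markup:
browser.div(id: "container").spans(class: "save-btn")[1] # the second matching span inside the container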
Also, do not ever use XPath or CSS locators, as another answer here suggests doing. Why? Because they are really fragile and make your tests too hard to read and maintain.
Short answer: adding class names to the source code is the right direction and should be considered good practice. However, the :class locator by itself isn't good enough in most cases; try :xpath or :css. So, as a developer, go ahead and add the class names, but make sure your QA people know how to use Watir and don't simply use :id or :class for every locator.
Long answer: if the site is simple enough, adding IDs is the easiest and best option. However, many JavaScript frameworks, such as ExtJS, generate dynamic IDs nowadays, and in that case adding class names to the source code is the better approach.
Even after adding class names, in a case like yours a bare :class locator is a poor choice, and can be even worse than :id, since IDs are at least supposed to be unique. For complex pages a bare :class locator is pretty much useless and will match unwanted elements.
Here, your error message suggests that you have more than one element with the class save-btn, and the first one isn't visible, so it can't be interacted with.
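One way to deal with that in plain Watir is to iterate over the matching collection and pick the element that is actually shown. A minimal sketch (not page-object syntax; the exact predicate, present? or visible?, depends on your Watir version):
save_button = browser.links(class: 'save-btn').find(&:present?) # first link with that class that is actually on screen
save_button.click if save_button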
Both Selenium WebDriver and Watir WebDriver support XPath and CSS selectors, so use :xpath or :css when necessary instead of :id, :class, etc.
For example, something like:
links(:item, :css => ".save-btn:not([style*='display:none'])")[0]
Jarmo Pertman suggests favoring ID/class over XPath/CSS selectors, which is not completely correct. ID/class locators are really just subsets of XPath/CSS selectors: if ID/class is easy enough, use it, but unnecessary chaining is not good practice. In the example he gave,
browser.div(id: "container").span(class: "save-btn")
is equivalent to the CSS selector div#container span.save-btn; therefore there are no extra fragility or maintenance issues with the CSS selector, because the two are identical.
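In plain Watir, that single CSS selector could be passed directly, which behaves the same as the chained version above (a sketch, assuming the same markup):
browser.span(css: "div#container span.save-btn")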
XPath/CSS selectors are powerful tools that everyone should learn, but only use them when really needed. Badly written XPath/CSS selectors are fragile, so try to find good ones. (But bear in mind that XPath is slow and should be considered a last option.)
So I understand how to pull data from a single web link when looking at tables. I cannot find a single tutorial anywhere on the web about how to do the same thing with div elements, and no one talks about it at all. Can someone please give me an example or something? Either Excel or Google Sheets is fine.
I'm trying to teach myself how to do this using this website, https://newworldstatus.com/regions/us-east, for a small project I want to do.
Thank you in advance.
This is not a comprehensive answer, just something intended to show you how some very basic concepts work, first in VBA and then in Google Sheets. But let me preface all of this by saying that while your test URL seems simple enough, you will not be able to do any of this for that specific URL. They are either actively trying to stop scraping or they just have the site set up in a way that makes it difficult to scrape by accident. If you make a web request directly to that URL, you will get back the JS code that actually handles the data load-in, not the data itself, so any kind of parsing you try to do will fail, because what you see in the page isn't what actually comes back on the initial page request. A quick look at the raw HTML returned by the request is enough to show this.
You would need to either read through that code and figure out what they're doing, or do some tinkering in the JavaScript console, and probably some fairly advanced tinkering at that. So for a first project, or just to learn some basics, I would pick a different test case.
First, in VBA. It's both complicated and not all that complicated at the same time. If you understand how web technologies work independently of any particular language, then it all works pretty much the same way in VBA. First, you'll need to make a web request. You can do that with the WinHTTP library or the MSXML library. I usually use WinHTTP, but unless what you're doing is complex, either one is fine.
WEB REQUEST:
You'll need to instantiate a request object. You can do that either by adding a reference to the library (Tools -> References, then pick the library out of the list) or by using late binding. I prefer to add the reference, because you get IntelliSense that way. Here are both:
Dim req As New WinHttp.WinHttpRequest
or
Set req = CreateObject("WinHttp.WinHttpRequest.5.1")
Then you open the request. I'm going to assume this is a straight GET. POST requests get a little more complicated:
req.Open "GET", url, True
If you added the reference and created req with Dim, then you'll get IntelliSense, and as you type, the arguments will pop up; you can use that to refer back to the documentation if you have questions. True here means the request is sent asynchronously, which I would do; if you don't, it will block the interface. This is the Open method, which you can find in the documentation:
https://learn.microsoft.com/en-us/windows/win32/winhttp/iwinhttprequest-interface
Then use
req.send
req.WaitForResponse
source = req.responseText
to send the request. WaitForResponse is needed only if you send the request asynchronously. The last part is to get the responseText into a variable.
PARSING:
Then you'll need to do some stuff with the MSHTML library, so add a reference to that as well. You can also late bind, but I would not, because it will be very helpful to have the prompts from IntelliSense.
First, set up a document
https://learn.microsoft.com/en-us/dotnet/api/mshtml.htmldocument?view=powershellsdk-1.1.0
and write the source you just fetched to it:
Dim doc As New MSHTML.HTMLDocument
doc.write source
Now you have a document object you can manipulate. The trick is to get a reference to the element you want. There are two methods that will return an element:
getElementById
querySelector
If you are lucky, the element you are looking for will have a unique ID and you can just get it. If not so lucky, you can use a selector that identifies it uniquely. In either case, you will set up an IHTMLElement to return to:
Dim el As MSHTML.IHTMLElement
Set el = doc.getElementById("uniqueID") 'whatever the unique ID is
Once you have that, you can use the methods and properties of the element to return information about it:
https://learn.microsoft.com/en-us/dotnet/api/mshtml.ihtmlelement?view=powershellsdk-1.1.0
There are more specific interfaces, like
https://developer.mozilla.org/en-US/docs/Web/API/HTMLAnchorElement
You can use the generic IHTMLElement, but sometimes there are advantages to using a specific element type, for instance, the properties that are available to it.
Sometimes you will have to set up an IHTMLElementCollection:
https://learn.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/aa703928(v=vs.85)
and iterate it to find the specific element you are looking for. There are four methods that return collections:
getElementsByName
getElementsByTagName
getElementsByClassName
querySelectorAll
getElementsByClassName is sometimes problematic, so forewarned is forearmed.
If you need to do that, set up an IHTMLElementCollection and return the list to that:
Dim els As MSHTML.IHTMLElementCollection
Set els = doc.getElementsByTagName("tagName") 'for instance "a" for anchors, "div" for divs
That is about it. There is obviously more to it, but a comprehensive answer would be very long. This is mostly intended to point you in the right direction and give you more stuff to google.
I will say that you should test out some of these methods in the browser first. They exist in many languages, and all major browsers have developer tools. For Chrome, for instance, press Ctrl+Shift+I to bring up the dev tools, and then in the console window type something like:
document.getElementById("uniqueID")
and you should get the node back. Or:
document.getElementsByClassName("test") // where "test" is the name of the class
document.querySelectorAll("div") // where you pass any valid CSS selector
and you will get the node list.
It will be quicker to experiment there than to try to set everything up and debug it in VBA. Once you have a good handle on how it works, try to transfer that knowledge to a VBA implementation.
Here is a basic overview of .querySelector to get you started on understanding how those selectors work, although they can get very complicated. In fact, querySelector is my go-to method for finding elements.
https://www.w3schools.com/jsref/met_document_queryselector.asp
Now, Google Sheets:
You don't really want to use IMPORTHTML, even though that seems counterintuitive. That function (AFAIK) only supports tables and lists, and it's index based too, which means you give it a number n and it returns the nth table or list in the page. That means that if they ever change the layout, or the layout is dynamic in any way, you won't be able to rely on an index to accurately identify what you want. Also, as you noted, people don't really use tables much anymore, and when they say "list" I'm pretty sure they mean ol and ul elements, which also aren't going to be that useful to you. Here are the docs:
https://support.google.com/docs/table/25273?hl=en&visit_id=637732757707317357-1855795725&rd=2
But you can use IMPORTXML. Even though it says XML, you can still use it to parse HTML (for reasons, and with limitations, that are out of scope for this answer). IMPORTXML takes a URL and an XPath selector; in this way it's similar to the document.querySelector and querySelectorAll methods. Here is some information on XPath in a tutorial from w3schools:
https://www.w3schools.com/xml/xpath_intro.asp
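For example, against a page where IMPORTXML does work, a call in a Sheets cell looks something like this (the URL and XPath here are purely illustrative, not tied to your site):
=IMPORTXML("https://en.wikipedia.org/wiki/Web_scraping", "//h2")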
And if you want to test selectors in Chrome, you can use $x("selector") in the JavaScript console in the dev tools. I believe Firefox also supports this, but I am not sure whether other browsers do. If not, you can use document.evaluate:
https://developer.mozilla.org/en-US/docs/Web/API/Document/evaluate
Even though you can't actually use this in Sheets against the URL you've given, let's take a look at a couple of XPath selectors in that context. Hit Ctrl+Shift+I to bring up the dev tools (hopefully you are using Chrome) and go to the Elements tab; if you don't have the JavaScript console showing in the bottom pane, hit Esc.
Use the arrow icon in the top left of the dev tools to inspect the elements, and click on the first row in the table, so that you can see the structure of the elements and figure out how to parse out what you want from it. You'll notice that the cell that's highlighted is contained in a div with a role of "row" and a row-id attribute. I think that's where I would start. So an XPath to that container would look something like this:
//div[@row-id=1]
where we are fetching all elements (//) that match div and have an attribute (@) of row-id = 1.
If you want to get the children of that container, you just add another level to the path
//div[@row-id=1]/div
where we want to get all children (/) that are divs.
And I notice that they all have a col-id attribute, so if you wanted to fetch the "set" information you'd just specify divs that have an attribute of col-id = 'set':
//div[@row-id=1]/div[@col-id='set']
and to get the text out of that:
//div[@row-id=1]/div[@col-id='set']/text()[1]
since it looks like the second node is the one that has the team name in it. Again, you can see how this WOULD work in the dev tools, but you won't actually be able to use this for your URL.
I'm not going to spend a lot of time here. As already stated, you won't be able to use this method on your specific URL; if you can figure out the actual URL that your URL wraps, then perhaps. Also, since there's only one argument besides the URL, the selector, there's not much more to expound on. If you needed something more complex, like the ability to iterate over a set of matching nodes, you could probably do it in Apps Script, but I would probably just switch to Excel if it started getting that complicated. The only exception would be if the data were JSON formatted, in which case Apps Script will handle that better than VBA, although I would probably switch to a different language entirely in that case.
Since your URL is probably not good for testing, I'm going to point you to this tutorial from Geckoboard, which has a few different examples from sites like Wikipedia and Pinterest.
https://www.geckoboard.com/blog/use-google-sheets-importxml-function-to-display-data/
So google around, experiment, and let me know if you need any help. And this was all off the top of my head, so let me know if any of this stuff throws errors so I can edit the answer.
Also, be aware that Excel is not always the right tool for dealing with this. Very often, while the page might have the elements you are looking for, they will be loaded in from JSON, and both PHP and JavaScript can natively handle JSON objects while VBA can't. If the data is JSON formatted, it is much easier to parse it out of that than to try to parse it out of the DOM structure (DOM = Document Object Model, another thing to google). Also, in many cases, if the data is loaded in with AJAX, it won't be returned by your WinHTTP call, because that call doesn't execute any JavaScript that might be in the page.
Further, in many cases you will need to set headers or cookies in the WinHTTP call to get the data (calls without the right settings might return an error or a redirect). That is also not addressed in my answer, although you can set headers and cookies in WinHTTP. You would need to sniff the calls, either with Fiddler or similar or with the Network tab in the dev tools, to find the right combination of information to pass with your request.
I am frequently using the :xpath attribute to identify elements for my Watir automation scripts, and I have found it really useful. It seems to be the least-changing attribute, so there is less work to maintain the automated scripts; of course, I only use it for those elements which can't easily be identified through the :id, :name, or :value attributes.
I would like to get some expert advice before building many more automated scripts using :xpath.
What is the disadvantage of using :xpath to identify an object using Watir?
Will the :xpath value of an element be the same in IE, Chrome, and Firefox?
Is there anything else important I should be aware of when using :xpath?
Thanks
The XPath should always be the same in all browsers.
The problem with using XPath is that it is the easiest locator to break, as it depends on nothing else in that XPath changing. For example, if you are locating a results table on a page using an XPath and at a later date another table gets added above it, then the XPath will be broken and your tests will fail until you update it. If that table were located using an id, adding the second table wouldn't break anything, as the new table would have a different id.
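To make that concrete, here is a quick sketch in Watir (the locators are made up for illustration):
browser.table(xpath: "//div[@id='content']/table[1]") # silently starts matching the new table once one is added above it
browser.table(id: "results_table") # unaffected by the layout change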
If the pages you're working on don't have IDs, and it isn't an option to add some or ask for some to be added, then remember that in Watir you can use multiple locators.
e.g. browser.table(class: 'results_table', text: /Original results table/)
This is a silly example, but hopefully it illustrates the point. If there are cases where multiple locators still won't work for whatever reason, then I would look into using CSS selectors instead of XPath, as you should be able to achieve the same things but with something less brittle.
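As a rough sketch of what that might look like in Watir (the selector is made up):
browser.table(css: "div#content table.results_table")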
The issue of how often tests break isn't too important in a small test suite, especially if you're the only one working on the tests. However, a couple of years from now, when you have hundreds of tests to maintain and two or three people sharing the codebase, you can end up spending longer fixing old tests than you spend writing new ones. It's worth doing anything you reasonably can to minimise this as you go along, as doing a rewrite later will always take longer.
Hopefully some of this helps!
I'm relatively new to Expression Engine, and as I'm learning it I am seeing some stuff missing that WordPress has had for a while. A big one for me is shortcodes, since I will use these to allow CMS users to place more complex content in place with their other content.
I'm not seeing any real equivalent to this in EE, apart from a forthcoming plugin that's in private beta.
As an initial test I'm attempting to fake shortcodes by using delimited strings (e.g. #foo#) in the content field, then using a regex to pull those out and pass them to a function that can retrieve the content out of EE's database.
This brings me to a second question, which is that in looking at EE's API docs, there doesn't appear to be a simple means of retrieving the channel entries programmatically (thinking of something akin to WP's built-in get_posts function).
So my questions are:
a) Can this be done?
b) If so, is my method of approaching it reasonable? Or is there something stupidly obvious I'm missing in my approach?
To reiterate, my main objective here is to have some means of allowing people managing content to drop a code in place in their content that will be replaced with channel content.
Thanks for any advice or help you can give me.
Here's a simple example of the functionality you're looking for.
1) Start by installing Low Replace.
2) Create two Global Variables called gv_hello and gv_goodbye with the values "Hello" and "Goodbye" respectively.
3) Put this text into the body of an entry:
[say_hello]
Nice to see you.
[say_goodbye]
4) Put this into your template, wrapping the Low Replace tag around your body field.
{exp:low_replace
find="[say_hello]|[say_goodbye]"
replace="{gv_hello}|{gv_goodbye}"
multiple="yes"
}
{body}
{/exp:low_replace}
5) It should output this into your browser:
Hello
Nice to see you.
Goodbye
Obviously, this is a really simple example. You can put full-blown HTML into your global variable. For example, we've used that to render a complex, interactive graphic that isn't editable but can easily be dropped into a page by any editor.
Unfortunately, due to parse-order issues, EE tags won't work inside global variables. If you need EE tags in your shortcode output, you'll need to use the Low Variables add-on instead of global variables.
Continued from the comment:
Do you have examples of the kind of shortcodes you want to support/include? Because I have my doubts whether controlling the page layout from a text field or WYSIWYG field is the way to go.
If you want editors to be able to adjust the layout or show/hide extra parts of the page, giving them access to some extra fields in the channel is (IMO) much more manageable and future-proof: for instance, some select fields, a relationship (or Playa) field, or a Matrix field, to let them choose which parts to include or exclude on a page, or which entry from another channel to pull content from.
As said in the comment: I totally understand if you want to replace some #foo# tags with images or data from another field (see the other answers: NSM Transplant, Low Replace). But giving an editor access to shortcodes and then picking them out is like writing a template engine to generate EE template code for the EE template engine.
Using some custom fields to let editors pick and choose the parts to embed is, I think, much more manageable.
That being said, you could make a plugin to parse the shortcodes from a textarea's content, and then do a fair amount of programming to fetch data from the other modules you want to support. For channel entries you could build on the Channel Data library by objectivehtml: https://github.com/objectivehtml/Channel-Data
I hear you, I too miss shortcodes from WP -- though the reason they work so easily there is the ubiquity of the_content(). With the great flexibility of EE come fewer blanket solutions.
I'd suggest looking at NSM Transplant. It should fit the bill for you.
There is also a plugin called Shortcode, which you can find at Devot-ee.
A quote from the page:
Shortcode aims to allow for more dynamic use of content by authors and editors, allowing for injection of reusable bits of content or even whole pieces of functionality into any field in EE.
I want to add text to the body element, but I don't know how. Which method will work on the body tag?
Sorry for my English, and thanks for the replies.
In Watir, you can manipulate the web page (DOM) using JavaScript, like this:
browser.execute_script("document.getElementById('pageContent').appendChild(document.createTextNode('Great Success!'));")
I assume that the point of the question is:
Not all users interact with the web app just by clicking buttons and links; some of them do nasty things like altering HTTP requests to make your system do something it is not supposed to do, or just to have some fun.
To mimic this behavior, you could write a UI test that alters the forms on the web page so that, for example, one could type anything into a field instead of being limited to a dropdown.
To do that, the UI test has to:
manipulate the DOM to free the form inputs of their limitations (replace selects with inputs, etc.), as in the sketch below
know which values to use; in many cases it's pointless to enter random values, so your web app has to provide some good "unwanted" options
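For the first point, one way is to have the Watir test inject a little JavaScript and then fill in the now-unrestricted field. A minimal sketch; the field name 'country' is hypothetical:
# Replace a restrictive <select> with a plain text input, then type an arbitrary value.
browser.execute_script(<<~JS)
  var select = document.getElementsByName('country')[0];
  var input = document.createElement('input');
  input.name = select.name;
  select.parentNode.replaceChild(input, select);
JS
browser.text_field(name: 'country').set('not-a-real-country')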
Why would you want to modify the web page in Watir? It's for automated testing, not DOM manipulation.
If you want to add something to a DOM element in JavaScript, you can do it like this:
var txt = document.createTextNode(" This text was added to the DIV.");
document.getElementById('myDiv').appendChild(txt);
Or use some DOM manipulation library, like jQuery.
If you have not worked your way through the Watir tutorial, I would suggest you do so. It deals with things like filling in text fields, etc.
Learn to use the developer tools for your browser: Firebug for Firefox, or the built-in tools for IE and Chrome. They will let you look at things as you interact with the site.
If the element is not a normal HTML input field of some sort, then you are dealing with a custom control. Many exist, they are varied, and there is no one set solution for dealing with them. Without knowing which control you are using, and without being able to interact with a sample of it ourselves, or at least see the HTML, it is very difficult to advise you; we basically have to guess (which is often a waste of everyone's time).
Odds are that if you have a place where you can enter text, then it is some form of input control. It might not start out that way; you may need to click on some other element to make the input area appear. But without a sample of the HTML, all we can do is guess.
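If it does turn out to be one of those click-to-reveal controls, the Watir interaction might look roughly like this (the class names are made up for illustration):
browser.div(class: 'editable-label').click # click the display element so the real input appears
browser.text_field(class: 'editable-label-input').set('some text')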
If this is a commercial control, see if you can find a demo site that shows the control in action. Try googling things like the class names of the elements; you often get lucky.
I've been rewriting my (fairly simple) website using Yesod as a way to get familiar with the framework. Part of that involves serving some simple static (but formatted) content. To do that I decided to use the nicHtml field that is described in the Yesod book:
http://www.yesodweb.com/book/forms
It allows simple formatting and, as the book says, "thanks to the xss-sanitize package, all user input is validated and ensured to not have XSS attacks."
However, all is not well. Some formatting seems to work when you enter it into the field, but gets wiped out somewhere between entry and submission. In particular, the form uses CSS embedded in 'style' attributes to do things like center text, and it is these CSS-based formatting elements that seem to get wiped out.
I used print statements to check that it wasn't my code that was somehow messing it up. Since it doesn't seem to be, I assume that xss-sanitize doesn't like any embedded CSS and removes it. Modifying Yesod.Form.Nic to remove the call to sanitizeBalance appears to fix the problem, so that would seem to be the cause.
Now, I could just leave it like that, since editing these static pages requires being a trusted user anyway (i.e. me at the moment), so I don't care too much about validating out nastiness. But it feels like what it is, a hack, so my question is: is there any other way around this? Or is there another package I don't know about that provides a non-broken HTML editor field for Yesod?
Will you file a bug on the Yesod issue tracker for this? I think we are going to have to allow basic CSS through the editor no matter which editor we use. In your case of a trusted user, right now you could find the NicEdit field type and create a similar type that doesn't get filtered at all. Perhaps we should create such a field.
We're actually looking at other possible rich text editors right now for use on the Yesod website, so most likely whatever we use there will end up with a module in yesod-form. Most recently Greg pointed out the Aloha editor, which at first glance looks pretty cool.