The Intern: Preferred method of accessing Capabilities of the current session?

I'm writing an Intern functional test suite, and I'd like to scan the environment for features so I can skip tests that aren't relevant to it. For example, I never want to run tests that involve touch interactions in browsers that are not touch-capable.
My plan was to hook into Leadfoot's session object and pick up the capabilities property, but after some exploration in Node Inspector I've only been able to get it through this.remote.session, which has it hidden behind an underscore.
Is there a better way to get access to the current session's capabilities?

What can be a feature in BDD

I have done some research on the question below, but couldn't find the right information.
I have a scenario where a user creates some data using a create REST API and saves it in the backend. The user then retrieves the saved data using a GET API to validate what was saved in the backend by the create API.
Now, can creating the data in the backend and retrieving it be combined into one feature, or should there be two features – one for creating the data and the other for retrieving it? If it can be done both ways, what are the advantages of one over the other?
There is no specific rule of thumb for how to group business logic into features. However, there are some technical details that make your code behave differently depending on how you group features. Here is some advice:
Background is defined once per feature. So if your tests require different backgrounds, it probably makes sense to put them in different features (testing GET probably implies you have to insert some data before the test, which is not necessary for testing PUT).
If you're not "gluing" your files explicitly, they are picked up based on the position of your runner classes within the package structure. That way you can play with different configurations not only at the Gherkin level but also at the level of the particular test framework (like JUnit or TestNG). This is very much like the previous point, only using the capabilities provided by the underlying unit-test framework.
If you need to run your tests in parallel, the way you group things into features can also matter. When you run Cucumber-JUnit4 in parallel, it runs each feature file in parallel but all the scenarios inside a single feature in sequence.
You also might need to tag some tests in a special way. If there are a lot of such tests, it makes sense to put them in a separate file and apply the tag to the entire feature rather than tagging each test individually (see the runner sketch below).
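A minimal sketch of such a runner, assuming Cucumber-JVM 5+ with the JUnit 4 runner and a hypothetical @api tag applied at the feature level:
package com.example.api;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// With no "features" attribute, feature files are picked up from the classpath
// location that matches this class's package; the tag expression then selects
// only the features (or scenarios) tagged @api.
@RunWith(Cucumber.class)
@CucumberOptions(tags = "@api", plugin = "pretty")
public class ApiFeatureRunner {
}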
I would suggest having two separate scenarios to validate the POST and the GET. That way you have better visibility of the two separate APIs. During future runs you will also be able to tell from the title which API works and which one is broken (if any), without having to go into the step definitions to check whether the scenario for the POST API also validates the GET or whether that is a separate scenario.
So: one scenario to validate the POST and that it returns 201 Created, and another scenario to validate the GET.
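As an illustration only (the answer is about how to organise the Gherkin, not the step code), here is a sketch of the two separate checks as Cucumber-JVM step definitions, assuming recent cucumber-java and REST Assured libraries and a hypothetical /widgets resource:
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class WidgetSteps {

    private String createdId;

    // Backs the first scenario: the POST must return 201 Created.
    @When("I create a widget named {string}")
    public void createWidget(String name) {
        createdId = given()
                .contentType("application/json")
                .body("{\"name\": \"" + name + "\"}")
                .post("/widgets")
                .then()
                .statusCode(201)
                .extract().path("id");
    }

    // Backs the second scenario: the GET must return the saved data.
    @Then("the widget {string} can be retrieved")
    public void retrieveWidget(String name) {
        given()
                .get("/widgets/" + createdId)
                .then()
                .statusCode(200)
                .body("name", equalTo(name));
    }
}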

How to use Locust for UI performance testing?

I would like to use Locust for UI performance testing. How can I do that, and how can I get the loading time of HTML elements (img, lists, etc.)?
Thanks
Locust isn't a browser and doesn't parse HTML. It just does plain HTTP requests and it will not load things like images based on the response.
If you need something like that, you would need to parse the HTML in the response and do the "dependent" requests in your test script.
Locust is not made for that (as said). There are some other fancy tools that will do it for you, e.g.:
k6 (https://k6.io/ - previously known as LoadImpact) - allows you to perform performance checks outside of your environment and report the results back to the pipeline. It is easy to configure and integrate, and great when it comes to more "clever" testing scenarios such as stress tests, load tests, etc.
sitespeed.io (https://www.sitespeed.io/) - my 2nd favorite, a very fun-to-use and easy-to-configure tool to track front-end performance and tests (e.g. driven with Selenium).
Lighthouse reports - can also serve as a "pointer" to the most common issues and be included e.g. as PR comments or notifications during the process (there are many GitHub Actions or DevOps packages doing this).
I've also gathered some of my findings in my recent talk (slides below), which is being converted into a series of blog posts on these topics; the first of them is already published:
Slide deck from my talk on "Modern Web Performance Testing": https://slides.com/zajkowskimarcin/modern-web-performance-testing/
First blog from the series on the same topic: https://wearecogworks.com/blog/the-importance-of-modern-web-performance-testing-part-1

Mocking API responses with C# Selenium WebDriver

I am trying to figure out how (or even if) I can replace the API calls made by my app when running a WebDriver test so that they return stubbed output. The app uses a lot of components that are completely dependent on a time frame or third-party info that will not be consistent or reliable for testing. Currently I have no way to test these elements without resorting to a 'run this test if...' approach, which is far from ideal.
My tests are written in C#.
I have found a Javascript library called xhr-mock which seems to kind of do what I want but I can't use that with my current testing solution.
The correct answer to this question may be 'that's not possible' which would be annoying but, after a whole day reading irrelevant articles on Google I fear that may be the outcome.
WebDriver tests are end-to-end, black-box, user-interface tests. If your page depends on an external gateway, you will have a service and models wrapping that gateway for use throughout your system, and you will likely already be referencing those models in your tests.
Given that the gateway is time dependent, you should use the service consumed by your API layer in your tests as well, and simply check that the information returned by the gateway at any given time is displayed on the page as you would expect it to be. You'll have unit tests to check that the responses are modelled correctly.
As you fear, the obligatory "this may not be possible": given the level of change you are subject to from your gateway, you may need to reduce your accuracy or introduce some form of refresh in your tests, as the two calls will arrive slightly apart.
You'll likely have a mock or stub API in order to develop the design, given the unpredictable gateway. It would then be up to you whether you use the real or fake gateway for tests in any given environment. These tests shouldn't be run against production, so I would use a fake gateway for a CI test environment and the real gateway for a manual test environment, where black-box test failures don't impact your release pipeline.
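The answer does not prescribe a tool, but as one possible sketch of such a fake gateway for a CI test environment (here using WireMock and a hypothetical /prices/today endpoint; the idea is the same whatever language your tests are written in):
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class FakeGateway {
    public static void main(String[] args) {
        // Stand-in for the unpredictable, time-dependent third-party gateway.
        WireMockServer gateway = new WireMockServer(8089);
        gateway.start();

        // Always return the same known payload so the UI under test is stable.
        gateway.stubFor(get(urlEqualTo("/prices/today"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"price\": 42}")));

        // Point the application under test at http://localhost:8089 instead of
        // the real gateway, then run the WebDriver suite against it.
    }
}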

How to stop Nashorn from allowing the quit() function?

I'm trying to add a scripting feature to our system where untrusted users can write simple scripts and have them execute on the server side. I'm trying to use Nashorn as the scripting engine.
Unfortunately, they added a few non-standard features to Nashorn:
https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html#sthref29
Scroll down to "Additional Nashorn Built-in Functions" and see the "quit()" function. Yup, if an untrusted user runs this code, the whole JVM shuts down.
This is strange, because Nashorn specifically anticipates running untrusted scripts. See: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/api.html#classfilter_introduction
Applications that embed Nashorn, in particular, server-side JavaScript frameworks, often have to run scripts from untrusted sources and therefore must limit access to Java APIs. These applications can implement the ClassFilter interface to restrict Java class access to a subset of Java classes.
Is there any way to prevent this behavior? How do I prevent users from running any of the additional functions?
Unfortunately, there is currently no way to prevent the creation of these non-standard global functions. One workaround is simply to delete them from the global object after the ScriptEngine has been initialized:
import javax.script.Bindings;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import jdk.nashorn.api.scripting.NashornScriptEngineFactory;

final NashornScriptEngineFactory engineManager = new NashornScriptEngineFactory();
final ScriptEngine engine = engineManager.getScriptEngine();
final Bindings bindings = engine.getBindings(ScriptContext.ENGINE_SCOPE);
// Drop the non-standard globals Nashorn adds to the engine-scope bindings.
bindings.remove("print");
bindings.remove("load");
bindings.remove("loadWithNewGlobal");
bindings.remove("exit");
bindings.remove("quit");
System.err.println(engine.eval("'quit is ' + typeof quit")); // prints: quit is undefined
If you are using the Nashorn shell, a simple delete quit; will do.
If you are using the ScriptEngine interface and create multiple bindings, you'll have to do this with every global object/binding you create.
If you're going to run "untrusted" scripts, please run your program with the SecurityManager turned on. With that, quit() would result in a SecurityException. ClassFilter by itself is not a replacement for the SecurityManager; it is meant to be used along with the SecurityManager. Please check the JEP on ClassFilter here: http://openjdk.java.net/jeps/202. The JEP clearly states this:
Make security managers redundant for scripts. Embedding applications should still turn on security management before evaluating scripts from untrusted sources. Class filtering alone will not provide a complete script "sandbox." Even if only untrusted scripts (with no additional Java classes) are executed, a security manager should still be utilized. Class filtering provides finer control beyond what a security manager provides. For example, a Nashorn-embedding application may prevent the spawning of threads from scripts or other resource-intensive operations that may be allowed by security manager.
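A minimal sketch of that combination, assuming Java 8u40+ (which added the ClassFilter overload of getScriptEngine):
import javax.script.ScriptEngine;
import jdk.nashorn.api.scripting.ClassFilter;
import jdk.nashorn.api.scripting.NashornScriptEngineFactory;

public class SandboxedNashorn {
    public static void main(String[] args) throws Exception {
        // Turn the security manager on before evaluating untrusted scripts
        // (equivalently, start the JVM with -Djava.security.manager).
        System.setSecurityManager(new SecurityManager());

        // The ClassFilter restricts which Java classes scripts may see; here: none.
        ClassFilter denyAll = className -> false;
        ScriptEngine engine = new NashornScriptEngineFactory().getScriptEngine(denyAll);

        try {
            engine.eval("quit()");
        } catch (Exception e) {
            // With the security manager active, quit()/exit() should fail with a
            // SecurityException instead of shutting down the JVM.
            System.err.println("Script was blocked: " + e);
        }
    }
}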

Integration tests for single sign-on pages

How do you test pages with single sign-on (SSO) login during integration tests (for instance using Capybara or Cucumber)? For a normal login, you would write a method which visits the login page, fills out the form, and submits it. This is a bit difficult if the login form comes from an external SSO server like Shibboleth or OpenAM/OpenSSO. How is it possible to write integration tests for pages protected by SSO?
A similar problem is integration testing with a separate search server (Solr or Sphinx). You would probably solve it by using some form of mocks or stubs. Can someone give a good example of how to mock or stub an SSO for Cucumber or Capybara? If this is too difficult, a comparable example for a search server would be helpful, too.
Integration testing of an SSO application is a special case of a more general problem: testing distributed applications. This is a difficult problem and there does not seem to be a magic bullet for it. There are various ways to combine a set of different servers or services and test them as a whole. The two extremes are:
a) Test an instance of the whole system. You don't need any mocks or stubs then, but you need a complete, full-fledged setup of the entire stack, including a running instance of every server involved. For each test, set up the entire application stack and test the whole stack, i.e. test the entire distributed system as a whole with all the components involved, which is difficult in general. This only works if all components and all connections are working well.
b) Write an integration test for each component, treat it as a black box, and cover the missing connections with mocks and stubs. In practice, this approach is more common for unit testing: one writes tests for each MVC layer - model, view, and controller (view and controller often together).
In both cases, you have not considered broken connections. In principle, for each external server/service one has to check the following possibilities:
it is down
it is up and behaves well
it is up but replies incorrectly
it is up, but you send it wrong data
Basically, testing distributed apps is difficult, which is one reason why distributed applications are hard to develop. The more parts and servers a distributed application has, the more difficult it is to set up many full-fledged environments like production, staging, test and development. The larger the system, the more difficult integration testing becomes. In practice, one uses the first approach and creates a small but complete version of the whole application. A typical simple setup would be App Server + DB Server + Search Server.
On your development machine, you would have two different versions of a complete system:
one DB server with multiple databases (development and test)
one search server with multiple indexes (development and test)
The common Ruby plugins for search servers (Thinking Sphinx for Sphinx or Sunspot for Solr) support Cucumber and integration tests. They "turn on" the search server for certain portions of your tests. For the code that does not use the search server, they "stub" the server or mock out the connection to avoid unneeded indexing.
For RSpec tests, it is possible to stub out the authentication methods, for example in a controller test:
before :each do
  @current_user = Factory(:user)
  controller.stub!(:current_user).and_return(@current_user)
  controller.stub!(:logged_in?).and_return(true)
end
This also works for helper and view tests, but not for RSpec request or integration tests.
For Cucumber tests, it is possible to stub out the search server by replacing the connection to the search server with a stub (for Sunspot and Solr this can be done by replacing Sunspot.session, which encapsulates the connection to Solr).
This all sounds fine; unfortunately, it is a bit hard to transfer this solution to an SSO server. A typical minimal setup would be App Server + DB Server + SSO Server. A complete integration test would mean setting up one SSO server with multiple user data stores (development and test). Setting up an SSO server is already difficult enough; setting up an SSO server with multiple user data stores is probably not a very good idea.
One possible solution to the problem may lie somewhere in the direction of FakeWeb. FakeWeb is a Ruby library written by Blaine Cook for faking web requests. It lets you decouple your test environment from live services. Faking the response of an SSO server is unfortunately a bit hard.
Another possible solution, which I ended up using, is a fake login, i.e. a fake login method that can be called within the integration test. This fake login is a dynamic method that is only added during the test (through a form of monkey patching). This is a bit messy, but it seems to work.
