Can DalekJS call or use a previous test (like a login test) and continue once that test has completed? I would like to write my test files as singular tests so that individual people are able to edit only a small portion of it.
I would like to test if a menu item actually links to a page, but call the test that checks if a user can login to the site as the menu item test requires that the user is logged in.
As DalekJS files are just "normal Node.js" files, you can basically do whatever you want ;)
I have some resources on how to keep your tests DRY and modular; check out this repository I made for a workshop: https://github.com/asciidisco/jsdays-workshop/tree/8-dry. To be more specific, these files in particular:
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/form2.js
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/functions/configuration.js
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/functions/formHelper.js
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/functions/selectors.js
Related
I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top right (which currently includes "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would provide exactly the same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specific tag, #bug=abc-12345. I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build so I can quickly check whether the number of passed features/scenarios is 100%. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. Without excluding these scenarios, which are expected to fail until they're resolved, it is very tedious and time-consuming to go through all the individual feature-file reports, find the failing scenarios, and then look into why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system with the following key changes.
- after the Runner completes you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where in theory you can implement a new SuiteReports. The Runner also has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
I am doing BDD testing on an app with Cucumber, and I want to have clear instructions, as recommended in the Cucumber docs. The thing is that we have to use reusable step definitions so the maintenance cost stays acceptable.
Example of a scenario we have:
Given I am on project page
When I click on 'buttonAddProject' //not easily readable
And I click on 'switchProjectPrivate'
And I click on 'buttonDeleteProject'
etc..
I don't want to have a function for each step, like I change project visibility or I delete project, because each is basically just a click on a button, and we would end up with hundreds of functions like this. I also can't change the key parameter to something more readable, because every button key must be unique to avoid ambiguity.
So is there a way to do this with Cucumber, like the following?
Given I am on project page
When I click on 'Add' //easily readable
And I click on 'Private'
And I click on 'Delete'
Bindings: //this keyword doesn't exist
'Add' : 'buttonAddProject'
'Private': 'switchProjectPrivate'
'Delete':'buttonDeleteProject'
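Cucumber has no Bindings keyword, but you can get the same effect with a lookup table inside one generic step definition. A minimal sketch in Python (the mapping and the resolve helper are illustrative names; the same idea works in any Cucumber implementation):

```python
# Hypothetical mapping from readable labels used in the feature file
# to the unique element keys the automation layer needs.
BINDINGS = {
    "Add": "buttonAddProject",
    "Private": "switchProjectPrivate",
    "Delete": "buttonDeleteProject",
}

def resolve(label):
    """Translate a readable label into its unique element key."""
    return BINDINGS[label]
```

A single step definition matching `I click on '<label>'` would then call resolve(label) and click the returned element key, keeping the feature file readable while the underlying keys stay unique.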
I have tried that:
Scenario Outline:
Given I am on project page
When I click on <Add> //easily readable
And I click on <Private>
And I click on <Delete>
Examples:
| Add                | Private                | Delete                |
| 'buttonAddProject' | 'switchProjectPrivate' | 'buttonDeleteProject' |
It works... but I would need to do this for every scenario in the file, and if I actually wanted to use Scenario Outline to iterate several times, I would have to copy-paste this for every line, which is not really what I want.
How can I organize these tests to make them more readable without making things too complex?
First of all, Cucumber scenarios that show HOW each thing is done are not maintainable or particularly useful.
What your Cucumber scenarios should describe and document is WHAT you are doing. To do this you need to determine WHY you are clicking on these buttons and what is achieved by those actions.
Now, I have no idea from your scenarios WHAT you are adding, WHY it is private or WHY you are then deleting it. But I can speculate from your post. The scenarios you should be writing would be something like:
Scenario: Delete a project
Given there is an existing project
And I am viewing the project
When I delete the project
Then ...
Scenario: Create a project
When I create a project
Then a project should be created
When you write your scenarios in this manner you push the details of how you interact with your UI down into your step definitions. So you might have something like
When 'I create a project' do
  visit project_page
  click "Create Project"
end
or better just
When 'I create a project' do
  # must be on project page
  click "Create Project"
end
When you work this way, step-definition re-use becomes less relevant and less valuable: each step does more, and does something more specific.
You can continue this pattern of pushing the HOW down by having step definitions call helper methods. This is particularly useful when dealing with Givens, which get a lot of re-use. Let's explore this with Given there is an existing project:
Given 'there is an existing project' do
  @project = create_project
end
Here we are pushing how we create an existing project down into the helper method create_project. The crude way to do this would be to go through your UI, visiting the project page and adding a new project. However, this is really slow. You can optimize the process by bypassing your UI.
The most important point, whatever you decide to do, is that you are taking HOW you do something out of Cucumber and into some underlying code, so now Cucumber is only interested in WHAT you are doing and WHY it's important.
Making this change is probably the single most important thing you can do when Cuking. If you keep the HOW in your Cucumber scenarios and step definitions, you will end up with a large number of brittle step definitions and very large scenarios that break all the time because everything is coupled together. You will get lots of bugs where a change made to get one step definition working breaks lots of other scenarios, and where small changes to how you do a particular thing break lots of unrelated scenarios.
Finally you are not doing BDD if you are writing the test after the code has been written. You can only do BDD if you write your scenarios collaboratively before the code is written.
Each step must be tied to a step definition. If you would like to reuse an existing step definition, you can just pass the command as an argument ("Add", "Private", "Delete"). You will then have to use both the scenario name and the corresponding command to perform the required action. It will be something like this:
Scenario: scenario1_deleteproject
Given I am on project page
When I click on 'Add'
And I click on 'Private'
And I click on 'Delete'
Scenario: scenario2_createproject
Given I am on project page
When I click on 'Add'
And I click on 'Private'
And I click on 'Delete'
The step definition:
@When("I click on {string}")
public void i_click_on(String command)
{
    switch (command)
    {
        case "Add":
            // perform steps here
            break;
        case "Delete":
            // perform steps here
            break;
        default:
            break;
    }
}
If you want to differentiate the commands between scenarios, you will have to use the scenario name (you need a class with definitions of scenario and command). You can grab the scenario name in a @Before hook.
I'm working on a pre-existing python code-by-zapier zap. The trigger is "Code By Zapier; Run Python". I've made some changes to the contained python script, and now when I go to test that step I run into the following error message:
We couldn’t find a run python
Create a new run python in your Code by Zapier account and test your trigger again.
Is there any way of figuring out what went wrong?
I'm guessing a little bit, but I think this issue stems from repeatedly testing an existing trigger without returning a new ID.
When you run a test (or click the "load more" button), then Zapier runs the trigger and looks through the array for any new items it hasn't seen before. It bases "newness" on whether it recognizes the id field in each returned object.
So if you're testing code that changed, but it is returning objects with previously seen ids, then the editor will error, saying that it can't find any new objects (the "can't find new run pythons" wording is a quirk of the way that text is generated; think of it as "can't find objects that we haven't seen before").
The best way to fix this depends on whether you're returning an id and whether you need it for something.
Your code can return a random id. This means every returned item will trigger a Zap every time, which may or may not be intended behavior.
You can probably copy your code, change the trigger app (to basically anything else), run a successful test (which will overwrite your old test data), and then change it back to Code by Zapier and paste your code. Then you should get a "fresh" test. Due to changes in the way sample data is stored, I'm not positive this works now
Duplicate the zap from the "My Zaps" page. The new one won't have any existing sample data, so you should be able to test normally.
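For the first option above, returning a fresh random id is a one-liner in a Code by Zapier Python step. A sketch (the payload fields and the build_output helper are illustrative; only the id key matters to Zapier's deduplication):

```python
import uuid

def build_output(payload):
    """Wrap the payload with a random id so Zapier never recognizes the
    item as previously seen; every poll will therefore trigger the Zap,
    which may or may not be what you want."""
    return [{"id": str(uuid.uuid4()), **payload}]

# Example: each call produces an item Zapier treats as brand new.
output = build_output({"status": "ok"})
```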
I am trying to crawl multiple webpages using a single script.
At the moment I have individual scripts for each URL I want to visit, but I was wondering if it's possible for:
1) A single script to take you to multiple pages and perform the desired actions?
2) A script that asks the user for a URL, performs the standard actions, and keeps prompting for more URLs until the user wishes to finish?
Do some research into "data driven testing" and "parameterized tests" with Selenium. You'll be able to have a data source (CSV file, inline definition, database, whatever) to read from then do your standard actions. You could also prompt the user instead of having every site defined at the start in a datasource, but that would take away many of the benefits of having everything scripted.
A really basic Python implementation of data driven Selenium can be seen here.
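As a sketch of that data-driven approach, here is a minimal Python version, assuming a CSV file with a url column and chromedriver on the PATH (load_urls and the per-page actions are placeholders for your own logic):

```python
import csv

def load_urls(csv_path):
    """Read the target URLs from a CSV file with a 'url' column."""
    with open(csv_path, newline="") as f:
        return [row["url"] for row in csv.DictReader(f)]

def run(csv_path):
    # Selenium is imported here so the data-loading half can be used
    # and tested without a browser installed.
    from selenium import webdriver
    driver = webdriver.Chrome()          # assumes chromedriver on PATH
    try:
        for url in load_urls(csv_path):
            driver.get(url)              # visit each page in turn
            # ... perform your standard actions here ...
    finally:
        driver.quit()
```

For the second variant, swap load_urls for a loop that calls input() and stops when the user enters an empty line; as the answer notes, though, prompting interactively gives up much of the benefit of having everything scripted.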
I've got a feature (a .feature file) that is working fine in Cucumber.
The background of all the scenarios in the feature just sets up a user and then logs in as a supervisor, e.g.
Background:
Given I am logged in as a supervisor with an existing supervisee
...loads of scenarios
However the design/goals of the application has changed and the same scenarios should all work whether you are logged in as a supervisor or as the user. This is not true for most of the rest of the application where the design is not symmetrical for supervisors/users.
Is there any sane way to avoid copying and pasting the whole feature file with a different background? There doesn't seem to be a way either to parameterize the background (e.g. with an Either:/Or: stanza) or to pull in an external file with a load of scenarios. Ideas?
Background:
Given I am logged in as an existing supervisee
...same loads of scenarios
Here's some fantasy Gherkin syntax (that doesn't exist):
Background Outline:
Given I am logged in as a <user>
Backgrounds:
| user |
| supervisor with an existing supervisee |
| an existing supervisee |
...loads of scenarios
Alternatively, different fantasy Gherkin syntax:
Background:
Given I am logged in as an existing supervisee
Include Scenarios:
supervisor.features
If it were me, I would just suck up the duplication:
http://dannorth.net/2008/06/30/let-your-examples-flow/
An alternative would be to use a tag on the feature that indicates you want to run the scenarios against both user groups. Then use an Around hook to run the scenario twice, once for each type of user.
We've talked about things like Background Outlines before, but the conclusion we came to was that it wouldn't be worth the extra complexity to implement it.