TestCafe: How to share the runner in the global scope? - e2e-testing

I'm using TestCafe (TC) and writing a test that bundles multiple checks into a single TC test. This is for an investment reporting app.
Clients are offered a view of their portfolios, with assets grouped into various categories.
The app offers a "current month" view, with the ability to switch to previous months' data -- called AsOfDates. Within each monthly view, the data is organized into various periods, e.g., CYTD, FYTD, 1Year, 3Years, etc., each of which offers a view of the portfolio over the respective time period.
There are numerous graphs throughout the app, with different display specs for each graph type (line, bar, ...): for example, how many x-axis points there are for each period and how they are labelled.
I have a working TC regression test that loops through multiple clients, loops through the AsOfDates, loops through the available Periods, and examines the various graphs to ensure that the x-axis data is presented according to spec.
In the event of one or more failures I simply collect information documenting the failure and continue to the end of the test.
When the test completes, I create a success or failure report which we can use in our CI/CD pipeline. When done, I want to quietly close the TC task so that it doesn't also generate a test report.
To do that I've been told I need to share the TC runner in the global scope and call global.runner.stop().
I'm currently using the TC/CLI approach:
testcafe chrome ... src/pages/regression/graphDataPoints.js
How can I grab the runner to do this, or do I have to write my own script using testcafe.createRunner()?

There are two ways:
Create your own script using testcafe.createRunner() and pass the CLI options to the runner (see the sketch below).
Fork the reporter that you use, modify it in the way you want, and use it in your tests. In the reporter, you can add a condition when it has to show messages.
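For the first option, a minimal sketch (the src path and browser mirror the CLI invocation above; exposing the runner via global is one way to make stop() reachable from the code that writes your report):

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');
    const runner = testcafe.createRunner();

    global.runner = runner; // share the runner in the global scope

    await runner
        .src('src/pages/regression/graphDataPoints.js')
        .browsers('chrome')
        .run();

    // later, after writing your own success/failure report:
    // await global.runner.stop();

    await testcafe.close();
})();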

Related

Handling Multithreading in XML files for running testcases in parallel

I'm new to multithreading; here is my problem statement.
I have an XML file (TestCase.xml) where each tag represents a test case, something like below:
TestCase.xml
In turn, each main tag has a child tag that links to another XML file (TestStep.xml), which dictates the steps of the test case; it's TS in the above example.
TestStep.xml
The execution always starts from TestCase.xml, based on the id provided. With this overview: I have 100 test cases in my suite and I want to execute them in parallel, i.e. execute at least 5-6 test cases at the same time. I'm not able to use external plug-ins like TestNG, JUnit, BDD frameworks, or Maven Surefire. After a lot of R&D we have ended up with multithreading. I would need assistance on how to implement this.
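One common way to do this with plain Java is a fixed-size thread pool. A minimal sketch (runTestCase and the hard-coded ids are hypothetical stand-ins for your XML-driven execution):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelTestRunner {

    public static void main(String[] args) throws InterruptedException {
        // In reality these ids would be read from TestCase.xml
        List<String> testCaseIds = List.of("TC1", "TC2", "TC3");

        // Run at most 5 test cases concurrently
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (String id : testCaseIds) {
            pool.submit(() -> runTestCase(id));
        }

        pool.shutdown();                          // stop accepting new work
        pool.awaitTermination(2, TimeUnit.HOURS); // wait for all test cases to finish
    }

    static void runTestCase(String id) {
        // Parse TestCase.xml for this id and execute its linked TestStep.xml steps.
        // Each thread needs its own parser/driver instances -- avoid shared mutable state.
    }
}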

In Prefect, can a task value be cached for the duration of the flow run?

I have a flow in which I use a .map(); as such, I "loop" over multiple inputs. However, some of the inputs need to be generated only once, yet I notice that my flow keeps re-generating them.
Is it possible to cache/checkpoint the result of a task (which is used in other tasks) for the duration of the run?
My understanding is that it's possible to cache for a specific amount of time like so:
import datetime
from prefect import task
@task(cache_for=datetime.timedelta(hours=1))
def some_task():
...
However, if the run takes less than the cache_for time, would the cache still hold for the next run? (If not, I guess caching with a long duration will work.)
Yes, there are a few different ways to achieve this type of caching:
Use a different cache validator
In addition to configuring your cache expiration (as you've done above), you can also choose to configure a cache validator. In your case, you might use either an input or parameter validator.
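For example, a sketch against the Prefect 1.x API, where all_inputs re-validates the cache whenever the task's inputs change:

import datetime
from prefect import task
from prefect.engine.cache_validators import all_inputs

@task(cache_for=datetime.timedelta(hours=1), cache_validator=all_inputs)
def some_task():
    ...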
Use a cache key
You can "share" a cache amongst tasks (both within a single Flow and across Flows) by specifying a cache_key on your tasks:
@task(cache_for=datetime.timedelta(hours=1), cache_key="my-key")
def some_task():
...
This will then look up your candidate Cached states by key instead of by task ID.
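For example, two tasks sharing one cache (a sketch; the task names are illustrative):

@task(cache_for=datetime.timedelta(hours=1), cache_key="my-key")
def produce_data():
    ...

@task(cache_for=datetime.timedelta(hours=1), cache_key="my-key")
def reuse_data():
    ...

Whichever task runs first populates the cache; the other then reads the same Cached state.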
Use a file-based target
Lastly, an increasingly popular setup is to use a file-based target for your task. You can then template this target string with things like flow_run_id and the inputs provided to your task. Whenever the task runs, it first checks for the existence of data at the specified target location and, if found, does not rerun. For example:
@task(target="{flow_run_id}/{scheduled_start_time:%Y-%d-%m}/results.bytes")
def some_task():
...
This template has the effect of re-using the data at the target if both of the following are true:
the task is rerun within the same day
the task is rerun as a part of the same flow run
You can then share this template across multiple tasks (or in your case, across all the mapped children).
Note that you can also provide inputs and parameters to your target template if you desire.
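For instance, assuming a task input named x (Prefect 1.x templates target strings with the task's inputs by name):

@task(target="{flow_run_id}/{x}/results.bytes")
def some_task(x):
    ...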

How to use a state chart as the flow chart for an agent

I have two processes I want to juxtapose. The first is a Manual workflow that is well represented by the Process library. The second is a software System that performs the same work, but is better modelled as a state transition system (e.g. at the s/w component level).
Now in AnyLogic, state models are for agents, which can run through processes with animations (counts), or move across space. What if I want to use a state chart for an agent to run through, so that I have a System state chart/agent and a Job state chart/agent?
I want Jobs from Population A to go through the Manual process flow chart and Jobs from Population B to go through the System state flow chart, so I can juxtapose the processing costs. I then calculate various delays and resource allocations for each of the Jobs going through and compare them.
Can anyone explain how to set up a state chart as the base process that another agent will go through? Is this even possible?
Please help. Thanks.
This will not work as you would like it to, for these reasons:
You can't send an Agent into a statechart. (It's not clear how AnyLogic handles this internally -- maybe a generic token, or no flow at all, just changes to the state.)
In AnyLogic there can only be one state active (simple or combined state) per state chart, so you can't represent a population with several members.
Agents can't be in more than one flow at a time, so even if it were possible to insert an Agent into a statechart, this limitation would still apply.
The conclusion is: state charts are suitable for modelling individual behaviour (inside one Agent), whereas process flows can be used both for individual behaviour (inside one Agent, running a dummy Agent through) and for groups (multiple Agents running through the process).
The normal use case would be to add the state chart to the Agent type running through your process flow (as you already noted in your question), applying the changes caused by the state chart to the individual agent.

How to create situational or job specific test program flows?

I am wondering how folks create situational or test-program-specific flows based on silicon feedback data. I see that there are job-based flows discussed in these videos:
http://origen-sdk.org/origen/videos/5-create-program-flow/
http://origen-sdk.org/origen/videos/6-create-program-tests/
How do folks use silicon test results to alter their flows without putting in brittle condition-based test exclusions (e.g. next if test == 'mytest')? I guess I would say there are at least this many jobs or scenarios:
debug (aka first silicon)
samples (can be multiple)
characterization (can be multiple)
ttr (can be multiple)
quality assurance (all tests or perhaps a specific quality flow like HTOL or HTOL time-zero)
Is there a way to pass in silicon-based test names to prevent having to alter flows all of the time?
Thanks
This is what the if/unless_enable controls are for: http://origen-sdk.org/origen/guides/program/flowapi/#Execution_Based_on_the_Runtime_Environment
This creates what are called user flags (I think) on V93K, which are designed to be set by the "user" before the flow is executed and not really change state during execution, as opposed to flow flags, which can be changed at runtime by tests during flow execution.
if/unless_job is a similar user flag that is intended to indicate the insertion point in the test flow (e.g. wafer test 1, wafer test 2, etc.) and is inspired by the column/attribute of the same name on Teradyne platforms. On V93K it generates a regular user flag called @JOB.
The three different types of controls you have then are:
if/unless_job - Use to model the test insertion name; normally this naming would be something that you would want all of your test modules to agree on -- you can't really have module-specific values for this, e.g. WT1, WT2, FTR, FTH, etc.
if/unless_enable - Option switches to be set at the start of the flow to enable/disable different parts of the flow. These can either be very specific to a particular test module, common to the whole flow, or a mixture of both, e.g. SAMPLES, TTR, SRAM_CZ, etc.
if/unless_flag - Use to respond to flags that can be changed at runtime, normally depending on the result of a particular test or tests (see the combined sketch below).
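Putting the three together, a hedged sketch (the block forms mirror the if_enable example further below; the flag and test names are hypothetical):

if_job :wt1 do
  test :sram_bist          # only runs at the WT1 insertion
end

if_enable :ttr do
  test :reduced_func       # only runs when the TTR enable is set
end

if_flag :vdd_fail do
  test :vdd_characterize   # only runs if an earlier test set this flag
end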
Finally, the enables are usually set by either the test floor controller software, or they can be set within the flow itself, depending on the platform and local conventions.
If you want to enable/disable these flags within the flow itself, then Origen provides the following API:
enable :samples

if_enable :samples do
  test :test1 # Will be hit due to the samples flag being set
end

disable :samples

if_enable :samples do
  test :test1 # Now it won't be
end

Applying BDD testing to batch scenarios?

I'm trying to apply BDD practices in my organization. I work in a bank where the nightly batch is a huge orchestrated, multi-system flow of batch jobs running and passing data between one another.
During our tests, interactive online tests probably make up only 40-50% of test scenarios while the rest are embedded inside the batch job. As an example, the test scenario may be:
Given that my savings account has a balance of $100 as of 10PM
When the nightly batch is run at 11PM
Then at 3AM after the batch run is finished, I should come back and see that I have an additional accrued interest of $0.001.
And the general ledger of the bank should have an additional entry for accrued interest of $0.001.
So as you can see, this is an extremely asynchronous scenario. If I were to use Cucumber to trigger it, I could probably create a step definition to insert the $100 balance into the account by 10PM, but it would not be realistic to use Cucumber to trigger the batch run at 11PM, as batch jobs are usually executed by operators using their own scheduling tools such as Control-M. And if Cucumber then waits and listens for a few hours before verifying the accrued interest, I'm not sure whether I'll run into a timeout.
This is just one scenario. Batch runs are very expensive for the bank and we always tack on as many scenarios as possible to ride on a single batch run. We also have aging scenarios where we need to run 6 months of batch just to check whether the final interest at the end of a fixed deposit term is correct or not (I definitely cannot make Cucumber wait and listen for that long, can I?)
My question is, is there any example where BDD practices were applied to large batch scenarios such as these? How would one approach this?
Edit, to explain why I am not aiming to execute isolated test scenarios where I am in control:
We do isolated scenarios at one of the test levels (we call it Systems Test in my bank) and BDD indeed does work in that context. But eventually, we need to hit a test level that has an entire end-to-end environment, typically in SIT. In this environment, multiple test scenarios must run in parallel, none of which has complete control over the environment. Depending on the scope of the project, this environment may run up to 200 applications. So customer channels such as Internet Banking will run transactional scenarios, while at the core banking system, scenarios such as interest calculation, automatic transfers, etc. will be executed. There will also be accounting scenarios where a general ledger system consolidates and balances all the accounts in the environment. Manual testing in this environment frequently requires at least 30-50 personnel executing transactions and checking results.
What I am trying to do is find a way to leverage a BDD framework to automate test execution and capture the results, so that we do not have to manually track them all in the environment.
It sounds to me as if you are not in control of the execution of the scenario.
Obviously, waiting for a couple of hours before validating a result is not a great idea.
Is it possible to extract just the part of the batch that is interesting in this scenario? If that is possible, then I would not expect the execution to take 4-6 hours.
If it isn't possible to execute the desired functionality in isolation, then you have a problem regarding the testability of your system. This is very common and something you really want to address. If the only way to test is to run the entire system, then you are not able to confidently say that it is working properly, since all the combinations that need testing are hard, sometimes even impossible, to execute.
Unfortunately, there doesn't seem to be a quick fix. You need to be in a position where you are able to verify small parts of the system in order to verify them fast and reliably. And it doesn't matter if you are using Cucumber or any other tool for the verification; all tools will have the same issue.
One approach you might consider would be to have a reporting process that queries the results of each batch run. It would then store the results you were interested in (i.e. those from your tests) into a test analysis database.
I'm assuming that each batch run has a unique identifier. This identifier would be used as the key for the test results.
Here is an example of how it might work:
We know when the batch runs are finished (say this is at 4am). We schedule a reporting job to start after batch run completion (say at 5am) that analyses the test accounts.
The reporting job looks at Account X and Account Y. It records the amount of money in their account in a table alongside the unique identifier for the batch run. This information is stored in a test results database.
A separate process matches up test scenarios with test results. It knows test scenario 29 was tied to batch run ZZ20 and so goes looking in the test results database for the analysis from batch run ZZ20.
In the morning the test engineer checks the results of the run. They see that test scenario 29 failed as there was only £100 in Account X rather than the £100.001 that was expected.
This setup would allow you to synchronously process asynchronous batch runs. It would be challenging to configure though, as you would need to do a lot of automation around reporting and linking test scenarios with test results.
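To make the matching step concrete, here is a minimal sketch in Python, assuming the reporting job has already written a batch_results table; the table, column, and function names are all hypothetical:

import sqlite3

def check_scenario(conn, scenario_id, batch_run_id, account, expected_balance):
    # Look up the balance the reporting job recorded for this batch run
    row = conn.execute(
        "SELECT balance FROM batch_results WHERE batch_run_id = ? AND account = ?",
        (batch_run_id, account),
    ).fetchone()
    actual = row[0] if row else None
    return {
        "scenario": scenario_id,
        "expected": expected_balance,
        "actual": actual,
        "passed": actual is not None and abs(actual - expected_balance) < 1e-9,
    }

# e.g. scenario 29 was tied to batch run ZZ20 and expects 100.001 in Account X:
# conn = sqlite3.connect("test_results.db")
# result = check_scenario(conn, 29, "ZZ20", "Account X", 100.001)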
