How to create situational or job-specific test program flows? - origen-sdk

I am wondering how folks create situational or test-program-specific flows based on silicon feedback data. I see that there are job-based flows discussed in these videos:
http://origen-sdk.org/origen/videos/5-create-program-flow/
http://origen-sdk.org/origen/videos/6-create-program-tests/
How do folks use silicon test results to alter their flows without resorting to brittle condition-based test exclusions (e.g. next if test == 'mytest')? I would say there are at least this many jobs or scenarios:
debug (aka first silicon)
samples (can be multiple)
characterization (can be multiple)
ttr (can be multiple)
quality assurance (all tests or perhaps a specific quality flow like HTOL or HTOL time-zero)
Is there a way to pass in silicon-based test names to avoid having to alter flows all of the time?
thx

This is what the if/unless_enable controls are for: http://origen-sdk.org/origen/guides/program/flowapi/#Execution_Based_on_the_Runtime_Environment
This creates what are called user flags (I think) on V93K, which are designed to be set by the "user" before the flow is executed and not to change state during execution, as opposed to flow flags, which can be changed at runtime by tests during flow execution.
if/unless_job is a similar user flag that is intended to indicate the insertion in the test flow (e.g. wafer test 1, wafer test 2, etc) and is inspired by the column/attribute of the same name on Teradyne platforms. On V93K it generates a regular user flag called #JOB.
The three different types of controls you have then are:
if/unless_job - Used to model the test insertion name; normally this would be a naming scheme that all of your test modules agree on - you can't really have module-specific values for this. e.g. WT1, WT2, FTR, FTH, etc.
if/unless_enable - Option switches to be set at the start of the flow to enable/disable different parts of the flow. These can either be very specific to a particular test module, or common to the whole flow, or a mixture of both. e.g. SAMPLES, TTR, SRAM_CZ etc.
if/unless_flag - Used to respond to flags that can be changed at runtime, normally depending on the result of particular tests.
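For reference, here is a rough sketch of how the three controls might sit side by side in a flow (the test and flag names here are made up for illustration):

if_job :wt1 do
  test :program_trim      # only runs at the wafer test 1 insertion
end

if_enable :samples do
  test :sample_collect    # only runs when the SAMPLES enable is set
end

test :vdd_min, id: :vdd_min
if_failed :vdd_min do
  test :vdd_min_debug     # runtime flag: only runs if vdd_min failed
end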
Finally, the enables are usually set either by the test floor controller software or within the flow itself, depending on the platform and local conventions.
If you want to enable/disable these flags within the flow itself, then Origen provides the following API:
enable :samples
if_enable :samples do
  test :test1 # Will be hit due to the samples flag being set
end
disable :samples
if_enable :samples do
  test :test1 # Now it won't be
end

Related

TestCafe: How to share the runner in the global scope?

I'm using TestCafe (TC) and writing a test which implements multiple tests in a single TC test. This is for an investment reporting app.
Clients are offered a view of their portfolios, with assets grouped into various categories.
The app offers a "current month" view, with the ability to switch to previous months' data -- called AsOfDates. Within each monthly view, the data is organized into various periods; e.g., CYTD, FYTD, 1Year, 3Years... etc., each of which offers a view of the portfolio over the respective time period.
There are numerous graphs throughout the app, with different display specs for the graph type (line, bar, ...): for example how many x-axis points there are for each period and how they are labelled.
I have a working TC regression test that: loops through multiple clients; loops through the AsOfDates; loops through the available Periods; and examines the various graphs to ensure that the x-axis data is presented according to spec.
In the event of one or more failures I simply collect information documenting the failure and continue to the end of the test.
When the test completes, I create a success or failure report which we can use in our CI/CD pipeline. When done, I want to quietly close the TC task so that it doesn't also generate a test report.
To do that I've been told I need to share the TC runner in the global scope and use the global.runner.stop() method.
I'm currently using the TC/CLI approach:
testcafe chrome ... src/pages/regression/graphDataPoints.js
How can I grab the runner to do this or do I have to write my own script using testcafe.createRunner()?
There are two ways:
Create your own script using testcafe.createRunner() and pass the options from your CLI command to the runner (see the sketch after this list).
Fork the reporter that you use, modify it in the way you want, and use it in your tests. In the reporter, you can add a condition controlling when it shows messages.
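For the first option, a rough sketch of such a runner script using TestCafe's programmatic API (the source path and browser are taken from your CLI command; the ports are illustrative defaults):

const createTestCafe = require('testcafe');

let testcafe;

createTestCafe('localhost', 1337, 1338)
    .then(tc => {
        testcafe = tc;
        const runner = tc.createRunner();
        return runner
            .src(['src/pages/regression/graphDataPoints.js'])
            .browsers(['chrome'])
            .run();
    })
    .then(failedCount => {
        // failedCount is available here for your own CI/CD report;
        // closing the TestCafe instance ends the task quietly
        return testcafe.close();
    });

Because you create the runner yourself, there is no need to share it through the global scope; you hold the reference directly and can close TestCafe once you have written your own report.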

How to use a state chart as the flow chart for an agent

I have two processes I want to juxtapose. The first is a Manual workflow that is well represented by the Process library. The second is a software System that performs the same work, but is better modelled as a state transition system (e.g. s/w component level).
Now in AnyLogic, state models belong to agents, which can run through processes with animations (counts) or move across space. What if I want to use a state chart to run an agent through, so that I have a System state chart/agent and a Job state chart/agent?
I want Jobs from Population A to go through the Manual process flow chart and Jobs from Population B to go through the System state flow chart, so I can juxtapose the processing costs. I then calculate various delays and resource allocations for each of the Jobs going through and compare them.
Can anyone explain how to set up a state chart as the base process that another agent will go through? Is this even possible?
Please help
Thanks
This will not work as you would like it to, for these reasons:
You can't send an Agent into a statechart. (It's not clear how AnyLogic would handle this internally - maybe a generic token, or no flow at all, just changes to the state.)
In AnyLogic there can only be one state active (simple or combined state) per state chart, so you can't represent a population with several members.
Agents can't be in more than one flow at a time, so even if it were possible to insert an Agent into a statechart, this limitation would still apply.
The conclusion is: state charts are suitable for modelling individual behaviour (inside one Agent), whereas process flows can be used both for individual behaviour (inside one Agent, running a dummy Agent through) and for groups (multiple Agents running through the process).
The normal use case would be to add the state chart to the Agent type running through your process flow (as you already noted in your question), applying the changes caused by the state chart to the individual agent.

Activity Diagram - confusion regarding fork/join and decision/merge in this scenario

I am creating an activity diagram:
The Admin logs in to the web app.
If validated, they reach the dashboard.
Through the dashboard they can manage accounts, manage products, and manage issues.
After performing one of the above options, they can go back to the dashboard or log out of the system.
I have used fork/join; is that correct, or should I be using decision/merge instead?
Secondly, is the procedure of logging out or performing another option from the dashboard correctly defined?
Your activity has several issues.
First and most severe, it will not do anything, because actions (and most, though not all, model elements in activities) have an implicit AND semantics for incoming control flows. This means that an action is only executed when a token is offered on ALL of its incoming control flows; otherwise it waits. Since your control flow from validate cannot offer a token before Login has been executed and finished, you have a deadlock and nothing is executed. The same applies to Dashboard. To solve this you need to model merge nodes.
The second point is that, according to your description, you only want to execute one of the manage actions. (Btw., names with generic verbs like "manage", "maintain", "do", "perform", etc. are quite bad names for actions; use more specific ones instead.) Your model executes all the manage actions concurrently, regardless of the selection made in the dashboard action. Concurrently means in an arbitrary order; it does not demand parallel execution. Thus you should replace the fork with a decision node, where the conditions on the outgoing flows are based on the selection from the dashboard. A decision node can have an arbitrary (but finite) number of outgoing control flows. The outgoing control flows from the manage actions should then be merged using a merge node instead of a join node, since a join node would wait for an incoming token on every incoming control flow.
A minor point, which would be solved by using a UML/SysML tool, is that fork and join nodes are drawn as solid bars, not rectangular frames.
Your AD has 2 flaws. First, a fork/join is a solid thick bar, not a hollow rectangle. Second, it is used wrongly: this way you run all Manage actions in parallel and continue when they have all finished. According to your description, use a diamond to decide on one of the actions. Also use a diamond afterwards to merge the flows and continue to Logout.

Applying BDD testing to batch scenarios?

I'm trying to apply BDD practices in my organization. I work in a bank where the nightly batch job is a huge orchestrated, multi-system flow of batch jobs that run and pass data to one another.
During our tests, interactive online tests probably make up only 40-50% of test scenarios while the rest are embedded inside the batch job. As an example, the test scenario may be:
Given that my savings account has a balance of $100 as of 10PM
When the nightly batch is run at 11PM
Then at 3AM after the batch run is finished, I should come back and see that I have an additional accrued interest of $0.001.
And the general ledger of the bank should have an additional entry for accrued interest of $0.001.
So as you can see, this is an extremely asynchronous scenario. If I were to use Cucumber to trigger it, I could probably create a step definition to insert the $100 balance into the account by 10PM, but it would not be realistic to use Cucumber to trigger the batch run at 11PM, as batch jobs are usually executed by operators using their own scheduling tools such as Control-M. And if Cucumber then has to wait and listen for a few hours before verifying the accrued interest, I'm not sure whether I'll run into a timeout.
This is just one scenario. Batch runs are very expensive for the bank and we always tack on as many scenarios as possible to ride on a single batch run. We also have aging scenarios where we need to run 6 months of batch just to check whether the final interest at the end of a fixed deposit term is correct or not (I definitely cannot make Cucumber wait and listen for that long, can I?)
My question is, is there any example where BDD practices were applied to large batch scenarios such as these? How would one approach this?
Edit, to explain why I am not aiming to execute isolated test scenarios where I am in control:
We do isolated scenarios in one of the test levels (we call it Systems Test in my bank) and BDD does indeed work in that context. But eventually we need to hit a test level that has an entire end-to-end environment, typically in SIT. In this environment, it is a requirement that multiple test scenarios run in parallel, none of which has complete control over the environment. Depending on the scope of the project, this environment may run up to 200 applications. So customer channels such as Internet Banking will run transactional scenarios, while at the core banking system, scenarios such as interest calculation, automatic transfers, etc. will be executed. There will also be accounting scenarios where a general ledger system consolidates and balances all the accounts in the environment. Manual testing in this environment frequently requires at least 30-50 personnel executing transactions and checking results.
What I am trying to do is find a way to leverage a BDD framework to automate test execution and capture the results, so that we do not have to manually track them all in the environment.
It sounds to me as if you are not in control of the execution of the scenario.
Obviously, waiting for a couple of hours before validating a result is not a great idea.
Is it possible to extract just the part of the batch that is interesting for this scenario? If so, I would not expect the execution time to be 4-6 hours.
If it isn't possible to execute the desired functionality in isolation, then you have a problem regarding test-ability of your system. This is very common and something you really want to address. If the only way to test is to run the entire system, then you are not able to confidently say that it is working properly since all combinations that need testing are hard, sometimes even impossible, to execute.
Unfortunately, there doesn't seem to be a quick fix. You need to be in a position where you are able to verify small parts of the system, in order to verify them fast and reliably. And it doesn't matter whether you use Cucumber or any other tool for the verification; all tools will have the same issue.
One approach you might consider would be to have a reporting process that queries the results of each batch run. It would then store the results you were interested in (i.e. those from your tests) into a test analysis database.
I'm assuming that each batch run has a unique identifier. This identifier would be used as the key for the test results.
Here is an example of how it might work:
We know when the batch runs are finished (say this is at 4am). We schedule a reporting job to start after batch run completion (say at 5am) that analyses the test accounts.
The reporting job looks at Account X and Account Y. It records the amount of money in their account in a table alongside the unique identifier for the batch run. This information is stored in a test results database.
A separate process matches up test scenarios with test results. It knows test scenario 29 was tied to batch run ZZ20 and so goes looking in the test results database for the analysis from batch run ZZ20.
In the morning the test engineer checks the results of the run. They see that test scenario 29 failed as there was only £100 in Account X rather than the £100.001 that was expected.
This setup would allow you to synchronously process asynchronous batch runs. It would be challenging to configure though, as you would need to do a lot of automation around reporting and linking test scenarios with test results.
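To make the matching step concrete, here is a rough Ruby sketch (the scenario IDs, batch run ID, account names, and the shape of the results store are all hypothetical assumptions, not part of any real framework):

# Expected outcomes, keyed by the batch run's unique identifier
ExpectedResult = Struct.new(:scenario_id, :batch_run_id, :account, :expected_balance)

expectations = [
  ExpectedResult.new(29, 'ZZ20', 'Account X', 100.001),
  ExpectedResult.new(30, 'ZZ20', 'Account Y', 250.0)
]

# results_store maps [batch_run_id, account] => the balance recorded
# by the 5am reporting job in the test analysis database
def verify(expectations, results_store)
  expectations.map do |e|
    actual = results_store[[e.batch_run_id, e.account]]
    status = actual == e.expected_balance ? 'PASS' : "FAIL (got #{actual})"
    "scenario #{e.scenario_id}: #{status}"
  end
end

results_store = {
  ['ZZ20', 'Account X'] => 100.0,
  ['ZZ20', 'Account Y'] => 250.0
}

puts verify(expectations, results_store)

In practice the expectations would come from your Cucumber step definitions and the results store from the reporting job's database, but the join on the batch run identifier is the essential idea.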

Difference between passing control to a different program using RETURN and calling a program using XCTL

If I have, say, 2 screens: the first is a prompt screen which asks for some record key, and the next screen displays the information about that record.
Now, when I want to transfer control to the second screen (after doing the job of the 1st screen), I can do that with:
EXEC CICS RETURN
    TRANSID(trans-id)
    COMMAREA(ws-commarea)
END-EXEC.
where trans-id is that of the 2nd screen.
Then what is the need for a calling function such as XCTL when we already have RETURN available in CICS?
Using XCTL or LINK or dynamic CALLs confines your processing to one CICS transaction.
If you so desire, you can design your application to spread different business functions across multiple transactions, passing data with a commarea.
Historically this wasn't done for a number of reasons. Thirty years ago, some CICS Systems Programmers felt transaction IDs were a limited resource and encouraged application designers to keep processing to the minimum number of transactions possible.
Security in CICS is handled at the transaction level, so your user must have authority to execute all transactions that comprise the business function they must perform.
Resources such as temporary storage queues are often named in part using the transaction ID to differentiate and keep them separate.
Prior to CICS TS version 2 (I think) the data to be shared between those transactions was limited to the size of a commarea (32K). All supported versions of CICS now have channels and containers, allowing you to pass significantly larger amounts of data.
My experience is that it is simpler to code and easier to maintain pseudo-conversational transactions with screen interactions if the code is all in one transaction. You really want your transactions to be pseudo-conversational or non-conversational. I believe this to be the overriding reason you see transactions designed to use XCTL, LINK, or dynamic CALLs.
XCTL also doesn't allow dynamic routing (you always stay in the same CICS region), and it is one-way only. A pseudo-conversational RETURN as above will let the user update the screen, and only when they press an Attention Identifier (such as Enter) will the next program run. XCTL runs immediately.
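For contrast, a minimal sketch of transferring control immediately with XCTL (the program name PGM2SCRN is a hypothetical placeholder):

EXEC CICS XCTL
    PROGRAM('PGM2SCRN')
    COMMAREA(WS-COMMAREA)
END-EXEC.

Control passes to PGM2SCRN within the same task and does not come back to the issuing program, whereas RETURN TRANSID ends the current task and simply names the transaction to start when the user next presses an Attention Identifier.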
