What is the difference between SMT test suite description and comment?

I see that the SMT test suite has a 'comment' attribute, and I also see that the atp gem generally supports a test-level description. Which should I use to add a line to the rendered SMT flow file? The following does not work when called from the flow:
func :my_func, comment: "make a comment"
thx

A description is intended to be meta-documentation for the test, meaning it is potentially long and is not rendered to the generated program. Instead, it is used in documentation, as outlined here: https://origen-sdk.org/origen/guides/program/doc/
The comment attribute maps directly to the comment field in the test program. By default it does not contain the description/documentation, though you could probably set it up that way.
When creating a test suite, the comment should be picked up. If you are not seeing it in the generated flow, it probably means that your interface layer is not forwarding the comment option to the test suite creation.
So either pass the options along when you call test_suites.add, or set it on the test suite object like this: my_test_suite.comment = options[:comment].
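For example, here is a minimal interface sketch (the func method body and option handling are illustrative, assuming a conventional Origen interface; adapt to your own):
# In your app's interface layer; a sketch, not the official API
def func(name, options = {})
  # Forwarding all options means :comment reaches the test suite
  t = test_suites.add(name, options)
  # Alternatively, set it explicitly after creation:
  # t.comment = options[:comment]
  flow.test(t, options)
end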

Related

Jest snapshot is redundant

I am writing snapshot tests using Jest for a Node.js and React app, and have installed the snapshot-tools extension in VS Code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the snapshot-tools code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls expect().toMatchSnapshot() or something similar.
The problem is (as it says in the "Limitations" section of the plugin's marketplace page) that it does a static analysis of the test file to find the tests that use snapshots, and the static analysis cannot detect tests that have dynamically generated names, or that don't call expect().toMatchSnapshot() directly in the test's body.
For example, I was getting false-positive "redundant" warnings because some of my tests were doing expect().toMatchSnapshot() in their afterEach() function rather than directly in the test body.
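For instance, a pattern like this gets flagged even though the snapshot is in active use (a sketch; the component and test names are made up):
import renderer from 'react-test-renderer';
import Header from './Header'; // hypothetical component

let tree;

afterEach(() => {
  // The static analysis doesn't see this call, so the snapshots
  // for these tests are reported as redundant
  expect(tree).toMatchSnapshot();
});

test('renders the header', () => {
  tree = renderer.create(<Header />).toJSON();
});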
This could indicate that the snapshot is no longer linked to a valid test: have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshots file and compare the titles to your test descriptions.

Convention for passing arguments to non-Silicon subblocks/helpers

Sorry if the title is a bit confusing, but what options/conventions does Origen provide for setting up subblocks that aren't necessarily silicon models, or that are just general helpers?
For example, I have a scan helper plugin that guides the user through creating a scan test program, and I'd like to add a list of options/customizations to the top-level app. There are a few ways to do this:
1. I can add a list of attr_readers/methods. I think this looks a bit ugly though; it adds a bunch of stuff to the top level that isn't used by anything else, and it blows up $dut.methods.
2. I could use parameters as defined here: http://origen-sdk.org/origen/guides/models/parameters/ and just call them in the scan tester app. But looking at the guides, I don't think that is the intended use case; it looks more like context switching, though maybe that was just the example use case.
3. I could add a scan_tester.setup method or something on the top level. This seems unnecessary though, since it's basically doing the same thing as #2 but requires a 'setup' method to be called. Yes, it's only one line, but if you mess up or forget to add that line then you've got some debugging to do that #2 avoids (I could print a warning, for example, if the scan parameters aren't provided, to help catch typos, etc.).
4. I can set it up as a subblock (which is how I currently have it), but this doesn't really fit: scan isn't a silicon model, so the base address is useless but required, it has no registers, etc.
Then there are other 'Ruby' things I could do (set up via on_create, use a global variable, etc.), but these all seem worse than the options above for one reason or another (mainly, more setup required on my part than using any of the existing options).
Any one of these would work, but from a convention standpoint, which direction should my scan tester setup take? Is there another option I haven't considered? I'd lean towards option #2 as it looks the cleanest.
Thanks
This is a really good question.
There are actually two other options:
5. Add application config parameters from the plugin: http://origen-sdk.org/origen/release_notes/#v0_7_24
6. Define a constant, as used by the JTAG and other early plugins: http://origen-sdk.org/jtag/#How_To_Use
I think #2 uses parameters in a way that was not originally intended; maybe it could work, but I just can't picture it.
I don't really like #5 or #6, since they provide application-level and class-level configuration. That's sometimes what you want, but these days I more often see the need for (DUT) instance-level configuration.
So my best answer here is that I don't know, but you are touching on a good point: we need an official API, or at least a recommendation, for this.
I think you should be open to the possibility of adding something new to Origen for this if you can think of something better.
As I'm writing this, I suppose #5 would also support instance-level configuration, albeit a bit long-winded:
def initialize(options = {})
  Origen.app.config.scan_chain_length = 6
end
@Ginty, my comment wouldn't keep its formatting, so here it is again:
What would you think of a 'component' API? For example, we could have:
# components.rb
component(:scan, TIPScan::ScanTester,
  # options
  wgl_dir: ...,                        # defaults to Origen.app.root/pattern/wgl
  custom_sort: proc { |wgl_name| ... }
)
# then we can do things like:
$dut.scan                  #=> TIPScan instance
$dut.component(:scan)      #=> same as above
$dut.components            #=> [TIPScan instance, ...]
$dut.has_component?(:scan) #=> true, etc.
Pretty much just a stripped-down subblock class to handle these. I think our IAR/C compilers and even CATI could benefit from this, making the setup cleaner and more customizable.

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a cucumber feature file 'A' that serves to set up the environment (data clean-up and initialization). I want it to be executed before all other feature files run.
It's kind of like the @Before hook described at http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work here, because my feature file 'A' contains hundreds of cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it's better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest thing I found is 'Background', but that would mean having one huge feature file with the content of 'A' as the Background at the top and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run in a single thread, ordered:
1. Alphabetically by feature file directory
2. Alphabetically by feature file name within each directory
Scenarios then execute in order within each feature file.
So put your initialization feature in the first directory (alphabetically), with a file name that sorts first (alphabetically) within that directory.
That being said, it is generally bad practice to require an execution order for your feature files; we run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files.
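Alternatively, if you control the JUnit runner, you can list the setup feature's location first, since cucumber-jvm executes the given feature paths in order (a sketch; the paths and class name are illustrative, and it assumes the setup feature lives outside the main features directory so it is not picked up twice):
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(features = {
        "src/test/resources/setup",    // contains A.feature; runs first
        "src/test/resources/features"  // the rest of the suite
})
public class RunCucumberTest {
}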
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, made up of steps like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
  ... hundreds of lines like this follow ...
We see this long 'system knows' scenario as quite valuable, because our testers, Product Owner, and developers get a baseline of what data is in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and serve as a shared team metaphor.)
In the beginning, we relied on the alphabetical naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, so we decided to use the following trick, inspired by the PageObject pattern:
1. Add a Background with the single step: Given I set test data for all feature files
2. In the step definition, use a factory to create the test data, and make sure inside the factory method that the data is only created once, e.g. testFactory.createTestData() (see the sketch below)
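A minimal Java step-definition sketch of that guard (TestFactory is a hypothetical stand-in for your project's factory):
import cucumber.api.java.en.Given;

public class TestDataSteps {
    // Assumed to come from your project; named here for illustration only
    private static final TestFactory testFactory = new TestFactory();
    private static boolean initialized = false;

    @Given("^I set test data for all feature files$")
    public void iSetTestDataForAllFeatureFiles() {
        if (!initialized) {
            testFactory.createTestData(); // expensive setup runs only once per JVM
            initialized = true;
        }
    }
}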
In this way, you get both the convenience of expressing reference setup as a scenario, which enhances team communication, and a stable test setup.
Hope this is helpful!
Agata

Can I use Groovy scripts in SoapUI to enable/disable assertions?

I'm using SoapUI Pro and a DataSource/DataSink loop to test a web service.
To make life more fun, I need to pull from four distinct source files, all of which produce different expected results.
I'd really like to do this in a single test loop, because scripts with multiple loops tend to crash SoapUI more often than not, but the sticking point is assertions.
How can I enable or disable assertions from a Groovy script in SoapUI? GetData doesn't give me anything to hook onto, and a documentation dive did not reveal the proper syntax. I'd have assumed something like testCase.assertion, but there's no such property as 'assertion' on testCase.
Alternatively, can I use a Groovy script to change an assertion's content? In other words, if I want phrase X with file 1 and phrase Y with file 2, I'm just as happy using the same assertion, as long as I can change the content it's trying to match.
You could use your Groovy script to set a property, e.g. testCase.setPropertyValue('expected', 'value'), based on which file you are reading. You could then use the property expansion ${#TestCase#expected} in the assertion content.
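For example, a minimal sketch (the property and file names are illustrative):
// Groovy script step, placed in the loop before the request step
def tc = testRunner.testCase
def source = tc.getPropertyValue('currentSourceFile') // hypothetical property set by your loop
def expected = (source == 'file1.csv') ? 'phrase X' : 'phrase Y'
tc.setPropertyValue('expected', expected)
A Contains assertion on the request step can then use ${#TestCase#expected} as its token, so one assertion serves all four source files.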

Not complete test suite with Spec Explorer 2010

I'm trying out Spec Explorer, and I've hit a problem: my test suite is incomplete. I don't get an error or anything; it's just that I would expect 16 test cases and I only get 11.
The problem occurs with the sample project that ships with Spec Explorer 2010. Because I'm new at this, I was trying different things out with the sample project, such as expanding the range and expanding the double add to a quadruple add. The latter was where I noticed that I was missing some test cases. I changed it back to a triple add to see whether the problem was there too, and as I expected, a test case was missing again: with the triple add I expected 8 test cases and only got 7.
The only thing I changed in the code:
machine DoubleAddScenario() : Main where ForExploration = true
{
    (Add(_); Add; Add; ReadAndReset)*
}
I've also tried this:
(Add(_); Add(_); Add(_); ReadAndReset)*
but I get the same problem there. The test case I'm missing is Add(1); Add(2); Add(1). I've also tried calling only this one, and that works, so why is it missing from my test suite?
Am I doing something wrong, or does Spec Explorer filter something out for me? And if it is Spec Explorer, where does it make this decision?
Good question. The reason the test case is missing is that Spec Explorer uses step (transition) coverage, not full path coverage, as its coverage criterion. So you will find a test case that uses Add(1) in the first step, another that uses Add(2) in the second step, and finally one that uses Add(1) in step 3, but not necessarily a single test case with that exact combination. You can find the answers (a lot of questions have been asked there) in the Spec Explorer forum and help:
http://msdn.microsoft.com/en-us/library/ee620427.aspx
http://social.msdn.microsoft.com/Forums/en-US/977b90c1-8938-474a-840e-14fd78b1af3e/spec-explorer-wmethod?forum=specexplorer
Spec Explorer is used in real-world testing, so the exponential explosion of path coverage (only one of the many problems in MBT) had to be worked around. The extremely cool solution in Spec Explorer is the Cord language (or regular language, if you like): instead of tediously programming test cases, Spec Explorer lets you just sketch them with scenarios, and the details and combinations come out of the generic model. In practice this is what we (at least in all the projects I did) really want. And as you can see, you can add your missing test case if you really need it.
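For example (a sketch in the same Cord style as the sample; the machine name is made up), you could pin the exact sequence down in its own scenario:
machine MissingAddScenario() : Main where ForExploration = true
{
    // Forces the one path that step coverage left out
    Add(1); Add(2); Add(1); ReadAndReset
}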
