SystemVerilog: introspection for functional coverage

Unfortunately, SystemVerilog lacks a comprehensive introspection capability. The only way I know of is to leverage VPI to obtain information about objects; however, VPI does not seem to expose objects for functional coverage (covergroups, coverpoints, bins).
What I would like to accomplish is to:
1) obtain the coverage status of individual bins (get_coverage() only returns the accumulated coverage of a whole coverpoint or cross coverage point).
2) obtain information about the contents of a coverpoint, in other words all the bin definitions of that coverpoint.
So far I have not managed to get VPI to do this for me, so I am wondering whether anyone has experimented in this area and found a solution.
If VPI does not offer such an introspection feature, I would also be happy with vendor-specific ways to analyze coverpoints and their bins.
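For reference, here is a minimal sketch of the kind of covergroup I am talking about; get_coverage() gives me the coverpoint's percentage, but nothing at bin granularity:

module tb;
  logic clk;
  logic [7:0] addr;

  covergroup cg_addr @(posedge clk);
    cp_addr : coverpoint addr {
      bins low  = {[0:63]};
      bins mid  = {[64:191]};
      bins high = {[192:255]};
    }
  endgroup

  cg_addr cg = new();

  initial begin
    // The accumulated percentage of the whole coverpoint is available...
    $display("cp_addr coverage = %0.2f%%", cg.cp_addr.get_coverage());
    // ...but there is no standard per-bin get_coverage() call, and no
    // standard way to enumerate the bin definitions of cp_addr.
  end
endmodule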

If your vendor supports the Unified Coverage Interoperability Standard (UCIS), you could probably get away with implementing some C code that reads the coverage database and queries it for what you need.
You could then stitch that C code into your SV simulator using the DPI/VPI.
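For example, the SystemVerilog side could stay very thin. The function names below are hypothetical; the C implementations behind them would use the UCIS calls of your vendor's coverage database:

module ucis_bridge_example;
  // Hypothetical DPI bridge; the C side would open the UCIS coverage
  // database and walk its scopes and bins.
  import "DPI-C" function int ucis_bin_count(input string cover_scope);
  import "DPI-C" function int ucis_bin_hits(input string cover_scope, input string bin_name);

  initial begin
    int n;
    n = ucis_bin_count("tb.cg_addr::cp_addr");
    $display("cp_addr has %0d bins, bin 'low' hit %0d times",
             n, ucis_bin_hits("tb.cg_addr::cp_addr", "low"));
  end
endmodule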

There is no need to do this kind of post-processing work from within the SystemVerilog language. Most tools will give you reports that have all this information and can be filtered. The UCIS API is also available but would be overkill.

Related

Code Coverage Fuzzing Microsoft Office using WinAfl

I am looking for ways to fuzz Microsoft Office, say Winword.exe. I want to know which modules or functions parse file formats such as RTF, .DOCX, .DOC, etc.
Yes, I know this can be found by reverse engineering, but Office has no public symbols, which makes tracing and investigating very painful.
I have done some RE work in WinDbg by setting breakpoints, analysing individual functions, and doing some stack analysis; I also read the RTF specification, hoping to recognize some of its structures in memory while debugging, but I got lost everywhere, and it is very time consuming.
I even ran DynamoRIO, hoping to get some results, but that failed as well.
WinAFL compatibility:
According to WinAFL, I need to find a function that takes the input and does something interesting with it, such as parsing in my case.
In my case that function is very difficult to identify because of the lack of symbols.
So I am asking: is it possible to do coverage-guided, instrumented fuzzing of this target with WinAFL?
And what is the best and easiest way to do RE work on symbol-less software like this?
I would appreciate hearing from anybody who has experience with this.
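For context, the kind of invocation I am trying to get working looks roughly like the examples in the WinAFL README; the module name and the target offset below are placeholders I still have to find:

afl-fuzz.exe -i in -o out -D C:\winafl\DynamoRIO\bin32 -t 20000 -- ^
  -coverage_module mso.dll -target_module winword.exe -target_offset 0x1260 ^
  -nargs 2 -fuzz_iterations 5000 -- ^
  "C:\Program Files\Microsoft Office\Office16\WINWORD.EXE" @@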

Programmatically export Alloy instances to a file

I have an Alloy model of some decision-making logic in software I wrote. In that model I have a few predicates that create examples: instances that represent expected behavior and instances that fall outside it. I would love to use those examples as inputs to a unit test for my code.
Does anyone have an example of software that interacts with Alloy to dump many generated examples to a single file? I would love to run a program, get a file with many instances in it, and then use that file as input to my test program.
This interests me because the examples and counterexamples Alloy creates are often not what I would think of when hand-writing my test inputs.
Thoughts?
You can export an instance via the File/Export To menu.
If you can work in Java, then it may be interesting to know that we are setting up an open-source repo on GitHub: https://github.com/AlloyTools/
I think it is quite easy to link your code with this code and generate your test cases, or provide them from proper files.
I am extremely interested in this kind of application of Alloy, so please keep us posted on https://groups.google.com/forum/#!forum/alloytools
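For example, with the Alloy 4.x Java API (package names have shifted a little in the newer org.alloytools releases) you can enumerate the instances of each run command in your model and dump each one to XML. A rough sketch, where "decision_logic.als" is a placeholder for your model file:

import edu.mit.csail.sdg.alloy4.A4Reporter;
import edu.mit.csail.sdg.alloy4compiler.ast.Command;
import edu.mit.csail.sdg.alloy4compiler.ast.Module;
import edu.mit.csail.sdg.alloy4compiler.parser.CompUtil;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Options;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Solution;
import edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod;

public class DumpInstances {
    public static void main(String[] args) throws Exception {
        A4Reporter rep = new A4Reporter();
        Module world = CompUtil.parseEverything_fromFile(rep, null, "decision_logic.als");
        A4Options opts = new A4Options();
        opts.solver = A4Options.SatSolver.SAT4J;

        for (Command cmd : world.getAllCommands()) {          // each run/check in the model
            A4Solution sol = TranslateAlloyToKodkod.execute_command(
                    rep, world.getAllReachableSigs(), cmd, opts);
            int i = 0;
            while (sol.satisfiable()) {                        // enumerate instances
                // One XML file per instance; concatenate or post-process them
                // into whatever single-file format your unit tests consume.
                sol.writeXML(cmd.label + "_" + i++ + ".xml");
                sol = sol.next();
            }
        }
    }
}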

What is the value add of BDD?

I am now working on a project where we are using cucumber-jvm to drive acceptance tests.
On previous projects I would create internal DSLs in Groovy or Scala to drive acceptance tests. These DSLs were simple enough that even a non-techie could write tests with a little guidance.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages re-use of stepDefs. Both outcomes are undesirable, leaving me asking whether the use of natural language is worth all this extra, unintuitive indirection.
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
So I am left asking: what is the value-add that justifies the deterioration in readability of the actual code that drives the test? Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
I would be grateful if someone could make a compelling argument as to why the gain of BDD outweighs the pain of BDD.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
The extra layer is the plain-language .feature file, and at the point of creation it has nothing to do with testing; it has to do with capturing the requirements of the system using a technique called specification by example, which produces well-defined stories. When written properly in the business language, specifications by example are very powerful for creating a shared understanding. This exercise alone can reduce the amount of rework and can find defects before development starts. This exercise is otherwise known as deliberate discovery.
Once you have a shared understanding of and agreement on the specifications, you enter development and make those specifications executable. Here is where you would use ATDD. So BDD and ATDD are not comparable, they are complementary. As part of ATDD, you drive the development of the system using the behaviour that has been defined by way of example in the story. The nice thing you have as a developer is a formal format that contains preconditions, events, and postconditions that you can automate.
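For example, a specification written this way might read as follows (a made-up story, but note the concrete values and the precondition/event/outcome shape):

Feature: Shipping cost at checkout
  Scenario: Gold customers get free shipping
    Given a customer with Gold status
    And a shopping cart worth 40 EUR
    When the customer checks out
    Then shipping is free

  Scenario: Standard customers pay shipping
    Given a customer with Standard status
    And a shopping cart worth 40 EUR
    When the customer checks out
    Then shipping costs 4.95 EUR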
From here on, the automated running of the executable specifications on a CI system will reduce regression and provide you with all the benefits you get from any other automated testing technique.
The really interesting thing is that the executable specification files are long-lived and evolve over time as you add or change behaviour in your system. Unlike most Agile methodologies, where user stories are thrown away after they have been developed, here you have living documentation of your system that is also the specification and the automated test.
Let's now run through a healthy BDD-enabled delivery process (this is not the only way, but it is the way we like to work):
Deliberate Discovery session.
Output = agreed specifications delta
ATDD to drive development
Output = actualizing code, automated tests
Continuous Integration
Output = report with screenshots is browsable documentation of the system
Automated Deployment
Output = working software being consumed
Measure & Learn
Output = new ideas and feedback to feed the next deliberate discovery session
So BDD can really help you with the missing piece of most delivery systems: the specification part. This is typically undisciplined and freeform, and is left to a few individuals to hold together. This is how BDD is an Agile methodology and not just a testing technique.
With that in mind, let me address some of your other questions.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages re-use of stepDefs. Both outcomes are undesirable, leaving me asking whether the use of natural language is worth all this extra, unintuitive indirection.
If you make the stepDefs a super-thin layer on top of your test automation codebase, then it is easy to reuse the automation code from multiple steps. In the test codebase, you should apply techniques and principles such as the testing pyramid and shallow depth of test to ensure you have a robust and fast test automation layer. What is also interesting about this separation is that it allows you to reuse the code between your stepDefs and your unit/integration tests.
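For example (hypothetical names, cucumber-jvm annotations), each step definition can be a one-liner that delegates to a driver class owned by the ordinary test codebase:

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class CheckoutSteps {
    // CheckoutDriver is a hypothetical plain Java class in the automation
    // layer, reused by unit and integration tests as well as by these steps.
    private final CheckoutDriver driver = new CheckoutDriver();

    @Given("a customer with {word} status")
    public void a_customer_with_status(String status) {
        driver.createCustomer(status);
    }

    @When("the customer checks out")
    public void the_customer_checks_out() {
        driver.checkout();
    }

    @Then("shipping is free")
    public void shipping_is_free() {
        driver.assertShippingCost("0.00");
    }
}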
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
As mentioned above, ATDD and BDD are complementary, not comparable. On the point of imperative versus declarative, specification by example as a technique is very specific. When you are performing the deliberate discovery phase, you always ask the question "can you give me an example?". In that example, you would use exact values. If there are two values that can be used in the precondition (Given) or event (When) steps and they lead to different outcomes (Then step), it means you have two different scenarios. If they have the same outcome, it is likely the same scenario. Therefore, as part of the BDD practice, the steps need to be declarative in order to gain the benefits of deliberate discovery.
So I am left asking: what is the value-add that justifies the deterioration in readability of the actual code that drives the test? Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
It is worth it if you are working in a team where you want to solve the problem of miscommunication. One of the reasons people fail with BDD is that the writing and automation of features is left to the developers and the QAs, and the artifacts are no longer coherent as living specifications; they are just test scripts.
Test scripts tell you how a system does a particular thing, but they do not tell you why.
I would be grateful if someone could make a compelling argument as to why the gain of BDD outweighs the pain of BDD.
It's about using the right tool for the right job. Using Cucumber for writing unit tests or automated test scripts is like using a hammer to put a screw into wood. It might work, but it's never pretty and it's always painful!
On the subject of tools, your typical business analyst / product owner is not going to have the knowledge needed to peek into your source control and work with you on adding or modifying specs. We created a commercial tool to address this by allowing your whole team to collaborate on specifications in the cloud while staying in sync (in real time) with your repository. Check out Simian.
I have also answered a question about BDD here that may be of interest to you that focuses more on development:
Should TDD and BDD be used in conjunction?
Cucumber and Selenium are two popular technologies. Most organizations use Selenium for functional testing, and many of them want to integrate Cucumber with Selenium because Cucumber makes the application flow easy to read and understand. Cucumber is based on the Behaviour Driven Development framework and acts as a bridge between the following people:
Software Engineer and Business Analyst. 
Manual Tester and Automation Tester. 
Manual Tester and Developers. 
Cucumber also helps the client understand the application because it uses the Gherkin language, which is plain text. Anyone in the organization can understand the behavior of the software, since the Gherkin syntax is simple, readable text.

Is there a tool/simulator that supports line coverage for SystemVerilog classes?

I have some self-testing code for my SystemVerilog component and I want to ensure that my tests cover everything, especially the failure cases in my classes. All I need is line/branch coverage, just like what is normally used for other object oriented languages such as Java.
I tried using VCS (version 2012.06) coverage and found that it has only limited support for SystemVerilog and no coverage support for SystemVerilog classes. Is there a simulator or tool that has this support?
Certitude, a tool by SpringSoft (just purchased by Synopsys), checks the effectiveness of your testbench. It essentially analyzes the coverage of your testbench code and does a whole lot more.
http://www.springsoft.com/products/functional-qualification/certitude
As of 2012/08/25, and until further notice, the answer is:
No, there is no tool/simulator that supports line coverage for SystemVerilog classes.
I'd have thought Modelsim's or Aldec's coverage would do what you need. To be honest, it looks like VCS does too, so maybe the other tools have the same flaws?
I have tried that new feature in the Mentor Questa simulator. They have implemented SystemVerilog class code coverage from Modelsim/Questa 10.2 onwards.
To activate the feature for a SystemVerilog file/class, compile and simulate with coverage enabled, for example:
vlog +cover my_design.sv
vsim -voptargs=+acc -coverage my_design
+cover may take the following specifications. When no specification is given, +cover is equivalent to +cover=bcesft:
b - Collect branch statistics.
c - Collect condition statistics. Collects only FEC statistics unless -coverudp is specified.
e - Collect expression statistics. Collects only FEC statistics unless -coverudp is specified.
s - Collect statement statistics.
t - Collect toggle statistics. Overridden if 'x' is specified elsewhere.
x - Collect extended toggle statistics. Takes precedence if 't' is specified elsewhere.
f - Collect finite state machine statistics.
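For example, to limit collection to branch, condition, expression, and statement coverage and then look at the results (my usual flow; consult the Questa user manual for the exact report options):

vlog +cover=bces my_design.sv
vsim -voptargs=+acc -coverage my_design
At the vsim prompt, after the run has finished, save the coverage database:
coverage save my_design.ucdb
Back in the shell, generate a textual report:
vcover report -details my_design.ucdb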
I've found Covered, but I haven't used it myself. It's open source, which is a plus, but it seems not to have been under development since 2010... :-/

Good APIs for scope analyzers

I'm working on some code generation tools, and a lot of complexity comes from doing scope analysis.
I frequently find myself wanting to know things like
What are the free variables of a function or block?
Where is this symbol declared?
What does this declaration mask?
Does this usage of a symbol potentially occur before initialization?
Does this variable potentially escape?
and I think it's time to rethink my scoping kludge.
I can do all of this analysis, but I am trying to figure out how to structure the API so that it is easy to use and, ideally, so that enough of the work can be done lazily.
What tools like this are people familiar with, and what did they do right and wrong in their APIs?
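To make the question concrete, the shape of API I keep wanting is something like this (purely illustrative names):

import java.util.Optional;
import java.util.Set;

// Illustrative sketch of the queries I want to ask of an AST,
// ideally computed lazily and cached per scope.
interface ScopeAnalysis {
    Set<Symbol> freeVariables(AstNode functionOrBlock);
    Optional<Declaration> declarationOf(SymbolUse use);
    Optional<Declaration> masks(Declaration decl);   // what an inner declaration shadows
    boolean maybeUsedBeforeInitialization(SymbolUse use);
    boolean mayEscape(Declaration localVariable);
}

// Placeholder node and symbol types; in practice these come from my own AST.
interface AstNode {}
interface Symbol {}
interface SymbolUse extends AstNode {}
interface Declaration extends AstNode {}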
I'm a bit surprised at the question, as I've done tons of code generation and the question of scoping rarely comes up (except occasionally the desire to generate unique names).
To answer your example questions requires serious program analysis well beyond scoping. Escape analysis by itself is nontrivial. Use-before-initialization can be trivial or nontrivial depending on the target language.
In my experience, APIs for program analysis are difficult to design and frequently language-specific. If you're targeting a low-level language you might learn something useful from the Machine SUIF APIs.
In your place I would be tempted to steal someone else's framework for program analysis. George Necula and his students built CIL, which seems to be the current standard for analyzing C code. Laurie Hendren's group have built some nice tools for analyzing Java.
If I had to roll my own I'd worry less about APIs and more about a really good representation for abstract-syntax trees.
In the very limited domain of dataflow analysis (which includes the uninitialized-variable question), João Dias and I have adapted some nice work by Sorin Lerner, David Grove, and Craig Chambers. Only our preliminary results are published.
Finally if you want to generate code in multiple languages this is a complete can of worms. I have done it badly several times. If you create something you like, publish it!
