Access/Use default flow methods - origen-sdk

We have a test interface that we use with all our Origen flows, and it has our test flow methods defined in it. I believe there may be an issue with our test methods, and to debug this I'd like to use the default func method rather than the one defined in our interface.
Where would I find this method and what is the correct way to integrate it?
Thanks

func is not a flow method provided by Origen; rather, it is a method commonly used in examples to indicate the notion of "a functional test".
The underlying API provided by Origen is flow.test(test_object, options), and within most test program source code the flow will be inferred, so often you will see code that simply calls test.
The test object can be just a name, or it can be an object representing the test like a TestInstance or TestSuite object provided by the Teradyne and Advantest APIs respectively.
You can create a test program flow (and documentation for it) using only the test method; you can see examples of that in the source code for this video on how to create test program flows: http://origen-sdk.org/origen/videos/5-create-program-flow/
A test program, however, consists of more than just a flow, and you would normally also want the other files that define what each test is to be generated in addition to the flow.
In time, we hope that the Origen community will produce libraries of off-the-shelf methods like func which would generate the complete test as well as inserting it into the flow; currently, though, it is the application's responsibility to create such methods within its test program interface.
See this example source code for how to create a func method that can target multiple tester types - http://origen-sdk.org/origen/videos/6-create-program-tests/
To start with, you shouldn't worry about the multiple-tester aspect; just refer to this guide for the APIs that are available to generate the test program artefacts for creating tests on the V93K - http://origen-sdk.org/origen/guides/program/v93k/

Are mocks and stubs implementation details?

I have read that with TDD we should approach the entity (function, class, etc.) under test from the perspective of its user/caller. The gist is to focus on the public "interface". This, in turn, would drive the design and help us reason about the design earlier.
But when we need to introduce mocks and stubs into our tests, isn't that an implementation detail?
Why would/should the "user" care about the other entities that are supposed to be there?
E.g.
How do I start writing a test for the PlaceOrder service, which should check with the credit card service whether the user has enough money? Putting a mock in place for the credit card service while writing a test from the perspective of the PlaceOrder client looks out of place now, because it is an implementation detail: our PlaceOrder may call the credit card service for each user, or it may simply have a cache of scores provided at creation time.
It's not clear-cut. As a catch-phrase says: Tests are specifications.
When you use Test Doubles you are, indeed, specifying how your System Under Test (SUT) ought to interact with its dependencies.
I agree that this is usually an implementation detail, but there will typically be a few dependencies of a more architectural character.
A common example is an email gateway. If your SUT should send email, that's an observable side effect that the user (or some other stakeholder) cares about.
While you can (and perhaps should?) also run full system tests that verify that certain conditions produce real emails that land in certain real mailboxes, such test cases are difficult to automate.
Inserting a Test Double that can take the place of the email gateway and verify that the correct message was delivered to it covers something that is not merely an implementation detail, but an important part of the overall system's behaviour. In such cases, using a Test Double makes sense.
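To make the email-gateway case concrete, here is a minimal Python sketch (Email, FakeEmailGateway and register_user are invented names, not taken from the post): the fake stands in for the real gateway and records what was handed to it, so a test can assert on the observable side effect rather than on internals.

# Minimal sketch of a hand-rolled Test Double for an email gateway (names invented).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Email:
    to: str
    subject: str
    body: str

@dataclass
class FakeEmailGateway:
    # Records outgoing messages instead of talking to a real SMTP server.
    sent: list = field(default_factory=list)

    def send(self, message: Email) -> None:
        self.sent.append(message)

def register_user(address: str, gateway) -> None:
    # Sending a welcome email is the observable side effect stakeholders care about.
    gateway.send(Email(to=address, subject="Welcome!", body="Thanks for signing up."))

def test_registration_sends_welcome_email():
    gateway = FakeEmailGateway()
    register_user("alice@example.com", gateway)
    assert [m.to for m in gateway.sent] == ["alice@example.com"]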
Yes, Test Doubles specify behaviour, but sometimes, that's exactly what you want.
How much you should rely on this kind of design is an architectural choice. In addition to sending emails, you might choose to explicitly specify that a certain SUT ought to place a message on a durable queue.
You can create entire systems based on asynchronous messaging, which could imply that it'd be architecturally sound to let tests rely on Test Doubles.
In short, I find it a useful heuristic to use Test Doubles for architectural components, and rely mostly on testing pure functions for everything else.
For something like an order service, I wouldn't let the order service contact the payment gateway. Rather, I'd implement the order service operations as pure functions, and either pass in a payment token as a function argument, or let the output of the functions trigger a payment action.
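As a rough Python sketch of that idea (the names and shapes are assumptions, not from the book or the post): the order logic is a pure function that receives the payment token and returns a value describing the charge, so the test needs no Test Double at all.

# Sketch: keep the charging decision pure; run the real payment call at the edge.
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    total_cents: int

@dataclass(frozen=True)
class ChargeCard:
    token: str
    amount_cents: int

def charge_for(order: Order, payment_token: str) -> ChargeCard:
    # Pure function: the same inputs always produce the same charge command.
    return ChargeCard(token=payment_token, amount_cents=order.total_cents)

def test_charge_covers_the_full_order_total():
    command = charge_for(Order(total_cents=2500), payment_token="tok_123")
    assert command == ChargeCard(token="tok_123", amount_cents=2500)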
The book Domain Modeling Made Functional contains lots of good information about this kind of architecture.
On the other hand, the book Growing Object-Oriented Software, Guided by Tests contains many good examples of how to use Test Doubles to specify desired behaviour.
Perhaps you'll also find my article From interaction-based to state-based testing useful.
In summary: Tests are specifications. Test Doubles are specifications. Use them to specify the observable behaviour of the system. Try to avoid using them to specify implementation details.
But when we need to introduce mocks and stubs into our tests, isn't that an implementation detail?
Yes, in effect. A bit more precisely, it is additional coupling between your test and the details of your test subject's implementation.
There are two ideas in tension here. On the one hand, we want our tests to be as representative as possible of how our system will actually work; on the other hand, we want each of our tests to be a controlled experiment on our implementation, without coupling to shared mutable state.
In some cases, we can disguise some of the coupling by using inert substitutes for our dependency as the default case, so that our implementation classes are isolated unless we specifically opt into a shared configuration.
So for PlaceOrder, it might look like using a default CreditCardService that always answers "yes, the customer has enough money". Of course, that design only allows you to test the "yes" branch in your code - to test a "no" branch, you are necessarily going to need to know how to configure PlaceOrder with a CreditCardService that declines credit.
For more on this idea, see the doctrine of useful objects.
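A minimal Python sketch of that shape (PlaceOrder, the card-service classes and their method are hypothetical names for illustration): the happy path needs no test configuration at all, while the decline path has to opt into a configured double, which is exactly the coupling described above.

# Sketch: an inert, always-approving default dependency; tests opt into a decline double.
class AlwaysApprove:
    def has_sufficient_funds(self, customer_id: str, amount: int) -> bool:
        return True

class AlwaysDecline:
    def has_sufficient_funds(self, customer_id: str, amount: int) -> bool:
        return False

class PlaceOrder:
    def __init__(self, cards=None):
        # Default to the inert substitute unless a caller explicitly opts in.
        self._cards = cards if cards is not None else AlwaysApprove()

    def execute(self, customer_id: str, amount: int) -> str:
        if self._cards.has_sufficient_funds(customer_id, amount):
            return "accepted"
        return "declined"

def test_happy_path_needs_no_configuration():
    assert PlaceOrder().execute("alice", 100) == "accepted"

def test_decline_requires_knowing_how_to_swap_the_card_service():
    assert PlaceOrder(cards=AlwaysDecline()).execute("alice", 100) == "declined"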
More generally, in TDD we normally take complicated designs that are hard to test and refactor them into a design where something really simple but hard to test collaborates with something that is complicated but easy to test.
But for that to work at all, the components need to be able to talk to each other, and if you are going to simulate that communication in a test you are necessarily going to be coupled to the "implementation detail" that is the protocol between them.
For the case where that protocol is stable, having tests coupled to those details isn't, of itself, a problem in practice. There's coupling, sure, and cost of change, but if the probability of change is negligible then the expected cost of that coupling is effectively nothing.
So the trick is identifying when our tests would require coupling to an unstable implementation protocol, and figuring out how to best mitigate the risk of change.

CodeIgniter 4 - Difference between Libraries, Helpers and Models

I'm taking my first steps in CodeIgniter 4.
Now I'm asking myself: what are the big differences between a "Model", where I would first do all the database-related things, a "Helper", where I define a set of functions, and a "Library"?
In which cases should I create my own library, helper, or model?
The CI4 docs don't give me an answer to that, so I hope someone can explain it for me (and for others).
The documentation is pretty straightforward when it comes to Models; there are really no caveats there. A Model is a class that represents a single table in the database, and it provides a wide variety of related functionality: built-in CRUD methods, the ability to save Entities, transparent use of Query Builder methods, etc.
In general, you would typically have one Model class per database table that you're working with. That being said, you do not necessarily need Models in order to work with a database; however if you expect to need the functionality a Model provides, it can be used to save you the work of doing it yourself.
The documentation is indeed far more opaque on the differences between Libraries and Helpers. As such, I've found the most objective measure of difference to be in how CodeIgniter itself utilizes them, and the distinction is rather subtle:
Libraries provide their functionality through methods that exist within the namespace of the class they're defined in.
Helpers provide their functionality through functions that exist within the namespace of the importing class.
(NB: In PHP, a method is simply the name for a function that's defined within a class)
In other words, Libraries are typically non-static classes that (like all non-static classes) must be 'constructed' before use. The methods defined within that class reside within the namespace of the class itself, not the class they're called from.
For example, to gain access to the current Session object, I retrieve an instance of the Session Library class: $session = session(); Using $session, I can then invoke methods provided by that Session class, like $session->destroy().
On the other hand, Helpers are typically just a collection of functions that are imported into the namespace of the importing class itself. They are called in the current context and their use is not predicated upon the use of an object.
For example, if I loaded the Array Helper (helper('array');), it would grant me access to a handful of functions that I could call directly from the current context (e.g. $result = array_deep_search(...)). I didn't have to create an object first, that function was imported directly into my current namespace.
This distinction could matter for a couple of reasons, the biggest of which is probably naming conflicts. If you have two Helpers, each with an identically-named function, you can't import both of those functions at the same time. On the other hand, you could have one hundred different classes with the destroy() method defined, because each of those method definitions resides within the namespace of the defining class itself and is invoked through an instance of that specific class.
You may notice that all of CI's Helper functions are prefixed with the name of the Helper itself, e.g. 'array' or 'form'; this is to prevent the aforementioned naming conflict.
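The same distinction exists outside PHP, so here is the method-versus-imported-function difference restated as a small Python analogy (all names invented for illustration): identically named methods on different classes never clash, whereas two imported helper functions with the same name would.

# 'Library' style: functionality lives inside a class's own namespace.
class Session:
    def destroy(self) -> None:
        print("session destroyed")

class Cache:
    def destroy(self) -> None:  # same method name, no conflict: different class
        print("cache destroyed")

# 'Helper' style: a plain function imported into the caller's namespace.
# Importing a second module that also defines array_deep_search would force a rename.
def array_deep_search(needle, haystack):
    return [item for item in haystack if needle in str(item)]

Session().destroy()
Cache().destroy()
print(array_deep_search("a", ["apple", "berry"]))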
This probably doesn't answer when you would want to use one over the other, and in truth that really comes down to opinion. In the end, it's all just namespacing. You can put things wherever you want, as long as the code that needs them knows where to look. I personally try to follow CI's lead on things, so I tend to follow their distinction between the two.

MPS run Model2Model transformation and TextGen from action

I am developing a plugin for schedulability analysis for mbeddr. In order to run an external tool, I want to transform the mbeddr model to the external tool's model and use the TextGen aspect to create an input file for the external tool. The analysis will be started from an Action defined in a plugin solution (is this the smartest way?), so how do I trigger the M2M transformation and the TextGen from a plugin action?
Thanks
You can have a look at the MakeActionImpl class from MPS. It basically works by creating a MakeSession; it takes a list of models to generate, and you can await the Future that is returned. If you need fine-grained control over the generators invoked and the facets involved, have a look at the MakeUtils from com.mbeddr.core.runconfiguration.pluginSolution. It's a good blueprint if you want deeper integration with the MPS internals, and it also contains an easy API for the simple rebuild/make tasks.

Testing model state: look into db or make use of appropriate methods?

So, I want to test how my model Queue performs adding an Item. I need to complete the following steps:
Clear the entire queue
Add an item into the queue
Look for the item in the queue
The queue uses MongoDB internally.
It seems I have the following options:
(a) Clear the queue's collection by executing an appropriate MongoDB command (db.queue.remove()), call queue.add(item), then check the collection state (db.queue.find());
(b) Clear the queue with queue.clear(), then call queue.add(item), then check queue.count().
What is the difference between these options, and what are the reasons to choose one of them over the other? Option (a) looks more "functional" but introduces some brittleness and duplication between the code and the tests (is this an issue, by the way?), while (b) makes me feel I'm testing everything but nothing in particular at the same time.
Use (b). Your unit tests should not make assumptions about the internal implementation details of the class under test. "Test behavior, not implementation". Using (a) will result in brittle tests that may fail if the internal implementation changes.
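A minimal sketch of option (b) in Python (the Queue class here is an in-memory stand-in for the poster's MongoDB-backed model, purely for illustration): the test exercises the queue only through its own methods, so it keeps passing even if the underlying collection layout changes.

# Option (b) sketched: drive the queue only through its public API.
class Queue:
    # Stand-in for the MongoDB-backed model; storage is an implementation detail.
    def __init__(self):
        self._items = []

    def clear(self) -> None:
        self._items.clear()

    def add(self, item) -> None:
        self._items.append(item)

    def count(self) -> int:
        return len(self._items)

def test_add_puts_one_item_into_an_empty_queue():
    queue = Queue()
    queue.clear()                      # 1. clear the entire queue
    queue.add({"job": "send-email"})   # 2. add an item
    assert queue.count() == 1          # 3. look for the item via the public API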
Finally, I ended up with the following.
The (a) and (b) approaches differ in how they couple to the SUT. Option (b) talks to the queue only through its public API, which makes it a round-trip test; option (a) goes behind the SUT's back, straight to the database, which makes it a layer-crossing test.
The round-trip test follows the so-called Use the Front Door First principle; the layer-crossing test relies on Back Door Manipulation.
The first approach focuses on the public contract of the class, the second on its implementation.
We need both kinds of tests: the first to develop the interface of the class, the second to drive its implementation.
More details on the importance of a strong separation between these kinds of tests can be found in this nice article.

Meta Programming, what's it good for?

So, meta programming -- the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
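A rough Python analogy of the Grails behaviour described above (Record, Table and find_by_* are invented names, not Grails API): a __getattr__ hook synthesises finder methods at runtime from the property name, which is the kind of method injection the question refers to.

# Runtime method injection, roughly analogous to Grails dynamic finders (names invented).
class Record:
    def __init__(self, **fields):
        self.__dict__.update(fields)

class Table:
    def __init__(self, rows):
        self.rows = rows

    def __getattr__(self, attr):
        # Synthesise find_by_<property> methods on demand, at runtime.
        if attr.startswith("find_by_"):
            prop = attr[len("find_by_"):]
            return lambda value: [r for r in self.rows if getattr(r, prop, None) == value]
        raise AttributeError(attr)

users = Table([Record(name="Alice"), Record(name="Bob")])
print([u.name for u in users.find_by_name("Alice")])  # nobody ever wrote find_by_name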
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by Sjoerd:
Both the getter and the setter follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow the application and reuse of that particular pattern. How a template works depends on the meta-programming/code generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know that natively supports custom code directly as a template parameter.
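As a small Python illustration of the template idea (the template string and helper are made up for this example): the generator writes getter/setter source for any field name, and the optional on_set argument is the variability parameter that injects extra processing code.

# A tiny "program that writes programs": a getter/setter template with one variability knob.
ACCESSOR_TEMPLATE = """
def get_{name}(self):
    return self._{name}
def set_{name}(self, value):
{extra}
    self._{name} = value
"""

def render_accessors(name: str, on_set: str = "    pass") -> str:
    # Generate Python source text for a getter/setter pair for one field.
    return ACCESSOR_TEMPLATE.format(name=name, extra=on_set)

print(render_accessors("name"))                                        # the plain pattern
print(render_accessors("price", on_set="    value = max(0, value)"))   # with custom code injected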
Meta programming is not only adding methods at runtime; it can also mean automatically creating code at compile time, i.e. code that generates code. Examples:
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be made automatically for most properties.
