Corda: view consumed states in the terminal via RPC

Is there an easy way to view the consumed states in the terminal with the CordaRPCOps interface? It seems that vaultQuery returns unconsumed states by default, and I can't figure out how to use vaultQueryBy or anything else with the criteria.
I know that there should be consumed states, because I can see them with H2.

Hi, you could always write a short API to expose the states.
There is a sample for /asset in the existing Corda samples; here is a code snippet of an API for your scenario:
@GET
@Path("asset")
@Produces(MediaType.APPLICATION_JSON)
fun getAssets(): List<StateAndRef<ContractState>> {
    // services is the CordaRPCOps handle available to the API class
    val consumedCriteria = QueryCriteria.VaultQueryCriteria(Vault.StateStatus.CONSUMED)
    return services.vaultQueryBy<ContractState>(consumedCriteria).states
}

As Ricky says, you'll have to provide an API or write a client to speak to your CorDapp via RPC (e.g. https://github.com/corda/cordapp-example/blob/release-V1/kotlin-source/src/main/kotlin/com/example/client/ExampleClientRPC.kt).
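For example, a minimal sketch of such a client in Java; the localhost:10006 address and the user1/test credentials are placeholders for your node's actual RPC settings:
import net.corda.client.rpc.CordaRPCClient;
import net.corda.core.contracts.ContractState;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.node.services.Vault;
import net.corda.core.node.services.vault.QueryCriteria;
import net.corda.core.utilities.NetworkHostAndPort;

public class ConsumedStatesClient {
    public static void main(String[] args) {
        // Placeholder address and credentials - substitute your node's RPC configuration
        CordaRPCClient client = new CordaRPCClient(NetworkHostAndPort.parse("localhost:10006"));
        CordaRPCOps proxy = client.start("user1", "test").getProxy();

        // Ask the vault for CONSUMED states instead of the default UNCONSUMED ones
        QueryCriteria criteria = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.CONSUMED);
        proxy.vaultQueryByCriteria(criteria, ContractState.class)
             .getStates()
             .forEach(System.out::println);
    }
}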
In theory, run vaultQueryByCriteria contractStateType: com.example.state.IOUState, criteria: { Vault.StateStatus.CONSUMED } could work. However, in vaultQueryByCriteria, the criteria parameter is of type QueryCriteria, which is an abstract class. There is no way currently in the shell to specify which concrete subclass of QueryCriteria you wish to use.
I have raised an issue here: https://github.com/corda/corda/issues/2351.

Spring Integration support for the Normalizer EIP

I'm working with Spring Integration here. I was expecting to see a normalize(...) method off the IntegrationFlow DSL and was surprised to find there wasn't one (like .route(...) or .aggregate(...), etc.).
In fact, after some digging on Google and in the Spring Integration docs, I can't find any built-in support for the Normalizer EIP. So I've taken a crack at my own:
import java.util.Optional;

import org.springframework.core.convert.TypeDescriptor;
import org.springframework.core.convert.converter.GenericConverter;
import org.springframework.integration.transformer.AbstractTransformer;
import org.springframework.messaging.Message;

public class Normalizer extends AbstractTransformer {

    private final Class<?> targetClass;
    private final GenericConverter genericConverter;

    public Normalizer(Class<?> targetClass, GenericConverter genericConverter) {
        // Verify that every convertible pair targets the same class
        Optional<GenericConverter.ConvertiblePair> maybePair = genericConverter.getConvertibleTypes().stream()
                .filter(convertiblePair -> !convertiblePair.getTargetType().equals(targetClass))
                .findAny();
        assert maybePair.isEmpty();
        this.targetClass = targetClass;
        this.genericConverter = genericConverter;
    }

    @Override
    protected Object doTransform(Message<?> message) {
        Object inbound = message.getPayload();
        return genericConverter.convert(inbound, TypeDescriptor.forObject(inbound), TypeDescriptor.valueOf(targetClass));
    }
}
The idea is that Spring already provides the GenericConverter SPI for converting multiple source types to one or more target types. We just need a specialized flavor of that which has the same target type for all convertible pairings. So here we extend AbstractTransformer and pass it one of these GenericConverters to use. During initialization we just verify that all the possible convertible pairs convert to the same targetClass specified for the Normalizer.
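For illustration, such a converter might look like the sketch below; the Foo and Bar source types and the Fizz factory methods are hypothetical stand-ins:
import java.util.Set;

import org.springframework.core.convert.TypeDescriptor;
import org.springframework.core.convert.converter.GenericConverter;

// Hypothetical converter: both Foo and Bar normalize to the single target type Fizz
public class FizzConverter implements GenericConverter {

    @Override
    public Set<ConvertiblePair> getConvertibleTypes() {
        return Set.of(
                new ConvertiblePair(Foo.class, Fizz.class),
                new ConvertiblePair(Bar.class, Fizz.class));
    }

    @Override
    public Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType) {
        if (source instanceof Foo) {
            return Fizz.fromFoo((Foo) source); // hypothetical factory method
        }
        return Fizz.fromBar((Bar) source);     // hypothetical factory method
    }
}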
I could then instantiate it like so:
@Bean
public Normalizer fizzNormalizer(GenericConverter fizzConverter) {
    return new Normalizer(Fizz.class, fizzConverter);
}
And then put it in a flow:
IntegrationFlow someFlow = IntegrationFlows.from(someChannel())
        .transform(fizzNormalizer(fizzConverter))
        // other components
        .get();
While I believe this will work, before I start using it too heavily I want to make sure I'm not overlooking anything in the Spring Integration framework that will accomplish/satisfy the Normalizer EIP for me. No point in trying to reinvent the wheel and all that jazz. Thanks for any insight.
If you take a closer look at that EI pattern, you will see:
Use a Normalizer to route each message type through a custom Message Translator so that the resulting messages match a common format.
The crucial part of this pattern is that it is a composed one, with a router as the input endpoint and a set of transformers, one for each inbound message type.
Since this kind of component is data-model dependent, and moreover the routing and transforming logic may differ from use case to use case, it is really hard to make an out-of-the-box, single configurable component.
Therefore you need to investigate what type of routing you need in order to choose a proper one for the input: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#router
Then, for every routed type, you need to implement a respective transformer to produce the canonical data model.
All of this can then be wrapped into a @MessagingGateway API to hide the normalization behind the so-called pattern implementation.
That's what I would do to follow that EI pattern's recommendations.
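For instance, here is a minimal sketch of such a composed flow using the Java DSL; the channel names, inbound payload types, and Fizz factory methods are all hypothetical:
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Bean
public IntegrationFlow normalizerFlow() {
    // Route each inbound payload by its type to a dedicated transformer subflow,
    // so that every branch emits the same canonical type (Fizz here)
    return IntegrationFlows.from("normalizerInput")
            .<Object, Class<?>>route(Object::getClass, r -> r
                    .subFlowMapping(String.class, sf -> sf.transform(p -> Fizz.fromString((String) p)))
                    .subFlowMapping(byte[].class, sf -> sf.transform(p -> Fizz.fromBytes((byte[]) p))))
            .channel("normalizedOutput")
            .get();
}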
However, if your use case is as simple as just converting from one type to another, then yes, you can rely on the ConversionService. You register your custom Converter: https://docs.spring.io/spring-integration/docs/current/reference/html/endpoint.html#payload-type-conversion. And then just use the .convert(Class) API from IntegrationFlowDefinition, as in the sketch below.
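A minimal sketch of that approach, again using the hypothetical Fizz canonical type:
import org.springframework.context.annotation.Bean;
import org.springframework.core.convert.converter.Converter;
import org.springframework.integration.config.IntegrationConverter;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Bean
@IntegrationConverter // registers the converter with the integration ConversionService
public Converter<String, Fizz> stringToFizzConverter() {
    return Fizz::fromString; // hypothetical factory method
}

@Bean
public IntegrationFlow convertingFlow() {
    return IntegrationFlows.from("convertInput")
            .convert(Fizz.class) // delegates to the registered ConversionService
            .channel("convertOutput")
            .get();
}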
But again: since there is no easy way to cover all the possible domain use cases, we cannot provide an out-of-the-box Normalizer implementation.

Any reason Origen::Parameter set contexts are hidden from the user?

Is there any reason the Origen::Parameters sets do not have a public method for retrieving all of the possible set IDs? I do see a public method that works, though it isn't named as if it is meant to be used publicly. Why is this not more visible?
[6] pry(#<PPEKit::Product>)> $dut.func._parameter_sets.ids
=> [:default,
:func_default,
:func_harvest_default,
EDIT
@Ginty, I tried your suggestion but it doesn't return the keys I am looking for. In the first sentence of the Parameters docs, the keys I am looking for are referred to as 'parameter contexts'. The reason these would be useful is to do something like this:
my_param_key = :my_param_key
if Origen.top_level.func.has_context? my_param_key
...
Specifically, I am creating parameter contexts from the information in my flow file and would like to verify that they exist before trying to access them. Essentially it is a handshake between my test flow and the test method parameters I am storing using unique (hopefully) parameter IDs/contexts.
Thanks!
In your example, dut.func.params should return a hash-like object which contains all the parameter sets, so getting the IDs is just: dut.func.params.keys
EDIT
I see now that you want a collection containing the available contexts, but it doesn't seem like that is currently provided via an API.
I don't think there is any particular reason for that, probably hasn't been needed until now.
params.context returns the currently active context; I would recommend we add params.contexts and/or params.available_contexts to return an array of the available context names.
Origen now supports knowing the available parameter contexts.

Execute Spock Test in Different Environments

I have a Selenium test which is executed with the help of the Spock framework. In general, it looks like this:
class SeleniumSpec extends Specification {
    URL remoteAddress  // Address of SE grid
    Capabilities caps  // Desired capabilities
    WebDriver driver   // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases, web views/pages). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'); but another thing I would like to test is that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an 'ideal' world, I'd like to have a mechanism to specify that a particular test class should be run several times, but with different parameters. For now, though, I only see the ability to apply data sets to a single method.
I have come up with only a few ideas for implementing this (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members. So each call to 'driver' would be replaced with a 'drivers.each { it }' invocation. This approach, on the other hand, makes it hard to discover exactly which of the drivers failed the test.
Use method parameters and data sets to initiate a fresh copy of the web driver on each iteration. This approach seems more logical according to the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where the drivers are kept in a map, and each test invocation has the exact name of the driver to be used.
I'd appreciate it if anybody who has met this case could share ideas on how to properly organize the testing process. Other approaches, or the pros and cons of those above, would also be welcome. Considering another test tool could also be an option.
You could create an abstract BaseSeleniumSpec which contains all of your features but does not set up the driver. Then create a sub-spec for each browser you want to test, e.g.
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    // Note: for setupSpec() to assign it, driver must be a @Shared field in BaseSeleniumSpec
    def setupSpec() {
        driver = new FirefoxDriver(...)
    }
}
And then you can run all of the sub-specs to test all of the browsers.

U2 Toolkit for .NET - UniSession vs U2Connection

I'm struggling a bit with some of the base concepts of U2 Toolkit (and I've been quite successful with the previous version!).
First, I had to add using U2.Data.Client.UO; in order to reference UniSession or UniFile. This may just be general ignorance, but doesn't 'using U2.Data.Client' imply that I also want the .UO stuff under it?!?
Second - what (conceptually) are the differences between connecting via U2Connection's Open() and UniSession's OpenSession()? Does each of them provide a different context in which to work?
Finally - while the examples provided in the doc and in Rajan's various articles are helpful, I'd like something a little more practical: how about a simple "here's how you read and write specific records in a Unidata file"?
Thanks!
Please see the answers to your first and second questions below.
Regarding Namespace
If you want to develop an application using ADO.NET (SQL Access, UCI SERVER), you need one namespace (U2.Data.Client).
If you want to develop an application using UO.NET (Native Access, UO SERVER), you need two namespaces (U2.Data.Client and U2.Data.Client.UO).
The U2.Data.Client namespace generally has the Microsoft ADO.NET specification classes.
The U2.Data.Client.UO namespace generally has the UniObjects native specification classes. As you have used UODOTNET.DLL in the past, you will find that all the same classes are there.
Regarding U2Connection/UniSession
This is by design.
U2Connection.Open() calls UniSession.Open() when you use Accessmode='Native' in the connection string. You can verify this from the LOG/TRACE file. In this case, U2Connection and UniSession are basically the same: the U2Connection class just passes the connection string to the UniSession class, and the UniSession class then uses this connection string and calls Open(). This is an improvement over the old way, where you used the static UniObjects(...) class and there was no concept of a standard connection string. Basically, we replaced the static UniObjects(...) class with the U2Connection class and provided connection-string capabilities.
U2Connection.Open() calls UCINET.Open() when you use Accessmode='SQL' in the connection string. You can verify this from the LOG/TRACE file.
Is this clear?

Use MEF to allow at most 2 instances per application

I am using MEF as an IoC container in my application. I found myself stuck in a situation where I need exactly two instances of a class at a time in my application (across all threads). I thought it would be easy just by adding the Export attribute twice with different contract names and then using those contract names to create the two instances.
[Export("Condition-1",typeof(MyClass)]
[Export("Condition-2",typeof(MyClass)]
[PartCreationPolicy(System.ComponentModel.Composition.CreationPolicy.Shared)]
public class MyClass { }
And then retrieve them as:
Container.GetExport<MyClass>("Condition-1").Value
Container.GetExport<MyClass>("Condition-2").Value
But this trick did not work. I was finally able to solve my problem by using CompositionBatch:
cb.AddExportedValue<MyClass>("Condition-1",new MyClass());
cb.AddExportedValue<MyClass>("Condition-2",new MyClass());
But my question is: why am I not able to get different instances on the basis of contract name? Is it right that the contract name does not matter if the creation policy is Shared?
The problem is in the setup of the PartCreationPolicyAttribute that decorates MyClass.
CreationPolicy.Shared means that a single instance will be returned for each call to Container.GetExport; it is like a singleton. What you need in your case is the CreationPolicy.NonShared policy (i.e. [PartCreationPolicy(CreationPolicy.NonShared)]), which will return a different instance for each call to Container.GetExport.
Here's a good article on Part Creation Policy.
Also have a look at ExportFactory for the MEF 2 additions regarding the lifetime and sharing of parts.
