How to have multiple Smart Contracts in a single Chaincode? - hyperledger-fabric

I need to split my business logic into two Smart Contracts, A and B, where A adds some data to the ledger and B reads A's data directly from the ledger for some calculation.
I need this split because:
A and B will have different endorsement policies
The security of B's calculation transactions should rely on the "read-set" mechanism for validation, which I know is enforced for data that B reads directly from the ledger, but I'm not sure whether it also applies to data read via a cross-chaincode call (and I can't find info about it)
So...
The guide says
Multiple smart contracts can be defined within the same chaincode. When a chaincode is deployed, all smart contracts within it are made available to applications.
But I really can't find a reference on how to bundle multiple Smart Contracts into a single Chaincode (which is necessary in order to let B read A's data).
Ideally, A and B would be in different languages (JavaScript and Java, respectively), but if they need to be written in the same language to fit in the same chaincode, I could rewrite the first one.
...
Can somebody help me? I actually just need a reference explaining how to bundle multiple Smart Contracts into a Chaincode (or an example; I couldn't find that either).
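For reference, here is a minimal sketch of what such bundling looks like with the Go contract API (the fabric-contract-api-go contractapi package); the contract and function names are illustrative only:

    package main

    import (
        "fmt"

        "github.com/hyperledger/fabric-contract-api-go/contractapi"
    )

    // ContractA writes data to the ledger.
    type ContractA struct {
        contractapi.Contract
    }

    func (c *ContractA) PutValue(ctx contractapi.TransactionContextInterface, key string, value string) error {
        return ctx.GetStub().PutState(key, []byte(value))
    }

    // ContractB reads A's data straight from the same chaincode namespace.
    type ContractB struct {
        contractapi.Contract
    }

    func (c *ContractB) ReadValue(ctx contractapi.TransactionContextInterface, key string) (string, error) {
        data, err := ctx.GetStub().GetState(key)
        if err != nil {
            return "", err
        }
        if data == nil {
            return "", fmt.Errorf("key %s not found", key)
        }
        return string(data), nil
    }

    func main() {
        // Both contracts ship in one chaincode; clients address them as
        // "ContractA:PutValue" and "ContractB:ReadValue".
        chaincode, err := contractapi.NewChaincode(&ContractA{}, &ContractB{})
        if err != nil {
            panic(err)
        }
        if err := chaincode.Start(); err != nil {
            panic(err)
        }
    }

As far as I know, a single chaincode package targets one language runtime, so A and B would have to share a language to live in the same chaincode; the Node and Java contract APIs support the same multiple-contract pattern.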

Related

DDD Aggregates vs Entities

What to do with an object that has two dependencies:
Let's say we have three objects: client, company and a contract.
Contract needs a client and a company to exist.
Naturally, business-wise, the contract belongs more to the client than it does to the company; however, the company provides the contract to the client.
For now, I have all three as separate aggregate roots, because you should be able to quickly query the existing contracts for a specific company as well. If the contract were an entity under the client aggregate root, I'd need to query all the clients which have a contract with company X and then return a flattened list of those contracts, which seemed a bit odd.
Secondly, contract itself has a lot of entities, with more entities below them.
To explain the hierarchy in a simple way:
The Contract aggregate contains a list of entity A, entity A has multiple items of entity B, and entity B has multiple items of entity C. So it's a deep structure, all of which has to be exposed through the aggregate above it.
If I put the contract aggregate root as an entity below the client, my client aggregate needs to carry all those extra methods for what's below the contract as well, and soon I'd end up with almost everything under the same aggregate.
So my question is: what questions can I ask myself to answer this kind of issue? There's probably no right or wrong, but there should be some guidelines on how to deal with an issue like this?
Thanks!
what questions can I ask myself to answer this kind of issue?
Here is how Eric Evans defined AGGREGATE
An aggregate is a cluster of associated objects that we treat as a unit for the purpose of data changes.
"Change" is the important idea here; in designing our aggregate boundaries, we don't particularly care about data that appears in the same report (read-only view), we care instead about what data needs be to considered when making changes.
See also Mauro Servienti: All our aggregates are wrong.
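As a rough sketch of that "what changes together" idea, using the names from the question (the invariant shown is purely illustrative): the Contract stays its own aggregate root, references the Client and Company only by identity, and keeps the deep A/B/C hierarchy inside its own boundary.

    package contracts

    import "errors"

    // Sketch only: the aggregate boundary is drawn around what changes together.
    type (
        ClientID   string
        CompanyID  string
        ContractID string
    )

    // Contract is its own aggregate root. It references the Client and Company
    // aggregates by identity, so changing a contract loads and locks only the contract.
    type Contract struct {
        ID        ContractID
        ClientID  ClientID  // reference by ID, not containment
        CompanyID CompanyID // reference by ID, not containment
        Lines     []EntityA // the deep A -> B -> C hierarchy stays inside this boundary
    }

    type EntityA struct{ Items []EntityB }
    type EntityB struct{ Items []EntityC }
    type EntityC struct{ Amount int }

    // Invariants that span the A/B/C hierarchy are enforced here, in one place.
    func (c *Contract) AddLine(a EntityA) error {
        if len(c.Lines) >= 100 { // purely illustrative invariant
            return errors.New("contract cannot have more than 100 lines")
        }
        c.Lines = append(c.Lines, a)
        return nil
    }

Queries such as "all contracts for company X" then live on the read side (a projection or report), so they don't force the Contract under either of the other roots.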

How can a chaincode consist of multiple smart contracts in Go?

How can a chaincode consist of multiple smart contracts in Go, since Go only has one main function?
When it comes to Go chaincode, there can be any number of smart contracts. There are 3 main functions:
Init: called on instantiation or upgrade of the chaincode. It is usually used to initialise data.
Invoke: called on every transaction. In this function you decide which handler runs based on the function name and arguments passed in, which is what makes it possible to host multiple smart contracts in one Go file (see the sketch below).
main: starts the chaincode in its container when the chaincode is instantiated.
Here is a detailed tutorial explaining how to write Go smart contracts: https://hyperledger-fabric.readthedocs.io/en/release-2.0/chaincode4ade.html#vendoring
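As a rough illustration of that dispatch pattern (the struct and function names are made up, and the import paths assume the release-2.x shim packages), a single Invoke can route to what are effectively two contracts:

    package main

    import (
        "fmt"

        "github.com/hyperledger/fabric-chaincode-go/shim"
        pb "github.com/hyperledger/fabric-protos-go/peer"
    )

    type MultiContractChaincode struct{}

    // Init is called on instantiation or upgrade; nothing to initialise here.
    func (c *MultiContractChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
        return shim.Success(nil)
    }

    // Invoke is called for every transaction and dispatches on the function name,
    // which is how several "smart contracts" can share one Go chaincode.
    func (c *MultiContractChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
        fn, args := stub.GetFunctionAndParameters()
        switch fn {
        case "contractA:set":
            if len(args) != 2 {
                return shim.Error("expected key and value")
            }
            if err := stub.PutState(args[0], []byte(args[1])); err != nil {
                return shim.Error(err.Error())
            }
            return shim.Success(nil)
        case "contractB:get":
            if len(args) != 1 {
                return shim.Error("expected key")
            }
            value, err := stub.GetState(args[0])
            if err != nil {
                return shim.Error(err.Error())
            }
            return shim.Success(value)
        default:
            return shim.Error(fmt.Sprintf("unknown function %q", fn))
        }
    }

    // main starts the chaincode in its container.
    func main() {
        if err := shim.Start(new(MultiContractChaincode)); err != nil {
            fmt.Printf("error starting chaincode: %s\n", err)
        }
    }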

How to handle hard aggregate-wide constraints in DDD/CQRS?

I'm new to DDD and I'm trying to model and implement a simple CRM system based on DDD, CQRS and event sourcing to get a feel for the paradigm. I have, however, run into some difficulties that I'm not sure how to handle. I'm not sure if my difficulties stem from me not having modeled the domain properly, or whether I'm missing something else.
For a basic illustration of my problems, consider that my CRM system has the aggregate CustomerAggregate (which seems reasonable to me). The purpose of this aggregate is to make sure each customer is consistent and that its invariants hold up (name is required, the social security number must be in the correct format, etc.). So far, all is well.
When the system receives a command to create a new customer, however, it needs to make sure that the social security number of the new customer doesn't already exist (i.e. it must be unique across the system). This is, of course, not an invariant that can be enforced by the CustomerAggregate aggregate, since customers don't have any information regarding other customers.
One suggestion I've seen is to handle this kind of constraint in its own aggregate, e.g. SocialSecurityNumberUniqueAggregate. If the social security number is not already registered in the system, the SocialSecurityNumberUniqueAggregate publishes an event (e.g. SocialSecurityNumberOfNewCustomerWasUniqueEvent) which the CustomerAggregate subscribes to and publishes its own event in response to this (e.g. CustomerCreatedEvent). Does this make sense? How would the CustomerAggregate respond to, for example, a missing name or another hard constraint when responding to the SocialSecurityNumberOfNewCustomerWasUniqueEvent?
The search term you are looking for is set-validation.
Relational databases are really good at domain agnostic set validation, if you can fit the entire set into a single database.
But, that comes with a cost; designing your model that way restricts your options on what sorts of data storage you can use as your book of record, and it splits your "domain logic" into two different pieces.
Another common choice is to ignore the conflicts when you are running your domain logic (after all, what is the business value of this constraint?) but to instead monitor the persisted data looking for potential conflicts and escalate to a human being if there seems to be a problem.
You can combine the two (ex: check for possible duplicates via query when running the domain logic, and monitor the results later to mitigate against data races).
But if you need to maintain an invariant over a set, and you need that to be part of your write model (rather than separated out into your persistence layer), then you need to lock the entire set when making changes.
That could mean having a "registry of SSN assignments" that is an aggregate unto itself, and you have to start thinking about how much other customer data needs to be part of this aggregate, vs how much lives in a different aggregate accessible via a common identifier, with all of the possible complications that arise when your data set is controlled via different locks.
There's no rule that says all of the customer data needs to belong to a single "aggregate"; see Mauro Servienti's talk All Our Aggregates are Wrong. Trade offs abound.
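A rough sketch of such a registry-of-SSN-assignments aggregate (all names invented for illustration), where locking the registry is what locks the set:

    package crm

    import (
        "errors"
        "fmt"
    )

    // SSNRegistry is an aggregate whose sole responsibility is the set-wide
    // invariant: at most one customer per SSN. Locking this aggregate locks the set.
    type SSNRegistry struct {
        assignments map[string]string // SSN -> customer ID
    }

    func NewSSNRegistry() *SSNRegistry {
        return &SSNRegistry{assignments: make(map[string]string)}
    }

    // Assign claims an SSN for a customer, rejecting duplicates.
    func (r *SSNRegistry) Assign(ssn, customerID string) error {
        if existing, ok := r.assignments[ssn]; ok {
            return fmt.Errorf("SSN already assigned to customer %s", existing)
        }
        r.assignments[ssn] = customerID
        return nil
    }

    // Release frees an SSN, e.g. when a customer record is corrected.
    func (r *SSNRegistry) Release(ssn string) error {
        if _, ok := r.assignments[ssn]; !ok {
            return errors.New("SSN not assigned")
        }
        delete(r.assignments, ssn)
        return nil
    }

The rest of the customer data can live in a separate Customer aggregate referenced by the customer ID; the trade-off described above is that creating a customer now touches two aggregates protected by different locks.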
One thing you want to be very cautious about in your modeling, is the risk of confusing data entry validation with domain logic. Unless you are writing domain models for the Social Security Administration, SSN assignments are not under your control. What your model has is a cached copy, and in this case potentially a corrupted copy.
Consider, for example, a data set that claims:
000-00-0000 is assigned to Alice
000-00-0000 is assigned to Bob
Clearly there's a conflict: both of those claims can't be true if the social security administration is maintaining unique assignments. But all else being equal, you can't tell which of these claims is correct. In particular, the suggestion that "the claim you happened to write down first must be the correct one" doesn't have a lot of logical support.
In cases like these, it often makes sense to hold off on an automated judgment, and instead kick the problem to a human being to deal with.
Although they are mechanically similar in a lot of ways, there are important differences between "the set of our identifier assignments should have no conflicts" and "the set of known third party identifier assignments should have no conflicts".
Do you also need to verify that the social security number (SSN) is really valid? Or are you just interested in verifying that no other customer aggregate with the same SSN can be created in your CRM system?
If the latter is the case, I would suggest having some CustomerService domain service which performs the whole SSN check by looking up the database (e.g. via a repository) and then creates the new customer aggregate (which again checks its own invariants, as you already mentioned). This whole process - the lookup of the existing SSN and the customer creation - needs to happen within one transaction to ensure consistency. As I consider this domain logic, a domain service is the perfect place for it. It does not hold data by itself but orchestrates the workflow which relates to the business requirement - that no two customers with the same SSN must be created in our CRM.
If you also need to verify that the social security number is real, you would also need to call another service, I guess, or keep some cached SSN data in your CRM. In this case you could additionally have some SocialSecurityNumberService domain service which is injected into the CustomerService. This would just be an interface in the domain layer, but the implementation of this SocialSecurityNumberService interface would then reside in the infrastructure layer, where the access to whatever resource is required is implemented (be it a local cache you build in the background or some API call to another service).
Either way all your logic of creating the new customer would be in one place, the CustomerService domain service. Additional checks that go beyond the Customer aggregate boundaries would also be placed in this CustomerService.
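A minimal sketch of such a CustomerService (the repository port and all names are invented for illustration, and the enclosing transaction is assumed to be handled by the caller or the infrastructure):

    package crm

    import (
        "context"
        "errors"
    )

    // CustomerRepository is an illustrative port; its implementation lives in the
    // infrastructure layer (database lookup, unique index, etc.).
    type CustomerRepository interface {
        ExistsWithSSN(ctx context.Context, ssn string) (bool, error)
        Save(ctx context.Context, c *Customer) error
    }

    type Customer struct {
        ID   string
        Name string
        SSN  string
    }

    // NewCustomer enforces the aggregate's own invariants (name required, SSN format, ...).
    func NewCustomer(id, name, ssn string) (*Customer, error) {
        if name == "" {
            return nil, errors.New("name is required")
        }
        if len(ssn) != 11 { // illustrative format check, e.g. "000-00-0000"
            return nil, errors.New("SSN has the wrong format")
        }
        return &Customer{ID: id, Name: name, SSN: ssn}, nil
    }

    // CustomerService is the domain service that owns the cross-aggregate check.
    type CustomerService struct {
        repo CustomerRepository
    }

    func NewCustomerService(repo CustomerRepository) *CustomerService {
        return &CustomerService{repo: repo}
    }

    // CreateCustomer looks up the SSN and creates the aggregate; the caller is
    // expected to run this inside a single transaction for consistency.
    func (s *CustomerService) CreateCustomer(ctx context.Context, id, name, ssn string) (*Customer, error) {
        taken, err := s.repo.ExistsWithSSN(ctx, ssn)
        if err != nil {
            return nil, err
        }
        if taken {
            return nil, errors.New("a customer with this SSN already exists")
        }
        customer, err := NewCustomer(id, name, ssn)
        if err != nil {
            return nil, err
        }
        if err := s.repo.Save(ctx, customer); err != nil {
            return nil, err
        }
        return customer, nil
    }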
Update
To also adhere to the nature of eventual consistency:
I guess that, as you are going with event sourcing, you and your business have already accepted eventual consistency. This also means entries with the same SSN could happen. I think you could have some background job which continually checks for duplicate entries; depending on the complexity of your business logic, you might either be able to correct the duplicates automatically, or you might need human intervention to do it. It really depends on how often this could actually happen.
If a hard constraint is that this must NEVER happen, maybe event sourcing is not the right way, at least for this part of your system...
Note: I also assume that command de-duplication is not the issue here but that you really have to deal with potentially different commands using the same SSN.

Which hyperledger fabric chaincode methods are actual transactions

Looking at the marbles example from the fabric samples, specifically at the node.js version of the chaincode in the marbles_chaincode.js file, the function async getAllResults(iterator, isHistory) is clearly a helper function and not an actual transaction (at least this is what I could understand from looking at the code). Which functions are proper transactions and which are just helper methods?
You are correct, getAllResults is just a helper function. This particular example isn't the best sample in the world and doesn't obviously distinguish which methods are transactions that can be called and which are helpers; you need to understand the code to determine which are transactions. For example, because the code uses a generic dispatcher implementation to call the right method from an invoke, you can eliminate any method that doesn't follow the exact signature (stub, args, thisClass): such a method isn't meant to be called externally. That doesn't guarantee the rest are transactions, but it at least provides an initial subset.
For actual transactions, we have to use stub.putState so that we can update the ledger. To query data we use stub.getState; a query is not considered a transaction because it does not update the ledger.
So in Hyperledger Fabric, transactions happen when invoking the chaincode changes the world state of the ledger, which is why we use stub.putState to put new information on the ledger.
So if you find stub.putState in a function, you can consider that function a proper transaction function.
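For illustration, sketched in Go chaincode terms rather than the node.js of the marbles sample (the function names are made up), the distinction is simply whether the function writes state:

    package marbles

    import (
        "github.com/hyperledger/fabric-chaincode-go/shim"
        pb "github.com/hyperledger/fabric-protos-go/peer"
    )

    // A "proper transaction": it writes to the world state with PutState.
    func transferMarble(stub shim.ChaincodeStubInterface, args []string) pb.Response {
        if len(args) != 2 {
            return shim.Error("expected marble name and new owner")
        }
        if err := stub.PutState(args[0], []byte(args[1])); err != nil {
            return shim.Error(err.Error())
        }
        return shim.Success(nil)
    }

    // A query: it only reads with GetState and changes nothing on the ledger.
    func readMarble(stub shim.ChaincodeStubInterface, args []string) pb.Response {
        if len(args) != 1 {
            return shim.Error("expected marble name")
        }
        value, err := stub.GetState(args[0])
        if err != nil {
            return shim.Error(err.Error())
        }
        return shim.Success(value)
    }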

DDD Composing Multiple Bounded Contexts

I would like your advice about bounded context integration.
I have a use case which has put me in a corner:
I have a bounded context for Contract management. I can add parties (various external organizations, for example) to a contract and select each party's investment / contribution (e.g. 10% of the total). So contract management is two-fold: one part is administrative (add a party, manage multiple dates, ...), the other is financial (plan contributions that span multiple years, check contribution consumption, ...).
I have another bounded context for Budget. This context is responsible for expense management at the organisation level. Example: a service A will have 1000 € of expense capacity. We can plan a budget, and after that each organisation party can consume its part by buying stuff. In order to build a budget, the user in charge of the enterprise budget can allocate money directly or integrate a yearly contract financial component. When we integrate a contract part inside the budget, we freeze the data inside the budget, i.e. we copy the monetary data from one database table into another (adding some audit information). We have a single database.
It is this last part I struggle with. Each bounded context is a dedicated application. In the budget application, after a contract part has been integrated into the current budget, I need to display the budget detail lines. Unfortunately, in the budget tables I have only the monetary data and not some basic info about the contract (object, reference, ...).
What I am thinking:
sometimes it is not bad to duplicate data between bounded contexts. I froze the monetary part of a contract; I can also freeze / duplicate the object and reference of the contract. Then the querying will only take place inside the budget context. But what is problematic here is the data duplication: today I need object / reference, and if tomorrow I need more fields ... I will need domain event management to keep the data between contract / budget in sync.
querying the budget and, for each line, querying a contract service that returns the data needed. That keeps each context autonomous, but I need to make lots of requests to enrich the budget detail line objects.
with only one join at the database level we can make this work. What about coupling here? It is the simple solution and what we are doing today (is it a shared kernel?). It seems we can't afford to change the contract structure without rebuilding the budget application. I don't have a programmatic contract between the contexts.
My question is:
How can I build this UI screen that needs data from the budget context, where each detail line needs data from the contract context?
Side Notes:
Perhaps the context identification and perimeter are wrong from the start (it is a legacy design).
What I would like is to keep the contexts separate (loose coupling). If we can specify design contracts between the contexts, maintenance is easier (or is it?).
I fail to see how to integrate these contexts (I need to re-read shared kernel, upstream / downstream, etc.).
This is an additional, distinct bounded context. It has some overlap with the existing bounded contexts, which can easily lead you down the wrong path (merging contexts or putting additional behaviour in a context where it doesn't belong).
Sometimes it's OK to have entities in different bounded contexts which are referring to the same logical entity, but which are just providing a different view of that entity for the purposes of a specific scenario (eg in a specific context).
A good example of this is in an e-commerce scenario. In most e-commerce applications you will have the concept of an Order, but there is no global, definitive notion of what an "order" is. In a finance context - the order is simply an invoice. In a fulfilment context - the order is simply a packing list and an address to send the goods to. In a marketing context - the order represents a little piece of intelligence about what the customer is interested in, which can be used for future targeted marketing.
There is a thread of commonality which runs through all of those entities, but you would likely see at least 3 separate Order classes, each one capturing the concept of an order within a context.
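A rough sketch of that idea (the types are invented for illustration): each context keeps its own Order shape, sharing only the order identity.

    package orders

    // Only the identity is shared across contexts; each context models "order"
    // in its own terms.
    type OrderID string

    // Finance context: the order is essentially an invoice.
    type FinanceOrder struct {
        ID         OrderID
        TotalCents int64
        Invoiced   bool
    }

    // Fulfilment context: the order is a packing list plus a destination.
    type FulfilmentOrder struct {
        ID      OrderID
        Lines   []string
        Address string
    }

    // Marketing context: the order is a signal about what the customer is interested in.
    type MarketingOrder struct {
        ID         OrderID
        CustomerID string
        Categories []string
    }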
And so in your case, you have a bounded context for Contract and a bounded context for Budget. It seems to me that you now have another way of looking at these entities, and specifically the way in which they interact with each other. This is a new view of the entities, a view which can be captured in its own context. This new context will likely have its own Contract and Budget entities, and there will be overlap with the Contract and Budget contexts, but there will also be additional relationships and behaviour in there, which wouldn't make sense in those other contexts.
This is a really difficult idea to explain :) I wrote an answer to a similar question some time ago here: DDD - How to design associations between different bounded contexts
