How can a chaincode consist of multiple smart contracts in Go? - hyperledger-fabric

How can a chaincode consist of multiple smart contracts in Go, given that a Go program has only one main function?

When it comes to Go chaincode, there can be any number of smart contracts. There are three main functions:
Init: called on instantiation or upgrade of the chaincode. It is usually used to initialise data.
Invoke: called on every transaction. In this function you define which methods are called for which arguments, which is what makes it possible to have multiple smart contracts in one Go file.
main: starts the chaincode in its container when the chaincode is instantiated.
Here is a detailed tutorial explaining how to write Go smart contracts: https://hyperledger-fabric.readthedocs.io/en/release-2.0/chaincode4ade.html#vendoring
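The dispatch idea behind Invoke can be sketched in plain Go. This is a minimal illustration, not the actual Fabric shim API: the handler signature, the contracts map, and both handler functions are hypothetical stand-ins showing how one entry point can route to several logical contracts.

```go
package main

import (
	"errors"
	"fmt"
)

// handler is the common signature every invokable function follows.
type handler func(args []string) (string, error)

// contracts maps function names to implementations; grouping several
// related handlers in one map like this is what lets a single
// chaincode expose multiple logical smart contracts.
var contracts = map[string]handler{
	"createAsset": func(args []string) (string, error) {
		return fmt.Sprintf("created %s", args[0]), nil
	},
	"readAsset": func(args []string) (string, error) {
		return fmt.Sprintf("read %s", args[0]), nil
	},
}

// invoke mimics the chaincode Invoke entry point: it routes the
// requested function name to the matching handler.
func invoke(fn string, args []string) (string, error) {
	h, ok := contracts[fn]
	if !ok {
		return "", errors.New("unknown function: " + fn)
	}
	return h(args)
}

func main() {
	out, _ := invoke("createAsset", []string{"asset1"})
	fmt.Println(out) // created asset1
}
```

In real chaincode the same routing is done inside Invoke using the function name and arguments supplied by the stub, but the shape of the dispatch is the same.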

Related

Which hyperledger fabric chaincode methods are actual transactions

Looking at the marbles example from the fabric samples, specifically at the Node.js version of the chaincode in the marbles_chaincode.js file, the function async getAllResults(iterator, isHistory) is clearly a helper function and not an actual transaction (at least that is what I could understand from reading the code). Which functions are proper transactions, and which are just helper methods?
You are correct: getAllResults is just a helper function. This particular example isn't the best sample in the world and doesn't obviously distinguish the transactions that can be called from the helper methods. You need to read the code to determine which are transactions. For example, because the code uses a generic dispatcher in Invoke to call the right method, you can eliminate any method that doesn't follow the exact signature (stub, args, thisClass); such a method isn't meant to be called externally. That heuristic isn't a guarantee, but it at least provides an initial subset.
For actual transactions, we use stub.putState so that we can update the ledger. For querying data we use stub.getState; a query is not considered a transaction because it does not change the ledger.
In Hyperledger Fabric, a transaction happens when invoking the chaincode changes the world state of the ledger, and stub.putState is how new information is written to the ledger.
So if you find stub.putState in a function, you can consider that function a proper transaction function.
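The distinction can be sketched in plain Go. The ledgerStub type below is a hypothetical, in-memory stand-in for the shim's stub, reduced to the two calls discussed; transferAsset and queryAsset are made-up function names used only for illustration.

```go
package main

import "fmt"

// ledgerStub is a hypothetical stand-in for the chaincode stub,
// reduced to the two calls discussed in the answer above.
type ledgerStub struct{ state map[string]string }

func (s *ledgerStub) PutState(key, value string) { s.state[key] = value }
func (s *ledgerStub) GetState(key string) string { return s.state[key] }

// transferAsset is a proper transaction: it calls PutState, so the
// world state changes and the update is recorded on the ledger.
func transferAsset(stub *ledgerStub, id, newOwner string) {
	stub.PutState(id, newOwner)
}

// queryAsset only reads: no PutState means no ledger update, so it
// is a query rather than a transaction.
func queryAsset(stub *ledgerStub, id string) string {
	return stub.GetState(id)
}

func main() {
	stub := &ledgerStub{state: map[string]string{}}
	transferAsset(stub, "asset1", "alice")
	fmt.Println(queryAsset(stub, "asset1")) // alice
}
```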

How to have multiple Smart Contracts into a single Chaincode?

I need to split my business logic into two Smart Contracts, A and B, where A adds some data to the ledger and B reads A's data directly from the ledger for some calculation.
I need this split because:
A and B will have different endorsement policies
B's calculation transaction security should rely on the "read-set" mechanism for validation, which I know is enforced for data that B reads directly from the ledger, but I'm not sure about data read via a cross-chaincode call (and I can't find info about it)
So...
The guide says
Multiple smart contracts can be defined within the same chaincode. When a chaincode is deployed, all smart contracts within it are made available to applications.
But I really can't find a reference on how to bundle multiple Smart Contracts into a single Chaincode (necessary thing in order to let B read A's data).
The best option for my project would be having A and B in different languages (respectively, JavaScript and Java), but if they need to be written in the same language to fit in the same chaincode, I could rewrite the first one.
...
Can somebody help me? I actually just need a reference explaining how to bundle multiple Smart Contracts into a Chaincode (or an example; I couldn't find that either).

How to use sagas in a CQRS architecture using DDD?

I am designing a CQRS application using DDD, and am wondering how to implement the following scenario:
a Participant aggregate can be referenced by multiple ParticipantEntry aggregates
an AddParticipantInfoCommand is issued to the Command side, which contains all info of the Participant and one ParticipantEntry (similar to an Order and one OrderLineItem)
Where should the logic be implemented that checks whether the Participant already exists and if it doesn't exist, creates the Participant?
Should it be done in a Saga that first checks the domain model for the existence of the Participant, and if it doesn't find it, issues an AddParticipantCommand and afterwards an AddParticipantEntry command containing the Participant ID?
Should this be done entirely by the aggregate roots in the domain model itself?
You don't necessarily need sagas in order to deal with this situation. Take a look at my blog post on why not to create aggregate roots, and what to do instead:
http://udidahan.com/2009/06/29/dont-create-aggregate-roots/
Where should the logic be implemented that checks whether the Participant already exists and if it doesn't exist, creates the Participant?
In most instances, this behavior should be under the control of the Participant aggregate itself.
Processes are useful when you need to coordinate changes across multiple transaction boundaries. Two changes to the same aggregate, however, can be managed within the same transaction.
You can implement this as two distinct transactions operating on the same aggregate, with coordination; but the extra complexity of a process doesn't offer any gains. It's much simpler to send the single command to the aggregate, and allow it to decide what actions to take to maintain the correct invariant.
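The point about letting the aggregate decide can be sketched as follows. This is a hypothetical, storage-free illustration: the Participant struct, the map-backed store, and HandleAddParticipantInfo are invented names, not part of any framework.

```go
package main

import "fmt"

// Participant is a hypothetical aggregate; names follow the question.
type Participant struct {
	ID      string
	Entries []string
}

// HandleAddParticipantInfo processes the single command: the handler
// for the aggregate decides whether the Participant must first be
// created, all inside one transaction boundary, so no saga is needed
// to coordinate the two changes.
func HandleAddParticipantInfo(store map[string]*Participant, id, entry string) *Participant {
	p, ok := store[id]
	if !ok {
		// Participant does not exist yet: create it within the
		// same transaction, then add the entry.
		p = &Participant{ID: id}
		store[id] = p
	}
	p.Entries = append(p.Entries, entry)
	return p
}

func main() {
	store := map[string]*Participant{}
	HandleAddParticipantInfo(store, "p-1", "entry-1")
	HandleAddParticipantInfo(store, "p-1", "entry-2")
	fmt.Println(len(store["p-1"].Entries)) // 2
}
```

The "create if missing" branch lives with the aggregate's own logic rather than in an external process manager, which is the simplification the answer recommends.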
Sagas, in particular, are a pattern for reverting multiple transactions. Yan Cui's How the Saga Pattern manages failures with AWS Lambda and Step Functions includes a good illustration of a travel booking saga.
(Note: there is considerable confusion about the definition of "saga"; the NServiceBus community tends to understand the term a slightly different way than originally described by Garcia-Molina and Salem. kellabyte's Clarifying the Saga Pattern surveys the confusion.)

Event Sourcing organization of streams and Aggregates

What would be the best way to organize my event streams in ES? By "event stream" I mean all events for an aggregate.
Given I have a project with some data and a list of tasks.
Right now I use a GUID as AggregateID for my stream ID.
So far I can
-> recreate the state for a given project with that ID
-> I can assemble a list of projects with a custom projection
The question would be how to handle todos?
Should todos also be handled under the project's stream ID, or should each todo have its own stream ID?
If a todo has its own stream, how would one link it to the owning project? How is the project aware of all the todo streams for a given project?
Meaning all changes to the todo list should be also recognized as Commands and Events (more Events) in the project.
And if I also want to allow free todos without a relation to a project, does that require its own type and stream for freeTodo on top? Would the list of all todos, whether project-related or not, then be a projection over all todo and freeTodo streams?
So I guess the main question is how do I handle nested aggregates and how would one define the event store streams and the linking for that?
Any tips, tricks, best practises or resources will be highly appreciated.
// EDIT Update
First of all, thank you @VoiceOfUnreason for taking the time to answer this question in great detail. I added the tag DDD because I got that strange feeling this correlates with the bounded-context question, which is rarely a black-or-white decision. Obviously the domain has more depth and details; I simplified the example. Below I share some more details which got me questioning.
In my first thought I defined an aggregate for the todo as well, with a property for the project ID. I made this project property an option type (nullable) to cover the difference between project-related and free todos. But the following use cases/business rules got me rethinking.
The system should also contain free todos, which allow the user to schedule personal tasks not related to a project (do HR training, etc.). All todos should appear either in their projects or in a complete todo list (both project-related and free).
A project can only be finished/closed if all todo's are completed.
This would somehow mix information from the project aggregate with information from the todo aggregate, so there are no clear bounds here. My thoughts would be: a) I could leverage the todo read model in the project aggregate for validation. b) Define some sort of list structure for todos within the project aggregate's scope (if so, how?). This would handle a todo within the context of a project and define clear bounds.
c) Have some sort of service which provides todo info for project validation, which somehow comes back to point a).
And it all feels really coupled =-/
It would be great if you or someone finds the time to share some more details and opinions here. Thanks a million.
Reminder: the tactical patterns in ddd are primarily an enumeration of OO best practices. If it's a bad idea in OO, it's probably a bad idea in DDD.
the main question is how do I handle nested aggregates
You redesign your model.
Nested aggregates are an indication that you've completely lost the plot; aggregate boundaries should never overlap. Overlapping boundaries are analogous to an encapsulation violation.
If a todo has its own stream, how would one link it to the owning project?
The most likely answer is that the Todo would have a projectId property, the value of which usually points to a project elsewhere in the system.
How is the project aware of all the todo streams for a given project?
It isn't. You can build read models that compose the history of a project and the history of the todos to produce a single read-only structure, but the project aggregate -- which is responsible for ensuring the integrity of the state within the boundary -- doesn't get to look inside the todo objects.
Meaning all changes to the todo list should be also recognized as Commands and Events (more Events) in the project.
No, if they are separate aggregates, then the events are completely separate.
Under some circumstances, you might use the values written in an event produced by the todo as arguments in a command dispatched to the project, or vice versa, but you need to think of them as separate things having a conversation that may, or may not, ever come to agreement.
Possibilities: it might be that free-standing todo items are really a different thing from the todo items associated with a project. Check with your domain experts -- they may have separate terms in the ubiquitous language, or in discussing the details you may discover that they should have different terms in the UL.
Alternatively, todo's can be separate aggregates, and the business adapts to accept the fact that sometimes the state of project and the state of the todo don't agree. Instead of trying to prevent the model from entering a state where the aggregates disagree, you detect the discrepancy and mitigate the problem as necessary.
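The "projectId property plus a composed read model" arrangement described above can be sketched like this. All names here (Todo, ProjectView, buildView) are hypothetical illustrations, not part of any event-store library; the slice of todos stands in for the projection's input streams.

```go
package main

import "fmt"

// Todo is its own aggregate; ProjectID is an optional reference to a
// project elsewhere in the system (empty for a free-standing todo).
type Todo struct {
	ID        string
	ProjectID string // "" means a free todo
	Done      bool
}

// ProjectView is a read model composed from the project's and the
// todos' separate streams; the project aggregate itself never looks
// inside the todos.
type ProjectView struct {
	ProjectID string
	OpenTodos int
}

// buildView composes the read-only structure from the todo data.
func buildView(projectID string, todos []Todo) ProjectView {
	v := ProjectView{ProjectID: projectID}
	for _, t := range todos {
		if t.ProjectID == projectID && !t.Done {
			v.OpenTodos++
		}
	}
	return v
}

func main() {
	todos := []Todo{
		{ID: "t1", ProjectID: "proj-1"},
		{ID: "t2", ProjectID: "proj-1", Done: true},
		{ID: "t3"}, // free todo, not tied to any project
	}
	fmt.Println(buildView("proj-1", todos).OpenTodos) // 1
}
```

A rule like "a project can only be closed when all its todos are done" would then be checked against this kind of view, accepting that the view may briefly lag the aggregates.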

Testing model state: look into db or make use of appropriate methods?

So, I want to test how my model Queue performs adding an Item. I need to complete the following steps:
Clear the entire queue
Add an item into the queue
Look for the item in the queue
The queue uses MongoDB internally.
It seems I have the following options:
(a) Clear the queue's collection by executing the appropriate MongoDB command (db.queue.remove()), call queue.add(item), then check the collection state with db.queue.find();
(b) Clear the queue with queue.clear(), then call queue.add(item), then check queue.count().
What is the difference between these options, and what are the reasons to choose one over the other? (a) looks more "functional" but introduces some brittleness and duplication between code and tests (is this an issue, btw?), while (b) makes me feel I test everything but nothing specific at the same time.
Use (b). Your unit tests should not make assumptions about the internal implementation details of the class under test. "Test behavior, not implementation". Using (a) will result in brittle tests that may fail if the internal implementation changes.
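Option (b) can be sketched as follows. The queue type is a hypothetical in-memory stand-in for the MongoDB-backed Queue, with made-up methods Clear/Add/Count; the point is that the test touches only the public interface, never the database.

```go
package main

import "fmt"

// queue is a minimal stand-in for the MongoDB-backed Queue; the test
// only talks to this public interface, never to the storage behind it.
type queue struct{ items []string }

func (q *queue) Clear()          { q.items = nil }
func (q *queue) Add(item string) { q.items = append(q.items, item) }
func (q *queue) Count() int      { return len(q.items) }

// testAddItem follows option (b): clear, add, then observe through
// the public API only, so the test survives a change of storage.
func testAddItem(q *queue) bool {
	q.Clear()
	q.Add("item-1")
	return q.Count() == 1
}

func main() {
	fmt.Println(testAddItem(&queue{items: []string{"stale"}})) // true
}
```

Swapping MongoDB for any other store would leave this test unchanged, which is exactly the brittleness that option (a) introduces.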
In the end I settled on the following.
Approaches (a) and (b) differ in how tightly they are coupled to the SUT. (b) stays behind the public interface, making it a round-trip test; (a) reaches into the database directly, making it a layer-crossing test.
(b) follows the so-called Use the Front Door First principle; (a) relies on Back Door Manipulation.
The front-door approach focuses on the public contract of the class; the back-door approach on its implementation.
We need both kinds of tests: the first to develop the interface of the class, the second to drive its implementation.
More details on the importance of a strong separation between these kinds of tests can be found in this nice article.
