Complex datatypes in Hyperledger Fabric

I have read through the documentation on Hyperledger Fabric, but I cannot find any information on storing complex datatypes, or whether it is even possible. For instance, let's say we have two objects: an author and a book. Is it possible to create a smart contract that looks like this? (Example in TypeScript:)
export class Book {
    public ISBN: string;
    public Title: string;
}

export class Author {
    public firstName: string;
    public lastName: string;
    public publishedBooks: Array<Book>;
}
And if so, what would querying look like in such a case? On the other hand, if it is not possible, how would one model such data relations in Hyperledger Fabric?

Yes, you can do this.
Implement the types in your smart contract and use the Fabric APIs to query the ledger.
For example, in Go you can use the shim functions PutState and GetState to store and retrieve an entity given its ID.
If you use a state database like CouchDB, you can even run more complex, rich queries against your data.
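For instance, here is a minimal sketch of such a rich query from Node/TypeScript chaincode, assuming CouchDB is configured as the state database and a fabric-shim version whose iterators are async-iterable; the selector and field names are illustrative:

// Inside a fabric-contract-api transaction function: run a CouchDB
// selector query and collect the matching records.
const query = JSON.stringify({ selector: { lastName: 'Tolkien' } });
const iterator = await ctx.stub.getQueryResult(query);
const results = [];
for await (const res of iterator) {
    results.push(JSON.parse(res.value.toString()));
}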
[EDIT1] Answer improvement with example:
This is how I modelled it in my Go chaincode:
type V struct {
    Attribute string `json:"Attribute"`
    Function  string `json:"Function"`
    Value     string `json:"Value"`
}

type AV struct {
    Vs  []V               `json:"Vs"`
    CFs map[string]string `json:"CFs"`
}
As you can see, the AV struct holds an array of V structs. This makes the dataset nested, and it all lives inside the chaincode.
[EDIT 2] Answer improvement with query and put:
Adding a new entity is very easy. My examples are always in Go.
Send a JSON payload to the chaincode (via the SDK) and then unmarshal it:
var newEntity Entity
json.Unmarshal([]byte(args[0]), &newEntity)
Now use the PutState function to store the new entity under its ID (in my case contained in the JSON payload, in the Id field):
entityAsBytes, _ := json.Marshal(newEntity)
err := APIstub.PutState(newEntity.Id, entityAsBytes)
And you are done. If you later want to query the ledger for that ID, you can do:
entityAsByte, err := APIstub.GetState(id)
return shim.Success(entityAsByte)
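Since the question asked for TypeScript, here is a minimal sketch of the same pattern using fabric-contract-api; the contract and method names are illustrative, and the nested objects are simply serialized to JSON under a single key:

import { Context, Contract } from 'fabric-contract-api';

export class Book {
    public ISBN: string;
    public Title: string;
}

export class Author {
    public firstName: string;
    public lastName: string;
    public publishedBooks: Array<Book>;
}

// Hypothetical contract: the whole Author graph, including its nested
// Books, is stored as one JSON value under the author's key.
export class AuthorContract extends Contract {
    public async createAuthor(ctx: Context, id: string, authorJson: string): Promise<void> {
        const author: Author = JSON.parse(authorJson);
        await ctx.stub.putState(id, Buffer.from(JSON.stringify(author)));
    }

    public async readAuthor(ctx: Context, id: string): Promise<Author> {
        const data = await ctx.stub.getState(id);
        if (!data || data.length === 0) {
            throw new Error(`Author ${id} does not exist`);
        }
        return JSON.parse(data.toString()) as Author;
    }
}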

Related

What type of data should be passed to domain events?

I've been struggling with this for a few days now, and I'm still not clear on the correct approach. I've seen many examples online, but each one does it differently. The options I see are:
1. Pass only primitive values.
2. Pass the complete model.
3. Pass new instances of value objects that refer to changes in the domain/model.
4. Create a specific DTO/object for each event with the data.
This is what I am currently doing, but it doesn't convince me. The example is in PHP, but I think it's perfectly understandable.
MyModel.php
class MyModel {
    //...
    private MediaId $id;
    private Thumbnails $thumbnails;
    private File $file;
    //...

    public function delete(): void
    {
        $this->record(
            new MediaDeleted(
                $this->id->asString(),
                [
                    'name' => $this->file->name(),
                    'thumbnails' => $this->thumbnails->toArray(),
                ]
            )
        );
    }
}
MediaDeleted.php
final class MediaDeleted extends AbstractDomainEvent
{
    public function name(): string
    {
        return $this->payload()['name'];
    }

    /**
     * @return array<ThumbnailArray>
     */
    public function thumbnails(): array
    {
        return $this->payload()['thumbnails'];
    }
}
As you can see, I am passing the ID as a string, the filename as a string, and an array of the Thumbnail value object's properties to the MediaDeleted event.
How do you see it? What type of data is preferable to pass to domain events?
Updated
@pgorecki's answer has convinced me, so I will add an example to confirm whether this approach is correct, in order not to change too much.
It would now look like this.
public function delete(): void
{
    $this->record(
        new MediaDeleted(
            $this->id,
            new MediaDeletedEventPayload($this->file->copy(), $this->thumbnail->copy())
        )
    );
}
I'll explain a bit:
The ID of the aggregate is still outside the DTO, because MediaDeleted extends an abstract class that requires the ID parameter. The only thing I'm changing is the $payload array, which becomes the MediaDeletedEventPayload DTO. To this DTO I pass a copy of the value objects related to the change in the domain. This way I'm passing objects reliably and avoiding the strange behaviour that could occur if I passed the same instance around.
What do you think about it?
A domain event is simply a data-holding structure or class (DTO), with all the information related to what just happened in the domain, and no logic. So I'd say option 4, 'Create a specific DTO/object for each event with the data', is the best choice. Why not start with a less-is-more approach? Think about the consumers of the event and what data they might need.
Also, being able to serialize and deserialize the event objects is good practice, since you may want to send them via a message broker (although this relates more to integration events than domain events).
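For illustration, here is a minimal sketch of that advice in TypeScript (rather than the question's PHP); the payload class and its field names are assumptions, not taken from the question:

// A dedicated, serializable payload DTO per event, carrying only
// what the event's consumers actually need.
class MediaDeletedPayload {
    constructor(
        public readonly name: string,
        public readonly thumbnails: ReadonlyArray<{ width: number; height: number; path: string }>,
    ) {}
}

class MediaDeleted {
    constructor(
        public readonly aggregateId: string,
        public readonly payload: MediaDeletedPayload,
        public readonly occurredOn: Date = new Date(),
    ) {}

    // Serialization support, so the event can travel over a message broker.
    toJSON(): object {
        return {
            aggregateId: this.aggregateId,
            payload: this.payload,
            occurredOn: this.occurredOn.toISOString(),
        };
    }
}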

DDD: should business logic that needs infra-layer access live in the application service layer, a domain service, or domain objects?

Consider an attribute which needs to be validated; let's say an entity has a country field as a value object (VO).
This country field needs to be validated as an alpha-3 code, per business logic required by the domain expert.
NOTE:
We need to persist this country data, as it can take other values too, and in the future the persisted country data may be added to, updated, or deleted.
This is just one example using country codes, which may rarely change; there can be other fields which need to be validated against persistence, such as validating some quantity against data in persistence, where it won't be efficient to hold it all in memory or prefetch it.
Another valid example is user creation with a unique and valid email check, which needs a uniqueness check against persistence.
Case 1. Doing validation in the application layer:
If we call the repository's countryRepo.getCountryByCountryAlpha3Code() in the application layer, then if the value is correct and a valid part of the system we can call createValidEntity(), and if not we can throw the error directly in the application-layer use case.
Issues:
This validation will be repeated across multiple use cases if the same check is needed elsewhere, since it is treated as an application-layer concern.
The business logic is now part of the application service layer.
Case 2. Validating the country code in its value object class or a domain service in the domain layer:
Doing this keeps business logic inside the domain layer and also doesn't violate the DRY principle.
import { ValueObject } from '#shared/core/domain/ValueObject';
import { Result } from '#shared/core/Result';
import { Utils } from '#shared/utils/Utils';

interface CountryAlpha3CodeProps {
    value: string;
}

export class CountryAlpha3Code extends ValueObject<CountryAlpha3CodeProps> {
    // Case-insensitive string. Only printable ASCII allowed. (Non-printable
    // characters like carriage returns, tabs, line breaks, etc. are not allowed.)
    get value(): string {
        return this.props.value;
    }

    private constructor(props: CountryAlpha3CodeProps) {
        super(props);
    }

    public static create(value: string): Result<CountryAlpha3Code> {
        return Result.ok<CountryAlpha3Code>(new CountryAlpha3Code({ value: value }));
    }
}
Is it good to call the repository from inside the domain layer (a domain service, or a VO, which is not recommended), given that the dependency flow would change?
If we trigger an event instead, how do we make it synchronous?
What are some better ways to solve this?
export default class UseCaseClass implements IUseCaseInterface {
    constructor(private readonly _repo: IRepo, private readonly countryCodeRepo: ICountryCodeRepo) {}

    async execute(request: dto): Promise<dtoResponse> {
        const someOtherKeyorError = KeyEntity.create(request.someOtherDtoKey);
        const countryOrError = CountryAlpha3Code.create(request.country);
        const dtoResult = Result.combine([
            someOtherKeyorError, countryOrError
        ]);
        if (dtoResult.isFailure) {
            return left(Result.fail<void>(dtoResult.error)) as dtoResponse;
        }
        try {
            // -> Here we are just calling the repo
            const isValidCountryCode = await this.countryCodeRepo.getCountryCodeByAlpha2Code(countryOrError.getValue()); // return boolean value
            if (!isValidCountryCode) {
                return left(new ValidCountryCodeError.CountryCodeNotValid(countryOrError.getValue())) as dtoResponse;
            }
            const dataOrError = MyEntity.create({
                ...request,
                key: someOtherKeyorError.city.getValue(),
                country: countryOrError.getValue(),
            });
            const commandResult = await this._repo.save(dataOrError.getValue());
            return right(Result.ok<any>(commandResult));
        } catch (err: any) {
            return left(new AppError.UnexpectedError(err)) as dtoResponse;
        }
    }
}
In the application layer above, this part of the code:
const isValidCountryCode = await this.countryCodeRepo.getCountryCodeByAlpha2Code(countryOrError.getValue()); // return boolean value
if (!isValidCountryCode) {
    return left(new ValidCountryCodeError.CountryCodeNotValid(countryOrError.getValue())) as dtoResponse;
}
Is it right to call countryCodeRepo and fetch the result here, or should this part be moved to a domain service which then checks the validity of the CountryAlpha3Code VO?
UPDATE:
After more exploring I found an article by Vladimir Khorikov which seems close to what I was looking for.
In his view some domain logic leakage is fine, but I feel the value object could still be created in an invalid state if some other use case constructs it without knowing that a persistence check is necessary for that particular VO/entity.
I am still confused about the right approach.
In my opinion, the conversion from string to value object does not belong in the business logic at all. The business logic has a public contract that is invoked from the outside (the API layer or presentation layer, perhaps). The contract should already expect value objects, not raw strings. Therefore, whoever is calling the business logic has to figure out how to obtain those value objects.
Regarding the implementation of the country code value object, I would question whether it is really necessary to load the country codes from the database. The list of country codes very rarely changes. The way I've solved this in the past is simply hardcoding the list of country codes inside the value object itself.
Sample code in pseudo-C#, but you should get the point:
public class CountryCode : ValueObject
{
    // Static definitions to be used in code like:
    // var myCountry = CountryCode.France;
    public static readonly CountryCode France = new CountryCode("FRA");
    public static readonly CountryCode China = new CountryCode("CHN");
    [...]

    public static readonly CountryCode[] AllCountries = new[] {
        France, China, ...
    };

    public string ThreeLetterCode { get; }

    private CountryCode(string threeLetterCountryCode)
    {
        ThreeLetterCode = threeLetterCountryCode;
    }

    public static CountryCode Parse(string code)
    {
        [...] // handle nulls, empties, etc.
        var exists = AllCountries.FirstOrDefault(c => c.ThreeLetterCode == code);
        if (exists == null)
            [...] // throw error
        return exists;
    }
}
Following this approach, you can make a very useful and developer-friendly CountryCode value object. In my actual solution, I had both the 2- and 3-letter codes, plus display names in English for logging purposes only (for presentation purposes, the presentation layer can look up the translation based on the code).
If loading the country codes from the DB is valuable for your scenario, it's still very likely that the list changes rarely, so you could, for example, load a static list into the value object itself at application start-up and then refresh it periodically if the application runs for a very long time.
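A minimal sketch of that start-up loading idea, in the question's TypeScript style; the repository method getAllAlpha3Codes() and the static cache are assumptions for illustration:

// Load the allowed codes once at startup and cache them inside the value
// object, instead of hitting the repository on every validation.
export class CountryAlpha3Code {
    private static allowedCodes: Set<string> = new Set();

    // Call once at application start-up (and periodically, if the app runs long).
    static async loadAllowedCodes(repo: ICountryCodeRepo): Promise<void> {
        const codes: string[] = await repo.getAllAlpha3Codes(); // hypothetical repo method
        CountryAlpha3Code.allowedCodes = new Set(codes.map(c => c.toUpperCase()));
    }

    private constructor(public readonly value: string) {}

    static create(value: string): CountryAlpha3Code {
        const normalized = value.toUpperCase();
        if (!CountryAlpha3Code.allowedCodes.has(normalized)) {
            throw new Error(`${value} is not a valid alpha-3 country code`);
        }
        return new CountryAlpha3Code(normalized);
    }
}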

DDD Entity and EntityType reference

I'm learning DDD and here is a problem I faced. I have two Aggregates (simplified):
class NoteType : AggregateRoot {
    int noteTypeId
    string name
    string fields[]
    ... code omitted ...
}

class Note : AggregateRoot {
    int noteId
    int noteTypeId
    map<str, str> fieldValues

    setFieldValue(fieldName, fieldValue) {
        // I want to check that fieldName is present in NoteType.fields
        // and later do fieldValues[fieldName] = fieldValue
    }
    ... code omitted ...
}
I've heard that aggregates should reference each other by ID only. In that case I can't access NoteType.fields. I found several ways around this, but I'm not sure which one is better:
1. Pass a NoteType instance into the Note model via the constructor (do not reference by ID).
2. Use a repository in setFieldValue to load the NoteType.
3. Use a service which does the check (this may cause all the Note logic to end up in this service, since Note is highly dependent on NoteType).
What do you suggest?
What do you suggest?
Pass the information that the aggregate needs to the aggregate when it needs it.
setFieldValue(fieldName, fieldValue, noteType) {
    // Now you have the data that you need to verify the noteType.fields
}
Sometimes, if you can't tell from outside the aggregate what information you need, you instead pass in the capability to look up that information:
setFieldValue(fieldName, fieldValue, notes) {
    // Use the provided capability to get what you need
    noteType = notes.get(this.noteTypeId)
    // then do the useful work
    this.setFieldValue(fieldName, fieldValue, noteType)
}
Of course, if the only thing you need is the fields, then you might prefer to work only with that property:
setFieldValue(fieldName, fieldValue, fields)
Design is what we do, when we want to get more of what we want than we'd get by just doing it. -- Ruth Malan
In Domain Driven Design, a common "what we want" is to have the "business logic", meaning our implementation of the policies of information change that are important to our business, separated from the "plumbing" that describes how to read and store that information.

Why does JHipster generate Interfaces for Angular model objects?

Why does JHipster generate interfaces for each Angular model object?
e.g.
export interface IStudent {
    id?: number;
    studentIdentifier?: string;
    firstName?: string;
    lastName?: string;
}

export class Student implements IStudent {
    constructor(
        public id?: number,
        public studentIdentifier?: string,
        public firstName?: string,
        public lastName?: string,
    ) {}
}
I cannot find the original discussion, but in my understanding this is because of the way interfaces work in TypeScript, which is a little different from Java. They not only describe how a class should look by defining its methods, but also which fields should be present. So you can define how a JSON object from somewhere should look. Like a POJO. Or a POTO (plain old TypeScript object) :)
For example, you could do this:
let student: IStudent = { id: 123, studentIdentifier: '...',...}
and TS would check whether the provided object satisfies the defined structure of student. When you get an object back from the API, you can map the JSON directly this way, with no class in between. On the other hand, it's handier to work with classes rather than interfaces when building IStudent objects directly. Since it also satisfies IStudent, you can just write
let student: IStudent = new Student(123, '...', ..)
which is shorter.
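To make the API-mapping point concrete, here is a small sketch with Angular's HttpClient, where the response is typed by the interface alone and never passes through a class (the service name and URL are illustrative):

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({ providedIn: 'root' })
export class StudentService {
    constructor(private http: HttpClient) {}

    getStudent(id: number) {
        // The JSON body is used directly as an IStudent; no `new Student(...)` needed.
        return this.http.get<IStudent>(`/api/students/${id}`);
    }
}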
You could also rely on my first snippet alone (this is how Ionic does it, by the way: using interfaces as POJOs/POTOs). Using only classes in TS leads to a bad developer experience, IMHO.
Hope that helps a little.

Avoiding storage concerns in entities, with a complex database

The project I'm working on deals with quite complex business rules, so I'm trying to apply DDD. Unfortunately, I have to work with a legacy database I cannot get rid of, and I'm having trouble keeping a clean Domain Design.
Let's say some Entity has some ValueType as its primary key, which is required. In DDD this could be designed like the following:
public class Entity
{
    public Entity(ValueType key)
    {
        Key = key;
    }

    public ValueType Key { get; }
}
Now, let's say this key is actually stored as a string representation, which can be parsed to construct the ValueType. I could do something like this to make it work with Entity Framework:
public class Entity
{
    private Entity()
    {
        // Private empty ctor for EF
    }

    public Entity(ValueType key)
    {
        StoredKey = key.ToString();
    }

    public ValueType Key => ValueType.Parse(StoredKey);

    // DB representation of the key, setter for EF
    private string StoredKey { get; set; }
}
This way, I feel I'm polluting my domain design with storage concerns. As far as the domain cares, the Entity could be persisted just in memory, so this internal string representation feels weird.
This is a very simple scenario chosen to show an example, but things can actually get much worse. I would like to know if there is any way to achieve persistence ignorance in the model with this simple example, so I can start thinking later about how to design more complex scenarios.
The domain model doesn't need to follow the Entity Framework structure. What you can do is create two kinds of models: pure domain models on one side, and Entity Framework models on the other. When passing a domain model to the repository for persistence, transform it into an Entity Framework model; when fetching, do the inverse transformation.
You can achieve persistence ignorance in this instance. Your instincts are right: get rid of all persistence concerns in your domain model and move them entirely into your DAL, where they belong.
DB.sql:
create table entity (
    id nvarchar(50) not null primary key,
    fields nvarchar(max) /* Look mum, NoSQL inside SQL! (please don't do this) */
)
Domain.dll:
class Entity {
    /* optional - you are going to need some way of 'restoring' a persisted domain
       entity - how you do this is up to your own conventions */
    public Entity(ValueType key, ValueObjects.EntityAttributes attributes) { Key = key; Attributes = attributes; }

    public ValueType Key { get; }
    public ValueObjects.EntityAttributes Attributes { get; }

    /* domain functions below */
}

interface IEntityRepository {
    void Update(Domain.Entity entity);
    Domain.Entity Fetch(ValueType key);
}
Now ALL persistence work can go in your DAL, including the translation. I haven't done EF in a while, so treat the below as pseudo-code only.
DAL (EF):
/* this class lives in your DAL, and can be private; no other project needs to know about this class */
class Entity {
    public string Id { get; set; }
    public string Fields { get; set; }
}

class EntityRepository : BaseRepository, Domain.IEntityRepository {
    public EntityRepository(DBContext context) {
        base.Context = context;
    }

    public Domain.Entity Fetch(ValueType key) {
        string id = key.ToString();
        var efEntity = base.Context.Entities.SingleOrDefault(e => e.Id == id);
        return MapToDomain(efEntity);
    }

    /* Note: handle mapping as you want; this is for example only */
    private Domain.Entity MapToDomain(EF.Entity efEntity) {
        if (efEntity == null) return null;
        return new Domain.Entity(
            ValueType.Parse(efEntity.Id),
            SomeSerializer.Deserialize<ValueObjects.EntityAttributes>(efEntity.Fields) /* every time you do this, a puppy hurts its paw */
        );
    }

    public void Update(Domain.Entity domainEntity) {
        var efEntity = MapToEf(domainEntity);
        base.Context.Entities.Attach(efEntity);
        base.Context.Entry(efEntity).State = EntityState.Modified;
        base.Context.SaveChanges();
    }

    private EF.Entity MapToEf(Domain.Entity domainEntity) {
        return new EF.Entity {
            Id = domainEntity.Key.ToString(),
            Fields = SomeSerializer.Serialize(domainEntity.Attributes) /* stahp! */
        };
    }
}
The takeaway here is that you are going to need to do mapping of some sort. This is all but unavoidable unless your domain is really simple and your ORM is super fancy, and even then I would recommend keeping your ORM models separate from your domain models, because they solve two different problems (ORMs provide a code version of your database model; DDD provides a code version of your business model). If you find yourself compromising your domain model (i.e. making property setters public) to cater for your DAL, step back and re-evaluate. Obviously compromise where appropriate, but realise this means you are introducing (implied) dependencies across your application layers.
Your next question, in relation to performance ('but mapping is so slow'), was answered by Constantin Galbenu: have separate 'read' models and repositories for lists and searches. Do you really need to pull back thousands of business models just to populate a search-result list (and then face the temptation to add properties of no concern to the business model because 'the search page needs this one bit of data for the finance people')? You should only be pulling out your domain model when you are doing some sort of business action; otherwise some nice anemic read-only views are your friend.
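As a minimal sketch of that read-model idea (TypeScript here, for consistency with the other examples; all names are illustrative):

// A thin, read-only DTO and query service for lists and search pages
// that bypasses the domain model entirely.
interface EntitySearchRow {
    readonly id: string;
    readonly displayName: string; // just what the search page needs
}

interface IEntitySearchQueries {
    // Returns plain rows straight from a projection or view: no aggregates,
    // no business rules, no temptation to bloat the domain model.
    search(term: string, limit: number): Promise<EntitySearchRow[]>;
}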
As many suggested in the comments, CQRS is a good choice for complex business rules. It has the great advantage that you can have different models for each side (write/command and read/query). In this way you separate the concerns. This is also very good because the business logic on the write side differs from that on the read side; but enough about the advantages of CQRS.
...Unfortunately, I have to work with a legacy database I cannot get rid of...
Your new Write model, the Aggregate, will be responsible for handling commands. This means that the legacy model will be relieved of this responsibility; it will be used only for queries. And to keep it up-to-date you can create a LegacyReadModelUpdater that is subscribed to all Domain events generated by the new Aggregate and it will project them to the old model in an eventually consistent manner.
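A minimal sketch of that updater, assuming a simple in-process event bus; the event and repository types are illustrative, not from the answer:

// The new aggregate emits domain events; this handler projects them
// onto the legacy tables so the old model stays eventually consistent.
interface DomainEvent {
    readonly type: string;
    readonly aggregateId: string;
}

class EntityUpdated implements DomainEvent {
    readonly type = 'EntityUpdated';
    constructor(readonly aggregateId: string, readonly fields: string) {}
}

class LegacyReadModelUpdater {
    constructor(
        private readonly legacyDb: { upsertRow(id: string, fields: string): Promise<void> },
    ) {}

    // Subscribed to every event the new aggregate emits.
    async handle(event: DomainEvent): Promise<void> {
        if (event.type === 'EntityUpdated') {
            const e = event as EntityUpdated;
            await this.legacyDb.upsertRow(e.aggregateId, e.fields);
        }
    }
}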
