On Hyperledger Fabric Context - hyperledger-fabric

I would appreciate some help with this small issue.
According to the commercial paper smart contract in fabric-samples, one can define a custom Context, which allows handling logic across different transactions.
Can we define a variable in a Context, initialized to 0, and increment it at each transaction? I do not seem able to increment it, i.e., the counter always resets at each transaction:
class CustomContext extends Context {
    constructor() {
        super();
        this.comercialPaperList = new PaperList(this);
        this.numberLogs = 0;
    }
    generateLogId() {
        return this.numberLogs++;
    }
    getLatestLogId() {
        return this.numberLogs;
    }
}
class MyContract extends Contract {
    constructor() {
        // Unique namespace when multiple contracts per chaincode file
        super('...');
    }
    /**
     * Define a custom context for a citius log
     */
    createContext() {
        return new CustomContext();
    }
}
I have a test transaction which increments the counter:
async incrementC(ctx) {
    let before = await ctx.getLatestLogId();
    console.log(before);
    let next = await ctx.generateLogId();
    console.log(next);
    console.log("============== after inc");
    let after = await ctx.getLatestLogId();
    console.log(after);
}
The first time I execute the incrementC transaction, I obtain 0, 1, 1, as expected.
On the following invocations, I obtain exactly the same values, as if the context did not store the updates.
Any insights?

A custom context has the same scope as a non-custom context: it is the context for the currently executing transaction only, and it cannot span multiple transaction requests. If the documentation implies that it can, then there is something wrong with the documentation and an issue should be raised at https://jira.hyperledger.org.
So unfortunately what you are trying to do will not work.
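If you need a value that survives across transactions, the usual pattern is to keep it in the world state via the chaincode stub instead of on the context object. A minimal TypeScript sketch using fabric-contract-api (the contract class and the 'logCounter' key are only illustrative, not from the sample):

import { Context, Contract } from 'fabric-contract-api';

export class CounterContract extends Contract {
    // Read the committed counter from the world state, increment it, and write it back.
    // The new value only becomes visible to other transactions after this one commits.
    async incrementCounter(ctx: Context): Promise<number> {
        const raw = await ctx.stub.getState('logCounter');
        const current = raw && raw.length > 0 ? parseInt(Buffer.from(raw).toString('utf8'), 10) : 0;
        const next = current + 1;
        await ctx.stub.putState('logCounter', Buffer.from(next.toString()));
        return next;
    }
}

Keep in mind that two concurrent transactions updating the same key will collide on the read/write set check at commit time, so a frequently incremented counter key may need a different design.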

Related

DDD: Should business logic that needs infra layer access live in the application service layer, a domain service, or domain objects?

Take an attribute that needs to be validated: say an entity has a country field modeled as a VO.
This country field needs to be validated as an alpha-3 code, per business logic required by the domain expert.
NOTE:
We need to persist this country data, as it can hold other values too, and in the future the persisted country data may be added to, updated, or deleted.
This is just one example using country codes, which rarely change; there can be other fields that need to be validated against persistence, e.g. validating some quantity against data in the database, where it would not be efficient to keep it all in memory or prefetch it.
Another example is user creation with a check for a unique, valid domain email, which needs a uniqueness check against persistence.
Case 1.
Doing the validation in the application layer:
If we call the repository's countryRepo.getCountryByCountryAlpha3Code() in the application layer, we can pass the value on to createValidEntity() when it is correct and a valid part of the system, and throw the error directly in the application layer use case when it is not.
Issue:
This validation will be repeated in multiple use cases if the same check is needed elsewhere, since it is treated as an application layer concern.
The business logic is now part of the application service layer.
Case 2.
Validating the country code in its value object class, or in a domain service, in the domain layer.
Doing this keeps the business logic inside the domain layer and also won't violate the DRY principle.
import { ValueObject } from '#shared/core/domain/ValueObject';
import { Result } from '#shared/core/Result';
import { Utils } from '#shared/utils/Utils';

interface CountryAlpha3CodeProps {
    value: string;
}

export class CountryAlpha3Code extends ValueObject<CountryAlpha3CodeProps> {
    // Case-insensitive string. Only printable ASCII allowed. (Non-printable characters
    // such as carriage returns, tabs, line breaks, etc. are not allowed.)
    get value(): string {
        return this.props.value;
    }

    private constructor(props: CountryAlpha3CodeProps) {
        super(props);
    }

    public static create(value: string): Result<CountryAlpha3Code> {
        return Result.ok<CountryAlpha3Code>(new CountryAlpha3Code({ value: value }));
    }
}
Is it good to call the repository from inside the domain layer (a domain service, or the VO itself, which is not recommended), given that the dependency flow would change?
If we trigger an event instead, how do we make it synchronous?
What are some better ways to solve this?
export default class UseCaseClass implements IUseCaseInterface {
    constructor(private readonly _repo: IRepo, private readonly countryCodeRepo: ICountryCodeRepo) {}

    async execute(request: dto): Promise<dtoResponse> {
        const someOtherKeyorError = KeyEntity.create(request.someOtherDtoKey);
        const countryOrError = CountryAlpha3Code.create(request.country);
        const dtoResult = Result.combine([someOtherKeyorError, countryOrError]);

        if (dtoResult.isFailure) {
            return left(Result.fail<void>(dtoResult.error)) as dtoResponse;
        }

        try {
            // -> Here we are just calling the repo
            const isValidCountryCode = await this.countryCodeRepo.getCountryCodeByAlpha2Code(countryOrError.getValue()); // returns a boolean value
            if (!isValidCountryCode) {
                return left(new ValidCountryCodeError.CountryCodeNotValid(countryOrError.getValue())) as dtoResponse;
            }

            const dataOrError = MyEntity.create({
                ...request,
                key: someOtherKeyorError.getValue(),
                country: countryOrError.getValue(),
            });

            const commandResult = await this._repo.save(dataOrError.getValue());
            return right(Result.ok<any>(commandResult));
        } catch (err: any) {
            return left(new AppError.UnexpectedError(err)) as dtoResponse;
        }
    }
}
In the application layer above, for this part of the code:
const isValidCountryCode = await this.countryCodeRepo.getCountryCodeByAlpha2Code(countryOrError.getValue()); // returns a boolean value
if (!isValidCountryCode) {
    return left(new ValidCountryCodeError.CountryCodeNotValid(countryOrError.getValue())) as dtoResponse;
}
is it right to call the countryCodeRepo and fetch the result here, or should this part be moved to a domain service that then checks the validity of the CountryCode VO?
UPDATE:
After exploring, I found an article by Vladimir Khorikov that seems close to what I was looking for.
In his view some domain logic leakage is fine, but I feel the value object could still be created in an invalid state if some other use case constructs it without knowing that a persistence check is necessary for that particular VO/entity.
I am still unsure about the right approach.
In my opinion, the conversion from String to ValueObject does not belong to the Business Logic at all. The Business Logic has a public contract that is invoked from the outside (API layer or presentation layer maybe). The contract should already expect Value Objects, not raw strings. Therefore, whoever is calling the business logic has to figure out how to obtain those Value Objects.
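To illustrate that boundary in the question's own TypeScript style (the use case, DTO and handler names below are made up for illustration), the public contract of the use case can require the value object, so a raw string never reaches the business logic:

import { CountryAlpha3Code } from './CountryAlpha3Code'; // the VO from the question

// Hypothetical request shape: the country is already a value object, not a string.
interface RegisterScoreRequest {
    country: CountryAlpha3Code;
    score: number;
}

class RegisterScoreUseCase {
    async execute(request: RegisterScoreRequest): Promise<void> {
        // No parsing or format checks here; the type guarantees the code was already constructed.
        // ... create entities, call repositories, etc.
    }
}

// The API / presentation layer converts the raw input before calling the use case:
async function handleHttpPost(body: { country: string; score: number }) {
    const countryOrError = CountryAlpha3Code.create(body.country);
    if (countryOrError.isFailure) {
        return; // e.g. respond with 400 Bad Request
    }
    await new RegisterScoreUseCase().execute({
        country: countryOrError.getValue(),
        score: body.score,
    });
}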
Regarding the implementation of the Country Code value object, I would question if it is really necessary to load the country codes from the database. The list of country codes very rarely changes. The way I've solved this in the past is simply hardcoding the list of country codes inside the value object itself.
Sample code in pseudo-C#, but you should get the point:
public class CountryCode : ValueObject
{
    // Static definitions to be used in code like:
    // var myCountry = CountryCode.France;
    public static readonly CountryCode France = new CountryCode("FRA");
    public static readonly CountryCode China = new CountryCode("CHN");
    [...]

    public static AllCountries = new[] {
        France, China, ...
    };

    public string ThreeLetterCode { get; }

    private CountryCode(string threeLetterCountryCode)
    {
        ThreeLetterCode = threeLetterCountryCode;
    }

    public static CountryCode Parse(string code)
    {
        [...] handle nulls, empties, etc.
        var exists = AllCountries.FirstOrDefault(c => c.ThreeLetterCode == code);
        if (exists == null)
            // throw error
        return exists;
    }
}
Following this approach, you can make a very useful and developer-friendly CountryCode value object. In my actual solution, I had both the 2- and 3-letter codes, plus display names in English for logging purposes only (for presentation, the presentation layer can look up the translation based on the code).
If loading the country codes from the DB is valuable for your scenario, it is still very likely that the list changes rarely, so you could, for example, load a static list into the value object itself at application start-up and then refresh it periodically if the application runs for a very long time.
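As a rough sketch of that last idea in the question's TypeScript style (the repository interface, class name and 24-hour interval are assumptions for illustration):

// Loads the alpha-3 codes once at start-up and refreshes them in the background,
// so validation can stay synchronous and in-memory.
export class CountryCodeCache {
    private codes = new Set<string>();

    constructor(
        private readonly repo: { getAllAlpha3Codes(): Promise<string[]> },
        refreshMs: number = 24 * 60 * 60 * 1000,
    ) {
        setInterval(() => { void this.refresh(); }, refreshMs);
    }

    // Call and await this once during application start-up.
    async refresh(): Promise<void> {
        const fresh = await this.repo.getAllAlpha3Codes();
        this.codes = new Set(fresh.map((code) => code.toUpperCase()));
    }

    isKnown(alpha3: string): boolean {
        return this.codes.has(alpha3.toUpperCase());
    }
}

The value object's create() (or a small domain service) could then consult this cache synchronously instead of reaching for the repository inside every use case.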

The right way to return a Single from a CompletionStage

I'm playing around with reactive flows using RxJava2, Micronaut and Cassandra. I'm new to RxJava and not sure what the correct way is to return a List of Person in the best async manner.
The data comes from a Cassandra DAO interface:
public interface PersonDAO {
    @Query("SELECT * FROM cass_drop.person;")
    CompletionStage<MappedAsyncPagingIterable<Person>> getAll();
}
which gets injected into a Micronaut controller:
return Single.just(personDAO.getAll().toCompletableFuture().get().currentPage())
    .subscribeOn(Schedulers.io())
    .map(people -> HttpResponse.ok(people));
OR
return Single.just(HttpResponse.ok())
    .subscribeOn(Schedulers.io())
    .map(it -> it.body(personDAO.getAll().toCompletableFuture().get().currentPage()));
OR switch to RxJava3
return Single.fromCompletionStage(personDAO.getAll())
    .map(page -> HttpResponse.ok(page.currentPage()))
    .onErrorReturn(throwable -> HttpResponse.ok(Collections.emptyList()));
I'm not a pro at RxJava or Cassandra:
In your first and second examples, you are blocking the thread by waiting on the CompletionStage with get(); even though you are doing it on the IO thread, I would not recommend doing so.
You are also using a Single, which can emit only one value or an error. Since you want to return a List, I would suggest going for at least an Observable.
Third point: the result from Cassandra is paginated. I don't know if that is intentional, but you list only the first page and miss the others.
I would try a solution like the one below. I kept using the IO thread (the operation may be costly in IO) and I iterate over the pages Cassandra fetches:
/* the main method of your controller */
@Get()
public Observable<Person> listPersons() {
    return next(personDAO.getAll()).subscribeOn(Schedulers.io());
}

private Observable<Person> next(CompletionStage<MappedAsyncPagingIterable<Person>> pageStage) {
    return Single.fromFuture(pageStage.toCompletableFuture())
        .flatMapObservable(personsPage -> {
            var o = Observable.fromIterable(personsPage.currentPage());
            if (!personsPage.hasMorePages()) {
                return o;
            }
            return o.concatWith(next(personsPage.fetchNextPage()));
        });
}
If you ever plan to use Reactor instead of RxJava, you can give cassandra-java-driver-reactive-mapper a try.
The syntax is fairly simple and the mapping is done at compile time.

Test the existence of a dynamically chosen class

I have data coming from an external source that I want to process. In order to do that, the objects I'm receiving are tagged with their original class name. Now I want to take that tag name and use it to populate a model in my own application. I'm stuck at the step where I check whether that class has an equivalent in my codebase. It's going to look something like this:
this.objects.forEach((object) => {
    if (typeof object.class_tag !== 'undefined') { // the problem line
        // create class instance
    }
});
In PHP I'd simply call class_exists to achieve this:
<?php
if (class_exists($object->class_tag)) {}
What is the correct approach here?
I don't see a clear way to do this in just one line.
One possible approach depends on the way you register your existing classes.
For example, if you use some kind of namespace object, later on you can simply check whether the class exists in that namespace.
Sample code:
class A {}

const a = "A";
const namespace = { A };

if (namespace[a]) {
    // class exists, you can create the object
    const instance = new namespace[a]();
}
Probably a much better approach would be to make a service that does registerClass, checkClass and createInstance for you, so your logic is wrapped in one place.
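A sketch of such a service, using the method names mentioned above (everything else is illustrative):

type Constructable = new (...args: any[]) => object;

class ClassRegistry {
    private classes = new Map<string, Constructable>();

    registerClass(name: string, ctor: Constructable): void {
        this.classes.set(name, ctor);
    }

    checkClass(name: string): boolean {
        return this.classes.has(name);
    }

    createInstance(name: string, ...args: any[]): object {
        const ctor = this.classes.get(name);
        if (!ctor) {
            throw new Error(`Unknown class tag: ${name}`);
        }
        return new ctor(...args);
    }
}

// Usage against the tagged objects from the question:
// const registry = new ClassRegistry();
// registry.registerClass('Extension', Extension);
// if (registry.checkClass(object.class_tag)) {
//     const instance = registry.createInstance(object.class_tag);
// }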
I found a way of doing it (credit to https://stackoverflow.com/a/34656123/746549):
let logger = require('../javascripts/serverlog');
let util = require('util');
let Extension = require('../models/object/Extension');

const classes = { Extension: Extension };

/**
 * Utility to emulate class exists / dynamic class naming
 * @param className
 * @returns {*}
 */
module.exports.dynamicClass = (className) => {
    logger.debug(classes);
    logger.debug(className);
    if (classes[className]) {
        return classes[className];
    } else {
        return false;
    }
};
Usage:
let ClassOrFalse = dynamicClass.dynamicClass(object._class_tag);

Passing around NSManagedObjects

I get strange errors when I try to pass an NSManagedObject through several functions (all are in the same VC).
Here are the two functions in question:
func syncLocal(item: NSManagedObject, completionHandler: (NSManagedObject!, SyncResponse) -> Void) {
    let savedValues = item.dictionaryWithValuesForKeys([
        "score",
        "progress",
        "player"])
    doUpload(savedValues) { // do a POST request using the params with Alamofire
        (success) in
        if success {
            completionHandler(item, .Success)
        } else {
            completionHandler(item, .Failure)
        }
    }
}
func getSavedScores() {
    do {
        debugPrint("TRYING TO FETCH LOCAL SCORES")
        try frc.performFetch()
        if let results = frc.sections?[0].objects as? [NSManagedObject] {
            if results.count > 0 {
                print("TOTAL SCORE COUNT: \(results.count)")
                let incomplete = results.filter({ $0.valueForKey("success") as! Bool == false })
                print("INCOMPLETE COUNT: \(incomplete.count)")
                let complete = results.filter({ $0.valueForKey("success") as! Bool == true })
                print("COMPLETE COUNT: \(complete.count)")
                if incomplete.count > 0 {
                    for pendingItem in incomplete {
                        self.syncLocal(pendingItem) {
                            (returnItem, response) in
                            let footest = returnItem.valueForKey("player") // only works if stripping syncLocal blank
                            switch response { // response is an enum
                            case .Success:
                                print("SUCCESS")
                            case .Duplicate:
                                print("DUPLICATE")
                            case .Failure:
                                print("FAIL")
                            }
                        }
                    } // sorry for this pyramid of doom
                }
            }
        }
    } catch {
        print("ERROR FETCHING RESULTS")
    }
}
What I am trying to achieve:
1. Look for locally saved scores that could not be submitted to the server.
2. If there are unsubmitted scores, start the POST call to the server.
3. If the POST gets a 200 OK, mark the item's "success" key with the value true.
For some odd reason I cannot access returnItem at all in the code editor, unless I completely delete the code in syncLocal so it looks like this:
func syncLocal(item: NSManagedObject, completionHandler: (NSManagedObject!, SyncResponse) -> Void) {
    completionHandler(item, .Success)
}
If I do that, I can access the returned object's properties in the completion block down in the for loop.
Weirdly, if I paste the removed code back into syncLocal, the completion block keeps working: the app compiles and the block executes properly.
Is this some kind of strange Xcode 7 bug, or intended NSManagedObject behaviour?
(Line 1 was written with syncLocal stripped down; line 2 with the rest of the call pasted back in.)
Core Data managed object contexts use thread confinement. That means you can use a particular managed object and its context only on one and the same thread.
In your code, you seem to be using controller-wide variables, such as item. I am assuming the item is an NSManagedObject or a subclass thereof, and that its context is the single context you are using in your app. The FRC's context must be the main thread context (an NSManagedObjectContext with concurrency type NSMainQueueConcurrencyType).
Obviously, the callback from the server request will be on a background thread, so you cannot use your managed objects there.
You have two solutions. Either you create a child context, do the updates you need, save it, and then save the main context. This is a bit more involved, and there are numerous examples and tutorials out there to get you started; it is the standard and most robust solution.
Alternatively, inside your background callback, you simply make sure the context updates occur on the main thread:
dispatch_async(dispatch_get_main_queue()) {
    // update your managed objects & save
}

.NET 4.0 Tasks: Synchronize on one or more objects

I have read a lot about the new Task functionality in .net 4.0, but I haven't found a solution for the following problem:
I am writing a server application that processes requests from many users, and I want to use Tasks to distribute these requests across multiple cores. However, these Tasks should be synchronized on objects (for a start, users), so that only one task is processed for each object at a time. This would be simple to achieve with Task.ContinueWith(), but it should also be possible to synchronize a task on multiple objects (e.g. when a user transfers money to another user, a variable should be decremented at user A and incremented at user B without other tasks interfering).
So, my first attempt is a class that receives delegates, creates tasks, and stores them in a dictionary with the objects to sync on as keys. If a new task is scheduled, it can be appended to the last task of the given object with Task.ContinueWith(). If it should be synchronized on multiple objects, the new task is created using TaskFactory.ContinueWhenAll(). The created task is stored in the dictionary for every object it is synchronized on.
Here is my first draft:
public class ActionScheduler : IActionScheduler
{
    private readonly IDictionary<object, Task> mSchedulingDictionary = new Dictionary<object, Task>();
    private readonly TaskFactory mTaskFactory = new TaskFactory();

    /// <summary>
    /// Schedules actions synchronized on one or more objects. Only one action will be processed for each object at any time.
    /// </summary>
    /// <param name="synchronisationObjects">Array of objects the current action is synchronized on</param>
    /// <param name="action">The action that will be scheduled and processed</param>
    public void ScheduleTask(object[] synchronisationObjects, Action action)
    {
        // lock the dictionary in case two actions are scheduled on the same object at the same time
        // this is necessary since reading and writing to a dictionary can not be done in an atomic manner
        lock (mSchedulingDictionary)
        {
            // get all current tasks for the given synchronisation objects
            var oldTaskList = new List<Task>();
            foreach (var syncObject in synchronisationObjects)
            {
                Task task;
                mSchedulingDictionary.TryGetValue(syncObject, out task);
                if (task != null)
                    oldTaskList.Add(task);
            }

            // create a new task for the given action
            Task newTask;
            if (oldTaskList.Count > 1)
            {
                // task depends on multiple previous tasks
                newTask = mTaskFactory.ContinueWhenAll(oldTaskList.ToArray(), t => action());
            }
            else
            {
                if (oldTaskList.Count == 1)
                {
                    // task depends on exactly one previous task
                    newTask = oldTaskList[0].ContinueWith(t => action());
                }
                else
                {
                    // task does not depend on any previous task and can be started immediately
                    newTask = new Task(action);
                    newTask.Start();
                }
            }

            // store the task in the dictionary
            foreach (var syncObject in synchronisationObjects)
            {
                mSchedulingDictionary[syncObject] = newTask;
            }
        }
    }
}
This even works if a task "multiSyncTask" was created for multiple objects and tasks for each of those objects are scheduled afterwards: since they are all created with multiSyncTask.ContinueWith(), they only start once it has completed, and then run simultaneously:
static void Main()
{
    IActionScheduler actionScheduler = new ActionScheduler();
    var syncObj1 = new object();
    var syncObj2 = new object();

    // these two start and complete simultaneously:
    actionScheduler.ScheduleTask(new[] { syncObj1 }, () => PrintTextAfterWait("1"));
    actionScheduler.ScheduleTask(new[] { syncObj2 }, () => PrintTextAfterWait("2"));

    // this task starts after the first two and "locks" both objects:
    actionScheduler.ScheduleTask(new[] { syncObj1, syncObj2 }, () => PrintTextAfterWait("1 and 2"));

    // these two - again - start and complete simultaneously after the task above:
    actionScheduler.ScheduleTask(new[] { syncObj1 }, () => PrintTextAfterWait("1"));
    actionScheduler.ScheduleTask(new[] { syncObj2 }, () => PrintTextAfterWait("2"));
}

static void PrintTextAfterWait(string text)
{
    Thread.Sleep(3000);
    Console.WriteLine(text);
}
What do you think, is this a good solution for my problem? I am a bit sceptical about the big lock on the dictionary, but it is necessary in case two tasks are scheduled on one object at once, to prevent race conditions. Of course, the dictionary is only locked for the time it takes to create a task, not while it is processed.
Also, I would love to know if there are any existing solutions or coding paradigms out there that solve my problem better using .NET 4.0 Tasks and that I have failed to track down.
Thank you and with best regards,
Johannes
If I got you right, you would like to have something like Task.ContinueWith(task1, task2, lambda)?
Something like the Join arbiter in CCR?
http://msdn.microsoft.com/en-us/library/bb648749.aspx
If so, probably the most elegant option is to use the JoinBlock in TPL Dataflow (http://www.microsoft.com/download/en/confirmation.aspx?id=14782).
Or have you tried using Task.WaitAll() as the first instruction of your dependent task?
