Hyperledger Fabric Asset vs Transaction - hyperledger-fabric

What is the difference between adding/updating assets directly and adding/updating them through a transaction? Which is the right way to do it?
We can add/update assets using the API, so why perform transactions on assets at all? Is this like a log/trace to track what was updated on an asset and when? What if I made changes to assets directly, without any transaction entries? Or should I make changes to assets through transactions?
For example:
asset Myasset identified by asset_id {
  o String asset_id
  --> User admin
  o String firstname
  o String lastname
  o Boolean status default=true
}

transaction UpdateMyasset {
  o Myasset certificate
  o Boolean status default=true
}
So should I use UpdateMyasset to create assets, or use Myasset directly? When should I use a transaction?

This is actually a very good question.
You are correct: there are two ways to create/update assets, and that part has never sat right with me. The API, by default, gives you endpoints to create/update assets directly.
In my mind that is a problem, because it bypasses any business rules you might have. Those rules can, however, be specified and applied via a transaction. At that point you can inspect your asset, make sure everything is in order, and create/update it only if all business rules are fulfilled.
What I think should be done in a serious, non-PoC application is to completely disable the asset endpoints and create/update assets only via transactions, once the business rules have been applied and the assets properly validated.
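For illustration, here is a minimal sketch of that transaction-only approach: a Composer transaction processor function that checks a business rule before updating the asset through the registry. The org.example namespace and the rule itself are hypothetical, since the model snippet above doesn't show them:

/**
 * Apply business rules, then update the asset.
 * @param {org.example.UpdateMyasset} tx - hypothetical namespace
 * @transaction
 */
async function updateMyasset(tx) {
  // Hypothetical business rule: a deactivated asset may not be re-activated.
  if (tx.certificate.status === false && tx.status === true) {
    throw new Error('UpdateMyasset rejected: deactivated assets cannot be re-activated');
  }
  tx.certificate.status = tx.status;
  // getAssetRegistry is provided by the Composer runtime.
  const registry = await getAssetRegistry('org.example.Myasset');
  await registry.update(tx.certificate);
}

With the asset endpoints disabled, this function becomes the only write path, so the rule cannot be bypassed.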

Related

Send a PUT/GET/POST request to JHipster in one single transaction

I am quite new to JHipster and have problems understanding some of its functionality. Hence, here is my question.
I have the following two microservices.
Microservice 1 (MS1) has the following data structures in Java:
Lead {
  Customer customer;
  Deal deal;
}

Customer {
  Integer phoneNumber;
  // etc.
}

Deal {
  Integer value;
  // etc.
}
Microservice 2 (MS2) is a JHipster-generated database.
The DB only has the following SQL tables:
CUSTOMER
LEAD
When changes happen in Microservice 1, I send 2 separate PUT requests from MS1 to MS2:
first a request to update CUSTOMER through the /customer API in MS2;
then, if that update is OK, a request to update DEAL through the /deal API in MS2.
For a Lead update to succeed, the PUT requests to CUSTOMER and DEAL must both be OK. If updating one table fails, both should fail.
Hence, I would like to avoid sending 2 separate requests, to avoid the case where the CUSTOMER request succeeds and the DEAL request fails for whatever reason.
If possible, I would like to send one single transaction through an API such as /lead that updates the two tables.
What is the best way I can achieve this without creating an extra table for LEAD?
E.g., a layer/service that I should generate using JHipster.
If possible (but not necessarily), I would like to avoid touching code that is frequently regenerated (e.g., Customer, Deal).
Please kindly direct me to documentation if any already exists. The docs are quite hard to understand, so I am not sure whether any current one specifically addresses this problem. Thank you.
This is a common issue when directly exposing JPA entities from a CRUD REST API.
Your persistence model does not need to be your API model.
If 2 entities are related and should be updated within the same transaction, it means they should be updated with one atomic API request.
So you could define a new resource with a DTO combining your 2 entities, exposed by a new API that you would code manually (so no need for an additional table).
As you are using a microservices architecture, you could have a similar situation between MS1 and MS2, and there you could not use a transaction; you would then have to implement remediation.
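To make that concrete, here is a minimal sketch of such a combined endpoint. In an actual JHipster (Spring Boot) backend this would be a hand-written @Transactional service plus a REST controller; the sketch below shows the same atomicity pattern in TypeScript with Express and Knex, and every specific name in it (the /api/leads route, the LeadDTO shape, the table and column names) is hypothetical:

import express from 'express';
import knex from 'knex';

// Hypothetical database connection.
const db = knex({ client: 'pg', connection: process.env.DATABASE_URL });
const app = express();
app.use(express.json());

interface LeadDTO {
  customer: { id: number; phoneNumber: number };
  deal: { id: number; value: number };
}

// One request, one transaction: either both tables update or neither does.
app.put('/api/leads', async (req, res) => {
  const lead: LeadDTO = req.body;
  try {
    await db.transaction(async (trx) => {
      await trx('customer')
        .where({ id: lead.customer.id })
        .update({ phone_number: lead.customer.phoneNumber });
      await trx('deal')
        .where({ id: lead.deal.id })
        .update({ value: lead.deal.value });
      // If either update throws, the whole transaction is rolled back.
    });
    res.sendStatus(204);
  } catch (err) {
    res.status(500).json({ error: 'Lead update rolled back' });
  }
});

Because the combined resource exists only in the API layer, no extra table is needed, and the generated Customer and Deal code stays untouched.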

How to create a new block in bigchain db?

I'm using BigchainDB in our project. Currently we use the transaction model to create assets and transfer them. But now we want to implement the block model. When I go through the documentation, I can't find how to create a block. Is there a specific URL or function, or do we have to use the same URL as for the transaction model, for example http://ourserver.com:8080/api/v1/? Can anyone help me with this? Thank you.
As with Bitcoin, one submits a transaction to a BigchainDB network and then it's up to the network (i.e. the nodes in the network) to put that transaction in a block (or not, if the transaction is invalid).
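In other words, you never create blocks yourself; you only submit transactions, exactly as you are doing now. A minimal sketch with the official js-bigchaindb-driver (the asset payload is a placeholder, the server URL is taken from the question, and the calls follow the driver's documented API, so verify them against your driver version):

import * as driver from 'bigchaindb-driver';

const conn = new driver.Connection('http://ourserver.com:8080/api/v1/');
const alice = new driver.Ed25519Keypair();

// Build a CREATE transaction for a hypothetical asset.
const tx = driver.Transaction.makeCreateTransaction(
  { name: 'example-asset' },        // asset data (placeholder)
  { note: 'created via driver' },   // metadata (placeholder)
  [driver.Transaction.makeOutput(
    driver.Transaction.makeEd25519Condition(alice.publicKey))],
  alice.publicKey
);

// Sign and post; the network's nodes decide which block it lands in.
const signedTx = driver.Transaction.signTransaction(tx, alice.privateKey);
conn.postTransactionCommit(signedTx)
  .then((retrieved) => console.log('accepted into a block:', retrieved.id));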

Fetching Initial Data from CloudKit

Here is a common scenario: the app is installed for the first time and needs some initial data. You could bundle it in the app and have it load from a plist or a CSV file, or you could go get it from a remote store.
I want to get it from CloudKit. Yes, I know that CloudKit is not to be treated as a remote database but rather a hub. I am fine with that. Frankly I think this use case is one of the only holes in that strategy.
Imagine I have an object graph I need to get that has one class at the base and then 3 or 4 related classes. I want the new user to install the app and then get the latest version of this class. If I use CloudKit, I have to load each entity with a separate fetch and assemble the whole. It's ugly and not generic. Once I do that, I will go into change tracking mode. Listening for updates and syncing my local copy.
In some ways this is similar to the challenge you have using Services on Android: suppose I have a service for the weather forecast. When I subscribe to it, I will not get the weather until tomorrow, when it creates its next new forecast. To handle this deficiency, the Android SDK allows me to make 'sticky' services, where upon subscribing I get the last message the service produced.
I am thinking of doing something similar in a generic way: making it possible to hold a snapshot of some object graph, probably in JSON, with a version token, and then for initial loads, just being able to fetch those and turn them into CoreData object graphs locally.
The question is: does this strategy make sense, or should I hold my nose and write pyramid-of-doom code with nested queries? (Don't suggest using CoreData syncing, as that has been deprecated.)
Your question is a bit old, so you probably already moved on from this, but I figured I'd suggest an option.
You could create a record type called Data in the Public database in your CloudKit container. Within Data, you could have a field named structure that is a String (or a CKAsset if you wanted to attach a JSON file).
Then on every app load, you query the public database and pull down the structure string that has your class definitions, and use it however you like. Since it's in the public database, all your users would have access to it. Good luck!
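If it helps, the same record can also be fetched outside the native CKQuery API through CloudKit Web Services. A minimal sketch (the container ID and API token are placeholders, and the endpoint shape follows Apple's CloudKit Web Services reference, so double-check it against the current docs):

// Query the public database for the 'Data' record via CloudKit Web Services.
const container = 'iCloud.com.example.myapp'; // placeholder container ID
const token = 'YOUR_CKAPI_TOKEN';             // placeholder API token
const url = `https://api.apple-cloudkit.com/database/1/${container}/production/public/records/query?ckAPIToken=${token}`;

async function fetchInitialStructure(): Promise<string> {
  const res = await fetch(url, {
    method: 'POST',
    body: JSON.stringify({ query: { recordType: 'Data' } }),
  });
  const json = await res.json();
  // Each record's fields come back as { fieldName: { value: ... } }.
  return json.records[0].fields.structure.value;
}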

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb.
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (i.e. the idea is to not break their flow by making them create an account when it isn't strictly necessary).
3. If users wish to opt in, they can later create an account and receive credentials for a "server-side" database to sync with.
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, thereby pushing all existing data up to the hosted CouchDB database, or...
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
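A minimal sketch of that flow, reusing the existing local database when the user later opts in (the remote URL and document fields are placeholders):

import PouchDB from 'pouchdb';

const local = new PouchDB('favourites');

// Stage 1: store favourites locally with collision-proof IDs.
async function saveFavourite(propertyId: string) {
  await local.put({
    _id: `star_${Date.now()}`,
    propertyId,          // placeholder field
    starred: true,
  });
}

// Stage 2 (after account creation): sync the same local database to the
// per-user CouchDB database, pushing up everything created so far.
function startSync(remoteUrl: string) {
  const remote = new PouchDB(remoteUrl);
  return local.sync(remote, { live: true, retry: true })
    .on('error', (err) => console.error('sync error', err));
}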

Syncing Local Domain Entity Changes When Using CQRS

Let's suppose I have a basic CustomerEntity which has the following attributes:
Name
Surname
IsPreferred
Taking CQRS in its simplest form, I would have the following services:
CustomerCommandService
CustomerQueryService
If I call UpgradeToPreferred(CustomerEntity) on the CustomerCommandService, the store behind it will update and any queries will reflect this. So far so good.
My question is: how do I sync this back to the local entity I have? I called the UpgradeToPreferred() method on the service, not on the entity, so the change will not be reflected in the local copy unless I query the CustomerQueryService and fetch the update, which seems a tad redundant.
Or am I doing it wrong?
EDIT:
To clarify, the question is: if I am going through a command service to modify the entity in storage, and not calling the command on the entity directly or editing its properties, how should I handle the same modification on the entity I have in memory?
A few things are wrong here. Your command service takes a command, not an entity. So if you want to upgrade that customer to be preferred, the command would carry the intent (MakeCustomerPreferred) and the data needed to perform it (a customer identifier would suffice). The service would load the entity using that identifier and invoke the MakePreferred behavior on the entity. The entity would be changed internally, and persistence would map it back to the database. Ergo, no need to resync with the database.
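A minimal sketch of that flow (all names are hypothetical, and the repository is reduced to an interface for brevity):

// The command carries intent plus the data needed to perform it.
interface MakeCustomerPreferred {
  customerId: string;
}

class Customer {
  constructor(public readonly id: string, private preferred = false) {}
  // Behavior lives on the entity; state changes internally.
  makePreferred(): void {
    this.preferred = true;
  }
  get isPreferred(): boolean {
    return this.preferred;
  }
}

interface CustomerRepository {
  load(id: string): Promise<Customer>;
  save(customer: Customer): Promise<void>;
}

class CustomerCommandService {
  constructor(private repo: CustomerRepository) {}
  async handle(cmd: MakeCustomerPreferred): Promise<void> {
    const customer = await this.repo.load(cmd.customerId);
    customer.makePreferred();       // the in-memory entity is now current
    await this.repo.save(customer); // persistence maps it back to the store
  }
}

Because the service mutates the very entity instance it loaded, the in-memory copy and the store never diverge within the command's scope.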
