I'm using BigchainDB in our project. Currently we are using the transaction model to create assets and transfer them. But now we want to implement the block model. When I go through the documentation, I can't find how to create a block. Is there a specific URL or function, or do we have to use the same URL as for the transaction model, for example http://ourserver.com:8080/api/v1/ ? Can anyone help me with this? Thank you.
As with Bitcoin, one submits a transaction to a BigchainDB network and then it's up to the network (i.e. the nodes in the network) to put that transaction in a block (or not, if the transaction is invalid). There is no API for creating a block yourself; the HTTP API only lets you post transactions and read the blocks that the network produced.
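Concretely, a hedged sketch against the BigchainDB 2.0 HTTP API, using your own server URL; signedTx is assumed to be a fully prepared and signed transaction (e.g. built with the official bigchaindb-driver):

// Sketch: you only ever POST transactions; the blocks endpoint is read-only.
const API = 'http://ourserver.com:8080/api/v1';

async function submitAndLocate(signedTx: object): Promise<void> {
  // mode=commit makes the call return once the tx has made it into a block.
  const res = await fetch(`${API}/transactions?mode=commit`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(signedTx),
  });
  const tx = await res.json();

  // Read-only lookup: which block height contains this transaction?
  const blocks = await fetch(`${API}/blocks?transaction_id=${tx.id}`);
  console.log('block height(s):', await blocks.json());
}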
I am trying to create a chaincode with different asset types.
Imagine that I have a chaincode where I store the users created and also the transactions where the users receive points.
How can I write the chaincode in a way that lets me queryAllUsers and queryAllPointsTransactions, without using lists as is done in this GitHub repository: https://github.com/IBM/customer-loyalty-program-hyperledger-fabric-VSCode ?
When using lists, we have problems with multiple clients and multiple transactions at the same time.
Can anyone help me with this?
Thanks a lot!
PutState each asset under its own key (instead of keeping all assets in one list), and use the functions below to query them:
GetQueryResult
GetStateByRange
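GetQueryResult runs a CouchDB rich query (JSON selector) and requires CouchDB as the state database; GetStateByRange works on any state database. A hedged TypeScript sketch with fabric-contract-api, where the contract name and the USER_/TX_ key prefixes are my own illustrative choices:

import { Context, Contract } from 'fabric-contract-api';

export class LoyaltyContract extends Contract {
  // Each asset lives under its own key, so concurrent clients never
  // fight over a single serialized "list of all users" value.
  async createUser(ctx: Context, id: string, name: string): Promise<void> {
    await ctx.stub.putState(`USER_${id}`,
      Buffer.from(JSON.stringify({ id, name })));
  }

  async addPointsTransaction(ctx: Context, txId: string,
                             userId: string, points: string): Promise<void> {
    await ctx.stub.putState(`TX_${txId}`,
      Buffer.from(JSON.stringify({ txId, userId, points: Number(points) })));
  }

  // A range scan over the key prefix replaces "queryAll on a list".
  // queryAllPointsTransactions would be identical with the 'TX_' prefix.
  async queryAllUsers(ctx: Context): Promise<string> {
    const users = [];
    // '\uffff' ends the range after every key starting with 'USER_'.
    for await (const kv of ctx.stub.getStateByRange('USER_', 'USER_\uffff')) {
      users.push(JSON.parse(Buffer.from(kv.value).toString('utf8')));
    }
    return JSON.stringify(users);
  }
}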
I am quite new to JHipster and have problems understanding some of its functionalities, hence this question.
I have the following two microservices.
Microservice 1 (MS1) has the following data structures in Java:
class Lead {
    Customer customer;
    Deal deal;
}
class Customer {
    Integer phoneNumber;
    // etc...
}
class Deal {
    Integer value;
    // etc...
}
Microservice 2 (MS2) is a JHipster-generated database.
The DB only has the following SQL tables:
CUSTOMER
LEAD
When changes happen in Microservice 1, I send 2 separate PUT requests from MS1 to MS2:
first, a request to update CUSTOMER through the /customer API in MS2;
if that update is OK, a request to update DEAL through the /deal API in MS2.
For a successful update of a Lead, the PUT requests for Customer and Deal should all be OK. If updating one table fails, all should fail.
Hence, I would like to avoid sending 2 separate requests, to avoid the case where the CUSTOMER request is OK and the DEAL request fails for whatever reason.
If possible, I would like to send one single transaction through an API such as /lead that updates the two tables.
What is the best way I can achieve this without creating an extra table for LEAD?
e.g., a layer/service that I should generate using JHipster.
If possible (but not necessary), I would like to avoid touching code that is frequently regenerated (e.g., Customer, Deal).
Please kindly direct me to documentation too, if any already exists. The docs are quite hard to understand, so I am not sure whether any current one specifically addresses this problem. Thank you.
This is a common issue when directly exposing JPA entities from a CRUD REST API.
Your persistence model does not need to be your API model.
If 2 entities are related and should be updated within the same transaction, they should be updated with one atomic API request.
So you could define a new resource, with a DTO combining your 2 entities, exposed by a new API endpoint that you would code manually (so no need for an additional table), as sketched below.
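In a JHipster backend this would be a hand-written Java/Spring REST resource delegating to an @Transactional service method; the sketch below uses TypeScript only to illustrate the shape of the composite resource, and every name in it (LeadDTO, Database, tx.update) is hypothetical:

// Hypothetical composite DTO: one request body carries both entities.
interface LeadDTO {
  customer: { id: number; phoneNumber: number };
  deal: { id: number; value: number };
}

// Minimal stand-ins for a transactional persistence layer.
interface Tx { update(table: string, row: object): Promise<void>; }
interface Database { transaction(work: (tx: Tx) => Promise<void>): Promise<void>; }

// Hand-written /api/leads handler: both updates commit together,
// or the transaction rolls back and neither row is changed.
async function updateLead(db: Database, dto: LeadDTO): Promise<void> {
  await db.transaction(async (tx) => {
    await tx.update('customer', dto.customer);
    await tx.update('deal', dto.deal);
  });
}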
As you are using a microservices architecture, you could face a similar situation between MS1 and MS2, where you cannot use a transaction; in that case you would have to implement remediation (compensating the first update when the second one fails).
I want to make a web service, and it looks like LoopBack is a good starting point.
To explain my question, I will describe the situation.
I have 2 MySQL Tables:
Users
Companies
Every User has its Company; that user is like the master User for the company.
I wish to create a Products table for each company, in the following way:
company1_Products,
company2_Products,
company3_Products
Each company has internal Users, like:
company1_Users,
company2_Users,
company3_Users
Internal users log in from the corresponding subdomain, like:
company1.myservice.com
company2.myservice.com
For the API, I want the datasource to get Products from the corresponding table. So the question is: how do I change the datasource dynamically?
And how do I handle Users? Storing them all in one table is not good, because internal company users could belong to different companies...
Maybe there's a better way to model this?
Disclaimer: I am a co-author and one of the current maintainers of LoopBack.
how do I change the datasource dynamically?
The following StackOverflow answer describes how to attach a single model (e.g. Product) to multiple datasources: https://stackoverflow.com/a/28327323/69868 That solution would work if you were creating one MySQL database per company instead of using the company name as a prefix of the Product table name.
To achieve what you described, you can use model subclassing. For each company, define a new company-specific Product model inheriting from the shared Product model and changing the table name.
// common/models/company1-product.json
{
  "name": "Company1_Product",
  "base": "Product",
  "mysql": {
    "tableName": "company1_Products"
  }
  // etc.
}
You can even create these models on the fly using the app.registry.createModel() and app.model() APIs, and then run dataSource.autoupdate() to create the SQL tables for the new model(s).
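A hedged sketch of that on-the-fly approach with the LoopBack 3 APIs; the datasource name 'mysqlDs' and the helper function are my own assumptions:

// Hypothetical helper: build a company-specific Product model at runtime.
function defineCompanyProduct(app: any, company: string) {
  const CompanyProduct = app.registry.createModel({
    name: `${company}_Product`,
    base: 'Product', // inherit properties from the shared Product model
    mysql: { tableName: `${company}_Products` },
  });
  app.model(CompanyProduct, { dataSource: 'mysqlDs' });

  // Create the backing SQL table for just this new model.
  app.dataSources.mysqlDs.autoupdate(`${company}_Product`, (err: Error | null) => {
    if (err) console.error(err);
  });
  return CompanyProduct;
}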
And how do I handle Users? Storing them all in one table is not good, because internal company users could belong to different companies...
I suppose you can use the same approach as you use for Products, as you described in your question.
Maybe there's a better way to model this?
The problem you are facing is called multi-tenancy. I am afraid we haven't figured out an easy-to-use solution yet; there are many possible ways to implement multi-tenancy.
For example, you can create one LoopBack application for each Company (tenant) and then create a top-level LoopBack or Express application to route incoming requests to appropriate tenant-specific LB app instance. See the following repository for a proof-of-concept implementation: https://github.com/strongloop/loopback-multitenant-poc
Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service, such as a hotel search or Airbnb.
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list; the idea is to not break their flow by making them create an account when it isn't strictly necessary.
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with.
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, thereby pushing all existing data up to the hosted CouchDB database, or...
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
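A minimal sketch of that flow with the PouchDB API; the database names, URL, and the star_ document shape are just illustrative:

import PouchDB from 'pouchdb';

// Phase 1: anonymous user, local-only favourites.
const local = new PouchDB('favourites');

async function star(propertyId: string) {
  // 'star_<timestamp>' IDs keep local docs from conflicting later.
  await local.put({ _id: `star_${Date.now()}`, propertyId });
}

// Phase 2 (after signup): reuse the SAME local database and push
// everything up to the per-user CouchDB database. The URL and
// credentials here are placeholders.
async function linkToServer() {
  const remote = new PouchDB('https://user:pass@db.example.com/userdb-abc');
  await local.replicate.to(remote);                 // one-shot push of existing docs
  local.sync(remote, { live: true, retry: true });  // then continuous sync
}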
Let's suppose I have a basic CustomerEntity which has the following attributes:
Name
Surname
IsPreferred
Taking CQRS in its simplest form, I would have the following services:
CustomerCommandService
CustomerQueryService
If on the CustomerCommandService I call UpgradeToPreferred(CustomerEntity), the store behind it will update, and any queries will reflect this. So far so good.
My question is: how do I sync this back to the local entity I have? I have called the UpgradeToPreferred() method on the service, not on the entity, so the change will not be reflected in my local copy unless I query the CustomerQueryService and fetch the update, which seems a tad redundant.
...Or am I doing it wrong?
EDIT:
To clarify, the question is: if I am going through a command service to modify the entity in storage, and not calling the command on the entity directly or editing its properties, how should I handle the same modification on the entity I have in memory?
A few things are wrong here. Your command service takes a command, not an entity. So if you want to upgrade that customer to be preferred, the command would be the intent (MakeCustomerPreferred) plus the data needed to perform the command (a customer identification would suffice). The service would load up the entity using that identification and invoke the MakePreferred behavior on the entity. The entity would be changed internally, and persistence would map it back to the database. Ergo, no need to resync with the database.
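A minimal sketch of that shape (all names illustrative): the command carries only the intent plus an identifier, the service loads the entity, invokes the behavior, and persists it, so the in-memory entity you hold is exactly the one that was mutated.

// Command: intent + just enough data to perform it.
interface MakeCustomerPreferred { customerId: string; }

class Customer {
  isPreferred = false;
  constructor(public id: string, public name: string) {}
  makePreferred(): void { this.isPreferred = true; } // behavior lives on the entity
}

// Stand-in repository; real code would map the entity to a database.
class CustomerRepository {
  private store = new Map<string, Customer>();
  load(id: string): Customer { return this.store.get(id)!; }
  save(c: Customer): void { this.store.set(c.id, c); }
}

class CustomerCommandService {
  constructor(private repo: CustomerRepository) {}
  handle(cmd: MakeCustomerPreferred): void {
    const customer = this.repo.load(cmd.customerId); // service loads the entity
    customer.makePreferred();                        // entity changes internally
    this.repo.save(customer);                        // persistence maps it back
  }
}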