I am interested in seeing all of the transactions that take place on a Binance Chain BNB address (bnb1...)
I see that I can use this API to get most of the transactions that have taken place:
https://dex.binance.org/api/v1/transactions?address=
But one kind of transaction is left out. If I swap BNB from Binance Chain to Binance Smart Chain (BC to BSC) on TrustWallet, say, the transaction shows up as a "Crosschain transfer out", accounting for the coins leaving. But if I swap BNB from BSC to BC, the transaction does not show up at all. I was expecting something like a "Crosschain transfer in" to account for the BNB coin coming in.
Is there a good way to query for these sorts of BSC to BC transactions?
My current solution is to query all of the transactions for the 11 addresses that perform transactions of type "Oracle Claim", taken from the genesis file here (https://github.com/binance-chain/node-binary/blob/master/fullnode/prod/0.5.8/config/genesis.json):
bnb1kdx4xkktr35j2mpxncvtsshswj5gq577me7lx4 (Aconcagua)
bnb1slq53dua0nj3e6y949u4yc3erus0t68k37jcwh (Ararat)
bnb139l5umk42mam3znr568gw706fwvp485kw5zks3 (Carrauntoohil)
(etc...)
And searching each transaction's data to see if any involve my address.
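For reference, here is roughly what that workaround looks like in Node.js (using axios; the { total, tx } response shape and the limit parameter are my assumptions from the API docs, and offset pagination is omitted):

const axios = require('axios');

// The 11 oracle addresses from genesis.json (first three shown here).
const ORACLE_ADDRESSES = [
  'bnb1kdx4xkktr35j2mpxncvtsshswj5gq577me7lx4', // Aconcagua
  'bnb1slq53dua0nj3e6y949u4yc3erus0t68k37jcwh', // Ararat
  'bnb139l5umk42mam3znr568gw706fwvp485kw5zks3', // Carrauntoohil
  // ...
];

// Scan each oracle address's transactions and keep those whose data
// payload mentions my address.
async function findIncomingCrossChain(myAddress) {
  const hits = [];
  for (const oracle of ORACLE_ADDRESSES) {
    const { data } = await axios.get('https://dex.binance.org/api/v1/transactions', {
      params: { address: oracle, limit: 1000 },
    });
    for (const tx of data.tx || []) {
      if (typeof tx.data === 'string' && tx.data.includes(myAddress)) {
        hits.push(tx);
      }
    }
  }
  return hits;
}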
However, there are a large number of transactions to search through, so this seems infeasible. I am guessing there is a better way?
Related
I am currently participating in a Hyperledger Fabric project.
My understanding of Hyperledger Fabric concepts is still incomplete. I understand that transactions are stored in blocks, and that a transaction records a chaincode execution.
For example, suppose a "send" chaincode is executed: A sends 500 to B, where A = 1000 and B = 1000, so afterwards A = 500 and B = 1500. Suppose this transaction is TxId = AAA.
In this situation, I want to see AAA's history of "A sent 500 to B". I tried decoding mychannel.block and mychannel.tx in the channel-artifacts directory (created by running the current network) to JSON files.
However, I found no content related to this. Is there any way I can see the contents of TxId = AAA?
I decoded the .tx and .block files, but I didn't get what I wanted.
If you want to see the history of transactions, you can use the ctx.stub.getHistoryForKey(id) function, where the id parameter is a record key. This is the Node.js SDK method; I expect it is similarly named for Java and Go. I think the information that you require should be held in the contract, as the history only returns the different versions of a record over time. If you want to see that A transacted with B, you would need the contract code to show that funds came from A and landed with B during the transfer. Depending on the implementation, this might require a cross-contract call to a different contract (one containing clients A and B) so that 500 could be taken from account A's fund and added to account B's fund. In this scenario (if we are talking about a sale), the AssetTransfer contract could show the change of ownership, whereas the client contract would show two updates: one where A's fund decreases by 500 and another where B's fund increases by 500.
In the scenario above, there are now three updates that have history: an asset sale (which you don't mention, but which I am using as an example), with a change-of-ownership history; client A, whose fund record will have decreased; and client B, who has a corresponding increase in funds. Therefore, it's not a block history that you require, but a history of the Client records for A and B. Even if you only had a single contract (e.g. Client) and you exchanged funds, you would still have two updates, one for A and the other for B. It's the records within the contract code that change. The block is the manifestation of the entire transaction, i.e. all rules have been satisfied by the different peers and whatever consensus policy is in place.
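For illustration, a minimal Node.js contract method using getHistoryForKey could look like the sketch below (assuming fabric-contract-api / fabric-shim 2.x; the contract name and the record key are placeholders):

'use strict';
const { Contract } = require('fabric-contract-api');

class ClientContract extends Contract {
  // Return every committed version of a record key, together with the
  // txId that produced it, e.g. getHistory(ctx, 'A').
  async getHistory(ctx, id) {
    const history = [];
    const iterator = await ctx.stub.getHistoryForKey(id);
    let res = await iterator.next();
    while (!res.done) {
      history.push({
        txId: res.value.txId,
        isDelete: res.value.isDelete,
        value: res.value.value.toString('utf8'),
      });
      res = await iterator.next();
    }
    await iterator.close();
    return JSON.stringify(history);
  }
}

module.exports = ClientContract;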
I'm trying to design a double-entry ledger with DDD and running into some trouble with defining aggregate roots. There are three domain models:
LedgerLine: individual line items that have data such as amount, timestamp they are created at, etc.
LedgerEntry: entries into the ledger. Each entry contains multiple LedgerLines where the debit and credit lines must balance.
LedgerAccount: accounts in the ledger. There are two types of accounts: (1) internal accounts (e.g. cash) (2) external accounts (e.g. linked bank accounts). External accounts can be added/removed.
After reading some articles online (e.g. this one: https://lorenzo-dee.blogspot.com/2013/06/domain-driven-design-accounting-domain.html?m=0), it seems like LedgerEntry should be one aggregate root, holding references to LedgerLines, and LedgerAccount should be the other aggregate root. LedgerLines would hold the corresponding LedgerAccount's ID.
While this makes a lot of sense, I'm having trouble figuring out how to update the balance of ledger accounts when ledger lines are added. The above article suggests that the balance should be calculated on the fly, which means it wouldn't need to be updated when LedgerEntrys are added. However, I'm using Amazon QLDB for the ledger, and their solutions engineer specifically recommended that the balance be computed and stored on the LedgerAccount, since QLDB is not optimized for that kind of "scan through lots of documents" operation.
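To make the setup concrete, here is a rough sketch of those aggregates (class and field names are my own, not from the article):

// LedgerLine: an individual line item holding the LedgerAccount's ID.
class LedgerLine {
  constructor(accountId, amount, timestamp) {
    this.accountId = accountId; // reference by ID, not by object
    this.amount = amount;       // signed minor units: debit > 0, credit < 0
    this.timestamp = timestamp;
  }
}

// Aggregate root: an entry whose debit and credit lines must balance.
class LedgerEntry {
  constructor(id, lines) {
    const sum = lines.reduce((total, line) => total + line.amount, 0);
    if (sum !== 0) throw new Error('debit and credit lines must balance');
    this.id = id;
    this.lines = lines;
  }
}

// Aggregate root: internal (e.g. cash) or external (e.g. linked bank).
class LedgerAccount {
  constructor(id, type) {
    this.id = id;
    this.type = type; // 'internal' | 'external'
  }
}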
Now the dilemma ensues:
If I update the balance field synchronously when adding LedgerEntrys, then I would be updating two aggregates in one operation, which violates the consistency boundary.
If I update the balance field asynchronously after receiving the event emitted by the "Add LedgerEntry" operation, then I could be reading a stale balance on the account if I add another LedgerEntry that spends the balance on the account, which could lead to overdrafts.
If I subsume the LedgerAccount model into the same aggregate as LedgerEntry, then I lose the ability to add/remove individual LedgerAccounts, since I can't query them directly.
If I get rid of the balance field and compute it on the fly, then there could be performance problems given (1) the QLDB limitation and (2) the fact that the number of ledger lines is unbounded.
So what's the proper design here? Any help is greatly appreciated!
You could use the Saga pattern to ensure the whole process completes or fails.
Here's a primer: https://medium.com/@lfgcampos/saga-pattern-8394e29bbb85
I'd add a 'reserved funds' owned collection to the Ledger Account.
A Ledger Account will have an 'Actual' balance and an 'Available' balance.
The 'Available' balance is the 'Actual' balance less the value of 'reserved funds'.
Using a Saga to manage the flow (a code sketch follows below):
Try to reserve funds on the Account aggregate. The Ledger Account will check its available balance (actual minus the total of reserved funds) and, if sufficient, add another reservation to its collection. If the reservation succeeds, the Account aggregate will return a unique reservation id. If the reservation fails, then the entry cannot be posted.
Try to complete the double-entry bookkeeping. If it fails, send a 'release reservation' command to the Account aggregate quoting the reservation id, which will remove the reservation, and we're back where we started.
After the double-entry bookkeeping is complete, send a command to the Account to 'complete' the reservation, quoting the reservation id. The Account aggregate will then remove the reservation and adjust its actual balance.
In this way, you can manage a distributed transaction without the possibility of an account going overdrawn.
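Here is a minimal sketch of the Account aggregate side of this saga (names such as LedgerAccount and reserveFunds are illustrative, not from any particular framework):

const crypto = require('crypto');

// The Account aggregate owns a collection of reservations. Available
// balance = actual balance minus the total of reserved funds.
class LedgerAccount {
  constructor(actualBalance) {
    this.actualBalance = actualBalance;
    this.reservations = new Map(); // reservationId -> amount
  }

  get availableBalance() {
    let reserved = 0;
    for (const amount of this.reservations.values()) reserved += amount;
    return this.actualBalance - reserved;
  }

  // Saga step 1: reserve funds; returns a unique reservation id, or
  // throws, in which case the entry cannot be posted.
  reserveFunds(amount) {
    if (this.availableBalance < amount) {
      throw new Error('insufficient available balance');
    }
    const reservationId = crypto.randomUUID();
    this.reservations.set(reservationId, amount);
    return reservationId;
  }

  // Compensating action: the double entry failed, so release the hold
  // and we're back where we started.
  releaseReservation(reservationId) {
    this.reservations.delete(reservationId);
  }

  // Saga final step: the double entry committed, so settle the hold
  // and adjust the actual balance.
  completeReservation(reservationId) {
    const amount = this.reservations.get(reservationId);
    this.reservations.delete(reservationId);
    this.actualBalance -= amount;
  }
}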
An aggregate root should serve as a transaction boundary. A multi-legged transaction spans multiple accounts, hence an account cannot be one.
So the ledger itself is an aggregate root. An accounting transaction should correspond to a database transaction.
Actually, "the ledger itself" doesn't mean a singleton. It can be one ledger per org branch and time period, and it usually is in non-computerized event-sourcing systems (think paper bookkeeping).
Update.
A ledger account balance is merely a view into the ledger, and as a view it has a state as of some known event. When deciding whether to accept an operation, you should make sure that the balance view has processed the latest state of the ledger. If it has not, the newer events should be processed first, and then the account operation should be tried again.
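As a rough sketch (every name here is hypothetical, and the catch-up, check, and append would all need to run inside one database transaction, as noted above):

// The balance view remembers the position of the last ledger event it
// has processed.
async function tryAcceptOperation(ledger, balanceView, operation) {
  // Bring the view up to the ledger head before deciding.
  const newerEvents = await ledger.eventsAfter(balanceView.lastSeenEventId);
  for (const event of newerEvents) balanceView.apply(event);

  // Decide against the now-current balance.
  if (balanceView.balanceOf(operation.accountId) < operation.amount) {
    throw new Error('insufficient balance');
  }

  // Accept: the accounting transaction corresponds to one database
  // transaction that appends to the ledger.
  await ledger.append(operation);
}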
I have an operation with X meters, and the number of meters may vary.
For each meter, I have to set a percentage of allocation.
So let's say, in my Operation 1, I have 3 meters, m1, m2, m3, I will assign 10% for m1, 50% for m2, and 40% for m3.
So, in this case, when I receive data from m1, I will want to check that operation 1 and meter 1 exist and that meter 1 belongs to operation 1, and then get the allocation for my meter.
All those settings are present in an external DB (Postgres), and I can get them easily in Golang. The thing is, I have heard that chaincode must be deterministic and that it is good practice not to have any external dependencies. I understand that if the result of your chaincode depends on an external DB, you will not be able to audit it, so the whole blockchain loses a bit of its interest.
Should I hardcode the settings in an array or in a config file? Then each time I have a config change, I must publish my chaincode again? I am not happy about having two configs to keep in sync (the DB plus a config file), as it might quickly lead to mistakes.
What is the recommended way of managing an external DB connection in chaincode?
You could place the "meter information" into the blockchain data store and query it from there. For instance, the application may be used to:
maintain the state of all meters, with whatever information they require. This data is written to the Fabric state store, where it may be queried;
perform an additional transaction that contains the logic required to query the meter information and act accordingly.
In the above case, the chaincode will be able to update meter information and act on information stored via queries and subsequent actions.
Everything is then on-chain, and therefore accessible, updatable, and auditable.
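For instance, a minimal Node.js chaincode along these lines might look like the following (assuming fabric-contract-api; the composite-key scheme and field names are only illustrative):

'use strict';
const { Contract } = require('fabric-contract-api');

class MeterContract extends Contract {
  // Maintain meter state on-chain (e.g. via an admin transaction).
  async setMeter(ctx, operationId, meterId, percentage) {
    const key = ctx.stub.createCompositeKey('meter', [operationId, meterId]);
    const state = { percentage: Number(percentage) };
    await ctx.stub.putState(key, Buffer.from(JSON.stringify(state)));
  }

  // A transaction that queries the on-chain config and acts accordingly.
  async recordReading(ctx, operationId, meterId, value) {
    const key = ctx.stub.createCompositeKey('meter', [operationId, meterId]);
    const raw = await ctx.stub.getState(key);
    if (!raw || raw.length === 0) {
      throw new Error(`meter ${meterId} does not belong to operation ${operationId}`);
    }
    const { percentage } = JSON.parse(raw.toString('utf8'));
    const allocated = Number(value) * percentage / 100;
    // ...store or act on the allocated value, still on-chain...
    return JSON.stringify({ meterId, allocated });
  }
}

module.exports = MeterContract;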
I have been trying to solve an issue: how to prevent another query from being served until a series of transactions has completed.
I am thinking that when a user fires two or more simultaneous requests at the Node server, it might cause issues when reading from and writing to MongoDB, thus posing a security issue. The pseudocode looks like this:
// When the user buys something, check that they have sufficient
// balance, then send a delivery order and deduct the balance.
app.post('/buy', async (req, res) => {
  // Step 1: Get the balance from MongoDB with Mongoose
  const account = await Balance.findOne({ userId: req.user.id });

  // Step 2: If the balance is sufficient, issue a delivery order
  if (account.balance >= price) {
    // ...run delivery order code to deliver the item...

    // Step 3: Deduct the balance, then write it back to the database
    await Balance.findOneAndUpdate(
      { userId: req.user.id },
      { $set: { balance: account.balance - price } }
    );
  }
  res.end();
});
Here lies the problem: if the user fires two or more requests simultaneously, each might get a chance to read the database before any of them writes back. Because of this, they all pass the balance check, and their actions will be successfully completed with delivery orders. If the user only has enough balance to buy one item, this causes a "double spending" problem, because the user will have successfully bought more than once.
My question for this situation is: how do I prevent the next query (which may arrive within milliseconds of the previous one) from running Step 1 until all the steps of the previous query have completed (i.e. until Step 3 is finished)?
The MongoDB documentation mentions concurrency and locking, but it does not state how to use them across a series of operations like the above. It is also unclear whether the so-called "multi-document transactions" are applicable in this situation, and there is little code showing how to use them. Stack Overflow has only a few related questions, but the solutions are a bit vague, and almost all lack solution code to use as a reference.
MongoDB does implement transactions as of 4.0, so you should use them if you are looking for transactional behavior across multiple operations.
The linked page also provides examples in various languages.
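Adapted to your pseudocode, a sketch with Mongoose sessions might look like this (the Balance model, price, and req.user.id are carried over from your example; note that transactions require a replica set, and that withTransaction retries the callback on transient write conflicts):

const mongoose = require('mongoose');

app.post('/buy', async (req, res) => {
  const session = await mongoose.startSession();
  try {
    await session.withTransaction(async () => {
      // Steps 1-3 now execute as one atomic unit. If a concurrent
      // transaction modifies the same balance document, one of them
      // aborts with a write conflict and withTransaction retries it.
      const account = await Balance.findOne({ userId: req.user.id }).session(session);
      if (account.balance < price) throw new Error('insufficient balance');

      // ...run delivery order code to deliver the item...

      account.balance -= price;
      await account.save({ session });
    });
    res.sendStatus(200);
  } catch (err) {
    res.status(400).send(err.message);
  } finally {
    session.endSession();
  }
});

For this particular check-and-deduct, a single conditional update is also atomic by itself, e.g. Balance.findOneAndUpdate({ userId, balance: { $gte: price } }, { $inc: { balance: -price } }), with no transaction needed.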
I am aware there are a lot of topics on set validation, and I won't claim to have read every single one of them, but I've read a lot and still haven't seen a definitive answer that doesn't smell hackish.
Consider this:
we have a concept of Customer
Customer has some general details data
Customer can make Transaction (buying things from the store)
if Customer is in credit mode then he has a limit of how much he can spend in a year
number of Transactions per Customer per year can be huge (thousands+)
it is critical that Customer never spends a cent over a limit (there is no human delivering goods that would check the limit manually)
Customer can either create new Transaction or add items to existing ones and for both the limit must be checked
Customer can actually be a Company behind which there are many Users making actual transactions, meaning Transactions can be created/updated concurrently
Obviously, I want to avoid loading all Transactions for a Customer when creating a new Transaction or editing an existing one, as that doesn't scale well for huge numbers of Transactions.
If I introduce an aggregate dedicated to checking currentLimitSpent before creating/updating a Transaction, then I have a non-transactional create/update (one step to check currentLimitSpent and then another to create/update the Transaction).
I know how to implement this if I don't care about all the DDD rules (or with an eventual consistency approach), but I am wondering if there is some idiomatic DDD way of solving these kinds of problems with strict consistency that doesn't involve loading all Transactions for every Transaction create/update?
it is critical that Customer never spends a cent over a limit (there is no human delivering goods that would check the limit manually)
Please read this couple of posts: "Race Conditions Don't Exist" and "Eventual Consistency".
If the system's owners still think that the condition must be honored, then, to avoid concurrency issues, you could use a precomputed currentLimitSpent stored in persistence (since there is no event-sourcing tag in your question) to check the invariant, and use it as an optimistic concurrency flag:
Hydrate your aggregate with currentLimitSpent and any other data you need from persistence.
Check the rule (currentLimitSpent + newTransactionValue <= customerMaxCredit).
Persist (currentLimitSpent + newTransactionValue) as the new currentLimitSpent.
If currentLimitSpent has changed in persistence while the aggregate was working (many Users in the same Company making transactions), you should get an optimistic concurrency error from persistence.
You could stop on the exception, or rehydrate the aggregate and try again.
This is an overview; it cannot be more detailed without going into tech-stack and architectural design details.
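A rough sketch of that loop (the MongoDB-style persistence is only an example; the version field plays the role of the optimistic concurrency flag, and all names are illustrative):

// `customers` is assumed to be a MongoDB-style collection whose
// documents carry currentLimitSpent, customerMaxCredit, and version.
async function addTransaction(customers, customerId, newTransactionValue) {
  for (let attempt = 0; attempt < 3; attempt++) {
    // 1. Hydrate the aggregate (currentLimitSpent plus its version).
    const customer = await customers.findOne({ _id: customerId });

    // 2. Check the invariant.
    if (customer.currentLimitSpent + newTransactionValue > customer.customerMaxCredit) {
      throw new Error('credit limit exceeded');
    }

    // 3. Persist the new currentLimitSpent; the version we read acts
    //    as the optimistic concurrency flag.
    const result = await customers.updateOne(
      { _id: customerId, version: customer.version },
      {
        $set: { currentLimitSpent: customer.currentLimitSpent + newTransactionValue },
        $inc: { version: 1 },
      }
    );
    if (result.modifiedCount === 1) return; // committed

    // Another User committed first: rehydrate and try again.
  }
  throw new Error('optimistic concurrency conflict; giving up');
}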