Storing time in chaincode - hyperledger-fabric

I am currently developing a chaincode and have a question about storing dates.
If I have something like this:
result := XX{
    Timestamp:    time.Now().Format(time.RFC3339Nano), // wall-clock time, read independently on each peer
    ChangeSource: sourceOfChange,
}
bytes, _ := json.Marshal(result) // PutState takes a key and a []byte value
stub.PutState("result", bytes)
Given that endorsement requires consensus, will that work?
Will the Timestamp be equal across all the peers? Will this pass endorsement?

No, it will not work. The reason is that once the chaincode is executed, each peer's response is sent back to the client, which checks whether all the responses are the same. If they differ, which they will in your case because each peer calls time.Now() independently, the transaction is not sent for ordering.
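Instead, derive the timestamp from the transaction proposal via GetTxTimestamp, which returns the same value on every endorsing peer. A minimal sketch using the Go chaincode shim, reusing the struct shape from your snippet (the saveResult helper name is made up):

import (
	"encoding/json"
	"time"

	"github.com/hyperledger/fabric-chaincode-go/shim"
)

// XX mirrors the struct shape from the question.
type XX struct {
	Timestamp    string
	ChangeSource string
}

func saveResult(stub shim.ChaincodeStubInterface, sourceOfChange string) error {
	ts, err := stub.GetTxTimestamp() // taken from the signed proposal, identical on every peer
	if err != nil {
		return err
	}
	result := XX{
		Timestamp:    time.Unix(ts.Seconds, int64(ts.Nanos)).UTC().Format(time.RFC3339Nano),
		ChangeSource: sourceOfChange,
	}
	bytes, err := json.Marshal(result)
	if err != nil {
		return err
	}
	return stub.PutState("result", bytes) // PutState takes a key and a []byte value
}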

Related

ProposalResponsePayloads do not match - ERC-1155 Chaincode / fabric-samples

I am following the ERC-1155 chaincode example for Fabric. When I run the BatchTransferFrom part, it sometimes gives an error and sometimes runs successfully. I cannot understand why it fails intermittently. Is this error normal when invoking chaincode functions on Fabric?
The error is:
Error: could not assemble transaction: ProposalResponsePayloads do not match - proposal response: version:1 response:<status:200 > payload: ...
When I call the command using Fabric Node SDK API, it gives the following error:
2021-08-30T09:59:41.794Z - error: [DiscoveryHandler]: compareProposalResponseResults[undefined] - read/writes result sets do not match index=1
2021-08-30T09:59:41.794Z - error: [Transaction]: Error: No valid responses from any peers. Errors:
peer=undefined, status=grpc, message=Peer endorsements do not match
When you perform a transaction, all the responses from the different endorsements in the transaction must match.
For some reason, this is not happening with your proposals: different peers return different responses.
I don't know about that specific chaincode, but common causes are:
Using pseudo-random values.
Using current timestamps instead of the ones from the transaction or block.
Serializing JSON non-deterministically, so that the same data results in different strings because elements are serialized in a different order.
Etc.
I found the cause of the problem: iterating over maps in Go is not deterministic, and the function BatchTransferFrom uses maps. The map is iterated in a different order on different peers, which causes the proposals to differ.
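For reference, the usual fix is to sort the map keys before iterating, so every peer processes the entries in the same order. A minimal standalone sketch (the map contents are made up for illustration):

package main

import (
	"fmt"
	"sort"
)

func main() {
	amounts := map[string]uint64{"token3": 5, "token1": 10, "token2": 7}

	// Collect and sort the keys so the iteration order is deterministic.
	keys := make([]string, 0, len(amounts))
	for k := range amounts {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, k := range keys {
		fmt.Println(k, amounts[k]) // same order on every peer
	}
}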

Estimate tx size

I need to know the transaction size to calculate the fee a user is going to spend when sending BTC. I use the bitcoind wallet with many accounts and use the sendtoaddress call to send BTC. Is there any way to know how many outputs bitcoind will use to create the transaction? Or maybe another way to know the transaction size before bitcoind executes it...
In this case you need to create the transaction yourself.
Is there any way to know how many outputs bitcoind will use to create transaction?
The outputs are defined by you; I guess what you are looking for here are the inputs (UTXOs). For example, sendtoaddress is defined to create a transaction with one output, while sendmany will create a transaction with multiple outputs, as provided in its parameters.
Using RPC, you can skip the input selection as in the following example:
# create a raw transaction with no inputs
bitcoin-cli createrawtransaction "[]" "{\"btc01...receiveraddress\":0.01}"
# fund the transaction with the missing inputs and calculate the fee
bitcoin-cli fundrawtransaction ...hex_from_createrawtransaction
fundrawtransaction will add any missing inputs and calculate the fee, as you will see in the response.
If you still want the transaction size, you can get it by calling decoderawtransaction with the hex you have generated, or just calculate it from the hex itself: each byte is encoded as two hex characters, so divide the length of the hex string by two.
For more control when creating your transactions, I'd suggest you use listunspent and select the inputs yourself.
Docs:
createrawtransaction
It's worth taking a look at this transaction size calculator to understand how the transaction size is calculated from the inputs and outputs.
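If you are scripting this flow rather than using bitcoin-cli, the same two calls work over bitcoind's JSON-RPC interface. A rough sketch in Go; the endpoint, credentials, and receiver address are placeholders to replace with your own:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// rpcCall sends a single JSON-RPC request to a local bitcoind node.
func rpcCall(method string, params []interface{}, out interface{}) error {
	body, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "1.0", "id": "fee-estimate", "method": method, "params": params,
	})
	req, err := http.NewRequest("POST", "http://127.0.0.1:8332/", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.SetBasicAuth("rpcuser", "rpcpassword") // placeholder credentials
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var wrapper struct {
		Result json.RawMessage `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&wrapper); err != nil {
		return err
	}
	return json.Unmarshal(wrapper.Result, out)
}

func main() {
	// Step 1: a raw transaction with no inputs and one output.
	var rawHex string
	outputs := map[string]float64{"btc01...receiveraddress": 0.01}
	if err := rpcCall("createrawtransaction", []interface{}{[]interface{}{}, outputs}, &rawHex); err != nil {
		panic(err)
	}

	// Step 2: let bitcoind pick the inputs and compute the fee.
	var funded struct {
		Hex string  `json:"hex"`
		Fee float64 `json:"fee"`
	}
	if err := rpcCall("fundrawtransaction", []interface{}{rawHex}, &funded); err != nil {
		panic(err)
	}

	// Two hex characters per byte, so the size is half the hex length.
	fmt.Printf("fee: %f BTC, size: %d bytes\n", funded.Fee, len(funded.Hex)/2)
}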

How to write a chaincode function that simply counts the total records and returns the number - hyperledger-fabric

For example, we have bank records and we use a query to get all of them. I just want to create a function that simply returns the total number of bank records, nothing else.
Do you mean the total number of records in CouchDB or just a particular type of record?
Anyhow, I'll propose solutions for both assuming you're using CouchDB as your state DB.
Reading the total number of records present in CouchDB from chaincode would just be a big overhead. You can simply make a GET API call like this: http://couchdb.server.com/mydatabase, and you'd get back JSON looking something like this:
{
  "db_name": "mydatabase",
  "update_seq": "2786-g1AAAAFreJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8riYGB0RuPuiQFIJlkD1Naik-pA0hpPExpDj6lCSCl9TClwXiU5rEASYYGIAVUPR-sPJqg8gUQ5fvBygMIKj8AUX4frDyOoPIHEOUQt0dlAQB32XIg",
  "sizes": {
    "file": 13407816,
    "external": 3760750,
    "active": 4059261
  },
  "purge_seq": 0,
  "other": {
    "data_size": 3760750
  },
  "doc_del_count": 0,
  "doc_count": 2786,
  "disk_size": 13407816,
  "disk_format_version": 6,
  "data_size": 4059261,
  "compact_running": false,
  "instance_start_time": "0"
}
From here, you can simply read the doc_count value.
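If you want to automate that check outside chaincode, a minimal Go sketch (the database URL is the hypothetical one from above; add authentication if your CouchDB requires it):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// A GET on the database URL returns the metadata shown above.
	resp, err := http.Get("http://couchdb.server.com/mydatabase")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var meta struct {
		DocCount int `json:"doc_count"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&meta); err != nil {
		panic(err)
	}
	fmt.Println("total records:", meta.DocCount)
}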
However, if you want to read the total number of docs in chaincode, then I should mention that it will be a very costly operation and you might get a timeout error if the number of records is very high. For a particular type of record, you can use CouchDB's selector syntax.
If you want to count all the records, you can use the getStateByRange(startKey, endKey) method and count the results, as in the sketch below.
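A minimal sketch of such a counting function with the Go chaincode shim; passing two empty strings to GetStateByRange iterates over every key in the state DB:

import "github.com/hyperledger/fabric-chaincode-go/shim"

func countRecords(stub shim.ChaincodeStubInterface) (int, error) {
	iter, err := stub.GetStateByRange("", "") // open range: every key in the state DB
	if err != nil {
		return 0, err
	}
	defer iter.Close()

	count := 0
	for iter.HasNext() {
		if _, err := iter.Next(); err != nil {
			return 0, err
		}
		count++
	}
	return count, nil
}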

How to fix a race condition in a Node.js + Redis + MongoDB web application

I am building a web application that will process many transactions a second. I am using an Express server with Node.js. On the database side, I am using Redis to store user attributes that fluctuate continuously based on stock prices, and MongoDB to store semi-permanent attributes like order configuration, user configuration, etc.
I am hitting a race condition when multiple orders placed by a user are processed at the same time, even though only one of them should have been eligible: the check on the Redis attribute that stores the margin would not have allowed both transactions.
The other issue is that my application logic interleaves Redis and MongoDB read and write calls. So how would I go about solving a race condition across both DBs?
I am thinking of trying WATCH and MULTI + EXEC on Redis to make sure only one transaction happens at a time for a given user.
Or I could set up a queue on Node/Redis that processes orders one by one. I am not sure which is the right approach, or how to go about implementing it.
This is all pseudocode; the actual application logic is a lot more complex, with multiple conditions.
I feel like my entire application logic is a critical section (which I think is a bad thing).
// The server receives a request from a client to place an order
getAvailableMargin(user.username).then((margin) => { // Redis call to fetch the user's margin; it fluctuates a lot, so it is stored in Redis
  if (margin > 0) {
    const o = { // prepare an order
      user: user.username,
      price: orderPrice,
      symbol: symbol
    };
    const order = new Order(o);
    order.save((err, o) => { // create the new order in MongoDB
      if (err) {
        return next(err);
      }
      User.findByIdAndUpdate(user._id, {
        $inc: {
          balance: pl
        }
      }).exec(); // update the balance in MongoDB (exec() actually runs the query)
      decreaseMargin(user.username); // decrease the user's margin in Redis
    });
  }
});
Consider that the margin is 1 and each new order decreases it by 1.
Now if two requests are received simultaneously, the margin in Redis will read 1 for both requests, causing a race condition. Two orders will also end up open in MongoDB as a result, when in fact the margin should have become 0 at the end of the first order and the second order should have been rejected.
Another issue is that the user's balance in MongoDB has now been updated twice, once for each order.
The expectation is that one of the orders should not execute, and a retry should happen after checking the new margin in Redis. The user's balance should also be updated only once.
Basically, would I need to implement a watch on both Redis and MongoDB and somehow retry a transaction if any of the watched fields/docs change?
Is that even possible? Or is there a much simpler solution that I might be missing?
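For what it's worth, the WATCH and MULTI + EXEC idea from the question is the standard optimistic-locking pattern in Redis: watch the margin key, read it, queue the decrement, and retry if another request touched the key in between. A rough sketch of the pattern in Go with the go-redis client, purely for illustration (the margin: key layout and retry count are made up; node-redis exposes the same WATCH/MULTI/EXEC primitives):

import (
	"context"
	"errors"

	"github.com/redis/go-redis/v9"
)

// placeOrder decrements the user's margin atomically, retrying when a
// concurrent request modified the key between WATCH and EXEC.
func placeOrder(ctx context.Context, rdb *redis.Client, username string) error {
	key := "margin:" + username // hypothetical key layout

	txn := func(tx *redis.Tx) error {
		margin, err := tx.Get(ctx, key).Int()
		if err != nil {
			return err
		}
		if margin <= 0 {
			return errors.New("insufficient margin")
		}
		// Commands queued here only execute if `key` is unchanged since WATCH.
		_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Decr(ctx, key)
			return nil
		})
		return err
	}

	for attempt := 0; attempt < 3; attempt++ { // bounded retries on contention
		err := rdb.Watch(ctx, txn, key)
		if err == nil {
			return nil // margin reserved; safe to create the order in MongoDB now
		}
		if !errors.Is(err, redis.TxFailedErr) {
			return err // insufficient margin or a real failure
		}
		// Someone else changed the key; loop and re-check the fresh margin.
	}
	return errors.New("too much contention, giving up")
}

The MongoDB order insert and balance update would then run only after the Redis transaction commits, so the margin check acts as the single gatekeeper and the losing request retries against the updated margin.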

Geth 'sendTransaction' not working for some transactions when making many transactions in a loop

We are making 200 transactions in a loop, sending ether from one address to another. Every transaction should execute and return either success or failure.
But some transactions are not executing, i.e. we are not getting any result for them, neither success nor failure.
Steps to reproduce the behavior
Make 200 transactions in a loop to send ether from one address to another:
eth.sendTransaction({
  from: privateWeb3.eth.coinbase,
  to: result,
  value: privateWeb3.toWei(2, 'ether')
})
Check the total number of results.
The total number of results will be less than the total number of transactions submitted.
A common cause of this is duplicated nonces. Each transaction includes a consecutively increasing number called a nonce. If you generate transactions too fast and geth doesn't update its pending nonce quickly enough, it will reuse the last one. You then end up with two transactions sharing the same nonce, in which case geth will reject one of them.
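The usual fix is to manage the nonce yourself: fetch the account's pending nonce once before the loop, then assign nonce, nonce+1, nonce+2, ... explicitly (in the geth console this means passing an explicit nonce field to eth.sendTransaction). A rough sketch of the same idea in Go with go-ethereum's ethclient; the endpoint, private key, and recipient are placeholders:

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

func sendBatch(ctx context.Context) error {
	client, err := ethclient.Dial("http://127.0.0.1:8545") // placeholder endpoint
	if err != nil {
		return err
	}
	key, err := crypto.HexToECDSA("...private key hex...") // placeholder key
	if err != nil {
		return err
	}
	from := crypto.PubkeyToAddress(key.PublicKey)
	to := common.HexToAddress("0x...recipient...") // placeholder recipient

	// Fetch the pending nonce once, then increment it locally so no two
	// transactions in the loop ever share a nonce.
	nonce, err := client.PendingNonceAt(ctx, from)
	if err != nil {
		return err
	}
	chainID, err := client.ChainID(ctx)
	if err != nil {
		return err
	}
	gasPrice, err := client.SuggestGasPrice(ctx)
	if err != nil {
		return err
	}

	value := new(big.Int).Mul(big.NewInt(2), big.NewInt(1e18)) // 2 ether in wei
	for i := 0; i < 200; i++ {
		tx := types.NewTransaction(nonce+uint64(i), to, value, 21000, gasPrice, nil)
		signed, err := types.SignTx(tx, types.NewEIP155Signer(chainID), key)
		if err != nil {
			return err
		}
		if err := client.SendTransaction(ctx, signed); err != nil {
			return err
		}
	}
	return nil
}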
