Geth 'sendTransaction' not working for some transactions when making many transactions in a loop - p2p

We are making 200 transactions in a loop to send ether from one address to another; every transaction should execute and return either success or failure.
But some transactions are not executing, i.e. we get no result for them, neither success nor failure.
Steps to reproduce the behavior
Make 200 transactions in a loop to send ether from one address to another:
privateWeb3.eth.sendTransaction({
  from: privateWeb3.eth.coinbase,
  to: result,
  value: privateWeb3.toWei(2, 'ether')
})
Check the total number of results.
The total number of results will be less than the number of transactions submitted.

A common cause of this is duplicate nonces. Each transaction includes a consecutively increasing number called the nonce. If you generate transactions faster than geth updates its pending state, it will reuse the last nonce, so you end up generating two transactions with the same nonce, and geth will reject one of them.
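One common workaround is to fetch the account's transaction count once and assign nonces yourself. A minimal sketch using the same web3 0.x-style API as the question (recipients is a hypothetical array of target addresses):

// Fetch the next usable nonce once, then increment it manually so that
// rapid-fire transactions never reuse the same nonce.
var startNonce = privateWeb3.eth.getTransactionCount(privateWeb3.eth.coinbase);

recipients.forEach(function (address, i) {
  privateWeb3.eth.sendTransaction({
    from: privateWeb3.eth.coinbase,
    to: address,
    value: privateWeb3.toWei(2, 'ether'),
    nonce: startNonce + i // explicit nonce avoids duplicates
  });
});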

Related

Estimate tx size

I need to know the transaction size to calculate the fee a user is going to spend when sending BTC. I use a bitcoind wallet with many accounts and the sendtoaddress call to send BTC. Is there any way to know how many outputs bitcoind will use to create the transaction? Or maybe another way to know the transaction size before bitcoind executes it...
In this case you need to create the transaction yourself.
Is there any way to know how many outputs bitcoind will use to create the transaction?
The outputs are defined by you; I guess what you are looking for here are the inputs (UTXOs). For example, sendtoaddress is defined to create a transaction with one output, while sendmany will create a transaction with the multiple outputs you provide in the parameters.
Using RPC, you can skip the input selection by doing as the following example:
# create a raw transaction with no inputs
bitcoin-cli createrawtransaction "[]" "{\"btc01...receiveraddress\":0.01}"
# fund the transaction with the missing inputs and calculate the fee
bitcoin-cli fundrawtransaction ...hex_response_from_createrawtransaction
fundrawtransaction will add any missing inputs and calculate the fee as you will see in the response.
If you still want the transaction size, you can get it by calling decoderawtransaction with the hex you have generated, or just calculate it from the hex itself: divide the length of the hex string by two, since each byte is two hex characters.
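As a quick sketch in plain JavaScript (rawHex standing in for the hex string returned by createrawtransaction or fundrawtransaction):

// A raw transaction's size in bytes is half the length of its
// hex serialization, since each byte is two hex characters.
function txSizeBytes(rawHex) {
  return rawHex.length / 2;
}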
For more control while creating your transactions, I'd suggest you use listunspent and select the inputs yourself.
Docs:
createrawtransaction
It's worth having a look at a transaction size calculator to understand how the size is calculated from the inputs and outputs.

All transactions fail in DynamoDB when the condition for one of the transactions fails

Data in the products table in DynamoDB:
[
{
productId:123,
quantity:50
},
{
productId:4565,
quantity:10
}
// more items
]
Now, a client can order one or more products at once. Suppose the client orders products 123 and 4565 with quantities 30 and 12 respectively.
The client can purchase product 123, but not product 4565, because its available quantity is lower than what the client wants.
I am using the AWS docClient and its transactWrite() method to achieve this. The problem with transactWrite is that if one of the conditions fails, all the writes fail.
Implementation of atomic transactions in DynamoDB
ConditionExpression for transactWrite
// #QN - quantity attribute
// :val - quantity entered by the client
ConditionExpression: '#QN >= :val'
Basically, I want to update each product that has enough quantity available and get back some information about the ones that don't.
Is there any way to achieve this, or do I have to call documentClient.update() manually for every product?
The whole point of using a transaction is to ensure that if one part fails, nothing is changed; that's the very definition of a "transaction" in a DB.
It seems like you should just use BatchWriteItem():
The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter.
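Note that BatchWriteItem only supports unconditional puts and deletes, so if you need a per-item ConditionExpression, falling back to individual update() calls, as the question already suggests, is the usual route. A minimal sketch with the docClient API from the question (the table name, attribute names, and the placeOrder wrapper are illustrative):

const AWS = require('aws-sdk');
const dc = new AWS.DynamoDB.DocumentClient();

// Decrements stock for each ordered product independently; products whose
// condition fails are collected instead of aborting the whole batch.
async function placeOrder(items) {
  const failed = [];
  for (const { productId, quantity } of items) {
    try {
      await dc.update({
        TableName: 'products', // illustrative table name
        Key: { productId },
        UpdateExpression: 'SET #QN = #QN - :val',
        ConditionExpression: '#QN >= :val',
        ExpressionAttributeNames: { '#QN': 'quantity' },
        ExpressionAttributeValues: { ':val': quantity }
      }).promise();
    } catch (err) {
      if (err.code === 'ConditionalCheckFailedException') {
        failed.push(productId); // not enough stock for this product
      } else {
        throw err;
      }
    }
  }
  return failed; // products that could not be fulfilled
}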

How to write a function in chaincode that simply counts the total records and returns the number (Hyperledger Fabric)

For example, we have bank records and we use a query to get all of them. I just want to create a function that simply returns the total count of bank records, a number only.
Do you mean the total number of records in CouchDB or just a particular type of record?
Anyhow, I'll propose solutions for both assuming you're using CouchDB as your state DB.
Reading the total number of records present in CouchDB from chaincode would just be a big overhead. You can simply make a GET API call like this: http://couchdb.server.com/mydatabase and you'd get JSON back looking something like this:
{
  "db_name": "mydatabase",
  "update_seq": "2786-g1AAAAFreJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8riYGB0RuPuiQFIJlkD1Naik-pA0hpPExpDj6lCSCl9TClwXiU5rEASYYGIAVUPR-sPJqg8gUQ5fvBygMIKj8AUX4frDyOoPIHEOUQt0dlAQB32XIg",
  "sizes": {
    "file": 13407816,
    "external": 3760750,
    "active": 4059261
  },
  "purge_seq": 0,
  "other": {
    "data_size": 3760750
  },
  "doc_del_count": 0,
  "doc_count": 2786,
  "disk_size": 13407816,
  "disk_format_version": 6,
  "data_size": 4059261,
  "compact_running": false,
  "instance_start_time": "0"
}
From here, you can simply read the doc_count value.
However, if you want to read the total number of docs in chaincode, I should mention that it will be a very costly operation and you might get a timeout error if the number of records is very high. For a particular type of record, you can use the CouchDB selector syntax.
If you want to count all the records, you can use the getStateByRange(startKey, endKey) method and count the results, as sketched below.
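A minimal sketch of such a counting function in Node.js chaincode (fabric-contract-api style; the contract name is illustrative, and as noted above this walks the entire world state):

'use strict';
const { Contract } = require('fabric-contract-api');

class BankContract extends Contract {
  // Iterates over every key in the world state and counts the entries.
  // Costly on large datasets, so expect timeouts with many records.
  async countRecords(ctx) {
    const iterator = await ctx.stub.getStateByRange('', '');
    let count = 0;
    let res = await iterator.next();
    while (!res.done) {
      count++;
      res = await iterator.next();
    }
    await iterator.close();
    return count.toString();
  }
}

module.exports = BankContract;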

How to use synchronous messages on a RabbitMQ queue?

I have a node.js function that needs to be executed for each order in my application. In this function my app gets an order number from an Oracle database, processes the order, and then adds +1 to that number in the database (this needs to be the last thing in the function, because an order can fail and then the number should not be used).
If all orders received at time T are processed at the same time (asynchronously), then the same order number will be used for multiple orders, and I don't want that.
So I used RabbitMQ to try to remedy this situation, since it is a queue. The processes seem to finish in the order they should, but a second process does NOT wait for the first one to finish (ack) before it begins, so in the end I'm having the same problem of using the same order number multiple times.
Is there any way I can configure my queue to process one message at a time? To only start processing message n+1 when message n has been acknowledged?
This would be a life saver to me!
If the problem is to avoid duplicate order numbers, then use an Oracle sequence, or use an identity column when you insert into a table to generate the order number:
CREATE TABLE mytab (
  id   NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1),
  data VARCHAR2(20));

INSERT INTO mytab (data) VALUES ('abc');
INSERT INTO mytab (data) VALUES ('def');

SELECT * FROM mytab;
This will give:
        ID DATA
---------- --------------------
         1 abc
         2 def
If the problem is that you want orders to be processed sequentially, then don't pull an order from the queue until the previous one is finished. This will limit your throughput, so you need to understand your requirements and make some architectural decisions.
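With RabbitMQ specifically, consumer prefetch gives you exactly this: set the prefetch count to 1 and use manual acknowledgements, and the broker will not deliver message n+1 to the consumer until message n has been acked. A minimal sketch with the amqplib Node.js client (the queue name and processOrder handler are illustrative):

const amqp = require('amqplib');

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('orders', { durable: true });

  // Deliver at most one unacknowledged message at a time: the next
  // order is only dispatched after the previous one is acked.
  await channel.prefetch(1);

  channel.consume('orders', async (msg) => {
    try {
      await processOrder(msg.content.toString()); // your order logic here
      channel.ack(msg);
    } catch (err) {
      channel.nack(msg, false, true); // requeue the order on failure
    }
  }, { noAck: false });
})();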
Overall, it sounds like Oracle Advanced Queuing would be a good fit. See the node-oracledb documentation on AQ.

Getting Multiple Last Price Quotes from Interactive Brokers's API

I have a question regarding the Python API of Interactive Brokers.
Can multiple asset and stock contracts be passed into the reqMktData() function to obtain the last prices? (I can set snapshot = True in reqMktData to get the last price. You can assume that I have subscribed to the appropriate data services.)
To put things in perspective, this is what I am trying to do:
1) Call reqMktData and get last prices for multiple assets.
2) Feed the data into my prediction engine and do something.
3) Go to step 1.
When I contacted Interactive Brokers, they said:
"Only one contract can be passed to reqMktData() at one time, so there is no bulk request feature in requesting real time data."
Obviously one way to get around this is a loop, but that is too slow. Another way is multithreading, but that is a lot of work, plus I can't afford the extra expense of a new computer. I am not interested in either one.
Any suggestions?
You can only specify one contract in each reqMktData call, so there is no choice but to use a loop of some type. The speed shouldn't be an issue, as you can make up to 50 requests per second, maybe even more for snapshots.
The speed issue could be that you want too much data (> 50/s) or that you're using an old version of the IB Python API; check connection.py for lock.acquire, I've deleted all of them. Also, if there has been no trade for more than 10 seconds, IB will wait for a trade before sending a snapshot, so test with active symbols.
However, what you should do is request live streaming data by setting snapshot to false and just keep track of the last price in the stream. You can stream up to 100 tickers with the default limits, and you keep the streams separate by using unique ticker IDs.
