I need to know the transaction size to calculate the fee a user is going to spend when sending BTC. I use the bitcoind wallet with many accounts and use the sendtoaddress call to send BTC. Is there any way to know how many outputs bitcoind will use to create the transaction? Or maybe another way to know the transaction size before bitcoind executes it...
In this case you need to create the transaction yourself.
Is there any way to know how many outputs bitcoind will use to create the transaction?
The outputs are defined by you; I guess what you are looking for here are the inputs (UTXOs). For example, sendtoaddress is defined to create a transaction with one output, while sendmany will create a transaction with multiple outputs, as you provide in the parameters.
Using RPC, you can skip the input selection, as in the following example:
# creates a raw transaction with no inputs
bitcoin-cli createrawtransaction "[]" "{\"btc01...receiveraddress\":0.01}"
# fund the transaction with the missing inputs and calculate the fee
bitcoin-cli fundrawtransaction ...hex_response_from_createrawtransaction
fundrawtransaction will add any missing inputs and calculate the fee as you will see in the response.
If you still want the transaction size, you can get it by calling decoderawtransaction with the raw transaction hex you have generated, or just calculate it from the hex itself: divide the length of the hex string by two to get the size in bytes.
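For illustration, here is a minimal sketch in Python (the raw hex placeholder and the assumption that bitcoin-cli is on the PATH are mine, not from the question):
import json
import subprocess

# raw_tx_hex is the hex string returned by createrawtransaction/fundrawtransaction
raw_tx_hex = "0200000001..."  # placeholder for illustration

# Two hex characters encode one byte, so the serialized size is len(hex) / 2
size_bytes = len(raw_tx_hex) // 2

# Alternatively, let bitcoind decode it; recent Bitcoin Core versions report
# "size", "vsize" and "weight" in the decoded JSON (fee rates use vsize)
decoded = json.loads(subprocess.run(
    ["bitcoin-cli", "decoderawtransaction", raw_tx_hex],
    capture_output=True, text=True, check=True).stdout)
print(size_bytes, decoded["size"], decoded["vsize"])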
For more control when creating your transactions, I'd suggest using listunspent and selecting the inputs yourself.
Docs:
createrawtransaction
It's worth taking a look at this transaction size calculator to understand how the transaction size is calculated based on the inputs and outputs.
I'm trying to use application insights to keep track of a counter of number of active streams in my application. I have 2 goals to achieve:
Show the current (or at least recent) number of active streams in a dashboard
Activate a kind of warning if the number exceeds a certain limit.
These streams can be quite long lived, and sometimes brief. So the number can sometimes change say 100 times a second, and sometimes remain unchanged for many hours.
I have been trying to track this active streams count as an application insights metric.
I'm incrementing a counter in my application when a new stream opens, and decrementing it when one closes. On each change I use the telemetry client, something like this:
var myMetric = myTelemetryClient.GetMetric("Metricname");
myMetric.TrackValue(myCount);
When I query my metric values with Kusto, I see that because of these clusters of activity within a 10-second period, my metric values get aggregated. For the purposes of my alarm, I can live with that, as I can look at the max value of the aggregate. But I can't present a dashboard of the number of active streams, as I have no way of knowing the number of active streams between my measurement points. I know the min, max and average values, but I don't know the last value of the aggregate period, and since it can be anywhere between 0 and 1000, it's no help.
Since the solution I have doesn't serve my needs, I thought of a couple of changes:
Adding a scheduled pump to my counter component, which will send the current counter value, once every say 5 minutes. But I don't like that I then have to add a thread for each of these counters.
Adding a timer to send the current value once, 5 minutes after the last change, with the countdown reset each time the counter changes. This has the same problem as above, and does an excessive amount of work resetting the countdown when the counter could be changing thousands of times a second.
In the end, I don't think my needs are all that exotic, so I wonder if I'm using app insights incorrectly.
Is there some way I can change the metric's behavior to suit my purposes? I appreciate that it's pre-aggregating before sending data in order to reduce ingest costs, but it's preventing me from solving a simple problem.
Is a metric even the right way to do this? Are there alternative approaches within app insights?
You can use TrackMetric instead of the GetMetric ceremony to track individual values without aggregation. From the docs:
Microsoft.ApplicationInsights.TelemetryClient.TrackMetric is not the preferred method for sending metrics. Metrics should always be pre-aggregated across a time period before being sent. Use one of the GetMetric(..) overloads to get a metric object for accessing SDK pre-aggregation capabilities. If you are implementing your own pre-aggregation logic, you can use the TrackMetric() method to send the resulting aggregates.
But you can also use events as described next:
If your application requires sending a separate telemetry item at every occasion without aggregation across time, you likely have a use case for event telemetry; see TelemetryClient.TrackEvent (Microsoft.ApplicationInsights.DataContracts.EventTelemetry).
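Purely as an illustration of the unaggregated pattern, here is a minimal sketch using the legacy applicationinsights Python package (the instrumentation key and names are placeholders; in the .NET SDK the equivalent calls are TelemetryClient.TrackMetric and TelemetryClient.TrackEvent):
from applicationinsights import TelemetryClient

tc = TelemetryClient("<instrumentation key>")  # placeholder

# Send the raw counter value on every change, with no client-side pre-aggregation
tc.track_metric("ActiveStreams", 42)

# Or model each change as an event and carry the value as a custom measurement
tc.track_event("ActiveStreamsChanged", measurements={"count": 42})

tc.flush()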
For example, we have bank records, and we use a query to get all of the bank's records. I just want to create a function that simply returns the total number of bank records, returning a number only.
Do you mean the total number of records in CouchDB or just a particular type of record?
Anyhow, I'll propose solutions for both assuming you're using CouchDB as your state DB.
Reading the total number of records present in CouchDB from chaincode will just be a big overhead. You can simply make a GET API call like this: http://couchdb.server.com/mydatabase, and you'd get JSON back looking something like this:
{
"db_name":"mydatabase",
"update_seq":"2786-g1AAAAFreJzLYWBg4MhgTmEQTM4vTc5ISXLIyU9OzMnILy7JAUoxJTIkyf___z8riYGB0RuPuiQFIJlkD1Naik-pA0hpPExpDj6lCSCl9TClwXiU5rEASYYGIAVUPR-sPJqg8gUQ5fvBygMIKj8AUX4frDyOoPIHEOUQt0dlAQB32XIg",
"sizes":{
"file":13407816,
"external":3760750,
"active":4059261
},
"purge_seq":0,
"other": {
"data_size":3760750
},
"doc_del_count":0,
"doc_count":2786,
"disk_size":13407816,
"disk_format_version":6,
"data_size":4059261,
"compact_running":false,
"instance_start_time":"0"
}
From here, you can simply read the doc_count value.
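For instance, from any script with HTTP access to CouchDB (a minimal sketch using Python's requests library; the URL is the placeholder from above and authentication is omitted):
import requests

# Database info endpoint; add auth=("user", "password") if your CouchDB requires it
resp = requests.get("http://couchdb.server.com/mydatabase")
resp.raise_for_status()

db_info = resp.json()
print("Total records:", db_info["doc_count"])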
However, if you want to read the total number of docs in chaincode, then I should mention that it'll be a very costly operation and you might get a timeout error if the number of records is very high. For a particular type of record, you can use the CouchDB selector syntax.
If you want to read all the records, then you can use the getStateByRange(startKey, endKey) method and count them.
I have a question regarding the Python API of Interactive Brokers.
Can multiple asset and stock contracts be passed into reqMktData() function and obtain the last prices? (I can set the snapshots = TRUE in reqMktData to get the last price. You can assume that I have subscribed to the appropriate data services.)
To put things in perspective, this is what I am trying to do:
1) Call reqMktData, get last prices for multiple assets.
2) Feed the data into my prediction engine, and do something
3) Go to step 1.
When I contacted Interactive Brokers, they said:
"Only one contract can be passed to reqMktData() at one time, so there is no bulk request feature in requesting real time data."
Obviously one way to get around this is to do a loop but this is too slow. Another way to do this is through multithreading but this is a lot of work plus I can't afford the extra expense of a new computer. I am not interested in either one.
Any suggestions?
You can only specify 1 contract in each reqMktData call. There is no choice but to use a loop of some type. The speed shouldn't be an issue as you can make up to 50 requests per second, maybe even more for snapshots.
The speed issue could be that you're requesting too much data (>50 requests/s) or that you're using an old version of the IB Python API; check connection.py for lock.acquire, I've deleted all of them. Also, if there has been no trade for more than 10 seconds, IB will wait for a trade before sending a snapshot, so test with active symbols.
However, what you should do is request live streaming data by setting snapshot to false and just keep track of the last price in the stream. You can stream up to 100 tickers with the default minimums. You keep them separate by using unique ticker ids.
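Here is a minimal sketch of that streaming approach using the official ibapi package (the symbols, port and client id are placeholders, and error handling is omitted):
from threading import Thread

from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract

class LastPriceApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        self.last_prices = {}  # ticker id -> last traded price

    def tickPrice(self, reqId, tickType, price, attrib):
        if tickType == 4:  # 4 = LAST; keep only the most recent value per ticker id
            self.last_prices[reqId] = price

def stock(symbol):
    c = Contract()
    c.symbol = symbol
    c.secType = "STK"
    c.exchange = "SMART"
    c.currency = "USD"
    return c

app = LastPriceApp()
app.connect("127.0.0.1", 7497, clientId=1)  # TWS paper-trading port, adjust as needed
Thread(target=app.run, daemon=True).start()

# One unique ticker id per contract; snapshot=False requests a continuous stream
for ticker_id, symbol in enumerate(["AAPL", "MSFT", "IBM"], start=1):
    app.reqMktData(ticker_id, stock(symbol), "", False, False, [])

# app.last_prices fills up as ticks arrive; read it from your prediction loop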
We are making 200 transactions in a loop, sending ether from one address to another. All transactions should execute and return either success or failure.
But some transactions are not executing, i.e. we are not getting any result for those transactions, neither success nor failure.
Steps to reproduce the behavior
Make 200 transactions in a loop to send ether from one address to another address
eth.sendTransaction({
    from: privateWeb3.eth.coinbase,
    to: result,
    value: privateWeb3.toWei(2, 'ether')
});
Check the total number of results.
The total number of results will be less than the total number of transactions submitted.
A common cause of this is duplicated nonces. Each transaction includes a consecutively increasing number called a nonce. If you generate transactions too fast and geth doesn't update its pending nonce fast enough, it will reuse the last one, so you will generate two transactions with the same nonce, and in that case geth will reject one of them.
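A minimal sketch of explicit nonce management using web3.py (v6-style names) rather than the web3.js shown above; the node URL and receiver address are placeholders, and the sending account is assumed to be unlocked on the node, as in the original snippet:
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # your geth node

sender = w3.eth.coinbase
receiver = "0x0000000000000000000000000000000000000000"  # placeholder

# Fetch the starting nonce once, then assign consecutive nonces ourselves
base_nonce = w3.eth.get_transaction_count(sender)

tx_hashes = []
for i in range(200):
    tx_hashes.append(w3.eth.send_transaction({
        "from": sender,
        "to": receiver,
        "value": w3.to_wei(2, "ether"),
        "nonce": base_nonce + i,  # no two transactions share a nonce
    }))

# Optionally wait for the receipts to confirm that every transaction was mined
receipts = [w3.eth.wait_for_transaction_receipt(h) for h in tx_hashes]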
I am new to Apex. I want to write a before insert trigger in Apex. I have two standard objects (Contact, Opportunity).
SELECT sum(amount), Bussiness__c FROM opportunity
WHERE stagename='Closed Won' and id='006i000000Kt683AAB' GROUP BY Bussiness__c
When the trigger runs, I want it to get the SUM(Amount) and Bussiness__c values and then update the Contact's Total_Business__c field with the SUM(Amount) value. Here Bussiness__c is the Contact Id on the Opportunity object.
Thanks in advance; waiting for your response.
I'm assuming you don't have multiple currencies enabled in your org (if you see "CurrencyIsoCode" somewhere on your objects, you'll have to modify this design a bit).
I am a lazy person, and you didn't write anything about the amount of data you expect. What I've written will work when there's a reasonable number of Opportunities per Contact. If you start hitting the governor limit of 50K query rows, it'd have to be done differently (I'll write a bit about that at the end).
I am not going to give you a ready-made solution, because a "homemade rollup summary" is one of the assignments you might encounter during the SF DEV 501 certification. I'll just outline some pointers and food for thought.
I wouldn't do it in before insert; it's easier in after insert, after update (you didn't think about recalculation when the Amount changes, did you?). Something should also be said about after delete, after undelete if your users are allowed to delete Opportunities.
First thing is to build a set of "contacts we'll have to recalculate":
Set<Id> contactIds = new Set<Id>();
// Trigger.old is null on insert and Trigger.new is null on delete, so guard both
if(Trigger.old != null){
    for(Opportunity o : Trigger.old){
        contactIds.add(o.Business__c);
    }
}
if(Trigger.new != null){
    for(Opportunity o : Trigger.new){
        contactIds.add(o.Business__c);
    }
}
contactIds.remove(null); // ignore Opportunities without a Contact
This forces recalculation for all related Contacts and ignores Opportunities without a Contact. It will always fire... which is not the best thing, because on insert, delete and undelete you'd want it to fire always, but on update you'd want it to fire only when the Amount or the Contact changes (trigger.old will hold a different Contact than trigger.new). You can control these scenarios using things like Trigger.isUpdate; read up about it.
Anyway - you've got a unique set of Contact Ids. I said I'd do it in an "after" trigger because at that point the new Amount is already saved to the database and you can query it back:
SELECT Business__c, SUM(Amount) sumAmount
FROM Opportunity
WHERE Business__c IN :contactIds
This type of query returns AggregateResult records that you'll have to parse like this:
List<Contact> contactsToUpdate = new List<Contact>();
for(AggregateResult ar : [SELECT Business__c, SUM(Amount) sumAmount
        FROM Opportunity
        WHERE Business__c IN :contactIds]){
    System.debug(ar);
    contactsToUpdate.add(new Contact(
        Id = (Id) ar.get('Business__c'),
        Total_Business__c = (Double) ar.get('sumAmount')
    ));
}
update contactsToUpdate;
As I said - it's a basic outline, should get you started.
This thing queries all Opportunities for the given Contacts. Your trigger can fire on at most 200 Opps. Imagine a situation where you change the Contact on all 200 Opps -> that gives you up to 400 Contacts you need to update, to clear/fix the old value and to set the new one. With the 50K row limit, and assuming no other business logic is triggered (like an update of Accounts, or an action that started because some Opportunity Products were added), it gives you problems when on average one Contact is involved in 125 Opps. It sounds like a ridiculous problem, but there are scenarios where you need to do it differently.
In such cases you can attack it from another angle. You don't really need to query all Opps for a given Contact; that's the lazy way. You could instead read the current value of the total business (treat it as 0 if it happens to be null) and then add/subtract all changes to the Amount as needed, looking only at your trigger.old and trigger.new. It makes for more code and more planning upfront, but the performance will increase significantly and this solution will scale as the number of Opps grows (it'll continue to look at only the current maximum of 200 Opps in the trigger's scope).
Another approach would be to accept some delay in this rollup summary and write a batch job for it.