I am working on a financial project where I am trying to store the amount for my transactions. Prices can have at most two decimal places. I am trying to choose a schema type for my amount field: I first thought of Number with rounding to 2 decimals; another option is to store them as the NumberDecimal (Decimal128) type.
Now, as my prices only go up to 2 decimals, should I stick with Number rounded to 2, or are there cases where the decimals can get rounded off incorrectly?
Also, is there any difference in the number of bytes needed to store a value as Number versus NumberDecimal?
Thanks
Use NumberDecimal, of course. One should never use regular floating-point numbers for money, because they cannot represent most decimal fractions exactly. As for storage size: a BSON double takes 8 bytes, while a Decimal128 (NumberDecimal) takes 16 bytes.
Demonstration:
db.numbers.insert({fp: 0.1, dec: NumberDecimal('0.1')})
db.numbers.insert({fp: 0.2, dec: NumberDecimal('0.2')})
db.numbers.aggregate([
    {
        $group: {
            _id: 1,
            total_fp: { $sum: "$fp" },
            total_dec: { $sum: "$dec" }
        }
    }
])
// { "_id" : 1, "total_fp" : 0.30000000000000004, "total_dec" : NumberDecimal("0.3") }
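If you are defining the field in a Mongoose schema (the question mentions choosing a schema type), here is a minimal sketch, assuming a Mongoose version with Decimal128 support; the model and field names are illustrative:

var mongoose = require('mongoose');

var transactionSchema = new mongoose.Schema({
    // Stored as BSON Decimal128, which the shell displays as NumberDecimal
    amount: mongoose.Schema.Types.Decimal128
});

var Transaction = mongoose.model('Transaction', transactionSchema);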
Related
I'm writing indexing policies for my collection and am trying to figure out what the right "Precision" is for String in a Hash Index, i.e.
collection.IndexingPolicy.IncludedPaths.Add(
    new IncludedPath {
        Path = "/customId/?",
        Indexes = new Collection<Index> {
            new HashIndex(DataType.String) { Precision = 20 }
        }
    });
There will be around 10,000 distinct customId values, so what is the right "Precision"? And what if it grows to more than 100,000,000 ids?
As Andrew Liu said in this thread: "The indexing precision for a hash index indicates the number of bytes to hash the property value to."
And as we know, 1 byte = 8 bits, which can hold 2^8 = 256 values; 2 bytes can hold 2^16 = 65,536 values, and so forth. You can do a similar calculation to pick the indexing precision based on the number of distinct values you expect for the property customId.
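As a rough sketch of that calculation (plain JavaScript, arithmetic only; minPrecisionBytes is just an illustrative name):

// Smallest number of bytes whose 2^(8 * bytes) hash space covers n distinct values.
function minPrecisionBytes(n) {
    var bytes = 1;
    while (Math.pow(2, 8 * bytes) < n) {
        bytes++;
    }
    return bytes;
}

console.log(minPrecisionBytes(10000));     // 2  (2^16 = 65,536)
console.log(minPrecisionBytes(100000000)); // 4  (2^32 = 4,294,967,296)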
Besides, you can refer to the Index precision section in this article and trade off index storage overhead against query performance when specifying the precision.
I am trying to solve some matrix calculations using the MathNet.Numerics libraries. It all works fine with double numbers. However, now I want to represent numbers as fractions and get the answers to the calculations as fractions. How can I do that?
What I am currently doing is this:
var M = Matrix<double>.Build;
var V = Vector<double>.Build;
double[,] x1 = {
    { 0, 0, 0 },
    { 1.0/2, 0, 0 },
    { 1.0/2, 1.0, 1.0 }
};
var m = M.DenseOfArray(x1);
These fractions get converted into doubles, and the final answer is in doubles. I want to retain fractions throughout the calculation.
There are no fractions in your code sample. The expression "1.0/2" in C# is not a fraction but another way to write the double literal "0.5d". In fact, there is no fraction data type in the .NET Framework at all.
The F# extensions of Math.NET Numerics do provide a BigRational type which implements fractions based on BigIntegers, but Math.NET Numerics does not support vectors or matrices of this value type either. Math.NET Symbolics might support this in the future but it's not there yet.
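To illustrate what such a type has to do, here is a minimal sketch in JavaScript (illustrative only, not a .NET or Math.NET API): keep an exact numerator/denominator pair and normalize with the gcd after each operation, so no binary rounding ever happens.

function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

// Build a normalized fraction from an integer numerator and denominator.
function fraction(num, den) {
    var g = gcd(Math.abs(num), Math.abs(den));
    return { num: num / g, den: den / g };
}

function multiply(x, y) {
    return fraction(x.num * y.num, x.den * y.den);
}

console.log(multiply(fraction(1, 2), fraction(2, 3))); // { num: 1, den: 3 } -- exactly 1/3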
How can I make a float number have at most 4 decimal places?
In my J2ME app I have two fields: unitPrice (4 decimal places) and quantity (3 decimal places), and when I multiply them I get a number with more decimals than I need:
unitPrice: 5.6538
quantity: 5
result: 28.269001
What can I do to get a result with only 4 decimals? And in general, what do I need to do to use floats with a specific number of decimal places?
Of course, if you were using Java SE the solution would be BigDecimal.
You could round as shown in the result initialization in the following program:
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        float unitPrice = 5.6538f;
        float quantity = 5;
        float rawResult = quantity * unitPrice;
        System.out.println(rawResult);
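        // Round to 4 decimal places: scale by 10^4, round to the nearest int, scale back down.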
        float result = Math.round(10000f * rawResult) / 10000f;
        System.out.println(result);
        System.out.println(new BigDecimal(result));
    }
}
The output is:
28.269001
28.269
28.2689990997314453125
Unfortunately, as shown by the final printout using BigDecimal, result is not really exactly 28.269. 28.269 is not exactly representable in any binary fraction format. That could affect future calculations if decimal fractions are really important.
As an alternative, consider doing everything in integers, with each type of data having an associated power-of-ten factor. For unit price, the factor would be 10,000. For quantity, it would be 1,000.
For the product, you want a factor of 10,000. The intermediate result of the multiplication carries a factor of 10^7, so divide by 1,000 and round to an integer.
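A sketch of that idea (shown in JavaScript for brevity; in J2ME the same arithmetic works with int or long, and the variable names here are illustrative):

// All values are integers with an implicit power-of-ten factor.
var unitPriceScaled = 56538; // 5.6538 with factor 10^4
var quantityScaled = 5000;   // 5.000 with factor 10^3

// The raw product carries a factor of 10^7...
var product = unitPriceScaled * quantityScaled; // 282690000

// ...so divide by 10^3 and round to get back to a 10^4 factor.
var resultScaled = Math.round(product / 1000); // 282690, i.e. exactly 28.2690

console.log(resultScaled / 10000); // 28.269 -- floating point used for display only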
In VB 6.0 we can use the RoundOff() function for this operation.
Example:
unitPrice: 5.6538
quantity: 5
result: 28.269001
Then RoundOff(result) will give 28.2690.
I think there may be a similar function in Java also.
Thanks
I work on a machine learning application. I use underscore.js when I need to operate on arrays and hashes.
The question is the following: in ML there is a cross-validation approach, where you need to calculate performance across several folds.
For each fold, I have a hash of performance parameters, like the following:
{
  'F1': 0.8,
  'Precision': 0.7,
  'Recall': 0.9
}
I push all the hashes into an array, so at the end I have an array of hashes, like the following:
[
  { 'F1': 0.8, 'Precision': 0.7, 'Recall': 0.9 },
  { 'F1': 0.5, 'Precision': 0.6, 'Recall': 0.4 },
  { 'F1': 0.4, 'Precision': 0.3, 'Recall': 0.4 }
]
At the end I want to calculate the average for each parameter of the hash, i.e. sum all the hashes parameter by parameter and then divide each sum by the number of folds, in my case 3.
Is there an elegant way to do this with underscore and JavaScript?
One important point: sometimes I need to do this aggregation when the hash for a fold looks like the following:
{
  label1: { 'F1': 0.8, 'Precision': 0.7, 'Recall': 0.9 },
  label2: { 'F1': 0.8, 'Precision': 0.7, 'Recall': 0.9 },
  ...
}
The task is the same: average F1, Precision, and Recall for every label across all folds.
Currently I have an ugly solution that runs over the hashes several times; I would appreciate any help. Thank you.
If it is an array, just use the array. If it is not an array, use _.values to turn it into one and use that. Then, we can do a fold (or reduce) over the data:
_.reduce(data, function(memo, obj) {
    return {
        F1: memo.F1 + obj.F1,
        Precision: memo.Precision + obj.Precision,
        Recall: memo.Recall + obj.Recall,
        count: memo.count + 1
    };
}, { F1: 0, Precision: 0, Recall: 0, count: 0 })
This returns a hash containing F1, Precision, and Recall, which are sums, and count, which is the number of objects. It should be pretty easy to get an average from those.
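For example, assuming the result of the reduce above is assigned to a variable totals:

// Turn the sums into averages.
var averages = {
    F1: totals.F1 / totals.count,
    Precision: totals.Precision / totals.count,
    Recall: totals.Recall / totals.count
};
// With the three folds above: F1 = 1.7 / 3, Precision = 1.6 / 3, Recall = 1.7 / 3

For the label-keyed shape, you can run the same reduce once per label, e.g. over _.pluck(folds, label), where folds is your array of per-fold hashes.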
When I use the spellcheck component in Solr 4.6 and get more than one result in the suggestion list, what is the order of these results?
Example (German):
searching for "deutch"
result:
..."spellcheck": {
    "suggestions": [
        "deutch",
        {
            "numFound": 5,
            "startOffset": 0,
            "endOffset": 6,
            "suggestion": [
                "deutsch",
                "dutch",
                "deutsche",
                "durch",
                "death"
            ]
        },
...
Thanks for answering!
By default, distance and popularity.
It calculates the Levenshtein distance and sorts first by that, then within each distance group by how frequently each possible replacement appears in the index.
deutsch - distance: 1
dutch - distance: 1
deutsche - distance: 2
durch - distance: 2
death - distance: 2
Presumably "deutsch" appears more often in the index than "dutch", and "deutsche" more often than "durch" or "death".
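For reference, a minimal sketch of the distance calculation behind this ordering (plain JavaScript, textbook dynamic-programming Levenshtein; not Solr's actual implementation):

// Levenshtein distance via dynamic programming.
function levenshtein(a, b) {
    var dp = [];
    for (var i = 0; i <= a.length; i++) { dp[i] = [i]; }
    for (var j = 0; j <= b.length; j++) { dp[0][j] = j; }
    for (i = 1; i <= a.length; i++) {
        for (j = 1; j <= b.length; j++) {
            var cost = a.charAt(i - 1) === b.charAt(j - 1) ? 0 : 1;
            dp[i][j] = Math.min(dp[i - 1][j] + 1,         // deletion
                                dp[i][j - 1] + 1,         // insertion
                                dp[i - 1][j - 1] + cost); // substitution
        }
    }
    return dp[a.length][b.length];
}

['deutsch', 'dutch', 'deutsche', 'durch', 'death'].forEach(function (w) {
    console.log(w, levenshtein('deutch', w)); // 1, 1, 2, 2, 2
});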