Large number division is too slow using bignumber.js in node.js

I am using this library https://www.npmjs.com/package/big-number to perform division of two large numbers:
My function has the following code:
var x = new BigNumber(val);
var y = new BigNumber(100000000);
return x.dividedBy(y).toNumber();
This code is called 100 times and takes about 10 seconds to execute on my machine. It runs much faster on another machine; however, we have limited resources in the cloud, so I would like to optimize it.
What can I do to optimize this?
I am using a classical for loop to do the 100 iterations.

Assuming you are working with integers, there is a built-in BigInt type in JavaScript which will give you the best performance:
let x = BigInt(val);
let y = 100000000n; // BigInt literals end in "n"
return Number(x / y);
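One caveat in case the fractional part of the division matters: BigInt division truncates toward zero, whereas bignumber.js's dividedBy returns a decimal. A minimal sketch of one way to keep a few fractional digits while staying in BigInt arithmetic (the example input and the four-digit scale are made up for illustration):
let x = BigInt("123456789012345678"); // hypothetical integer input
let y = 100000000n;
let truncated = Number(x / y);   // 1234567890 (the fraction is lost)
let scaled = (x * 10000n) / y;   // scale first so the truncating division keeps 4 fractional digits
return Number(scaled) / 10000;   // 1234567890.1234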

Related

Julia 1.5.2 Performance Questions

I am currently attempting to implement a metaheuristic (genetic) algorithm. In this venture I also want to try to write somewhat fast and efficient code. However, my experience in writing efficient code is not very great. I was therefore wondering if some people could give some "quick tips" to increase the efficiency of my code. I have created a small functional example of my code which contains most of the elements that the full code will contain in regards to preallocating arrays, custom mutable structs, random numbers, pushing into arrays, etc.
One option I have already attempted to explore is the "StaticArrays" package. However, many of my arrays must be mutable (so we need MArrays) and many of them will become very large (> 100 elements). The StaticArrays documentation specifies that the arrays must remain small for the package to stay efficient.
According to the documentation, Julia 1.5.2 should be thread safe with regards to rand(). I have therefore attempted to multithread the for-loops in my functions to make them run faster, and this results in a slight performance increase.
However, if people can see a more efficient way of allocating the arrays or pushing SpotPrices into an array, it would be greatly appreciated! Any other performance tips are also very welcome!
# Packages
clearconsole()
using DataFrames
using Random
using BenchmarkTools
Random.seed!(42)
df = DataFrame( SpotPrice = convert(Array{Float64}, rand(-266:500,8832)),
month = repeat([1,2,3,4,5,6,7,8,9,10,11,12]; outer = 736),
hour = repeat([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]; outer = 368))
# Data structure for the prices per hour
mutable struct SpotPrices
hour :: Array{Float64,1}
end
# Fill-out data structure
function setup_prices(df::DataFrame)
prices = []
for i in 1:length(unique(df[:,3]))
push!(prices, SpotPrices(filter(row -> row.hour == i, df).SpotPrice))
end
return prices
end
prices = setup_prices(df)
# Sampler function
function MC_Sampler(prices::Vector{Any}, sample_size::Int64)
# Picking the samples
tmp = zeros(sample_size, 24)
# Sampling per hour
for i in 1:24
tmp[:,i] = rand(prices[i].hour, sample_size)
end
return tmp
end
samples = MC_Sampler(prices, 100)
@btime setup_prices(df)
@btime MC_Sampler(prices,100)
function setup_prices_par(df::DataFrame)
prices = []
@sync Threads.@threads for i in 1:length(unique(df[:,3]))
push!(prices, SpotPrices(filter(row -> row.hour == i, df).SpotPrice))
end
return prices
end
# Sampler function
function MC_Sampler_par(prices::Vector{Any}, sample_size::Int64)
# Picking the samples
tmp = zeros(sample_size, 24)
# Sampling per hour
@sync Threads.@threads for i in 1:24
tmp[:,i] = rand(prices[i].hour, sample_size)
end
return tmp
end
@btime setup_prices_par(df)
@btime MC_Sampler_par(prices,100)
Have a look at (and read very carefully) https://docs.julialang.org/en/v1/manual/performance-tips/
Basic cleanups start with:
Your SpotPrices struct does not need to be mutable. In fact, since it has only one field you could just define it as SpotPrices = Vector{Float64}
You do not want untyped containers - instead of prices = [] use, for example, prices = SpotPrices[]
Using DataFrames.groupby will be much faster than finding unique elements and filtering by them
If you do not need the initialization, then do not do it: Matrix{Float64}(undef, sample_size, 24) is much faster than zeros(sample_size, 24)
You do not need to synchronize with @sync before a multi-threaded Threads.@threads loop
Create separate random number generator states, one for each thread, and use them whenever calling the rand function

Parallel Computation of Elements of an Array in Node.JS

I have an array with at least 360 numbers and one function func which I want to call on each one of them. Because the function takes some time to compute, I'm looking for a way to parallelize these tasks. Since I have almost no experience with Node.JS, I am looking for some help on how to achieve this.
This is what's given:
var geometrical_form = new paper.CompoundPath('...');
var angles = [0, ..., 360];
What I want is for the following functions to be called for each angle:
geometrical_form.rotate(angles[i]);
func(geometrical_form);
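A rough sketch of the usual worker_threads pattern for CPU-bound work like this (heavyWork is a made-up placeholder; paper.js objects cannot be transferred between threads, so each worker would need to construct its own geometry before doing the real rotate/func work):
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
const os = require('os');
if (isMainThread) {
  const angles = Array.from({ length: 361 }, (_, i) => i); // 0..360
  const numWorkers = os.cpus().length;
  const chunkSize = Math.ceil(angles.length / numWorkers);
  const jobs = [];
  for (let w = 0; w < numWorkers; w++) {
    const slice = angles.slice(w * chunkSize, (w + 1) * chunkSize);
    jobs.push(new Promise((resolve, reject) => {
      const worker = new Worker(__filename, { workerData: slice });
      worker.once('message', resolve);
      worker.once('error', reject);
    }));
  }
  Promise.all(jobs).then(results => console.log(results.flat()));
} else {
  // placeholder for rotate(angle) + func(geometrical_form)
  const heavyWork = angle => angle * angle;
  parentPort.postMessage(workerData.map(heavyWork));
}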

Node: Generate 6 digits random number using crypto.randomBytes

What is the correct way to generate an exact value from 0 to 999999 randomly, since 1000000 is not a power of 2?
This is my approach:
use crypto.randomBytes to generate 3 bytes and convert them to hex
use the first 5 characters to convert to an integer (the max is fffff == 1048575 > 999999)
if the result is > 999999, start again from step 1
This effectively creates a recursive function (sketched below). Is it logically correct, and will it cause any performance concerns?
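A direct transcription of those steps, just as a sketch (assuming Node's built-in crypto module; the function name is made up):
const crypto = require('crypto');
function random6d() {
  // 3 random bytes -> 6 hex characters; the first 5 give a value in 0..1048575 (0xfffff)
  const n = parseInt(crypto.randomBytes(3).toString('hex').slice(0, 5), 16);
  return n > 999999 ? random6d() : n; // reject and retry out-of-range values
}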
There are several ways to extract random numbers in a range from random bits. Some common ones are described in NIST Special Publication 800-90A revision 1: Recommendation for Random Number Generation Using Deterministic Random Bit Generators.
Although this standard is about deterministic random bit generation, there is a helpful appendix called A.5 Converting Random Bits into a Random Number, which describes three useful methods.
The methods described are:
A.5.1 The Simple Discard Method
A.5.2 The Complex Discard Method
A.5.3 The Simple Modular Method
The first two of them are not deterministic with regards to running time but generate a number with no bias at all. They are based on rejection sampling.
The Complex Discard Method describes a more efficient scheme for generating large quantities of random numbers in a range. I think it is too complex for almost any normal use; if you require additional efficiency, I would instead look at the Optimized Simple Discard Method described below.
The Simple Modular Method is constant-time and deterministic but has a non-zero (though negligible) bias. It requires a relatively large amount of additional randomness to achieve that negligible bias, though: basically, to get a bias of one in 2^128 you need 128 bits on top of the bit size of the required range. This is probably not the method to choose for smaller numbers.
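As a rough illustration of the Simple Modular Method for this question's range (the 24-byte draw is my own choice; it supplies far more than the 20 + 128 bits discussed above, so the remaining bias is negligible):
const crypto = require('crypto');
function randomSixDigitsModular() {
  // 192 random bits reduced modulo 1000000; the bias is on the order of 2^-172
  const bits = BigInt('0x' + crypto.randomBytes(24).toString('hex'));
  return Number(bits % 1000000n);
}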
Your algorithm is clearly a version of the Simple Discard Method (more generally called "rejection sampling"), so it is fine.
I've myself thought of a very efficient algorithm based on the Simple Discard Method called the "Optimized Simple Discard Method" or RNG-BC where "BC" stands for "binary compare". It is based on the observation that comparison only looks at the most significant bits, which means that the least significant bits should still be considered random and can therefore be reused. Beware that this method has not been officially peer reviewed; I do present an informal proof of equivalence with the Simple Discard Method.
Of course you should rather use a generic method that is efficient given any value of N. In that case the Complex Discard Method or Simple Modular Method should be considered over the Simple Discard Method. There are other, much more complex algorithms that are even more efficient, but generally you're fine when using either of these two.
Note that it is often beneficial to first check whether N is a power of two when generating a random number in the range [0, N). If N is a power of two, there is no need for any of these possibly expensive computations; just use the bits you need from the random bit or byte generator.
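For this particular range, a minimal sketch of the Simple Discard Method (the helper name and the 20-bit mask are my own choices):
const crypto = require('crypto');
function randomSixDigitsDiscard() {
  while (true) {
    // keep 20 bits, the smallest power-of-two range covering 999999
    const candidate = crypto.randomBytes(3).readUIntBE(0, 3) & 0xfffff; // 0..1048575
    if (candidate < 1000000) return candidate; // discard roughly 4.6% of draws
  }
}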
It's a correct algorithm (https://en.wikipedia.org/wiki/Rejection_sampling), though you could consider using bitwise operations instead of converting to hex. It can run forever if the random number generator is malfunctioning, so you could consider trying a fixed number of times and then throwing an exception instead of looping forever.
The main possible performance problem is that on some platforms, crypto.randomBytes can block if it runs out of entropy. So you don't want to waste any randomness if you're using it.
Therefore, instead of your string comparison, I would use an integer operation on the 3 random bytes:
if (random_bytes < 16000000) {
return random_bytes % 1000000;
}
Here 16000000 is the largest multiple of 1000000 below 2^24, so the modulo step introduces no bias. This produces an answer from the first 3 bytes about 95.4% of the time (roughly the same acceptance rate as your approach) while avoiding the intermediate hex string.
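Putting the two suggestions together, a sketch with a bounded number of retries (the retry limit and the function name are arbitrary choices):
const crypto = require('crypto');
function randomSixDigits(maxTries = 100) {
  for (let i = 0; i < maxTries; i++) {
    const random_bytes = crypto.randomBytes(3).readUIntBE(0, 3); // 0..16777215
    if (random_bytes < 16000000) {
      return random_bytes % 1000000;
    }
  }
  throw new Error('random source appears to be malfunctioning');
}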
I would suggest the following approach:
private generateCode(): string {
let code: string = "";
do {
code += randomBytes(3).readUIntBE(0, 3);
// code += Number.parseInt(randomBytes(3).toString("hex"), 16);
} while (code.length < 6);
return code.slice(0, 6);
}
This returns the numeric code as a string; if you need it as a number, change the return to Number.parseInt(code.slice(0, 6)).
I call it the random_6d algo. Worst case, just a single additional loop.
var crypto = require('crypto');
var random_6d = function(n2){
var n1 = crypto.randomBytes(3).readUIntLE(0, 3) >>> 4;
if(n1 < 1000000)
return n1;
if(typeof n2 === 'undefined')
return random_6d(n1);
return Math.abs(n1 - n2);
};
loop version:
var random_6d = function(){
var n1, n2;
while(true){
n1 = crypto.randomBytes(3).readUIntLE(0, 3) >>> 4;
if(n1 < 1000000)
return n1;
if(typeof n2 === 'undefined')
n2 = n1;
else
return Math.abs(n1 - n2);
};
};

Parallelization of Piecewise Polynomial Evaluation

I am trying to evaluate points of a large piecewise polynomial, which is obtained from a cubic spline. This takes a long time and I would like to speed it up.
As such, I would like to evaluate the points on the piecewise polynomial with parallel processes rather than sequentially.
Code:
z = zeros(1e6, 1) ; % preallocate some memory for speed
Y = rand(11220,161) ; %some data, rand for generating a working example
X = 0 : 0.0125 : 2 ; % vector of data sites
pp = spline(X, Y) ; % get the piecewise polynomial form of the cubic spline.
The resulting structure is large.
for t = 1 : 1e6 % big number
hcurrent = ppval(pp,t); %evaluate the piecewise polynomial at t
z(t) = sum(x(t:t+M-1).*hcurrent,1) ; % do some operation of the interpolated value. Most likely not relevant to this question.
end
Unfortunately, the matrix form, using
hcurrent = flipud(ppval(pp, 1:1e6))
requires too much memory to process, so it cannot be done. Is there a way that I can batch-process this code to speed it up?
For scalar second arguments, as in your example, you're dealing with two issues. First, there's a good amount of function call overhead and redundant computation (e.g., unmkpp(pp) is called every loop iteration). Second, ppval is written to be general so it's not fully vectorized and does a lot of things that aren't necessary in your case.
Below is vectorized code that takes advantage of some of the structure of your problem (e.g., t is an integer greater than 0), avoids function call overhead, moves some calculations outside of your main for loop (at the cost of a bit of extra memory), and gets rid of a for loop inside of ppval:
n = 1e6;
z = zeros(n,1);
X = 0:0.0125:2;
Y = rand(11220,numel(X));
pp = spline(X,Y);
[b,c,l,k,dd] = unmkpp(pp);
T = 1:n;
idx = discretize(T,[-Inf b(2:l) Inf]); % Or: [~,idx] = histc(T,[-Inf b(2:l) Inf]);
x = bsxfun(@power,T-b(idx),(k-1:-1:0).').';
idx = dd*idx;
d = 1-dd:0;
for t = T
hcurrent = sum(bsxfun(@times,c(idx(t)+d,:),x(t,:)),2);
z(t) = ...;
end
The resultant code takes ~34% of the time of your example for n=1e6. Note that because of the vectorization, calculations are performed in a different order. This will result in slight differences between outputs from ppval and my optimized version due to the nature of floating point math. Any differences should be on the order of a few times eps(hcurrent). You can still try using parfor to further speed up the calculation (with four already running workers, my system took just 12% of your code's original time).
I consider the above a proof of concept. I may have over-optimized the code above if your example doesn't correspond well to your actual code and data. In that case, I suggest creating your own optimized version. You can start by looking at the code for ppval by typing edit ppval in your Command Window. You may be able to implement further optimizations by looking at the structure of your problem and what you ultimately want in your z vector.
Internally, ppval still uses histc, which has been deprecated. My code above uses discretize to perform the same task, as suggested by the documentation.
Use the parfor command for parallel loops (see here); also precompute the z vector as z(j) = x(j:j+M-1) and compute hcurrent inside the parfor for a speed-up.
The spline parameter estimation can be written in matrix form.
Once you write it in matrix form and solve it, you can use the model matrix to evaluate the spline at all data points using matrix multiplication, which is probably the most highly tuned operation in MATLAB.

Parallel For to sum an array of ushorts (big array 18M)

I would like to use parallel processing for taking array statistics for large arrays of unsigned short (16 bit) values.
ushort[] array = new ushort[2560 * 3072]; // x = rows(2560) y = columns(3072)
double avg = Parallel.For (0, array.Length, WHAT GOES HERE);
The same goes for the standard deviation and the standard deviation of the row means.
I have normal for-loop versions of these functions, and they take too long when combined with the median filter methods.
The end product is to try and get a median filter for the array, but the first steps are important as well. So if you have the whole solution, great; but if you only want to help with the first parts, that is appreciated too.
Have you tried PLINQ?
double average = array.AsParallel().Average(n => n);
I'm not sure how performant it will be with a large array of ushort values, but it's worth testing to see if it meets your needs.
