Parallel Computation of Elements of an Array in Node.JS

I have an array with at least 360 numbers and one function func which I want to call on each of them. The function takes some time to compute, so I'm looking for a way to parallelize these calls. Since I have almost no experience with Node.JS, I am looking for some help on how to achieve this.
This is what's given:
var geometrical_form = new paper.CompoundPath('...');
var angles = [0, ..., 360];
What I want is for the following functions to be called for each angle:
geometrical_form.rotate(angles[i]);
func(geometrical_form);
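One common approach is Node's built-in worker_threads module. Below is a minimal sketch, not the asker's actual code: processAngle is a placeholder where you would rebuild the CompoundPath, rotate it, and call func inside the worker, since paper.js objects cannot be passed between threads and each worker would have to reconstruct the geometry from the original path data.
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');
const os = require('os');

// Placeholder for the real work: rebuild the CompoundPath here, rotate it,
// and call func. A simple arithmetic stand-in keeps the sketch runnable.
function processAngle(angle) {
  // const form = new paper.CompoundPath('...');
  // form.rotate(angle);
  // return func(form);
  return angle * angle;
}

if (isMainThread) {
  const angles = Array.from({ length: 361 }, (_, i) => i); // 0 .. 360
  const chunkSize = Math.ceil(angles.length / os.cpus().length);
  const chunks = [];
  for (let i = 0; i < angles.length; i += chunkSize) {
    chunks.push(angles.slice(i, i + chunkSize));
  }
  const results = [];
  let finished = 0;
  for (const chunk of chunks) {
    const worker = new Worker(__filename, { workerData: chunk });
    worker.on('message', (chunkResults) => {
      results.push(...chunkResults);
      if (++finished === chunks.length) {
        console.log('done:', results.length, 'results');
      }
    });
    worker.on('error', console.error);
  }
} else {
  // Worker side: process the chunk of angles this worker was given.
  parentPort.postMessage(workerData.map(processAngle));
}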

Related

Large number division is too slow using bignumber.js in node.js

I am using this library https://www.npmjs.com/package/big-number to perform division of two large numbers.
My function has the following code:
var x = new BigNumber(val);
var y = new BigNumber(100000000);
return x.dividedBy(y).toNumber();
This code is called 100 times and takes about 10 seconds to execute on my machine. It runs much faster on another machine; however, we have limited resources in the cloud and I want to optimize this.
What can I do to optimize this?
I am using the classical for loop to do the 100 iterations.
Assuming you are working with integers, there is a built-in BigInt type in JavaScript which will give you the best performance:
let x = BigInt(val);
let y = 100000000n; // BigInt literals end in "n"
return Number(x / y);
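One caveat: the original dividedBy(...).toNumber() can return a fractional result, while BigInt division truncates toward zero. If you need decimals, a minimal sketch is to scale before dividing (the SCALE constant and divideToNumber name are my own, for illustration):
// BigInt division truncates toward zero, so scale first if you need decimals.
const SCALE = 1000n; // keep 3 decimal places
function divideToNumber(val) {
  const x = BigInt(val);
  const y = 100000000n;
  return Number((x * SCALE) / y) / Number(SCALE);
}
console.log(divideToNumber('123456789012')); // 1234.567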

Parallelization of Piecewise Polynomial Evaluation

I am trying to evaluate points in a large piecewise polynomial, which is obtained from a cubic spline. This takes a long time and I would like to speed it up.
As such, I would like to evaluate the points on the piecewise polynomial with parallel processes rather than sequentially.
Code:
z = zeros(1e6, 1) ; % preallocate some memory for speed
Y = rand(11220,161) ; %some data, rand for generating a working example
X = 0 : 0.0125 : 2 ; % vector of data sites
pp = spline(X, Y) ; % get the piecewise polynomial form of the cubic spline.
The resulting structure is large.
for t = 1 : 1e6 % big number
hcurrent = ppval(pp,t); %evaluate the piecewise polynomial at t
z(t) = sum(x(t:t+M-1).*hcurrent,1) ; % do some operation of the interpolated value. Most likely not relevant to this question.
end
Unfortunately, evaluating in matrix form using:
hcurrent = flipud(ppval(pp, 1:1e6))
requires too much memory to process, so it cannot be done. Is there a way that I can batch-process this code to speed it up?
For scalar second arguments, as in your example, you're dealing with two issues. First, there's a good amount of function call overhead and redundant computation (e.g., unmkpp(pp) is called every loop iteration). Second, ppval is written to be general so it's not fully vectorized and does a lot of things that aren't necessary in your case.
Below is vectorized code that takes advantage of some of the structure of your problem (e.g., t is an integer greater than 0), avoids function call overhead, moves some calculations outside of your main for loop (at the cost of a bit of extra memory), and gets rid of a for loop inside of ppval:
n = 1e6;
z = zeros(n,1);
X = 0:0.0125:2;
Y = rand(11220,numel(X));
pp = spline(X,Y);
[b,c,l,k,dd] = unmkpp(pp);
T = 1:n;
idx = discretize(T,[-Inf b(2:l) Inf]); % Or: [~,idx] = histc(T,[-Inf b(2:l) Inf]);
x = bsxfun(@power,T-b(idx),(k-1:-1:0).').';
idx = dd*idx;
d = 1-dd:0;
for t = T
hcurrent = sum(bsxfun(@times,c(idx(t)+d,:),x(t,:)),2);
z(t) = ...;
end
The resultant code takes ~34% of the time of your example for n=1e6. Note that because of the vectorization, calculations are performed in a different order. This will result in slight differences between outputs from ppval and my optimized version due to the nature of floating point math. Any differences should be on the order of a few times eps(hcurrent). You can still try using parfor to further speed up the calculation (with four already running workers, my system took just 12% of your code's original time).
I consider the above a proof of concept. I may have over-optimized the code above if your example doesn't correspond well to your actual code and data. In that case, I suggest creating your own optimized version. You can start by looking at the code for ppval by typing edit ppval in your Command Window. You may be able to implement further optimizations by looking at the structure of your problem and what you ultimately want in your z vector.
Internally, ppval still uses histc, which has been deprecated. My code above uses discretize to perform the same task, as suggested by the documentation.
Use the parfor command for parallel loops (see here). Also, precompute the z vector, z(j) = x(j:j+M-1), and hcurrent inside the parfor for a speed-up.
The spline parameter estimation can be written in matrix form.
Once you write it in matrix form and solve it, you can use the model matrix to evaluate the spline at all data points using matrix multiplication, which is probably the most highly tuned operation in MATLAB.
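As a rough sketch of that idea (my notation, using the breaks b and per-piece coefficients c that unmkpp returns): for every query point t that falls in piece i,
h(t) = [(t-b_i)^3, (t-b_i)^2, (t-b_i), 1] * [c_{i,1}; c_{i,2}; c_{i,3}; c_{i,4}]
so if you group the query points by the piece they fall in, each group is evaluated by a single matrix product H_i * c_i, which is the matrix-multiplication formulation this answer refers to.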

Using Wolfram to Play Functions

As a blind person, I am curious as to whether or not I can use Wolfram to play functions. For example, if I were to plug in y = x squared from -10 to 10, I would expect to hear a decreasing tone as the function flattens out, then a normal tone at the origin, then tones of increasing pitch as the function moves towards positive infinity.
Using the Play function and a sine wave, you can create a function that mostly does what you want (using amplitude instead of frequency).
sinPlay[f_, { start_, end_}, baseFreq_] := EmitSound[ Play[Sin[x *baseFreq]* f[x], {x,start,end}]]
This function maps the height of the function to amplitude. Note that because it scales the sound from silence to moderately loud, y=1 sounds the same as y=5; likewise y=2x sounds the same as y=5x.
It is called like this (the x^2 function):
sinPlay[#*# &, { 0, 2}, 1000]
#*# & is an anonymous function (intro to them) that takes one number and squares it. {0, 2} is the part of the function you want to listen to, in seconds, so {0, 2} generates a two-second clip.
This is the square root function:
sinPlay[Sqrt[#] &, { 0,10}, 1000]
And this is the sine function:
sinPlay[Sin[#] &, { 0,10}, 1000]
Note that the silences occur at the bottoms of the sine function, which have been scaled to silence.
Using Frequency Instead
In theory it would be possible to use frequency instead. The function would look something like this:
sinPlay[f_, { start_, end_}, baseFreq_] := EmitSound[ Play[Sin[x *baseFreq* f[x]], {x,start,end}]]
But then changes in frequency would also cause changes in time from the sine function. Perhaps something could be done using derivatives to fix this problem, but I haven't worked it out. Wolfram supplies a function to calculate derivatives for you.
You may use Play. However, you would not get much of a sound with that function. You should try a sine or cosine function to start.

Finding pairs closer than a given distance (proximity) in a set of points

I'm developing a multiplayer game with node.js. Every second I get the coordinates (X, Y, Z) of every player. How can I get, for each player, a list of all players located closer than a given distance to him?
Any ideas for avoiding an O(n²) calculation?
You are not looking for clustering algorithms.
Instead, you are looking for a database index that supports radius queries.
Examples:
R*-tree
kd-tree
M-tree
Gridfile
Octree (for 3d, quadtree for 2d)
Any of these should do the trick, theoretically yielding O(n log n) performance. In practice, it's not as easy as that. If all your objects are really close, "closer than a given distance" may mean every object, i.e. O(n²).
What you are looking for is a quadtree in 3 dimensions, i.e. an octree. An octree is basically the same as a binary tree, but instead of two children per node, it has 2^D = 2^3 = 8 children per node, where D is the dimension.
For example, imagine a cube. To create the next level below the root, you have eight child nodes, each representing one of the 8 sub-cubes inside the cube, and so on.
This tree yields fast lookups, but be careful not to use it for many more dimensions. I once built a polymorphic quadtree and wouldn't go beyond 8-10 dimensions, because it became too flat.
The other approach would be the kd-tree, where actually you halve the dataset (the players) at every step.
You could use a library that provides nearest neighbour searching.
I'm answering my own question because I have the answer now. Thanks to G. Samaras and Anony-Mousse:
I use a kd-tree algorithm:
First I build the tree with all the players
Then for each player I calculate the list of all the players within a given range around this player
This is very fast and easy with the npm module kdtree: https://www.npmjs.org/package/kdtree
var kd = require('kdtree');
var tree = new kd.KDTree(3); // A new tree for 3-dimensional points
var players = loadPlayersPosition(); // players is an array containing all the positions
for (var p in players){ //let's build the tree
tree.insert(players[p].x, players[p].y, players[p].z, players[p].username);
}
nearest = [];
for (var p in players){ //let's look for neighboors
var RANGE = 1000; //1km range
close = tree.nearestRange(players[p].x, players[p].y, players[p].z, RANGE);
nearest.push(close);
}
It returns nearest, an array containing, for each player, all his neighbors within a range of 1000 m. I made some tests on my PC with 100,000 simulated players. It takes only 500 ms to build the tree and another 500 ms to find the nearest-neighbor pairs. I find that very fast for such a big number of players.
Bonus: if you need to do this with latitude and longitude instead of x, y, z, just convert lat/lon to Cartesian x, y, z, because for short distances the chord distance on a sphere is approximately the great-circle distance.
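For reference, a minimal sketch of that conversion, assuming a spherical Earth (the radius constant and function name are mine, not part of the kdtree module):
// Convert latitude/longitude in degrees to Cartesian x, y, z in metres,
// assuming a spherical Earth; good enough for short-range neighbour queries.
const EARTH_RADIUS = 6371000;
function toCartesian(latDeg, lonDeg) {
  const lat = latDeg * Math.PI / 180;
  const lon = lonDeg * Math.PI / 180;
  return {
    x: EARTH_RADIUS * Math.cos(lat) * Math.cos(lon),
    y: EARTH_RADIUS * Math.cos(lat) * Math.sin(lon),
    z: EARTH_RADIUS * Math.sin(lat)
  };
}
// e.g. var p = toCartesian(48.85, 2.35); tree.insert(p.x, p.y, p.z, username);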

Is it possible to do an algebraic curve fit with just a single pass of the sample data?

I would like to do an algebraic curve fit of 2D data points, but for various reasons it isn't really possible to have much of the sample data in memory at once, and iterating through all of it is an expensive process.
(The reason for this is that actually I need to fit thousands of curves simultaneously based on gigabytes of data which I'm reading off disk, and which is therefore sloooooow).
Note that the number of polynomial coefficients will be limited (perhaps 5-10), so an exact fit will be extremely unlikely, but this is ok as I'm trying to find an underlying pattern in data with a lot of random noise.
I understand how one can use a genetic algorithm to fit a curve to a dataset, but this requires many passes through the sample data, and thus isn't practical for my application.
Is there a way to fit a curve with a single pass of the data, where the state that must be maintained from sample to sample is minimal?
I should add that the nature of the data is that the points may lie anywhere on the X axis between 0.0 and 1.0, but the Y values will always be either 1.0 or 0.0.
So, in Java, I'm looking for a class with the following interface:
public interface CurveFit {
public void addData(double x, double y);
public List<Double> getBestFit(); // Returns the polynomial coefficients
}
The class that implements this must not need to keep much data in its instance fields, no more than a kilobyte even for millions of data points. This means that you can't just store the data as you get it to do multiple passes through it later.
Edit: some have suggested that finding an optimal curve in a single pass may be impossible; however, an optimal fit is not required, just as close as we can get in a single pass.
The bare bones of an approach might be if we have a way to start with a curve, and then a way to modify it to get it slightly closer to new data points as they come in - effectively a form of gradient descent. It is hoped that with sufficient data (and the data will be plentiful), we get a pretty good curve. Perhaps this inspires someone to a solution.
Yes, it is a projection. For
y = X beta + error
where lowercased terms are vectors, and X is a matrix, you have the solution vector
\hat{beta} = inverse(X'X) X' y
as per the OLS page. You almost never want to compute this directly but rather use LR, QR or SVD decompositions. References are plentiful in the statistics literature.
If your problem has only one parameter (and x is hence a vector as well) then this reduces to just summation of cross-products between y and x.
If you don't mind that you'll get a straight line "curve", then you only need six variables for any amount of data. Here's the source code that's going into my upcoming book; I'm sure that you can figure out how the DataPoint class works:
Interpolation.h:
#ifndef __INTERPOLATION_H
#define __INTERPOLATION_H
#include "DataPoint.h"
class Interpolation
{
private:
int m_count;
double m_sumX;
double m_sumXX; /* sum of X*X */
double m_sumXY; /* sum of X*Y */
double m_sumY;
double m_sumYY; /* sum of Y*Y */
public:
Interpolation();
void addData(const DataPoint& dp);
double slope() const;
double intercept() const;
double interpolate(double x) const;
double correlate() const;
};
#endif // __INTERPOLATION_H
Interpolation.cpp:
#include <cmath>
#include "Interpolation.h"
Interpolation::Interpolation()
{
m_count = 0;
m_sumX = 0.0;
m_sumXX = 0.0;
m_sumXY = 0.0;
m_sumY = 0.0;
m_sumYY = 0.0;
}
void Interpolation::addData(const DataPoint& dp)
{
m_count++;
m_sumX += dp.getX();
m_sumXX += dp.getX() * dp.getX();
m_sumXY += dp.getX() * dp.getY();
m_sumY += dp.getY();
m_sumYY += dp.getY() * dp.getY();
}
double Interpolation::slope() const
{
return (m_sumXY - (m_sumX * m_sumY / m_count)) /
(m_sumXX - (m_sumX * m_sumX / m_count));
}
double Interpolation::intercept() const
{
return (m_sumY / m_count) - slope() * (m_sumX / m_count);
}
double Interpolation::interpolate(double X) const
{
return intercept() + slope() * X;
}
double Interpolation::correlate() const
{
return m_sumXY / sqrt(m_sumXX * m_sumYY);
}
Why not use a ring buffer of some fixed size (say, the last 1000 points) and do a standard QR decomposition-based least squares fit to the buffered data? Once the buffer fills, each time you get a new point you replace the oldest and re-fit. That way you have a bounded working set that still has some data locality, without all the challenges of live stream (memoryless) processing.
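A minimal sketch of the buffer part of that idea; leastSquaresFit is a placeholder for whatever QR-based fitting routine you plug in:
// Fixed-size ring buffer over the most recent points; the oldest point is
// overwritten once the buffer is full, then the window is re-fitted.
class RingBufferFitter {
  constructor(capacity) {
    this.capacity = capacity;
    this.points = [];
    this.next = 0; // index of the oldest slot once the buffer is full
  }
  addData(x, y) {
    if (this.points.length < this.capacity) {
      this.points.push({ x, y });
    } else {
      this.points[this.next] = { x, y };            // replace the oldest point
      this.next = (this.next + 1) % this.capacity;
    }
    return leastSquaresFit(this.points);            // re-fit the buffered window
  }
}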
Are you limiting the number of polynomial coefficients (i.e. fitting to a max power of x in your polynomial)?
If not, then you don't need a "best fit" algorithm - you can always fit N data points EXACTLY to a polynomial of N coefficients.
Just use matrices to solve N simultaneous equations for N unknowns (the N coefficients of the polynomial).
If you are limiting to a max number of coefficients, what is your max?
Following your comments and edit:
What you want is a low-pass filter to filter out the noise, not a polynomial fit to the noise.
Given the nature of your data:
the points may lie anywhere on the X axis between 0.0 and 1.0, but the Y values will always be either 1.0 or 0.0.
Then you don't need even a single pass, as these two lines will pass exactly through every point:
X = [0.0 ... 1.0], Y = 0.0
X = [0.0 ... 1.0], Y = 1.0
Two short line segments, unit length, and every point falls on one line or the other.
Admittedly, an algorithm to find a good curve fit for arbitrary points in a single pass is interesting, but (based on your question), that's not what you need.
Assuming that you don't know which point should belong to which curve, something like a Hough Transform might provide what you need.
The Hough Transform is a technique that allows you to identify structure within a data set. One use is for computer vision, where it allows easy identification of lines and borders within the field of sight.
Advantages for this situation:
Each point need be considered only once
You don't need to keep a data structure for each candidate line, just one (complex, multi-dimensional) structure
Processing of each line is simple
You can stop at any point and output a set of good matches
You never discard any data, so it's not reliant on any accidental locality of references
You can trade off between accuracy and memory requirements
It isn't limited to exact matches, but will highlight partial matches too.
An approach
To find cubic fits, you'd construct a 4-dimensional Hough space, into which you'd project each of your data-points. Hotspots within Hough space would give you the parameters for the cubic through those points.
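To make the accumulation concrete, here is a single-pass sketch for the simpler 2-parameter (slope/intercept) case; the 4-dimensional cubic case works the same way, just with a 4-D accumulator. The bin counts and parameter ranges are arbitrary choices for illustration:
// Single-pass Hough accumulation for a 2-parameter model y ~ m*x + b.
const M_BINS = 100, B_BINS = 100;           // resolution of the parameter grid
const M_RANGE = [-5, 5], B_RANGE = [-5, 5]; // parameter ranges (arbitrary here)
const acc = new Uint32Array(M_BINS * B_BINS);

function addPoint(x, y) {
  // Each point votes once per slope bin for the intercept it implies.
  for (let mi = 0; mi < M_BINS; mi++) {
    const m = M_RANGE[0] + (mi + 0.5) * (M_RANGE[1] - M_RANGE[0]) / M_BINS;
    const b = y - m * x;
    const bi = Math.floor((b - B_RANGE[0]) / (B_RANGE[1] - B_RANGE[0]) * B_BINS);
    if (bi >= 0 && bi < B_BINS) acc[mi * B_BINS + bi]++;
  }
}

function bestFit() {
  // The hottest cell is the (m, b) most consistent with the data seen so far.
  let best = 0;
  for (let i = 1; i < acc.length; i++) if (acc[i] > acc[best]) best = i;
  const mi = Math.floor(best / B_BINS), bi = best % B_BINS;
  return {
    m: M_RANGE[0] + (mi + 0.5) * (M_RANGE[1] - M_RANGE[0]) / M_BINS,
    b: B_RANGE[0] + (bi + 0.5) * (B_RANGE[1] - B_RANGE[0]) / B_BINS
  };
}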
You need the solution to an overdetermined linear system. The popular methods are normal equations (not usually recommended), QR factorization, and singular value decomposition (SVD). Wikipedia has decent explanations, and Trefethen and Bau is very good. Your options:
1. Out-of-core implementation via the normal equations. This requires the product A'A where A has many more rows than columns (so the result is very small). The matrix A is completely defined by the sample locations, so you don't have to store it; thus computing A'A is reasonably cheap (very cheap if you don't need to hit memory for the node locations). Once A'A is computed, you get the solution in one pass through your input data, but the method can be unstable. (A streaming sketch of this approach follows the list.)
2. Implement an out-of-core QR factorization. Classical Gram-Schmidt will be fastest, but you have to be careful about stability.
3. Do it in-core with distributed memory (if you have the hardware available). Libraries like PLAPACK and ScaLAPACK can do this; the performance should be much better than option 1. The parallel scalability is not fantastic, but will be fine if it's a problem size that you would even think about doing in serial.
4. Use iterative methods to compute an SVD. Depending on the spectral properties of your system (maybe after preconditioning), this could converge very fast and does not require storage for the matrix (which in your case has 5-10 columns, each of which is the size of your input data). A good library for this is SLEPc; you only have to form the product of the Vandermonde matrix with a vector (so you only need to store the sample locations). This is very scalable in parallel.
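A streaming sketch of option 1 (the normal equations), written in JavaScript for brevity although the asker wants Java; the state is just the (d+1)x(d+1) matrix A'A and the vector A'y, a few hundred bytes for degree up to 10, no matter how many points are added. Keep in mind that the normal equations can be ill-conditioned for higher degrees:
// Accumulates A'A and A'y for a polynomial of the given degree; the state
// size is independent of the number of points streamed through addData.
class StreamingPolyFit {
  constructor(degree) {
    this.d = degree;
    this.ata = Array.from({ length: degree + 1 },
                          () => new Float64Array(degree + 1));
    this.aty = new Float64Array(degree + 1);
  }
  addData(x, y) {
    const pow = new Float64Array(this.d + 1);   // 1, x, x^2, ..., x^d
    pow[0] = 1;
    for (let i = 1; i <= this.d; i++) pow[i] = pow[i - 1] * x;
    for (let i = 0; i <= this.d; i++) {
      this.aty[i] += pow[i] * y;
      for (let j = 0; j <= this.d; j++) this.ata[i][j] += pow[i] * pow[j];
    }
  }
  getBestFit() {
    // Solve (A'A) c = A'y by Gauss-Jordan elimination with partial pivoting.
    const n = this.d + 1;
    const a = this.ata.map((row, i) => [...row, this.aty[i]]);
    for (let col = 0; col < n; col++) {
      let piv = col;
      for (let r = col + 1; r < n; r++)
        if (Math.abs(a[r][col]) > Math.abs(a[piv][col])) piv = r;
      [a[col], a[piv]] = [a[piv], a[col]];
      if (a[col][col] === 0) continue;          // degenerate column; skip it
      for (let r = 0; r < n; r++) {
        if (r === col) continue;
        const f = a[r][col] / a[col][col];
        for (let c = col; c <= n; c++) a[r][c] -= f * a[col][c];
      }
    }
    return a.map((row, i) => row[n] / a[i][i]); // coefficients c0 .. cd
  }
}
// Usage: const fit = new StreamingPolyFit(3); fit.addData(0.1, 1); ...; fit.getBestFit();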
I believe I found the answer to my own question based on a modified version of this code. For those interested, my Java code is here.
