Math.Net Exponential Moving Average

I'm using the simple moving average in Math.NET, but now I also need to calculate an EMA (exponential moving average) or some other kind of weighted moving average, and I can't find it in the library.
I looked over all the methods under MathNet.Numerics.Statistics and beyond, but didn't find anything similar.
Is it missing from the library, or do I need to reference an additional package?

I don't see an EMA in MathNet.Numerics, but it's trivial to program. The routine below is based on the definition at Investopedia.
public double[] EMA(double[] x, int N)
{
    // x is the input series
    // N is the notional age of the data used
    // k is the smoothing constant
    double k = 2.0 / (N + 1);
    double[] y = new double[x.Length];
    y[0] = x[0];
    for (int i = 1; i < x.Length; i++)
        y[i] = k * x[i] + (1 - k) * y[i - 1];
    return y;
}
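For example, with N = 3 the smoothing constant is k = 2 / (3 + 1) = 0.5. A quick hypothetical call, assuming the EMA method above is in scope:
double[] prices = { 10.0, 10.5, 10.2, 10.8, 11.0 };
double[] ema = EMA(prices, 3);
// ema[0] = 10.0
// ema[1] = 0.5 * 10.5 + 0.5 * 10.0 = 10.25, and so on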

Incidentally, I found this package: https://daveskender.github.io/Stock.Indicators/docs/INDICATORS.html It targets the latest .NET and has very detailed documentation.

Try this:
public IEnumerable<double> EMA(IEnumerable<double> items, int notationalAge)
{
    double k = 2.0d / (notationalAge + 1), prev = 0.0d;
    var e = items.GetEnumerator();
    if (!e.MoveNext()) yield break;
    yield return prev = e.Current;
    while (e.MoveNext())
    {
        yield return prev = (k * e.Current) + (1 - k) * prev;
    }
}
It will still work with arrays, but also with List, Queue, Stack, IReadOnlyCollection, etc.
Although it's not explicitly stated, I also get the sense this is working with money, in which case it really ought to use decimal instead of double.
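For what it's worth, a minimal decimal-based sketch of the same iterator (my variant, under that assumption about money values, not part of the answer above):
public IEnumerable<decimal> EMA(IEnumerable<decimal> items, int notationalAge)
{
    decimal k = 2.0m / (notationalAge + 1), prev = 0.0m;
    var e = items.GetEnumerator();
    if (!e.MoveNext()) yield break;
    yield return prev = e.Current;
    while (e.MoveNext())
    {
        // same recurrence as above, in decimal to avoid binary rounding on money values
        yield return prev = (k * e.Current) + (1 - k) * prev;
    }
}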

Related

Open Scene Graph - Usage of DrawElementsUInt: Drawing a cloth without duplicating vertices

I am currently working on simulating a cloth-like material and then displaying the results via OpenSceneGraph.
I've gotten the setup to display something cloth-like by just dumping all the vertices into one Vec3Array and then displaying them with a standard point-based DrawArrays. However, I am now looking into adding the faces between the vertices so that a further part of my application can visually see the cloth.
This is what I am currently attempting for the PrimitiveSet:
// create and add a DrawElements primitive (see include/osg/Primitive). The first
// parameter passed to the DrawElementsUInt constructor is the Primitive::Mode,
// in this case GL_QUADS, and the second parameter is the number of indices to
// allocate.
unsigned int k = CLOTH_SIZE_X;
unsigned int n = CLOTH_SIZE_Y;
osg::ref_ptr<osg::DrawElementsUInt> indices = new osg::DrawElementsUInt(GL_QUADS, (k) * (n));
for (uint y_i = 0; y_i < n - 1; y_i++) {
    for (uint x_i = 0; x_i < k - 1; x_i++) {
        (*indices)[y_i * k + x_i] = y_i * k + x_i;
        (*indices)[y_i * (k + 1) + x_i] = y_i * (k + 1) + x_i;
        (*indices)[y_i * (k + 1) + x_i + 1] = y_i * (k + 1) + x_i + 1;
        (*indices)[y_i * k + x_i] = y_i * k + x_i + 1;
    }
}
geom->addPrimitiveSet(indices.get());
This does, however, cause memory corruption when running, and I am not fluent enough in assembly to decipher what it is doing wrong when CLion shows me the disassembled code.
My thought was that I would iterate over each of the faces of my cloth and then select the 4 indices of the vertices that belong to it. The vertices are input from top left to bottom right, in order. So:
0 1 2 3 ... k-1
k k+1 k+2 k+3 ... 2k-1
2k 2k+1 2k+2 2k+3 ... 3k-1
...
Has anyone come across this specific use case before, and do they perhaps have a solution for my problem? Any help would be greatly appreciated.
You might want to look into using DrawArrays with QUAD_STRIP (or TRIANGLE_STRIP because quads are frowned upon these days). There's an example here:
http://openscenegraph.sourceforge.net/documentation/OpenSceneGraph/examples/osggeometry/osggeometry.cpp
It's slightly less efficient than Elements/indices, but it's also less complicated to manage the relationship between the two related containers (the vertices and the indices).
If you really want to do the Elements/indices route, we'd probably need to see more repro code to see what's going on.
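For what it's worth, if you do stay with the Elements/indices route, the bare index arithmetic for a grid that is k vertices wide and n vertices tall would look roughly like the sketch below (written as plain C# rather than OSG code, and BuildQuadIndices is just an illustrative name). The key point is that the index buffer needs (k - 1) * (n - 1) * 4 entries, one quad per grid cell, rather than k * n:
uint[] BuildQuadIndices(uint k, uint n)
{
    // one quad per cell, four indices per quad
    var indices = new uint[(k - 1) * (n - 1) * 4];
    uint o = 0;
    for (uint y = 0; y < n - 1; y++)
    {
        for (uint x = 0; x < k - 1; x++)
        {
            indices[o++] = y * k + x;           // top-left
            indices[o++] = y * k + x + 1;       // top-right
            indices[o++] = (y + 1) * k + x + 1; // bottom-right
            indices[o++] = (y + 1) * k + x;     // bottom-left
        }
    }
    return indices;
}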

How to calculate a geometric cross field inside an arbitrary polygon?

I'm having trouble finding a way to calculate a "cross field" inside an arbitrary polygon.
A cross field, as defined by one paper, is the smoothest field that is tangential to the domain boundary (in this case the polygon). I see it a lot in quad re-topology papers, but surprisingly I can't even find a definition of a cross field on Wikipedia.
I have images but since I'm new here the system said I need at least 10 reputation points to upload images.
Any ideas?
I think it could be something along the lines of an interpolation: given an inner point, determine the distance to each edge, then integrate or weighted-sum the tangent and perpendicular vectors of every edge by that distance (or any other factor, in fact)?
But maybe simpler approaches exist?
Thanks in advance!
//I've come up with something like this (for the 3D case), very raw, educational purposes
float distance2segment(Vector3D p, Vector3D p0, Vector3D p1){
    Vector3D v = p1 - p0;
    Vector3D w = p - p0;
    float c1 = v.Dot(w);
    if (c1 <= 0)
        return (p - p0).Length(); // closest point is p0
    float c2 = v.Dot(v);
    if (c2 <= c1)
        return (p - p1).Length(); // closest point is p1
    float b = c1 / c2;
    Vector3D pb = p0 + b * v;     // projection of p onto the segment
    return (p - pb).Length();
}
void CrossFieldInterpolation(List<Vector3D>& Contour, List<Vector3D>& ContourN, Vector3D p, Vector3D& crossU, Vector3D& crossV){
    int N = Contour.Amount();
    // crossU and crossV are assumed to be zero before accumulation
    for (int i = 0; i < N; i++){
        Vector3D u = Contour[(i + 1) % N] - Contour[i];
        Vector3D n = 0.5 * (ContourN[(i + 1) % N] + ContourN[i]);
        Vector3D v = -Vector3D::Cross(u, n); // perpendicular vector
        u = Vector3D::Normalize(u);
        n = Vector3D::Normalize(n);
        v = Vector3D::Normalize(v);
        float dist = distance2segment(p, Contour[i], Contour[(i + 1) % N]);
        crossU += u / (1 + dist); // to avoid infinity at points over the segment
        crossV += v / (1 + dist);
    }
    crossU = Vector3D::Normalize(crossU);
    crossV = Vector3D::Normalize(crossV);
}
You can check the open-source Graphite software that I'm developing; it implements the "Periodic Global Parameterization" algorithm [1] that was developed in my research team. You may also be interested in the following research articles with algorithms that we developed more recently [2], [3].
Graphite website:
http://alice.loria.fr/software/graphite
How to use Periodic Global Parameterization:
http://alice.loria.fr/WIKI/index.php/Graphite/PGP
[1] http://alice.loria.fr/index.php/publications.html?Paper=TOG_pgp%402006
[2] http://alice.loria.fr/index.php/publications.html?Paper=DGF#2008
[3] http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=DFD#2008&Author=vallet

Maximum volume of a box with perimeter and area given

Here's the link to the question:
http://www.codechef.com/problems/J7
I figured out that two edges have to be equal in order to give the maximum volume, and then used x, x, a*x as the lengths of the three edges to write the equations:
4*x + 4*x + 4*a*x = P (perimeter), and
2*x^2 + 4*(a*x*x) = S (total surface area of the box)
From the first equation I got x in terms of P and a, substituted it into the second equation, and got a quadratic equation with the unknown being a. I then used the greater root of a and got x.
But this method seems to be giving the wrong answer! :|
I know that there isn't any logical error in this. Maybe some formatting error?
Here's the main code that I've written:
public class TheBestBox
{
    public static void main(String[] args)
    {
        TheBestBox box = new TheBestBox();
        reader = box.new InputReader(System.in);
        writer = box.new OutputWriter(System.out);
        getAttributes();
        writer.flush();
        reader.close();
        writer.close();
    }

    public static void getAttributes()
    {
        t = reader.nextInt(); // t is the number of test cases in the question
        for (int i = 0; i < t; i++)
        {
            p = reader.nextInt();    // p is the perimeter given as input
            area = reader.nextInt(); // area of the whole sheet, given as input
            a = findRoot();          // the fraction by which the third side differs from the first two
            side = (double) p / (4 * (2 + a)); // length of the first and second sides (equal)
            height = a * side; // assuming the base is a square, the height is the side that differs from the other two
            // writer.println(side * side * height);
            // System.out.printf("%.2f\n", (side * side * height));
            writer.println(String.format("%.2f", (side * side * height))); // print the final answer
        }
    }

    // find the two possible fractions by which the height can differ from the other two sides and return the bigger one
    public static double findRoot()
    {
        double a32, b, discriminant, root1, root2;
        a32 = 32 * area - p * p;
        b = 32 * area - 2 * p * p;
        discriminant = Math.sqrt(b * b - 4 * 8 * area * a32);
        double temp;
        temp = 2 * 8 * area;
        root1 = (-b + discriminant) / temp;
        root2 = (-b - discriminant) / temp;
        return Math.max(root1, root2);
    }
}
Could someone please help me out with this? Thank you. :)
I also got stuck on this question and realized that it can be done by writing the equation for V (volume) in terms of one side, say 'l', and using differentiation to find the maximum volume.
So the equations are:
P = 4(l + b + h)
S = 2(l*b + b*h + l*h)
V = l*b*h
which gives the equation in l for V:
V = l^3 - (P/4)*l^2 + (S/2)*l ------- equ(1)
After differentiation we get:
dV/dl = 3*l^2 - (P/2)*l + S/2
To get the maximum V we equate the above to zero and solve the quadratic for l. Of its two roots, the smaller one gives the maximum, because the second derivative 6*l - P/2 is negative there:
l = ( P - sqrt(P^2 - 24*S) ) / 12
Substitute this l into equation (1) to get the maximum volume.
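For what it's worth, a tiny sketch of that formula (a C# sketch; MaxBoxVolume is just an illustrative name). For a cube, P = 12 and S = 6, and it returns 1.0 as expected:
static double MaxBoxVolume(double P, double S)
{
    // smaller root of 3*l^2 - (P/2)*l + S/2 = 0, where the second derivative is negative
    double l = (P - Math.Sqrt(P * P - 24 * S)) / 12.0;
    // substitute back into V(l) = l^3 - (P/4)*l^2 + (S/2)*l
    return l * l * l - (P / 4.0) * l * l + (S / 2.0) * l;
}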

Asymmetric Levenshtein distance

Given two bit strings, x and y, with x longer than y, I'd like to compute a kind of asymmetric variant of the Levenshtein distance between them. Starting with x, I'd like to know the minimum number of deletions and substitutions it takes to turn x into y.
Can I just use the usual Levenshtein distance for this, or do I need to modify the algorithm somehow? In other words, with the usual set of edits of deletion, substitution, and addition, is it ever beneficial to delete more than the difference in lengths between the two strings and then add some bits back? I suspect the answer is no, but I'm not sure. If I'm wrong, and I do need to modify the definition of Levenshtein distance to disallow additions, how do I do so?
Finally, I would expect intuitively that I'd get the same distance if I started with y (the shorter string) and only allowed additions and substitutions. Is this right? I've got a sense for what these answers are, I just can't prove them.
If I understand you correctly, I think the answer is yes: the Levenshtein edit distance can differ from an algorithm that only allows deletions and substitutions on the larger string. Because of this, you would need to modify the algorithm, or create a different one, to get your limited version.
Consider the two strings "ABCD" and "ACDEF". The Levenshtein distance is 3 (ABCD -> ACD -> ACDE -> ACDEF). If we start with the longer string and limit ourselves to deletions and substitutions, we must use 4 edits (1 deletion and 3 substitutions). The reason is that edit paths which insert into the smaller string to efficiently reach the larger one can't be mirrored when starting from the longer string, because the complementary insertion operation is not available (since you're disallowing it).
Your last paragraph is true. If the path from shorter to longer uses only insertions and substitutions, then any allowed path can simply be reversed from the longer to the shorter. Substitutions are the same regardless of direction, but the insertions when going from small to large become deletions when reversed.
I haven't tested this thoroughly, but the modification below shows the direction I would take, and it appears to work with the values I've tested. It's written in C#, and follows the pseudocode in the Wikipedia entry for Levenshtein distance. There are obvious optimizations that could be made, but I refrained from making them so the changes from the standard algorithm are more obvious. An important observation is that (using your constraints) if the strings are the same length, then substitution is the only operation allowed.
static int LevenshteinDistance(string s, string t)
{
    int i, j;
    int m = s.Length;
    int n = t.Length;

    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t;
    // note that d has (m+1)*(n+1) values
    var d = new int[m + 1, n + 1];

    // no need to set each element to zero:
    // C# creates the array already initialized to zero

    // source prefixes can be transformed into the empty string by
    // dropping all characters
    for (i = 0; i <= m; i++) d[i, 0] = i;

    // target prefixes can be reached from the empty source prefix
    // by inserting every character
    for (j = 0; j <= n; j++) d[0, j] = j;

    for (j = 1; j <= n; j++)
    {
        for (i = 1; i <= m; i++)
        {
            if (s[i - 1] == t[j - 1])
                d[i, j] = d[i - 1, j - 1]; // no operation required
            else
            {
                int del = d[i - 1, j] + 1;     // a deletion
                int ins = d[i, j - 1] + 1;     // an insertion
                int sub = d[i - 1, j - 1] + 1; // a substitution

                // the next two lines are the modification I've made
                //int insDel = (i < j) ? ins : del;
                //d[i, j] = (i == j) ? sub : Math.Min(insDel, sub);

                // the following lines are a clearer version of the above 2 lines
                if (i == j)
                {
                    d[i, j] = sub;
                }
                else
                {
                    int insDel;
                    if (i < j) insDel = ins; else insDel = del;
                    // assign the smaller of insDel or sub
                    d[i, j] = Math.Min(insDel, sub);
                }
            }
        }
    }
    return d[m, n];
}
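As a quick check against the example above (a hypothetical driver, assuming the method is accessible from where you call it):
// "ABCD" vs "ACDEF": plain Levenshtein gives 3, this restricted version gives 4
int d = LevenshteinDistance("ABCD", "ACDEF");
Console.WriteLine(d); // prints 4 (1 deletion + 3 substitutions from the longer string)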

Finding the local maxima/peaks and minima/valleys of histograms

Ok, so I have a histogram (represented by an array of ints), and I'm looking for the best way to find local maxima and minima. Each histogram should have 3 peaks, one of them (the first one) probably much higher than the others.
I want to do several things:
1. Find the first "valley" following the first peak (in order to get rid of the first peak altogether in the picture)
2. Find the optimum "valley" value in between the remaining two peaks to separate the picture
3. In case the valley in between the two remaining peaks is not low enough, give a warning
I already know how to do step 2 by implementing a variant of Otsu, but I'm struggling with step 1.
Also, the image is quite clean, with little noise to account for.
What would be brute-force algorithms for steps 1 and 3? I could find a way to implement Otsu, but the brute force is escaping me, math-wise. As it turns out, there is more documentation on methods like Otsu and less on simply finding peaks and valleys. I am not looking for anything more than whatever gets the job done (i.e. it's a temporary solution; it just has to be implementable in a reasonable timeframe, until I can spend more time on it).
I am doing all this in C#.
Any help on which steps to take would be appreciated!
Thank you so much!
EDIT: some more data: most histograms are likely to be like the first one, with the first peak representing the background.
Use the peakiness test. It's a method that finds every possible peak between two local minima and measures its peakiness with a formula; if the peakiness is higher than a threshold, the peak is accepted.
Source: UCF CV CAP5415 lecture 9 slides
Below is my code:
public static List<int> PeakinessTest(int[] histogram, double peakinessThres)
{
    int j = 0;
    List<int> valleys = new List<int>();
    //The start of the valley
    int vA = histogram[j];
    int P = vA;
    //The end of the valley
    int vB = 0;
    //The width of the valley, default width is 1
    int W = 1;
    //The sum of the pixels between vA and vB
    int N = 0;
    //The measure of the peak's peakiness
    double peakiness = 0.0;
    int peak = 0;
    bool l = false;
    try
    {
        while (j < 254)
        {
            l = false;
            vA = histogram[j];
            P = vA;
            W = 1;
            N = vA;
            int i = j + 1;
            //Climb to the peak
            while (P < histogram[i])
            {
                P = histogram[i];
                W++;
                N += histogram[i];
                i++;
            }
            //Find the border of the valley on the other side
            peak = i - 1;
            vB = histogram[i];
            N += histogram[i];
            i++;
            W++;
            l = true;
            while (vB >= histogram[i])
            {
                vB = histogram[i];
                W++;
                N += histogram[i];
                i++;
            }
            //Calculate peakiness
            peakiness = (1 - (double)((vA + vB) / (2.0 * P))) * (1 - ((double)N / (double)(W * P)));
            if (peakiness > peakinessThres & !valleys.Contains(j))
            {
                //peaks.Add(peak);
                valleys.Add(j);
                valleys.Add(i - 1);
            }
            j = i - 1;
        }
    }
    catch (Exception)
    {
        //Running past the end of the histogram ends the scan;
        //handle the final (truncated) peak if we were in the middle of one
        if (l)
        {
            vB = histogram[255];
            peakiness = (1 - (double)((vA + vB) / (2.0 * P))) * (1 - ((double)N / (double)(W * P)));
            if (peakiness > peakinessThres)
                valleys.Add(255);
            //peaks.Add(255);
            return valleys;
        }
    }
    //if (!valleys.Contains(255))
    //    valleys.Add(255);
    return valleys;
}
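A hypothetical usage sketch (the 256-bin length matches what the method indexes up to, and 0.7 is just an arbitrary threshold to tune for your images):
int[] histogram = new int[256];
// ... fill histogram from your image here ...
List<int> valleys = PeakinessTest(histogram, 0.7);
// valleys are added in pairs (start, end) around each accepted peak, so the
// first valley after the first accepted peak is valleys[1]
int firstValleyAfterFirstPeak = valleys.Count > 1 ? valleys[1] : -1;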
