I am trying to trace quadratic Bezier curves, placing "markers" at a given step-length distance. I tried to do it the naive way:
const p = toPoint(map, points[section + 1]);
const p2 = toPoint(map, points[section]);
const {x: cx, y: cy} = toPoint(map, cp);
const ll1 = toLatLng(map, p),
      ll2 = toLatLng(map, p2),
      llc = toLatLng(map, { x: cx, y: cy });
const lineLength = quadraticBezierLength(
  ll1.lat, ll1.lng,
  llc.lat, llc.lng,
  ll2.lat, ll2.lng
);
for (let index = 0; index < Math.floor(lineLength / distance); index++) {
  const t = distance / lineLength;
  const markerPoint = getQuadraticPoint(
    t * index,
    p.x, p.y,
    cx, cy,
    p2.x, p2.y
  );
  const markerLatLng = toLatLng(map, markerPoint);
  markers.push(markerLatLng);
}
This approach does not work, since the relationship between the parameter t and the arc length L of a quadratic curve is not linear. I could not find a formula that would give me a good approximation, so I am looking at solving this problem with numeric methods [Newton]. One simple option that I am considering is to split the curve into x [for instance 10] times more pieces than needed. After that, using the same quadraticBezierLength() function, calculate the length to each of those points. Then choose the point whose length is closest to distance * index.
This, however, would be huge overkill in terms of algorithmic complexity. I could probably start comparing points for index + 1 from the subset after (without) the point I have already selected, thus skipping the beginning of the set. This would lower the complexity somewhat, yet it would still be very inefficient.
Any ideas and/or suggestions?
Ideally, I want a function that takes d (the distance along the curve) and p0, cp, p1 (the three points defining a quadratic Bezier curve), and returns an array of coordinates, implemented with the least complexity possible.
OK, I found an analytic formula for the arc length of a 2D quadratic Bezier curve here:
Calculate the length of a segment of a quadratic bezier
So the idea is to simply binary-search the parameter t until the analytically obtained arc length matches the wanted length...
C++ code:
//---------------------------------------------------------------------------
float x0,x1,x2,y0,y1,y2;    // control points
float ax[3],ay[3];          // coefficients
//---------------------------------------------------------------------------
void get_xy(float &x,float &y,float t) // get point on curve from parameter t=<0,1>
    {
    float tt=t*t;
    x=ax[0]+(ax[1]*t)+(ax[2]*tt);
    y=ay[0]+(ay[1]*t)+(ay[2]*tt);
    }
//---------------------------------------------------------------------------
float get_l_naive(float t) // get arclength from parameter t=<0,1>
    {
    // naive iteration
    float x0,x1,y0,y1,dx,dy,l=0.0,dt=0.001;
    get_xy(x1,y1,t);
    for (int e=1;e;)
        {
        t-=dt; if (t<0.0){ e=0; t=0.0; }
        x0=x1; y0=y1; get_xy(x1,y1,t);
        dx=x1-x0; dy=y1-y0;
        l+=sqrt((dx*dx)+(dy*dy));
        }
    return l;
    }
//---------------------------------------------------------------------------
float get_l(float t) // get arclength from parameter t=<0,1>
    {
    // analytic formula from: https://stackoverflow.com/a/11857788/2521214
    float ax,ay,bx,by,A,B,C,b,c,u,k,cu,cb;
    ax=x0-x1-x1+x2;
    ay=y0-y1-y1+y2;
    bx=x1+x1-x0-x0;
    by=y1+y1-y0-y0;
    A=4.0*((ax*ax)+(ay*ay));
    B=4.0*((ax*bx)+(ay*by));
    C=     (bx*bx)+(by*by);
    b=B/(2.0*A);
    c=C/A;
    u=t+b;
    k=c-(b*b);
    cu=sqrt((u*u)+k);
    cb=sqrt((b*b)+k);
    return 0.5*sqrt(A)*((u*cu)-(b*cb)+(k*log(fabs((u+cu))/(b+cb))));
    }
//---------------------------------------------------------------------------
float get_t(float l0) // get parameter t=<0,1> from arclength
    {
    float t0,t,dt,l;
    for (t=0.0,dt=0.5;dt>1e-10;dt*=0.5)
        {
        t0=t; t+=dt;
        l=get_l(t);
        if (l>l0) t=t0;
        }
    return t;
    }
//---------------------------------------------------------------------------
void set_coef() // compute coefficients from control points
    {
    ax[0]=( x0);
    ax[1]=+(2.0*x1)-(2.0*x0);
    ax[2]=( x2)-(2.0*x1)+( x0);
    ay[0]=( y0);
    ay[1]=+(2.0*y1)-(2.0*y0);
    ay[2]=( y2)-(2.0*y1)+( y0);
    }
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
Usage:
Set the control points x0,y0,...,
then you can use t=get_t(wanted_arclength) freely.
In case you want to use get_l_naive and/or get_xy, you have to call set_coef first.
In case you want to tweak speed/accuracy, you can play with the target accuracy of the binary search, currently set to 1e-10.
Here is an optimized version (with the get_l and get_t functions merged):
//---------------------------------------------------------------------------
float get_t(float l0) // get parameter t=<0,1> from arclength
    {
    float t0,t,dt,l;
    float ax,ay,bx,by,A,B,C,b,c,u,k,cu,cb,cA;
    // precompute get_l(t) constants
    ax=x0-x1-x1+x2;
    ay=y0-y1-y1+y2;
    bx=x1+x1-x0-x0;
    by=y1+y1-y0-y0;
    A=4.0*((ax*ax)+(ay*ay));
    B=4.0*((ax*bx)+(ay*by));
    C=     (bx*bx)+(by*by);
    b=B/(2.0*A);
    c=C/A;
    k=c-(b*b);
    cb=sqrt((b*b)+k);
    cA=0.5*sqrt(A);
    // bin search t so get_l == l0
    for (t=0.0,dt=0.5;dt>1e-10;dt*=0.5)
        {
        t0=t; t+=dt;
        // l=get_l(t);
        u=t+b; cu=sqrt((u*u)+k);
        l=cA*((u*cu)-(b*cb)+(k*log(fabs((u+cu))/(b+cb))));
        if (l>l0) t=t0;
        }
    return t;
    }
//---------------------------------------------------------------------------
For now, I came up with the following:
for (let index = 0; index < Math.floor(numFloat * times); index++) {
  const t = distance / lineLength / times;
  const l1 = toLatLng(map, p),
        lcp = toLatLng(map, new L.Point(cx, cy));
  const lutPoint = getQuadraticPoint(
    t * index,
    p.x, p.y,
    cx, cy,
    p2.x, p2.y
  );
  const lutLatLng = toLatLng(map, lutPoint);
  const length = quadraticBezierLength(l1.lat, l1.lng, lcp.lat, lcp.lng, lutLatLng.lat, lutLatLng.lng);
  lut.push({t: t * index, length});
}
const lut1 = lut.filter(({length}) => !isNaN(length));
console.log('lookup table:', lut1);
for (let index = 0; index < Math.floor(numFloat); index++) {
  const t = distance / lineLength;
  // find t closest to distance * index
  const markerT = lut1.reduce((a, b) => {
    return a.t && Math.abs(b.length - distance * index) < Math.abs(a.length - distance * index) ? b.t : a.t || 0;
  });
  const markerPoint = getQuadraticPoint(
    markerT,
    p.x, p.y,
    cx, cy,
    p2.x, p2.y
  );
  const markerLatLng = toLatLng(map, markerPoint);
}
I think the only problem is that my Bezier curve length function is not working as I expected.
function quadraticBezierLength(x1, y1, x2, y2, x3, y3) {
  let a, b, c, d, e, u, a1, e1, c1, d1, u1, v1x, v1y;

  v1x = x2 * 2;
  v1y = y2 * 2;
  d = x1 - v1x + x3;
  d1 = y1 - v1y + y3;
  e = v1x - 2 * x1;
  e1 = v1y - 2 * y1;
  c1 = a = 4 * (d * d + d1 * d1);
  c1 += b = 4 * (d * e + d1 * e1);
  c1 += c = e * e + e1 * e1;
  c1 = 2 * Math.sqrt(c1);
  a1 = 2 * a * (u = Math.sqrt(a));
  u1 = b / u;
  a = 4 * c * a - b * b;
  c = 2 * Math.sqrt(c);
  return (
    (a1 * c1 + u * b * (c1 - c) + a * Math.log((2 * u + u1 + c1) / (u1 + c))) /
    (4 * a1)
  );
}
I believe that the full curve length is correct, but the partial length that is being calculated for the lookup table is wrong.
If I am right, you want points equally spaced in terms of curvilinear abscissa (rather than at constant Euclidean distance, which would be a very different problem).
Computing the curvilinear abscissa s as a function of the curve parameter t is indeed an option, but that leads you to solving the equation s(t) = S·k/n for integer k, where S is the total length (or s(t) = k·d if a step d is imposed). This is not convenient, because s(t) is not available as a simple function and is transcendental.
A better method is to solve the differential equation
dt/ds = 1/(ds/dt) = 1/√((dx/dt)² + (dy/dt)²)
using your preferred ODE solver (RK4). This lets you impose your fixed step on s and is computationally efficient.
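For illustration, here is a minimal C# sketch of this idea (the control points and step size are made-up values, and this is my sketch of the ODE approach, not code from the question or answer):

using System;

class BezierArcLengthWalker
{
    // Made-up control points of a quadratic Bezier in pixel coordinates.
    static double x0 = 0, y0 = 0, x1 = 50, y1 = 100, x2 = 100, y2 = 0;

    // Derivative of the curve: B'(t) = 2(1-t)(P1-P0) + 2t(P2-P1).
    static void Derivative(double t, out double dx, out double dy)
    {
        dx = 2 * (1 - t) * (x1 - x0) + 2 * t * (x2 - x1);
        dy = 2 * (1 - t) * (y1 - y0) + 2 * t * (y2 - y1);
    }

    // Right-hand side of the ODE: dt/ds = 1 / |B'(t)|.
    static double DtDs(double t)
    {
        Derivative(t, out double dx, out double dy);
        return 1.0 / Math.Sqrt(dx * dx + dy * dy);
    }

    // Advance t by one arc-length step ds using classic RK4.
    static double StepRK4(double t, double ds)
    {
        double k1 = DtDs(t);
        double k2 = DtDs(t + 0.5 * ds * k1);
        double k3 = DtDs(t + 0.5 * ds * k2);
        double k4 = DtDs(t + ds * k3);
        return t + ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0;
    }

    static void Main()
    {
        double ds = 10.0; // desired marker spacing along the curve (arc length)
        for (double t = 0.0; t <= 1.0; t = StepRK4(t, ds))
        {
            // Evaluate the curve at t here and place a marker.
            Console.WriteLine($"t = {t:F4}");
        }
    }
}

Because t is advanced by a fixed arc-length step ds, each iteration yields a parameter value at which the curve can be evaluated to place the next marker.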
I'm using the simple moving average in Math.NET, but now that I also need to calculate an EMA (exponential moving average) or any kind of weighted moving average, I can't find it in the library.
I looked over all methods under MathNet.Numerics.Statistics and beyond, but didn't find anything similar.
Is it missing from the library, or do I need to reference some additional package?
I don't see any EMA in MathNet.Numerics; however, it's trivial to program. The routine below is based on the definition at Investopedia.
public double[] EMA(double[] x, int N)
{
    // x is the input series
    // N is the notional age of the data used
    // k is the smoothing constant
    double k = 2.0 / (N + 1);
    double[] y = new double[x.Length];
    y[0] = x[0];
    for (int i = 1; i < x.Length; i++) y[i] = k * x[i] + (1 - k) * y[i - 1];
    return y;
}
Incidentally, I found this package: https://daveskender.github.io/Stock.Indicators/docs/INDICATORS.html It targets the latest .NET Framework and has very detailed documentation.
Try this:
public IEnumerable<double> EMA(IEnumerable<double> items, int notationalAge)
{
    double k = 2.0d / (notationalAge + 1), prev = 0.0d;
    var e = items.GetEnumerator();
    if (!e.MoveNext()) yield break;
    yield return prev = e.Current;
    while (e.MoveNext())
    {
        yield return prev = (k * e.Current) + (1 - k) * prev;
    }
}
It will still work with arrays, but also with List, Queue, Stack, IReadOnlyCollection, etc.
Although it's not explicitly stated, I also get the sense this is working with money, in which case it really ought to use decimal instead of double.
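A minimal sketch of what a decimal-based variant could look like (this is mine, not part of the original answers; it uses the same recurrence and is meant as a drop-in next to the IEnumerable version above):

public static IEnumerable<decimal> EMA(IEnumerable<decimal> items, int notionalAge)
{
    // Same EMA recurrence, but in decimal to avoid binary floating-point
    // rounding when the values represent money.
    decimal k = 2.0m / (notionalAge + 1);
    bool first = true;
    decimal prev = 0m;
    foreach (decimal x in items)
    {
        prev = first ? x : k * x + (1 - k) * prev;
        first = false;
        yield return prev;
    }
}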
I am trying to get more comfortable with the math behind fractal coloring and to understand the coloring algorithms much better. I am reading the following paper:
http://jussiharkonen.com/files/on_fractal_coloring_techniques%28lo-res%29.pdf
The paper gives specific parameters for each of the functions; however, when I use the same ones, my results are not quite right. I have no idea what could be going on, though.
To start, I am using the iteration-count coloring algorithm and the following Julia set:
c = 0.5 + 0.25i and p = 2
with the coloring algorithm:
The coloring function simply returns the number of elements in the truncated orbit divided by 20.
And the palette function:
I(u) = k(u − u0), where k = 2.5 and u0 = 0, was used.
And with a palette being white at 0 and 1, and interpolating to black in-between.
and following this algorithm:
Set z0 to correspond to the position of the pixel in the complex plane.
Calculate the truncated orbit by iterating the formula zn = f(zn−1) starting from z0 until either
• |zn| > M, or
• n = Nmax,
where Nmax is the maximum number of iterations.
Using the coloring and color index functions, map the resulting truncated orbit to a color index value.
Determine an RGB color of the pixel by using the palette function.
Using this, my code looks like the following:
float izoom = pow(1.001, zoom);
vec2 z = focusPoint + (uv * 4.0 - 2.0) * 1.0 / izoom;
vec2 c = vec2(0.5f, 0.25f);
const float B = 2.0;
float l = 0.0;  // iteration count
for (int i = 0; i < 100; i++)
{
    z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;
    if (length(z) > 10.0) break;
    l++;
}
float ind = basicindex(l);
vec4 col = color(ind);
and have the following index and coloring functions:
float basicindex(float val) {
    return val / 20.0;
}

vec4 color(float index) {
    float r = 2.5 * index;
    float g = r;
    float b = g;
    vec3 v = 0.5 - 0.5 * sin(3.14/2.0 + 3.14 * vec3(r, g, b));
    return vec4(1.0 - v, 1.0);
}
The paper provides the following image:
https://imgur.com/YIZMhaa
While my code produces:
https://imgur.com/OrxdMsN
I get the correct results by using k = 1.0 instead of 2.5; however, I would prefer to understand why my results are incorrect. When extending this to the smooth coloring algorithms, my results are still incorrect, so I would like to figure this out first.
Let me know if this isn't the correct place for this kind of question and I can move it to the math stack exchange. I wasn't sure which place was more appropriate.
Your image is perfectly implemented for Figure 3.3 in the paper. The other image you posted uses a different routine.
Your figure seems to have that bit of perspective code at the top, but remove that and they should be the same.
If your objection is to the color extremes, you set those with the "0.5 - 0.5 * ..." part of your code. This makes the darkest black 0.5, when in the example image you're trying to duplicate the darkest black should be 1 and the lightest white should be 0.
You're making the whiteness equal to the distance from 0.5.
If you ignore the fractal altogether, you are getting a bunch of values that can be normalized between 0 and 1, and you're coloring those in some particular way. Clearly, the image you are duplicating is linear between 0 and 1, so putting black at 0.5 cannot be correct.
o = {
    length : 500,
    width : 500,
    c : [.5, .25], // c = x + iy will be [x, y]
    maxIterate : 100,
    canvas : null
}

function point(pos, color){
    var c = 255 - Math.round((1 + Math.log(color)/Math.log(o.maxIterate)) * 255);
    c = c.toString(16);
    if (c.length == 1) c = '0'+c;
    o.canvas.fillStyle="#"+c+c+c;
    o.canvas.fillRect(pos[0], pos[1], 1, 1);
}

function conversion(x, y, R){
    var m = R / o.width;
    var x1 = m * (2 * x - o.width);
    var y2 = m * (o.width - 2 * y);
    return [x1, y2];
}

function f(z, c){
    return [z[0]*z[0] - z[1]*z[1] + c[0], 2 * z[0] * z[1] + c[1]];
}

function abs(z){
    return Math.sqrt(z[0]*z[0] + z[1]*z[1]);
}

function init(){
    var R = (1 + Math.sqrt(1+4*abs(o.c))) / 2,
        z, x, y, i;
    o.canvas = document.getElementById('a').getContext("2d");
    for (x = 0; x < o.width; x++){
        for (y = 0; y < o.length; y++){
            i = 0;
            z = conversion(x, y, R);
            while (i < o.maxIterate && abs(z) < R){
                z = f(z, o.c);
                if (abs(z) > R) break;
                i++;
            }
            if (i) point([x, y], i / o.maxIterate);
        }
    }
}
init();
<canvas id="a" width="500" height="500"></canvas>
via: http://jsfiddle.net/3fnB6/29/
I have a robotic arm composed of 2 servo motors. I am trying to calculate the inverse kinematics such that the arm is positioned in the middle of a canvas and can move to all possible points in both directions (left and right). Here is an image of the system: Image. The first servo moves 0-180 (anti-clockwise). The second servo moves 0-180 (clockwise).
Here is my code:
int L1 = 170;
int L2 = 230;

Vector shoulderV;
Vector targetV;
shoulderV = new Vector(0, 0);
targetV = new Vector(0, 400);

Vector difference = Vector.Subtract(targetV, shoulderV);
double L3 = difference.Length;
if (L3 > 400) { L3 = 400; }
if (L3 < 170) { L3 = 170; }

// a + b is the equivalent of the shoulder angle
double a = Math.Acos((L1 * L1 + L3 * L3 - L2 * L2) / (2 * L1 * L3));
double b = Math.Atan(difference.Y / difference.X);
// S1 is the shoulder angle
double S1 = a + b;
// S2 is the elbow angle
double S2 = Math.Acos((L1 * L1 + L2 * L2 - L3 * L3) / (2 * L1 * L2));

int shoulderAngle = Convert.ToInt16(Math.Round(S1 * 180 / Math.PI));
if (shoulderAngle < 0) { shoulderAngle = 180 - shoulderAngle; }
if (shoulderAngle > 180) { shoulderAngle = 180; }
int elbowAngle = Convert.ToInt16(Math.Round(S2 * 180 / Math.PI));
elbowAngle = 180 - elbowAngle;
Initially, when the system is first started, the arm is straight, with shoulder = 90 and elbow = 0.
When I give positive x values, I get correct results on the left side of the canvas. However, I want the arm to move on the right side as well. I do not get correct values when I enter negative ones. What am I doing wrong? Do I need an extra servo to reach points on the right side?
Sorry if the explanation is not good. English is not my first language.
I suspect that you are losing a sign when you are using Math.Atan(). I don't know what programming language or environment this is, but check whether you have something like the following available:
Instead of this line:
double b = Math.Atan(difference.Y / difference.X);
Use something like this:
double b = Math.Atan2(difference.Y, difference.X);
When difference.Y and difference.X have the same sign, dividing them results in a positive value. That prevents you from differentiating between the cases when they are both positive and both negative. In that case, you cannot differentiate between 30 and 210 degrees, for example.
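For example (an illustrative snippet I added, not part of the original answer), two targets whose coordinates have the same ratio but different signs collapse to the same Atan result, while Atan2 keeps them apart:

using System;

class AtanVsAtan2
{
    static void Main()
    {
        // Both coordinates positive vs. both negative: the ratio Y/X is identical,
        // so Atan cannot tell 30 degrees from 210 degrees.
        Console.WriteLine(Math.Atan(1.0 / 1.732) * 180 / Math.PI);    //  ~30
        Console.WriteLine(Math.Atan(-1.0 / -1.732) * 180 / Math.PI);  //  ~30 (wrong quadrant)

        // Atan2 takes the signs of Y and X separately and resolves the quadrant.
        Console.WriteLine(Math.Atan2(1.0, 1.732) * 180 / Math.PI);    //  ~30
        Console.WriteLine(Math.Atan2(-1.0, -1.732) * 180 / Math.PI);  // ~-150 (i.e. 210)
    }
}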
Ok, so I have a histogram (represented by an array of ints), and I'm looking for the best way to find local maxima and minima. Each histogram should have 3 peaks, one of them (the first one) probably much higher than the others.
I want to do several things:
1. Find the first "valley" following the first peak (in order to get rid of the first peak altogether in the picture).
2. Find the optimum "valley" value in between the remaining two peaks to separate the picture.
3. In case the valley in between the two remaining peaks is not low enough, give a warning.
I already know how to do step 2 by implementing a variant of Otsu, but I'm struggling with step 1.
Also, the image is quite clean, with little noise to account for.
What would be brute-force algorithms for steps 1 and 3? I could find a way to implement Otsu, but the brute force is escaping me, math-wise. As it turns out, there is more documentation on methods like Otsu and less on simply finding peaks and valleys. I am not looking for anything more than whatever gets the job done (i.e. it's a temporary solution that just has to be implementable in a reasonable timeframe, until I can spend more time on it).
I am doing all this in C#.
Any help on which steps to take would be appreciated!
Thank you so much!
EDIT: some more data:
Most histograms are likely to be like the first one, with the first peak representing the background.
Use the peakiness test. It's a method to find all the possible peaks between two local minima and measure their peakiness based on a formula. If the peakiness is higher than a threshold, the peak is accepted.
Source: UCF CV CAP5415 lecture 9 slides
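The measure the code below computes (my plain-text restatement, matching the expression in the code) is, for a peak of height P between a left valley of height vA and a right valley of height vB, with W the number of bins spanned and N the sum of the histogram counts over that span:

peakiness = (1 − (vA + vB) / (2·P)) · (1 − N / (W·P))

The peak is accepted when this value exceeds the threshold.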
Below is my code:
public static List<int> PeakinessTest(int[] histogram, double peakinessThres)
{
    int j = 0;
    List<int> valleys = new List<int>();
    // The start of the valley
    int vA = histogram[j];
    int P = vA;
    // The end of the valley
    int vB = 0;
    // The width of the valley, default width is 1
    int W = 1;
    // The sum of the pixels between vA and vB
    int N = 0;
    // The measure of the peak's peakiness
    double peakiness = 0.0;
    int peak = 0;
    bool l = false;
    try
    {
        while (j < 254)
        {
            l = false;
            vA = histogram[j];
            P = vA;
            W = 1;
            N = vA;
            int i = j + 1;
            // To find the peak
            while (P < histogram[i])
            {
                P = histogram[i];
                W++;
                N += histogram[i];
                i++;
            }
            // To find the border of the valley on the other side
            peak = i - 1;
            vB = histogram[i];
            N += histogram[i];
            i++;
            W++;
            l = true;
            while (vB >= histogram[i])
            {
                vB = histogram[i];
                W++;
                N += histogram[i];
                i++;
            }
            // Calculate peakiness
            peakiness = (1 - (double)((vA + vB) / (2.0 * P))) * (1 - ((double)N / (double)(W * P)));
            if (peakiness > peakinessThres & !valleys.Contains(j))
            {
                //peaks.Add(peak);
                valleys.Add(j);
                valleys.Add(i - 1);
            }
            j = i - 1;
        }
    }
    catch (Exception)
    {
        if (l)
        {
            vB = histogram[255];
            peakiness = (1 - (double)((vA + vB) / (2.0 * P))) * (1 - ((double)N / (double)(W * P)));
            if (peakiness > peakinessThres)
                valleys.Add(255);
            //peaks.Add(255);
            return valleys;
        }
    }
    //if (!valleys.Contains(255))
    //    valleys.Add(255);
    return valleys;
}
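A possible calling pattern (my example; it assumes PeakinessTest above is in scope, e.g. a static method of the same class, and the threshold 0.5 is just a placeholder to tune for your images):

static void Demo()
{
    int[] histogram = new int[256];
    // ... fill histogram[i] with the count of pixels having intensity i ...

    List<int> valleys = PeakinessTest(histogram, 0.5);
    // valleys holds the bin indices of the valleys bracketing each accepted peak
    Console.WriteLine(string.Join(", ", valleys));
}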