Raphaeljs get coordinates of scaled path - svg

I have a path to create a shape - eg. an octagon
pathdetail="M50,83.33 L83.33,50 L116.66,50 L150,83.33 L150,116.66 L116.66,150 L83.33,150 L50,116.66Z";
paper.path(pathdetail);
paper.path(pathdetail).transform("S3.5");
This creates the shape, and I know the coordinates of each corner because they are in pathdetail.
I then rescale it using transform("S3.5"). I need to get the new coordinates of each corner of the scaled shape - is this possible?

Raphael provides a utility for applying matrix transforms to paths: first convert the transformation to a matrix, then apply that matrix to the path and draw the result:
var matrix = Raphael.toMatrix(pathdetail, "S3.5");
var newPath = Raphael.mapPath(pathdetail, matrix);
paper.path(newPath);

If I understand correctly, you want to find the transformed coordinates of each of the eight points in your octagon -- correct? If so, Raphael does not have an out of the box solution for you, but you should be able to get the information you need relatively easily using some of Raphael's core utility functions.
My recommendation would be something like this:
var pathdetail = "your path definition here. Your path uses only absolute coordinates... right?";
pathdetail = Raphael.transformPath( pathdetail, "your transform string" );
// pathdetail will now still be a string full of path notation, but its coordinates will be transformed appropriately
var pathparts = Raphael.parsePathString( pathdetail );
var cornerList = [];
// pathparts will now be an array of path commands, each of which is parsed into a subarray consisting of a command letter and 0 or more parameters.
// The following logic assumes that your path string uses ONLY ABSOLUTE COORDINATES and does
// not take relative coordinates (or H/V directives) into account. You should be able to
// code around this with only a little additional logic =)
for ( var i = 0; i < pathparts.length; i++ )
{
    switch( pathparts[i][0] )
    {
        case "M" :
        case "L" :
            // Capture the point
            cornerList.push( { x: pathparts[i][1], y: pathparts[i][2] } );
            break;
        default :
            console.log("Skipping irrelevant path directive '" + pathparts[i][0] + "'" );
            break;
    }
}
// At this point, the array cornerList should be populated with every discrete point in your path.
This is obviously an undesirable chunk of code to use inline and will only handle a subset of paths in the wild (though it could be expanded to be suitable for general purpose use). However, for the octagon case where the path string uses absolute coordinates, this -- or something much like it -- should give you exactly what you need.

Related

deno template matching using OpenCV gives no results

I'm trying to use https://deno.land/x/opencv@v4.3.0-10 to get template matching to work in Deno. I heavily based my code on the Node example provided, but can't seem to work it out just yet.
By following the source code I first stumbled upon error: Uncaught (in promise) TypeError: Cannot convert "undefined" to int while calling cv.matFromImageData(imageSource).
After experimenting and searching I figured the function expects {data: Uint8ClampedArray, height: number, width: number}. This is based on this SO post and might be incorrect, hence posting it here.
The issue I'm currently faced with is that I don't seem to get proper matches for my template. Only when I set the threshold to 0.1 or lower do I get a match, but it is not correct: { xStart: 0, yStart: 0, xEnd: 29, yEnd: 25 }.
I used the images provided by the templateMatching example here.
Haystack
Needle
Any input/thoughts on this are appreciated.
import { cv } from 'https://deno.land/x/opencv@v4.3.0-10/mod.ts';

export const match = (imagePath: string, templatePath: string) => {
  const imageSource = Deno.readFileSync(imagePath);
  const imageTemplate = Deno.readFileSync(templatePath);

  const src = cv.matFromImageData({ data: imageSource, width: 640, height: 640 });
  const templ = cv.matFromImageData({ data: imageTemplate, width: 29, height: 25 });

  const processedImage = new cv.Mat();
  const logResult = new cv.Mat();
  const mask = new cv.Mat();

  cv.matchTemplate(src, templ, processedImage, cv.TM_SQDIFF, mask);
  cv.log(processedImage, logResult);
  console.log(logResult.empty());
};
UPDATE
Using @ChristophRackwitz's answer and digging into the opencv(js) docs, I managed to get close to my goal.
I decided to step down from taking multiple matches into account and focused on a single (best) match of my needle in the haystack, since ultimately that is my use case anyway.
Going through the code provided in this example and comparing its data with the data in my code, I figured something was off with the binary image data which I supplied to cv.matFromImageData. I solved this by properly decoding the png and passing the decoded image's bitmap to cv.matFromImageData.
I used TM_SQDIFF as suggested, and got some great results.
Haystack
Needle
Result
I achieved this in the following way.
import { cv } from 'https://deno.land/x/opencv@v4.3.0-10/mod.ts';
import { Image } from 'https://deno.land/x/imagescript@v1.2.14/mod.ts';

export type Match = false | {
  x: number;
  y: number;
  width: number;
  height: number;
  center?: {
    x: number;
    y: number;
  };
};

export const match = async (haystackPath: string, needlePath: string, drawOutput = false): Promise<Match> => {
  const perfStart = performance.now();

  const haystack = await Image.decode(Deno.readFileSync(haystackPath));
  const needle = await Image.decode(Deno.readFileSync(needlePath));

  const haystackMat = cv.matFromImageData({
    data: haystack.bitmap,
    width: haystack.width,
    height: haystack.height,
  });
  const needleMat = cv.matFromImageData({
    data: needle.bitmap,
    width: needle.width,
    height: needle.height,
  });

  const dest = new cv.Mat();
  const mask = new cv.Mat();
  cv.matchTemplate(haystackMat, needleMat, dest, cv.TM_SQDIFF, mask);

  const result = cv.minMaxLoc(dest, mask);
  const match: Match = {
    x: result.minLoc.x,
    y: result.minLoc.y,
    width: needleMat.cols,
    height: needleMat.rows,
  };
  match.center = {
    x: match.x + (match.width * 0.5),
    y: match.y + (match.height * 0.5),
  };

  if (drawOutput) {
    haystack.drawBox(
      match.x,
      match.y,
      match.width,
      match.height,
      Image.rgbaToColor(255, 0, 0, 255),
    );
    Deno.writeFileSync(`${haystackPath.replace('.png', '-result.png')}`, await haystack.encode(0));
  }

  const perfEnd = performance.now();
  console.log(`Match took ${perfEnd - perfStart}ms`);

  return match.x > 0 || match.y > 0 ? match : false;
};
ISSUE
The remaining issue is that I also get a false match when it should not match anything.
Based on what I know so far, I should be able to solve this using a threshold like so:
cv.threshold(dest, dest, 0.9, 1, cv.THRESH_BINARY);
Adding this line after matchTemplate does indeed prevent false matches when I don't expect them, but I also no longer get a match when I DO expect one.
Obviously I am missing something about how to work with cv.threshold. Any advice on that?
UPDATE 2
After experimenting and reading some more I managed to get it to work with normalised values like so:
cv.matchTemplate(haystackMat, needleMat, dest, cv.TM_SQDIFF_NORMED, mask);
cv.threshold(dest, dest, 0.01, 1, cv.THRESH_BINARY);
Other than being normalised, it seems to do the trick consistently for me. However, I would still like to know why I can't get it to work without using normalised values, so any input is still appreciated. Will mark this post as solved in a few days to give people the chance to discuss the topic some more while it's still relevant.
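For what it's worth, one way to keep plain TM_SQDIFF and still use an absolute threshold is to scale the threshold by the needle size, since raw squared-difference scores grow with the number of pixels and channels. A rough sketch on top of the code above (the 0.02 fraction is an arbitrary assumption to tune, not a recommended value):
// Sketch: accept/reject the best match using an absolute TM_SQDIFF threshold
// scaled by the needle size. maxPossible is the largest conceivable score for
// this needle (255^2 per channel per pixel); 0.02 is just an assumed starting point.
const maxPossible = 255 * 255 * needleMat.rows * needleMat.cols * needleMat.channels();
if (result.minVal > maxPossible * 0.02) {
  return false; // best match is still too different from the needle
}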
The TM_* methods of matchTemplate are treacherous. And the docs throw formulas at you that would make anyone feel dumb, because they're code, not explanation.
Consider the calculation of one correlation: one particular position of the template/"needle" on the "haystack".
All the CCORR modes will simply multiply elementwise. Your data uses white as "background", which is a "DC offset". The signal, the difference to white of anything not-white, will drown in the large "DC offset" of your data. The calculated correlation coefficients will vary mostly with the DC offset and hardly at all with the actual signal/difference.
This is what that looks like, roughly. The result of running with TM_CCOEFF_NORMED, overlaid on top of the haystack (with some padding). You're getting big fat responses for all instances of all shapes, no matter their specific shape.
You want to use differences instead. The SQDIFF modes will handle that. Squared differences are a measure of dissimilarity, i.e. a perfect match will give you 0.
Let's look at some values...
import numpy as np
import cv2 as cv

# (file names assumed; these are the haystack/needle images linked above)
haystack = cv.imread("haystack.png")
needle = cv.imread("needle.png")

(hh, hw) = haystack.shape[:2]
(nh, nw) = needle.shape[:2]

scores = cv.matchTemplate(image=haystack, templ=needle, method=cv.TM_SQDIFF)
(sh, sw) = scores.shape # will be shaped like haystack - needle

scores = np.log10(1+scores) # any log will do

maxscore = 255**2 * (nh * nw * 3)
# maximum conceivable SQDIFF score, 3-channel data, any needle
# for a specific needle:
#maxscore = (np.maximum(needle, 255-needle, dtype=np.float32)**2).sum()

# map range linearly, from [0 .. ~8] to [1 .. 0] (white to black)
(smin, smax) = (0.0, np.log10(1+maxscore))
(omin, omax) = (1.0, 0.0)
print("mapping from", (smin, smax), "to", (omin, omax))
out = (scores - smin) / (smax - smin) * (omax - omin) + omin
You'll see gray peaks, but some are actually (close to) white while others aren't. Those are truly instances of the needle image. The other instances differ more from the needle, so they're just some reddish shapes that kinda look like the needle.
Now you can find local extrema. There are many ways to do that. You'll want to do two things: filter by absolute value (threshold) and suppress non-maxima (scores above threshold that are dominated by better nearby score). I'll just do the filtering and pretend there aren't any nearby non-maxima because the resulting peaks fall off strongly enough. If that happens to not be the case, you'd see double drawing in the picture below, boxes becoming "bold" because they're drawn twice onto adjacent pixel positions.
I'm picking a threshold of 2.0 because that represents a difference of 100, i.e. one color value in one pixel may have differed by 10 (10*10 = 100), or two values may have differed by 7 (7*7 = 49, twice makes 98), ... so it's still a very tiny, imperceptible difference. A threshold of 6 would mean a sum of squared differences of up to a million, allowing for a lot more difference.
(i,j) = (scores <= 2.0).nonzero() # threshold "empirically decided"
instances = np.transpose([j,i]) # list of (x,y) points
That's giving me 16 instances.
canvas = haystack.copy()

for pt in instances:
    (j, i) = pt
    score = scores[i, j]
    cv.rectangle(canvas,
        pt1=(pt-(1,1)).astype(int), pt2=(pt+(nw,nh)).astype(int),
        color=(255,0,0), thickness=1)
    cv.putText(canvas,
        text=f"{score:.2f}",
        org=(pt+[0,-5]).astype(int),
        fontFace=cv.FONT_HERSHEY_SIMPLEX, fontScale=0.4,
        color=(255,0,0), thickness=1)
That's drawing a box around each, with the logarithm of the score above it.
One simple approach to get candidates for Non-Maxima Suppression (NMS) is to cv.dilate the scores and equality-compare, to gain a mask of candidates. Those scores that are local maxima, will compare equal to themselves (the dilated array), and every surrounding score will be less. This alone will have some corner cases you will need to handle. Note: at this stage, those are local maxima of any value. You need to combine (logical and) that with a mask from thresholding the values.
NMS commonly is required to handle immediate neighbors being above the threshold, and merge them or pick the better one. You can do that by simply running connectedComponents(WithStats) and taking the blob centers. I think that's clearly better than trying to find contours.
The dilate-and-compare approach will not suppress neighbors if they have the same score. If you did the connectedComponents step, you only have non-immediate neighbors to deal with here. What to do is up to you. It's not clear cut anyway.
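Translated back to the Deno/opencv.js setting of the question, the erode-and-compare variant of that trick (erode rather than dilate, because TM_SQDIFF peaks are minima) might look roughly like the sketch below. It assumes the Deno wrapper exposes the usual opencv.js functions (cv.erode, cv.compare, cv.threshold, cv.bitwise_and); treat it as an untested outline, and absoluteThreshold is a placeholder you would pick relative to the needle size as discussed above.
// Candidate match positions = (score is a local minimum) AND (score is small enough).
const kernel = cv.Mat.ones(3, 3, cv.CV_8U);          // 3x3 neighbourhood
const localMin = new cv.Mat();
cv.erode(dest, localMin, kernel);                    // each pixel becomes the minimum of its neighbourhood

const isLocalMin = new cv.Mat();
cv.compare(dest, localMin, isLocalMin, cv.CMP_EQ);   // 255 where the score equals the local minimum

const lowScore = new cv.Mat();
cv.threshold(dest, lowScore, absoluteThreshold, 255, cv.THRESH_BINARY_INV);
const lowScore8u = new cv.Mat();
lowScore.convertTo(lowScore8u, cv.CV_8U);            // compare() output is 8-bit, so match the type

const candidates = new cv.Mat();
cv.bitwise_and(isLocalMin, lowScore8u, candidates);  // non-zero pixels are candidate top-left corners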

How can I create a 'getLengthAtPoint()' function for a curved path created via svg/raphaeljs?

I need a way to calculate the length between two points on a curved path created via Raphaeljs. For example, given the following path: M10,10T30,50T60,100T80,200T300,400, I need to know the coordinates of a point that is 150 pixels from the start of the path. The Pythagorean theorem cannot be used because the path is curved.
Thanks !
Just call the SVG DOM getPointAtLength method
SVGPoint getPointAtLength(in float distance)
e.g. var point = document.getElementById("pathId").getPointAtLength(150);
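If you would rather stay inside Raphael, its path elements expose the same functionality as Element.getPointAtLength (and Element.getTotalLength), so you don't have to reach into the raw DOM. A minimal sketch, assuming the path was created with paper.path:
var curve = paper.path("M10,10T30,50T60,100T80,200T300,400");
var point = curve.getPointAtLength(150); // object with x, y (and alpha, the tangent angle)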
OK, here is a possible solution for finding the distance from the beginning of the path to any one of the points which were used to define it. The idea is to create a temporary sub-path of the original path for each of the points which make it up, and then use getTotalLength() to calculate the length of this sub-path. This length is the distance along the original path from the first point up to the current point. Once the distance has been calculated, it can be stored and the temporary path removed. This way we can calculate and store the distance from the beginning of the original path to each of the points which make it up. Here is the code I use (a bit simplified to focus just on this goal):
var pointsAry = [["M",10,10],["T",30,50],["T",60,100],["T",80,200],["T",300,400]],
    subPath,
    path = [];

for (var i = 0; i < pointsAry.length; i++) {
    path.push(pointsAry[i]);
    subPath = paper.path(path).attr({ "stroke-opacity": 0 }); // make the path invisible
    pointsAry[i].subPathSum = subPath.getTotalLength();
    subPath.remove();
}
paper is created via Raphaeljs which also supplies the getTotalLength() function. Note the lines are created invisibly because their opacity is 0, and anyhow they are immediately removed.
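With those cumulative lengths stored, a getLengthAtPoint-style lookup for the defining points becomes a simple subtraction, and Raphael's getPointAtLength covers arbitrary distances. A rough sketch building on the code above (fullPath is just a name assumed here for the element of the complete path):
// Distance along the curve between two of the defining points,
// using the subPathSum values computed above.
function lengthBetweenPoints(pointsAry, fromIndex, toIndex) {
    return pointsAry[toIndex].subPathSum - pointsAry[fromIndex].subPathSum;
}

// And the reverse lookup: the coordinates of a point at a given distance from the start.
var fullPath = paper.path(pointsAry); // the complete, visible path
var at150 = fullPath.getPointAtLength(150);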

Scaling a rotated object to fit specific rect

How can I find the scale ratio of a rotated Rect element in order to fit it in an (unrotated) bounding rectangle of a specific size?
Basically, I want the opposite of getBoundingClientRect, setBoundingClientRect.
First you need to get the transform applied to the element with <svg>.getTransformToElement; together with the result of rect.getBBox() you can calculate the actual size. With this you can calculate the scale factor to the desired size and add it to the transform of the rect. By that I mean you should multiply the actual transform matrix with a new scale matrix.
BUT: this describes the case where you are interested in the AABB (axis-aligned bounding box), which is what getBoundingClientRect delivers. For the real, rotated bounding box - the rectangle itself in this case - you need to calculate (and apply) the scale factor from the width and/or height.
Good luck…
EDIT::
function getSVGPoint( x, y, matrix ){
    var p = this._dom.createSVGPoint();
    p.x = x;
    p.y = y;
    if( matrix ){
        p = p.matrixTransform( matrix );
    }
    return p;
}

function getGlobalBBox( el ){
    var mtr = el.getTransformToElement( this._dom );
    var bbox = el.getBBox();
    var points = [
        getSVGPoint.call( this, bbox.x + bbox.width, bbox.y, mtr ),
        getSVGPoint.call( this, bbox.x, bbox.y, mtr ),
        getSVGPoint.call( this, bbox.x, bbox.y + bbox.height, mtr ),
        getSVGPoint.call( this, bbox.x + bbox.width, bbox.y + bbox.height, mtr ) ];
    return points;
}
With this code I once did a similar trick... this._dom refers to an <svg> and el to an element. The second function returns an array of points, beginning at the top-right corner and going counterclockwise around the bbox.
EDIT:
The result of <element>.getBBox() does not include the transform that is applied to the element, and I guess that the new desired size is in absolute coordinates. So the first thing you need to do is make the »BBox« global.
Then you can calculate the scaling factors sx and sy as:
var sx = desiredWidth / globalBBoxWidth;
var sy = desiredHeight / globalBBoxHeight;
var mtrx = <svg>.createSVGMatrix();
mtrx.a = sx;
mtrx.d = sy;
Then you have to append this matrix to the transform list of your element, or concatenate it with the current one and replace it; that depends on your implementation. The most confusing part of this trick is making sure that you calculate the scaling factors from coordinates in the same transformation (absolute ones are convenient). After this you apply the scaling to the transform of the <element>: do not replace the whole matrix, but concatenate it with the currently applied one, or append it to the transform list as a new item - and make sure you do not insert it before an existing item. In the case of matrix concatenation, make sure to preserve the order of multiplication.
The last steps depend on your implementation and how you handle the transforms. If you do not know which possibilities you have, take a look here, paying special attention to the DOM interfaces you need to implement this.
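As a rough illustration of that last step with the plain SVG DOM (not tied to any library), appending the scale matrix as a new item at the end of the element's transform list could look like this; svgRoot and the "myRect" id are placeholders for whatever your implementation holds:
// Sketch: append the scale as a new transform-list item, after the existing ones.
var svgRoot = document.querySelector("svg");     // the <svg> the element lives in
var el = document.getElementById("myRect");      // placeholder id for the rotated rect

var mtrx = svgRoot.createSVGMatrix();
mtrx.a = sx;   // sx, sy computed from the global bbox as shown above
mtrx.d = sy;

var scaleItem = svgRoot.createSVGTransformFromMatrix(mtrx);
el.transform.baseVal.appendItem(scaleItem);      // keeps the existing rotation etc. in front of it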

How does inkscape calculate the coordinates for control points for "smooth edges"?

I am wondering what algorithm (or formula) Inkscape uses to calculate the control points if the nodes on a path are made "smooth".
That is, if I have a path with five nodes whose d attribute is
M 115.85065,503.57451
49.653441,399.52543
604.56143,683.48319
339.41126,615.97628
264.65997,729.11336
And I change the nodes to smooth, the d attribute is changed to
M 115.85065,503.57451
C 115.85065,503.57451 24.747417,422.50451
49.653441,399.52543 192.62243,267.61777 640.56491,558.55577
604.56143,683.48319 580.13686,768.23328 421.64047,584.07809
339.41126,615.97628 297.27039,632.32348 264.65997,729.11336
264.65997,729.11336
Obviously, Inkscape calculates the control point coordinates (second last and last coordinate pair on lines on or after C). I am interested in the algorithm Inkscape uses for it.
I have found the corresponding piece of code in Inkscape's source tree under
src/ui/tool/node.cpp, method Node::_updateAutoHandles:
void Node::_updateAutoHandles()
{
    // Recompute the position of automatic handles.
    // For endnodes, retract both handles. (It's only possible to create an end auto node
    // through the XML editor.)
    if (isEndNode()) {
        _front.retract();
        _back.retract();
        return;
    }

    // Auto nodes automaticaly adjust their handles to give an appearance of smoothness,
    // no matter what their surroundings are.
    Geom::Point vec_next = _next()->position() - position();
    Geom::Point vec_prev = _prev()->position() - position();
    double len_next = vec_next.length(), len_prev = vec_prev.length();
    if (len_next > 0 && len_prev > 0) {
        // "dir" is an unit vector perpendicular to the bisector of the angle created
        // by the previous node, this auto node and the next node.
        Geom::Point dir = Geom::unit_vector((len_prev / len_next) * vec_next - vec_prev);
        // Handle lengths are equal to 1/3 of the distance from the adjacent node.
        _back.setRelativePos(-dir * (len_prev / 3));
        _front.setRelativePos(dir * (len_next / 3));
    } else {
        // If any of the adjacent nodes coincides, retract both handles.
        _front.retract();
        _back.retract();
    }
}
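For anyone who wants to reuse the rule outside Inkscape, the computation boils down to a few lines. Here is a plain JavaScript sketch of the same formula (not Inkscape code), taking the previous, current and next on-path points as {x, y} objects and returning absolute positions for the two handles:
// Inkscape-style "auto" handles for one node, given its neighbours.
function autoHandles(prev, cur, next) {
    var vecNext = { x: next.x - cur.x, y: next.y - cur.y };
    var vecPrev = { x: prev.x - cur.x, y: prev.y - cur.y };
    var lenNext = Math.sqrt(vecNext.x * vecNext.x + vecNext.y * vecNext.y);
    var lenPrev = Math.sqrt(vecPrev.x * vecPrev.x + vecPrev.y * vecPrev.y);
    if (lenNext === 0 || lenPrev === 0) {
        // A neighbour coincides with this node: retract both handles.
        return { back: { x: cur.x, y: cur.y }, front: { x: cur.x, y: cur.y } };
    }
    // Direction perpendicular to the bisector of the angle prev-cur-next.
    var dx = (lenPrev / lenNext) * vecNext.x - vecPrev.x;
    var dy = (lenPrev / lenNext) * vecNext.y - vecPrev.y;
    var len = Math.sqrt(dx * dx + dy * dy);
    if (len === 0) {
        // Degenerate case (a cusp): retract as well.
        return { back: { x: cur.x, y: cur.y }, front: { x: cur.x, y: cur.y } };
    }
    dx /= len;
    dy /= len;
    // Handle lengths are 1/3 of the distance to the respective adjacent node.
    return {
        back:  { x: cur.x - dx * (lenPrev / 3), y: cur.y - dy * (lenPrev / 3) },
        front: { x: cur.x + dx * (lenNext / 3), y: cur.y + dy * (lenNext / 3) }
    };
}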
I'm not 100% sure of the quality of this information, but at least at some point in time Inkscape seems to have used >>spiro<< for calculating some curves.
http://www.levien.com/spiro/
Take a quick look at the page; he provides a link to his PhD thesis:
http://www.levien.com/phd/thesis.pdf
in which he introduces the theory/algorithms ...
Cheers
EDIT:
I'm currently looking into the matter a bit for a similar purpose, and I stumbled across ...
http://www.w3.org/TR/SVG11/paths.html#PathDataCurveCommands ... the specification of curves for SVG.
So curves - as opposed to circles or arcs - are cubic or quadratic Béziers, then ...
Have a look at wikipedia for bezier formulas as well:
http://en.wikipedia.org/wiki/B-spline#Uniform_quadratic_B-spline

Raphael 2 rotate and translate

Here is my script:
<script>
    Raphael.fn.polyline = function(pointString) {
        return this.path("M" + pointString);
    };

    window.onload = function() {
        var paper = Raphael("holder", 500, 500);
        paper.circle(100, 175, 70).attr({"stroke-width":10, "stroke":"red"});
        var a = paper.polyline("92,102 96,91 104,91 108,102").attr({"fill":"green", "stroke-opacity":"0"}).rotate(25, 100, 175);
        var b = paper.polyline("92,102 96,91 104,91 108,102").attr({"fill":"green", "stroke-opacity":"0"}).rotate(45, 100, 175);
        var c = paper.polyline("92,102 96,91 104,91 108,102").attr({"fill":"green", "stroke-opacity":"0"}).rotate(65, 100, 175);

        var group = paper.set();
        group.push(a, b, c);
        group.translate(60);
    };
</script>
When I use raphael-1.5.2, the result is:
When I use raphael 2.0, the result is:
In 1.5.2 it uses the rotate transformation to rotate the objects around the circle, and in 2.0 it uses the matrix transformation. I assume the matrix transformation transforms the coordinate system for that object, so when you later translate the object in the x/y direction it translates it in the x/y that is relative to that object.
I need to be able to add green objects around the edge of the red circle and then be able to drag and move everything in the same direction. Am I stuck using 1.5.2 or am I just missing how translate has changed in 2.0?
Use an absolute transform instead of translate. Say you want to move by 100 in x and 50 in y; do this:
Element.transform("...T100,50");
Make sure you use a capital T and you'll get an absolute translation. Here's what the documentation says about it:
There are also alternative “absolute” translation, rotation and scale: T, R and S. They will not take previous transformation into account. For example, ...T100,0 will always move element 100 px horisontally, while ...t100,0 could move it vertically if there is r90 before. Just compare results of r90t100,0 and r90T100,0.
See documentation
Regarding translate, according to the documentation in Raphael JS 2.0 translate does this:
Adds translation by given amount to the list of transformations of the element.
See documentation
So what happens is it appends a relative transformation based on what was already applied to the object (it basically does "...t100,50").
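Applied to the code in the question, that means replacing the group.translate(60) call with an absolute transform on the set; a minimal sketch:
// Instead of group.translate(60); use an absolute translation:
group.transform("...T60,0"); // moves the whole set 60px to the right, regardless of the earlier rotations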
I suspect that with 1.5.2 your transform correctly treats the set as one object, but with 2.0 the little green things rotate independently.
2.0 is a complete redesign, so little disconnects like this will occur.
Use getBBox to find the centre of your set, then use one rotate command on the whole set, specifying cx and cy derived from getBBox, as in the sketch below.
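A sketch of that suggestion, building on the variables from the question (getBBox here is Raphael's Set.getBBox; the 45-degree angle is just an example):
// Rotate the whole set once around the centre of its bounding box.
var box = group.getBBox();
var cx = box.x + box.width / 2;
var cy = box.y + box.height / 2;
group.rotate(45, cx, cy); // one rotation for all three shapes around a shared centre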
