Decomposing an SVG transformation matrix

I have a polyline(1) that is within a group that is within another group. Both of these groups are transformed in every way (translated, rotated, scaled). I need to get the exact locations of the polyline's points after the transformations, but that isn't easy since all I have at hand is the transform attribute.
I'm trying to replicate these transformations on another polyline(2) so I can get the transformed point locations. I get the globalMatrix from polyline(1), and with the help of this: http://svg.dabbles.info/snaptut-matrix-play and the Internet, I've come to this: http://jsfiddle.net/3c4fuvfc/3/
It works if there isn't any rotation applied. If rotation is applied, then all the points are a little off. Of course the scaling isn't done yet; maybe adding it would fix this issue?
And then there is the issue of scaling: polyline(1) is sometimes flipped around (typically s-1,1 or s1,-1). How is scaling supposed to be handled here?
Does the order in which these transformations are replicated matter?
Is the decomposition done right? This seems odd:
scaleX: Math.sqrt(matrix.a * matrix.a + matrix.b * matrix.b),
scaleY: Math.sqrt(matrix.c * matrix.c + matrix.d * matrix.d),
Thank you

I'm not sure if I'm misunderstanding the problem, but you seem to be doing a lot of work for information you already have.
You have the matrix (el.transform().globalMatrix), so why not just apply it to each point? I'm not sure what help decomposing the matrix is giving you.
So you could do this: iterate over the points, apply the matrix, and your polyline is flattened with the existing matrix...
var m = r1.transform().globalMatrix;
var pts = poly.attr('points');
var ptsarray = [];
// Run each (x, y) pair through the global matrix.
for (var c = 0; c < pts.length; c += 2) {
    ptsarray.push(m.x(pts[c], pts[c + 1]),
                  m.y(pts[c], pts[c + 1]));
}
poly.attr('points', ptsarray);
jsfiddle
An example of transforming a coordinate using a Snap matrix can be found here.
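For completeness, on the decomposition itself: those sqrt formulas give the scale magnitudes, but they always come out positive, so a flip like s-1,1 or s1,-1 is lost. Taking the sign from the matrix determinant recovers it. Here is a minimal sketch, assuming a matrix object with the usual SVG fields a to f; this particular decomposition corresponds to translate, then rotate, then skew, then scale, so yes, the order matters if you replay the parts individually:

function decompose(m) {
    var det = m.a * m.d - m.b * m.c;   // negative when the element is flipped
    var scaleX = Math.sqrt(m.a * m.a + m.b * m.b);
    return {
        translateX: m.e,
        translateY: m.f,
        rotation: Math.atan2(m.b, m.a) * 180 / Math.PI,
        skewX: Math.atan2(m.a * m.c + m.b * m.d, scaleX * scaleX) * 180 / Math.PI,
        scaleX: scaleX,
        scaleY: det / scaleX           // signed, so flips survive
    };
}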

Related

Ultimate struggle with a full 3d space controller

Sorry if this is a stupid question, but I have a deep dread of working on "full 3D" space movement.
I'm trying to make a "space ship" KinematicBody controller that uses its basis vectors as the frame of rotation and can strafe/move left, right, up and down based on its facing direction.
The issue is that I want to use a Vector3 as storage for all the input variables, the input strength in particular, but I can't find a convenient way to orient this vector or use its components to apply it to the velocity.
I have a sort of cheap solution that I don't like, where I apply a rotation to the input vector so it "corresponds" to one of the basis vectors, but it starts to break at some angles.
Could somebody please suggest what I can change in my logic, or maybe there is a way to use quaternion/matrix related methods/formulas?
I'm not sure I fully understand what you want to do, but I can give you something to work with.
I'll assume that you already have the input as a Vector3. If not, you want to see Input.get_action_strength, Input.get_axis and Input.get_vector.
I'm also assuming that the breaking situations you encountered are a case of gimbal lock. But since you are asking about applying velocity, not rotation, I'll not go into that topic.
Since you are using a KinematicBody, I suppose you are using move_and_slide or a similar method, which works in global space. But you want the input to be based on the current orientation. Thus, you would consider the Vector3 which represents the input to be in local space. And the issue is how to go from that local space to the global space that move_and_slide et al. need.
Transform
You might be familiar with to_local and to_global, which would interpret the Vector3 as a position:
var global_input_vector:Vector3 = to_global(input_vector)
And the opposite operation would be:
input_vector = to_local(global_input_vector)
The problem with these is that, since they consider the Vector3 to be a position, they will translate the vector depending on where the KinematicBody is. We can undo that translation:
var global_vec:Vector3 = to_global(local_vec) - global_transform.origin
And the opposite operation would be:
local_vec = to_local(global_vec + global_transform.origin)
By the way this is another way to write the same code:
var global_vec:Vector3 = (global_transform * local_vec) - global_transform.origin
And the opposite operation would be:
local_vec = global_transform.affine_inverse() * (global_vec + global_transform.origin)
Which I'm mentioning because I want you to see the similarity with the following approach.
Basis
I would rather not consider the Vector3 to be a position, just a free vector. So we would transform it with only the Basis, like this:
var global_vec:Vector3 = global_transform.basis * local_vec
And the opposite operation would be:
local_vec = global_transform.affine_inverse().basis * global_vec
This approach will not have the translation problem.
You can think of the Basis as a 3 by 3 matrix, and the Transform is that same matrix augmented with a translation vector (origin).
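Putting the Basis approach together, here is a minimal sketch, assuming Godot 3.x GDScript on a KinematicBody with hypothetical input action names (note that in 3.x GDScript the basis * vector operation above is spelled basis.xform(vector)):

extends KinematicBody

export var speed := 10.0

func _physics_process(_delta: float) -> void:
    # Local-space input: +x is right, +y is up, +z is back (Godot's convention).
    var input_vector := Vector3(
        Input.get_action_strength("move_right") - Input.get_action_strength("move_left"),
        Input.get_action_strength("move_up") - Input.get_action_strength("move_down"),
        Input.get_action_strength("move_back") - Input.get_action_strength("move_forward"))
    # Rotate the free vector into global space with the Basis; no translation involved.
    var velocity := global_transform.basis.xform(input_vector) * speed
    move_and_slide(velocity)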
Quat
However, if you only want rotation, let us use quaternions instead:
var global_vec:Vector3 = global_transform.basis.get_rotation_quat() * local_vec
And the opposite operation would be:
local_vec = global_transform.affine_inverse().basis.get_rotation_quat() * global_vec
Well, actually, let us invert just the quaternion:
local_vec = global_transform.basis.get_rotation_quat().inverse() * global_vec
These will only rotate the vector (no scaling, or any other transformation, just rotation) according to the current orientation of the KinematicBody.
Rotating a Transform
If you are trying to rotate a Transform, either…
Do this (quaternion):
transform = Transform(transform.basis * Basis(quaternion), transform.origin)
Or this (quaternion):
transform = transform * Transform(Basis(quaternion), Vector3.ZERO)
Or this (axis-angle):
transform = Transform(transform.basis.rotated(axis, angle), transform.origin)
Or this (axis-angle):
transform = transform * Transform.IDENTITY.rotated(axis, angle)
Or this (Euler angles):
transform = Transform(transform.basis * Basis(Vector3(pitch, yaw, roll)), transform.origin)
Or this (Euler angles):
transform = transform * Transform(Basis(Vector3(pitch, yaw, roll)), Vector3.ZERO)
Avoid this:
transform = transform.rotated(axis, angle)
The reason is that this rotation is always before translation (i.e. this rotates around the global origin instead of the current position), and you will end up with an undesirable result.
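To see why, this is, to my understanding, what rotated expands to, written in the same style as the options above:

# Equivalent to transform.rotated(axis, angle): the rotation is prepended,
# so it happens about the global origin rather than about transform.origin.
transform = Transform(Basis(axis, angle), Vector3.ZERO) * transform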
And yes, you could use rotate_x, rotate_y and rotate_z, or set rotation of a Spatial. But sometimes you need to work with a Transform directly.
See also:
Godot/Gdscript rotate + translate from local to world space.
How to LERP between 2 angles going the longest route or path in Godot.

Scaling unit cube in Java3d Manually

I'm working on something in Java3D and I know TransformGroups are how you would usually apply scaling...
However, I am trying to create a way of defining cuboids based on a scaling vector.
So I have a unit cube 1,1,1 -> -1,-1,-1 and I want to manually apply a scaling transformation.
private static void scaleCoordinates(IndexedQuadArray indexedQuadArray, Vector3d scaleVector) {
    // Create scale transform
    Transform3D scalarTransform = new Transform3D();
    scalarTransform.setScale(scaleVector);

    // Retrieve the vertex coordinates
    GeometryInfo indexedQuadArrayGeometryInfo = new GeometryInfo(indexedQuadArray);
    Point3f coordinatesToScaleArray[] = indexedQuadArrayGeometryInfo.getCoordinates();

    // Scale each 3D coordinate
    for (Point3f coordinate : coordinatesToScaleArray) {
        scalarTransform.transform(coordinate);
    }

    // Update the indexed quad array with the scaled coordinates
    indexedQuadArray.setCoordinates(0, coordinatesToScaleArray);
}
Now it works when the scaling factors are positive whole numbers, but if I scale by 0.5 or a negative number, the vertices get messed up.
Does anyone have any idea what's wrong? I should be able to scale by less than 1, I think; maybe something is happening in Transform3D.transform(Point3f) that I'm not aware of.
Thanks a lot for reading!
I wouldn't expect negative numbers to work. The scaling transformation is really just a multiplication when you think about it. Thus scaling the point (1,1,1) by the vector (.5,.6,.7) should give you the new point (.5,.6,.7).
If you "scale" by a negative number, you'd be turning your cube inside out, and all sorts of normals and edges would be wrong.
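To make the multiplication point concrete, this is roughly what setScale plus transform does to each vertex, written by hand as a sketch using the vecmath types from the question:

import javax.vecmath.Point3f;
import javax.vecmath.Vector3d;

// Component-wise scaling: the effect of Transform3D.setScale(scaleVector)
// followed by transform(point) on a single vertex.
static void scalePoint(Point3f p, Vector3d s) {
    p.x *= (float) s.x;
    p.y *= (float) s.y;
    p.z *= (float) s.z;
}

With a negative component the geometry is mirrored, so the winding order of each quad reverses and the faces point inward unless you also reverse the vertex order or flip the normals.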
Can you provide a list of the vertices in your quadArray and the scaleVector you are using?

Scaling a rotated object to fit specific rect

How can I find the scale ratio of a rotated Rect element in order to fit it in an unrotated bounding rectangle of a specific size?
Basically, I want the opposite of getBoundingClientRect: a setBoundingClientRect.
First you need to get the transform applied to the element, with <svg>.getTransformToElement; together with the result of rect.getBBox() you can calculate the actual size. With this you can calculate the scale factor to the desired size and add it to the transform of the rect. By this I mean that you should multiply the actual transform matrix with a new scale matrix.
BUT: this describes the case where you are interested in the AABB, the axis-aligned bounding box, which is what getBoundingClientRect delivers. For the real, rotated bounding box, i.e. the rectangle itself in this case, you need to calculate (and apply) the scale factor from the width and/or height.
Good luck…
EDIT:
function getSVGPoint( x, y, matrix ){
    var p = this._dom.createSVGPoint();
    p.x = x;
    p.y = y;
    if( matrix ){
        p = p.matrixTransform( matrix );
    }
    return p;
}
function getGlobalBBox( el ){
    var mtr = el.getTransformToElement( this._dom );
    var bbox = el.getBBox();
    var points = [
        getSVGPoint.call( this, bbox.x + bbox.width, bbox.y, mtr ),
        getSVGPoint.call( this, bbox.x, bbox.y, mtr ),
        getSVGPoint.call( this, bbox.x, bbox.y + bbox.height, mtr ),
        getSVGPoint.call( this, bbox.x + bbox.width, bbox.y + bbox.height, mtr ) ];
    return points;
}
I once did a similar trick with this code... this._dom refers to an <svg> and el to an element. The second function returns an array of points, beginning at the top-right corner and going counter-clockwise around the bbox.
EDIT:
the result of <element>.getBBox() does not include the transform that is applied to the element, and I guess that the new desired size is in absolute coordinates. So the first thing you need to do is make the »BBox« global.
Then you can calculate the scaling factors sx and sy:
var sx = desiredWidth / globalBBoxWidth;
var sy = desiredHeight / globalBBoxHeight;
var mtrx = <svg>.createSVGMatrix();
mtrx.a = sx;
mtrx.d = sy;
Then you have to append this matrix to the transform list of your element, or concatenate it with the current one and replace it; that depends on your implementation. The most confusing part of this trick is making sure that you calculate the scaling factors from coordinates in the same transformation (absolute ones are convenient). After this, apply the scaling to the transform of the <element>: do not replace the whole matrix, but concatenate it with the currently applied one, or append it to the transform list as a new item, and make sure that you do not insert it before an existing item. In case of matrix concatenation, make sure to preserve the order of multiplication.
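As a minimal sketch of that last step (plain SVG DOM, no library assumed; svg and el stand for the root <svg> and the element):

var mtrx = svg.createSVGMatrix();
mtrx.a = sx;   // scale in x
mtrx.d = sy;   // scale in y
var scaleTransform = svg.createSVGTransform();
scaleTransform.setMatrix(mtrx);
// Append at the end of the transform list, after all existing items.
el.transform.baseVal.appendItem(scaleTransform);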
The last steps depend on your implementation and how you handle the transforms. If you do not know which possibilities you have, take a look here, and take special care with the DOM interfaces you need to implement this.

Raphael 2 rotate and translate

Here is my script:
<script>
    Raphael.fn.polyline = function(pointString) {
        return this.path("M" + pointString);
    };
    window.onload = function() {
        var paper = Raphael("holder", 500, 500);
        paper.circle(100, 175, 70).attr({"stroke-width":10, "stroke":"red"});
        var a = paper.polyline("92,102 96,91 104,91 108,102").attr({"fill":"green", "stroke-opacity":"0"}).rotate(25, 100, 175);
        var b = paper.polyline("92,102 96,91 104,91 108,102").attr({"fill":"green", "stroke-opacity":"0"}).rotate(45, 100, 175);
        var c = paper.polyline("92,102 96,91 104,91 108,102").attr({"fill":"green", "stroke-opacity":"0"}).rotate(65, 100, 175);
        var group = paper.set();
        group.push(a, b, c);
        group.translate(60);
    };
</script>
When I use raphael-1.5.2, the result is: (image)
When I use raphael 2.0, the result is: (image)
In 1.5.2 it uses the rotate transformation to rotate the objects around the circle, and in 2.0 it uses the matrix transformation. I assume the matrix transformation transforms the coordinate system for that object, so when you later translate the object in the x/y direction, it translates along the x/y axes relative to that object.
I need to be able to add green objects around the edge of the red circle and then be able to drag and move everything in the same direction. Am I stuck using 1.5.2 or am I just missing how translate has changed in 2.0?
Use an absolute transform instead of translate. Say you want to move by 100 in x and 50 in y; do this:
Element.transform("...T100,50");
Make sure you use a capital T and you'll get an absolute translation. Here's what the documentation says about it:
There are also alternative “absolute” translation, rotation and scale: T, R and S. They will not take previous transformation into account. For example, ...T100,0 will always move element 100 px horisontally, while ...t100,0 could move it vertically if there is r90 before. Just compare results of r90t100,0 and r90T100,0.
See documentation
Regarding translate, according to the documentation in Raphael JS 2.0 translate does this:
Adds translation by given amount to the list of transformations of the element.
See documentation
So what happens is that it appends a relative transformation based on what was already applied to the object (it basically does "...t100,50").
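Applied to the script above, the move would look something like this (a sketch; the T values are arbitrary):

// Absolute translation: moves the whole set 60px along the global x axis,
// regardless of the rotations already applied to its elements.
group.transform("...T60,0");   // instead of group.translate(60)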
I suspect that with 1.5.2 your transform correctly treats the set as one object, but with 2.0 the little green things rotate independently.
2.0 is a complete redesign, so little disconnects like this will occur.
Use getBBox to find the centre of your set, then use one rotate command on the whole set, specifying cx and cy derived from getBBox.

Why won't my raytracer recreate the "mount" scene?

I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to co-operate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula): (image)
This is my render using the "normal" formula: (image)
And this one is the correct render: (image)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error, is to blame.
Please note that there are actually 4 spheres in the scene, their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone: (image)
And here is a picture of the first sphere alone: (image)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is: which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
    double eta1 = r.iorStack.back();
    double eta2 = m.ior;
    eta = eta1 / eta2;
    r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
    double eta1 = r.iorStack.back();
    r.iorStack.pop_back();
    double eta2 = r.iorStack.back();
    eta = eta1 / eta2;
}
As you can see, this stores the previous objects that contained this ray in a stack. When exiting, the code pops the current IOR off the stack and uses that, along with the IOR under it, to compute eta. As far as I know, this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR for the intersection independently, which the NFF file format does not do. It's unclear then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.0001f;
p += transmit * 0.01f;
I've tried making this value smaller (0.001f and 0.0001f), but that makes the spheres appear solid. I guess these values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing. So when an object is reflective as well as refractive, the origin of the ray ended up in completely the wrong place.
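In other words, each secondary ray needs its own offset along its own direction, something like this (a sketch; transmit is from the code above, reflect is a hypothetical reflected direction):

/* Offset each secondary ray along ITS OWN direction; nudging the
   reflected ray along the refracted direction (or vice versa) puts
   its origin on the wrong side of the surface. */
const double eps = 0.0001;
Vector refractOrigin = r.intersection + transmit * eps;
Vector reflectOrigin = r.intersection + reflect * eps;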
Number of ray bounces.
I've artificially limited the number of ray bounces to 4. I tested raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly. I take the intersection point, subtract the centre of the sphere and divide by the radius.
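For reference, that calculation looks like this (a sketch with hypothetical sphere fields), including the exit-side flip that the final answer below turned out to need:

/* Outward unit normal of the sphere at the intersection point. */
Vector n = (r.intersection - sphere.centre) / sphere.radius;
/* When the ray is leaving the sphere, flip the normal so the
   refraction formula sees a normal facing the incident ray. */
if (v.dot(n) > 0)
    n = n * -1;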
Just a guess based on doing an image diff (and without reading the rest of your question): the problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards: e.g. reversing (or not reversing) the indexes of refraction.
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise unaltered. If it's not right, then the picture will seem slightly tilted.
So after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm now using a formula by Heckbert instead, which I am sure is correct because I proved it myself using geometry and discrete math.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);

if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the surface from which the ray is coming) over eta2 (the IOR of the destination surface), v is the incident ray and n is the normal.
There was another problem, which confused matters some more: I had to flip the normal when exiting an object (which is obvious; I missed it because the other errors were obscuring it).
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)
