I'm trying to get the x and y coordinates from the layout, but when getting the layout from a VisualizationViewer, the returned type is not StaticLayout but ObservableCachingLayout (and of course it can't be cast to StaticLayout).
Is there a way to get the StaticLayout from a VisualizationViewer?
Or a way to get the x and y from the viewer? Thanks.
To get the x and y coordinates from the Layout, call layout.transform(vertex). (A Layout is a Transformer from vertices to Point2D objects, so you can call getX() and getY() on the result.) You don't need a StaticLayout for this; the Layout returned by the viewer's getGraphLayout() works just as well.
StaticLayout is a Layout implementation that lets the user specify the coordinates of each vertex directly; it's intended for the case in which you already have coordinates and don't need an algorithm to compute them.
I have two 3D points (x, y, z), namely A and B, and a bunch of other 3D points. Point A is at (0,0,0).
I would like to make point B the new (0,0,0), so that all the other points, including A and B, are translated and rotated appropriately (so that A is no longer at (0,0,0)).
I know that some translations and rotations are involved, but nothing more than that.
UPDATE:
Point B is also constrained by three vectors, x', y', z', that represent the x, y, and z axes of B's coordinate system. I think these should somehow be taken into account for the rotation part.
As you have given two points, one (A) at the origin and one (B) somewhere else, and you want to shift (translate) B to the origin, I don't see the need for any rotation.
If you don't have any other constraints, just shift all coordinates by the initial coordinates of B.
You can construct a transformation matrix as described at, e.g., https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations (for 2D), but if you are simply translating, R' = R + T, where R' is the vector after the transformation, R the vector before it, and T the translation vector.
For more general transformations, including rotations, you have to specify the rotation angle and axis. Then you can come up with a more general transformation; see the link above.
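As a minimal numpy sketch of this (the point values and the axis vectors are illustrative assumptions, not taken from the question):

import numpy as np

# Hypothetical data: A at the origin, B somewhere else, plus one more point.
A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 2.0, 3.0])
points = np.array([A, B, [4.0, 5.0, 6.0]])

# Pure translation: R' = R + T with T = -B, so B lands at the origin.
translated = points - B

# For the UPDATE: if B's axes x', y', z' are known (assumed orthonormal),
# stacking them as rows gives a rotation matrix that maps the translated
# world coordinates into B's coordinate frame.
x_axis = np.array([1.0, 0.0, 0.0])  # placeholder axes
y_axis = np.array([0.0, 1.0, 0.0])
z_axis = np.array([0.0, 0.0, 1.0])
R = np.vstack([x_axis, y_axis, z_axis])
transformed = (points - B) @ R.T    # translate first, then rotate into B's frame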
I want to know how to get the x-coordinate position or the y-coordinate position of the mouse individually in pygame.
Like just the x, and just the y. I think it would use
pygame.mouse.get_pos
Pygame doesn't have an API that gets you only one coordinate; you always get both. But it returns them as a 2-tuple, so you can index into it if you want just one value:
x = pygame.mouse.get_pos()[0]
If there's any chance you might need the y coordinate as well, it might make sense to unpack as normal anyway, and just ignore the y value in the part of the code where you don't need it:
x, y = pygame.mouse.get_pos()
# do stuff with x, ignore y
if something_rare_happens():
    # do stuff with y too
It might even be clearer to unpack even if you'll never use y (e.g., x, _ = pygame.mouse.get_pos()), but that's really up to you.
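For completeness, here's a minimal runnable sketch (the window size and caption are purely illustrative):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # get_pos() returns an (x, y) tuple; index [0] for just the x.
    x = pygame.mouse.get_pos()[0]
    pygame.display.set_caption("mouse x = %d" % x)

pygame.quit()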
I think rotating the camera and taking a photo of a scene should yield the same result as keeping the camera still and rotating the scene the opposite way.
Assume the original camera rotation matrix is R1. Rotating the camera means applying another rotation matrix R12 (so R2 = R12*R1 is the new rotation matrix). Assume X is the real-world coordinate of a scene point. Rotating the scene point the opposite way means applying the inverse rotation matrix R12^-1 to X (this might be wrong).
So why is (R12*R1)*X != R1*(R12^-1*X)?
Can anyone explain what I'm getting wrong?
P.S. I'm not asking about programming, nor about the complexity of the two methods. I just want to know:
(1) the mathematical equation for the action "rotating the scene";
(2) if my assumed equation for "rotating the scene" is correct, why the mathematical equations don't reflect the real-world phenomenon as I described it.
Edit 1: According to Spektre's answer, when I rotate the entire scene with a rotation matrix R, the new camera rotation matrix is
R^-1*R1
So if I rotate the entire scene with the rotation matrix R12^-1, the new camera rotation matrix is
(R12^-1)^-1*R1 = R12*R1
However, what if I consider rotating the camera to be equivalent to rotating the scene point X (only the scene point X, not the entire scene)? In that case the camera's rotation matrix is still R1, but the scene point X becomes X', and the image coordinate of X' is R1*X'. What is the equation for X'? Note that
R1*X' = R12*R1*X
Of course, you can answer that
X' = R1^-1*R12*R1*X
But I think X' should be defined by R12 and X alone (R1 shouldn't need to be known to form X'). That's why I ask "what is the mathematical equation for rotating the scene point?" X' is the result of the action "rotating X" by some rotation matrix related to R12.
I have another example, where the camera does not rotate but moves. Assume I'm taking a photo of a model who is standing right in front of me. Her position is X; my position is C. In the first case, I move to the right (from my point of view) and take the first photo. In the second case, I don't move, but the model moves to the left (from my point of view) by the same amount, and I take the second photo. The position of the model in the two images must be identical. This is expressed by the equation
[R1 -R1*(C+d)]*X = [R1 -R1*C]*(X-d)
In the equation above (which I checked to be true), -R1*C is the translation vector, -R1*(C+d) is the translation vector after I move to my right, and (X-d) is the position of the model after she moves to my left.
In this example, X' = X-d (so X' is defined through X and my movement d). In the case of rotating the camera, what is X'?
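(A quick numerical check of the equation above, as a numpy sketch with arbitrary illustrative values:)

import numpy as np

def rot_z(a):  # a simple rotation about the z axis, just for the check
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R1 = rot_z(0.3)
C = np.array([1.0, 2.0, 3.0])   # my position
d = np.array([0.5, 0.0, 0.0])   # my step to the right / her step to the left
X = np.array([4.0, 5.0, 6.0])   # the model's position

# [R1 | -R1*(C+d)] * X  versus  [R1 | -R1*C] * (X - d)
lhs = R1 @ X - R1 @ (C + d)
rhs = R1 @ (X - d) - R1 @ C
print(np.allclose(lhs, rhs))    # True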
Edit 2: Since Spektre still doesn't understand my question, I need to emphasize that in the second case I DO NOT rotate the ENTIRE world; I ONLY rotate the point X. (If I rotate the entire world, the world coordinate of X remains the same after its world rotates. But if I rotate only X, its world coordinate changes to X'.)
Just imagine the example of taking a photo of the model. In the first case, I rotate the camera and take the first photo of her (and her boyfriend standing next to her).
In the second case, I rotate ONLY the model, in the reverse direction (her boyfriend stays put), then take the second photo. Comparing the two photos, the position of the model is the same (the position of her boyfriend is different).
In both cases, the real-world position of her boyfriend is the same. But the real-world position of the model changes in the second case, since I rotated her. My question is: what is the real-world position of the girl after I rotate her?
The answer to the title question is: mathematically the two are almost the same (apart from inverting all the operations), but physically, rotating the camera means changing a single matrix, while rotating the scene means rotating every object in your world (which can be thousands of objects or more), which is a lot slower...
But I assume the title and text are misleading, as the real question is about linear-algebra matrix equations.
Let R1, R2, R12 be square matrices of size N x N and X a vector of size N. If we ignore vector orientation (1 x N vs. N x 1), then for your convention:
R2 = R12.R1
R1 = Inverse(R12).R2
so:
R12.R1.X == R12.Inverse(R12).R2.X == R2.X
As you can see, the equation in your question is wrong because you changed the matrix multiplication order, and matrix multiplication is not commutative:
R1.R12 != R12.R1
If you want to understand why in more depth, study linear algebra.
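(A two-line numpy check of that non-commutativity, with arbitrarily chosen matrices:)

import numpy as np

R1  = np.array([[1.0, 2.0], [3.0, 4.0]])
R12 = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.allclose(R1 @ R12, R12 @ R1))  # False: the order matters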
[Edit1] simple 1x1 example
let:
R1 = 1
R12= 2
R2 = R12.R1 = 2
so rewriting your wrong equation:
R12*R1*X != R1*Inverse(R12)*X
2* 1*X != 1* 0.5*X
2*X != 0.5*X
and using the correct one
R12*R1*X == R12*Inverse(R12)*R2*X == R2*X
2* 1*X == 2* 0.5* 2*X == 2*X
2*X == 2*X == 2*X
[Edit2] simple 2D example
I see you are still confused, so here is a 2D example of the problem:
On the left, the camera is rotated by R1, so rendering transforms a world point (x,y) into its local coordinates (x1,y1). On the right, the situation is reversed: the camera coordinate system is axis-aligned (unit matrix) and the scene is rotated in reverse by Inverse(R1). That is how it works (where R1 is the relative matrix in this case).
Now if I port this to your matrix names and convention, so that the relative matrix is R12 and R1 is the camera:
(R1.R12).(x,y) = (x1,y1)
Inverse(R1.R12).(x1,y1) = (x,y)
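To tie this back to the question's notation, here is a small numpy sketch (the rotation angles are arbitrary) checking that rotating the camera by R12 gives the same image coordinates as keeping camera R1 and replacing X with X' = R1^-1*R12*R1*X:

import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R1  = rot_z(0.4)               # original camera rotation
R12 = rot_z(0.7)               # extra rotation applied to the camera
X   = np.array([1.0, 2.0, 3.0])

camera_rotated = R12 @ R1 @ X  # new camera matrix R2 = R12.R1 applied to X

# Keeping camera R1 and moving only the point: R1 @ X' must equal R12 @ R1 @ X,
# so X' = R1^-1 @ R12 @ R1 @ X (R12 conjugated into world coordinates).
X_prime = np.linalg.inv(R1) @ R12 @ R1 @ X
point_rotated = R1 @ X_prime

print(np.allclose(camera_rotated, point_rotated))  # True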
I am looking for a tool similar to graphviz that can render graphs, but that will allow me to constrain just the x coordinate of each node. Then, the tool will automatically choose y coordinates to make the graph look neat.
Basically, I want to make a timeline.
Language / platform / rendering medium are not very important.
If you want a neat-looking graph, a force-directed algorithm is going to be your best bet. One of the best is SFDP (developed by AT&T, included in graphviz), though I can't seem to find pseudocode or an easy implementation. I don't think there are any algorithms as specialized as what you describe. Thankfully, it's easy to code your own. I'll present some pseudocode, mostly lifted from Wikipedia but with suitably one-dimensional modifications (a runnable sketch follows at the end of this answer). I'll assume you have n vertices and that the vector of x-positions is x, subscripted as x.i.
set all vertex velocities to (0,0)
set all vertex positions to (x.i, random)
KE = infinity
while (KE > epsilon)
    KE = 0
    for each vertex v
        force = (0,0)
        for each vertex u != v
            force = force + (0, coulomb(u, v).y)
            if u is incident to v
                force = force + (0, hooke(u, v).y)
        v.velocity = (v.velocity + timestep * force) * damping
        v.position = v.position + timestep * v.velocity
        KE = KE + |v.velocity| ^ 2
Here, .y denotes taking the y-component of the force; this ensures that the x-components of the vertex positions never change from what you set them to. The epsilon parameter is yours to set, and should be small compared to what you expect KE (the total kinetic energy) to be. Also, |v| denotes the magnitude of the vector v (all quantities above are 2-vectors, except KE). Note that I set the mass of every node to 1, but you can change that if you want.
The hooke and coulomb functions calculate the respective forces between nodes: the spring force is linear in the distance between vertices and attractive, while the Coulomb force falls off with the square of the distance and is repulsive, so an equilibrium is guaranteed. They look something like
def hooke(u, v)
    # attractive spring force on v, pulling it toward u; linear in distance
    return -k * (v.position - u.position)

def coulomb(u, v)
    # repulsive force on v, pushing it away from u; inverse-square in distance
    d = v.position - u.position
    return C * d / |d|^3
where again the computations are in vector form. C and k take real values; experiment to get the graph you want. Normally this matters little, because in two dimensions the scaling factors mostly just expand or contract the whole graph, but here the x-distances are fixed, so to get a good-looking graph you will have to tune the values a bit.
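Here is the runnable sketch promised above, a direct Python translation of the pseudocode (the constants and the tiny example graph are illustrative; tune them for your data):

import random

K, C = 0.05, 1.0                 # spring and repulsion constants (illustrative)
TIMESTEP, DAMPING, EPSILON = 0.05, 0.9, 1e-4

def layout_y(x_positions, edges):
    # Given fixed x coordinates and an edge set, relax the y coordinates.
    n = len(x_positions)
    y = [random.uniform(-1.0, 1.0) for _ in range(n)]
    vel = [0.0] * n
    ke = float("inf")
    while ke > EPSILON:
        ke = 0.0
        for v in range(n):
            force = 0.0
            for u in range(n):
                if u == v:
                    continue
                dx = x_positions[v] - x_positions[u]
                dy = y[v] - y[u]
                dist2 = dx * dx + dy * dy + 1e-9    # avoid division by zero
                force += C * dy / dist2 ** 1.5      # Coulomb repulsion, y-part
                if (u, v) in edges or (v, u) in edges:
                    force += -K * dy                # Hooke attraction, y-part
            vel[v] = (vel[v] + TIMESTEP * force) * DAMPING
            y[v] += TIMESTEP * vel[v]
            ke += vel[v] ** 2
    return y

# Tiny usage example: a three-node path with x coordinates fixed by the timeline.
print(layout_y([0.0, 1.0, 2.0], {(0, 1), (1, 2)}))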
So I need to map a texture onto a sphere from within a pixel/fragment shader in Cg.
What I have as "input" in every pass are the Cartesian coordinates x, y, z of the point on the sphere where I want the texture to be sampled. I then transform those coordinates into spherical coordinates and use the angles phi and theta as the U and V coordinates, respectively, like this:
u = atan2(y, z)
v = acos(x/sqrt(x*x + y*y + z*z))
I know this simple mapping produces seams at the poles of the sphere, but at the moment my problem is that the texture repeats several times across the sphere. What I want and need is for the whole texture to wrap around the sphere exactly once.
I've fiddled with the shader and searched around for hours, but I can't find a solution. I think I need to apply some sort of scaling somewhere, but where? Or maybe I'm totally on the wrong track; I'm very new to Cg and shader programming in general... Thanks for any help!
Since the results of inverse trigonometric functions are angles, u will be in [-Pi, Pi] and v in [0, Pi], so you just have to scale them appropriately: u /= 2*Pi and v /= Pi should do, assuming you have GL_REPEAT (or the D3D equivalent) as the texture-coordinate wrapping mode (which is what your description sounds like).
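(As a quick sanity check of those ranges, in Python rather than Cg, purely to illustrate the scaling:)

import math

def sphere_uv(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    u = math.atan2(y, z) / (2 * math.pi)  # in [-0.5, 0.5]; wraps under GL_REPEAT
    v = math.acos(x / r) / math.pi        # in [0, 1]
    return u, v

print(sphere_uv(0.0, 1.0, 0.0))  # (0.25, 0.5)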