cube_group.children[i].rotateOnAxis(new THREE.Vector3( 1, 0, 0 ), 180 * ( Math.PI/180 ));
I use rotateOnAxis to rotate an object. For example, if I rotate an object at (10,10,10) around the x-axis by 180 degrees, its position should become (10,-10,-10). But when I read object.position afterwards, it is still (10,10,10). How can I get the correct position after using rotateOnAxis to rotate an object?
I'm rendering some text on a screen and I want to draw a box behind it to act as a background. Text is a surface if I understand correctly, so you call get_rect() to get its coordinates, width and height.
So when I print(my_text_surface.get_rect()) I get this:
<rect(0, 0, 382, 66)>
By this information I assume I can write:
my_surface.get_rect(1)
and get its x coordinate. But then it says:
get_rect only accepts keyword arguments
So I'm asking you here whether get_rect() can give me a list and, if yes, how I can access it.
If you need my code:
font = self.pygame.font.Font("freesansbold.ttf", 64)
spadenpala_text = font.render("Spadenpala", True, (255, 255, 255))
spadenpala_text_position = spadenpala_text.get_rect(center=(self.width/2, 200))
print(spadenpala_text.get_rect([1]))
Thanks, Mirko Dolenc
pygame.Surface.get_rect() doesn't return a list, but a pygame.Rect object:
Returns a new rectangle covering the entire surface.
pygame.Surface.get_rect() returns a rectangle with the size of the Surface object that always starts at (0, 0), since a Surface object has no position; a Surface gets its position on the screen when it is blit. The position of the rectangle can be specified by a keyword argument. For example, the center of the rectangle can be specified with the keyword argument center. These keyword arguments are applied to the attributes of the pygame.Rect before it is returned (see pygame.Rect for a full list of the keyword arguments).
A pygame.Rect object has a lot of virtual attributes like .x, .y, .width, .height etc. e.g.:
surf = pygame.Surface((100, 50))
rect = surf.get_rect(center = (200, 200))
print(rect)
print(rect.x)
print(rect.y)
print(rect.width)
print(rect.height)
output:
<rect(150, 175, 100, 50)>
150
175
100
50
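To the original question of whether the rect can be accessed like a list: a pygame.Rect is not a list, but in addition to its named attributes it behaves like a sequence of four values (x, y, w, h), so indexing and unpacking both work. A small sketch:

```python
import pygame

rect = pygame.Rect(150, 175, 100, 50)

# named virtual attributes
print(rect.x, rect.y, rect.width, rect.height)

# a Rect is also a fixed-length sequence, so these work too
x, y, w, h = rect
print(rect[0], list(rect))
```

So `rect[0]` gives the x coordinate, which is what the `get_rect(1)` attempt was reaching for.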
I'm trying to implement a simple 3D renderer. I have implemented a perspective projection matrix similar to glm::perspective. I have read that a perspective projection matrix transforms vertices such that, after the perspective division, the visible region (i.e., objects that lie inside the specified frustum) falls in the range [-1,1]. But when I tested some values with a matrix returned by glm::perspective, the results did not match my understanding.
float nearPlane = 0.1f;
float farPlane = 100.0f;
auto per =
glm::perspective(45.0f, (float)(window_width * 1.0 / window_height),
nearPlane, farPlane);
// z=nearplane
print(per * glm::vec4(1, -1, -0.1, 1));
// z=farplane
print(per * glm::vec4(1, -1, -100, 1));
// z between nearplane and farplane
print(per * glm::vec4(1, -1, -5, 1));
// z beyond far plane
print(per * glm::vec4(1, -1, -200, 1));
// z behind the camera
print(per * glm::vec4(1, -1, 0.1, 1));
// z between camera and near plane
print(per * glm::vec4(1, -1, -0.09, 1));
As per my understanding, if a vertex's z coordinate lies on the positive-z side of the near plane, then after the perspective divide its z/w value should be < -1, but as shown in the picture below that is not the case. What am I missing?
That's simply not the case. There is a singularity at z=0 (the camera plane), and the sign of the function flips there (the camera is looking to the left; the blue vertical line is the near clipping plane):
Those values are still outside the [-1,1] range, telling us that they are outside the camera frustum and should be clipped accordingly.
You might be confused because you're thinking that "smaller z/w values mean that the point is closer to the camera". But that's true only in front of the camera. In fact the depth buffer values matter only if the fragment isn't clipped, i.e. in the [-1,1] range, where that relation happens to be correct.
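To check this numerically, here is a small Python sketch of just the depth row of a glm-style perspective matrix (GL convention, clip z in [-1,1]; fovy and aspect don't affect z/w, so only near and far appear). It reproduces the sign flip at z=0:

```python
def ndc_z(z_view, near=0.1, far=100.0):
    """Depth row of a glm-style perspective matrix (clip range [-1,1]):
    z_clip = A*z + B, w_clip = -z; return z/w after the perspective divide."""
    A = (far + near) / (near - far)
    B = (2.0 * far * near) / (near - far)
    z_clip = A * z_view + B
    w_clip = -z_view  # the -1 in the matrix's last row
    return z_clip / w_clip

# in front of the camera: -1 at the near plane, +1 at the far plane,
# and < -1 between the camera and the near plane (clipped).
# behind the camera (z_view > 0): w flips sign, so the value jumps
# to > +1 instead of < -1 -- the singularity described above.
```

Note that both the "between camera and near plane" and "behind the camera" points land outside [-1,1], so both are clipped; they just leave the range through opposite ends.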
I am trying to draw a clock that works. I am using a 600x600 form. I can't figure out how to place the oval in the center of the form, or how to add the minute and second tick marks inside the oval. I tried dash but couldn't get it to look right. Any suggestions? Thanks in advance.
This is what I have done so far:
from tkinter import *
canvas_width = 600
canvas_height = 600
master = Tk()
w = Canvas(master, width = canvas_width, height = canvas_height)
w.pack()
oval = w.create_oval(75,75,500,500)
minline = w.create_line(150,150,300,300)
mainloop()
The center of a drawn shape is the midpoint of the two points specified when it is drawn. Currently, the middle of your shape (drawn from 75, 75 to 500, 500) is at 287.5, so if you want its middle to be the middle of your page while keeping the 75, 75 coordinate, you would have to make the other point 525, 525 to completely mirror the first around the 300, 300 center.
As for drawing the tick marks, you'll need some math in Python, so I would first suggest using an image as the background for the clock, so that fewer objects are drawn. But if you must do it without other images, you must first import the math library.
import math
Now for a mathematical principle: any point on a circle of radius r can be expressed as (r*cosθ, r*sinθ), where θ is the angle from the center to the point. This is important because you want each tick line on the clock face to point toward the center of the circle. For that, the two endpoints of each line must lie on different circles (our circle and a smaller one within it) but at the same angle from the center.
Since we want 12 hour points around the circle, and 4 minute points between each of those (so 60 points in total), and 360 degrees in a circle (so 1 point for every 6 degrees), we will need a for loop that goes through that.
for angle in range(0, 360, 6):
Then we'll want three constants: one for the radius of the exterior circle (where the lines begin), one for an interior circle (where the minute marks end), and one for an even more interior circle (where the hour marks end). We'll also want to choose the more interior radius only every 30 degrees (because an hour mark appears every 5 marks, and there are 6 degrees between marks).
radius_out = 225
radius_in = 0 #temporary value
if (angle % 30) == 0: #the % symbol checks for remainder
radius_in = 210
else:
radius_in = 220
Now, for the conversion into radians (As math in python needs radians for sin and cos):
radians = (angle / 180) * math.pi
Next off, assigning the coordinates to variables so it's easier to read.
x_out = (radius_out * math.cos(radians)) + 300
y_out = (radius_out * math.sin(radians)) + 300
x_in = (radius_in * math.cos(radians)) + 300
y_in = (radius_in * math.sin(radians)) + 300
#the (+ 300) moves each point from a relative center of 0,0 to 300,300
And finally we assign it to a list so we can access it later if we need to. Make sure to define this list earlier outside of the for loop.
coords.append( w.create_line(x_out, y_out, x_in, y_in) )
This should give you your clock lines.
NOTE: Due to the way tkinter assigns x and y coordinates, this draws the lines starting at the 3 o'clock position and proceeding clockwise back around to it.
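The steps above can be assembled into a plain function (tkinter-free, so the math is easy to check); the 300,300 center and the 225/210/220 radii are the values used above:

```python
import math

def tick_endpoints(angle_deg, cx=300, cy=300):
    """Endpoints of one tick line, from the outer rim toward the center.
    Hour marks (every 30 degrees) are longer than minute marks."""
    radius_out = 225
    # hour marks end deeper inside the face than minute marks
    radius_in = 210 if angle_deg % 30 == 0 else 220
    radians = math.radians(angle_deg)
    x_out = radius_out * math.cos(radians) + cx
    y_out = radius_out * math.sin(radians) + cy
    x_in = radius_in * math.cos(radians) + cx
    y_in = radius_in * math.sin(radians) + cy
    return x_out, y_out, x_in, y_in

# one tick every 6 degrees -> 60 ticks; draw each in the tkinter code
# above with w.create_line(*tick_endpoints(angle))
ticks = [tick_endpoints(a) for a in range(0, 360, 6)]
```

At angle 0 this gives the 3 o'clock mark, from (525, 300) in to (510, 300), matching the note about where drawing starts.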
Hope this was helpful! If there is anything you don't understand, comment it below.
I'm doing an animated transform in Raphael (and Snap.svg, which does the same).
If I apply a rotation to a basic element, it rotates normally as I would expect. However, if the element already has a previous transform applied (even if it's t0,0 or r0), it seems to scale down and back up during the rotation, as though it always has to fit in its previous bounding box or something.
Here is an example fiddle
var r1 = s.rect(0,0,100,100,20,20).attr({ fill: "red", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r1.transform('t0,0'); // any transform leads to shrink on rotate...
r1.animate({ transform: 'r90,50,50' }, 2000);
var r2 = s.rect(150,0,100,100,20,20).attr({ fill: "blue", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r2.animate({ transform: 'r90,200,50' }, 2000);
Is there something obvious I'm missing about animated transforms that explains what is happening?
There are a couple different things you need to understand to figure out what's going on here.
The first is that your animating transform is replacing your original transform, not adding on to it. If you include the original transform instruction in the animation, you avoid the shrinking effect:
var r1 = s.rect(0,0,100,100,20,20)
.attr({ fill: "red", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r1.transform('t0,0');
// any transform leads to shrink on rotate...
r1.animate({ transform: 't0,0r90,50,50' }, 5000);
//unless you repeat that transform in the animation instructions
http://jsfiddle.net/96D8t/3/
You can also avoid the shrinking effect if your original transformation is a rotation around the same center:
var r1 = s.rect(0,0,100,100,20,20)
.attr({ fill: "red", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r1.transform('r0,50,50'); // no shrinking this time...
r1.animate({ transform: 'r90,50,50' }, 2000);
http://jsfiddle.net/96D8t/4/
But why should it make a difference, seeing as a translation of 0,0 or a rotation of 0 doesn't actually change the graphic? It's a side effect of the way the program calculates in-between values when you ask it to convert between two different types of transformations.
Snap/Raphael are converting your two different transformations into matrix transformations, and then interpolating (calculating intermediate values) between each value in the matrix.
A 2D graphical transformation can be represented by a matrix of the form
a c e
b d f
(that's the standard lettering)
You can think of the two rows of the matrix as two algebra formulas for determining the final x and y value, where the first number in the row is multiplied by the original x value, the second number is multiplied by the original y value, and the third number is multiplied by a constant 1:
newX = a*oldX + c*oldY + e;
newY = b*oldX + d*oldY + f;
The matrix for a do-nothing transformation like t0,0 is
1 0 0
0 1 0
Which is actually represented internally as an object with named values, like
{a:1, c:0, e:0,
b:0, d:1, f:0}
Either way, it just says that the newX is 1 times the oldX, and the newY is 1 times the oldY.
The matrix for your r90,50,50 command is:
0 -1 100
1 0 0
I.e., if your old point is (50,100), the formulas are
newX = 0*50 -1*100 + 100*1 = 0
newY = 1*50 + 0*100 + 0 = 50
The point (50,100) gets rotated 90degrees around the point (50,50) to become (0,50), just as expected.
Where it starts getting unexpected is when you try to transform
1 0 0
0 1 0
to
0 -1 100
1 0 0
If you transform each number in the matrix from the start value to the end value, the half-way point would be
0.5 -0.5 50
0.5 0.5 0
Which works out as the matrix for rotating the rectangle 45 degrees around (50,50) while scaling it down (by a linear factor of √2/2 ≈ 0.707), which is exactly the shrink you see mid-animation.
All of that might be more math than you needed to know, but I hope it helps make sense of what's going on.
Regardless, the easy solution is to make sure that you always match up the types of transforms before and after the animation, so that the program can interpolate the original values, instead of the matrix values.
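The interpolation above is just arithmetic on the six matrix entries, so it can be sketched in Python rather than JavaScript to see the shrink numerically:

```python
def lerp_matrix(m0, m1, t):
    """Entry-wise interpolation between two affine matrices stored as
    {a, b, c, d, e, f} -- what Snap/Raphael effectively animates between."""
    return {k: m0[k] + (m1[k] - m0[k]) * t for k in m0}

identity = {'a': 1.0, 'b': 0.0, 'c': 0.0, 'd': 1.0, 'e': 0.0, 'f': 0.0}  # t0,0
rot90 = {'a': 0.0, 'b': 1.0, 'c': -1.0, 'd': 0.0, 'e': 100.0, 'f': 0.0}  # r90,50,50

half = lerp_matrix(identity, rot90, 0.5)
# determinant of the linear part [a c; b d] = area scale factor
det = half['a'] * half['d'] - half['c'] * half['b']
scale = det ** 0.5  # linear scale at the halfway point, about 0.707
```

The halfway matrix is {0.5, 0.5, -0.5, 0.5, 50, 0}, whose linear part has determinant 0.5: the rectangle's area is halved mid-animation, which is the shrink in the fiddle.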
I'm working with kinect and ofxopeni. I have a point cloud in real world coordinates but I need rotate those points to offset the tilt of the camera. The floor plane should give me all the information I need but I can't work out how to calculate the axis and angle of rotation.
my initial idea was...
ofVec3f target_normal(0,1,0);
ofVec3f vNormal; //set this from Xn3DPlane floorPlane (not shown here)
ofVec3f ptPoint; //as above
float rot_angle = target_normal.angle(vNormal);
for(int i = 0; i < numPoints; i++){
cloudPoints[i].rotate(rot_angle, vNormal, ptPoint); // align my points so the normal becomes (0,1,0)
}
This, it seems, was far too simplistic. I've been fishing through various articles and can see that it most probably involves a quaternion or rotation matrix, but I can't work out where to start. I'd be really grateful for any pointers to relevant articles, or to the best technique for getting an axis and angle of rotation. I imagine it can be done quite easily using ofQuaternion or an OpenNI function, but I can't work out how to implement it.
best
Simon
I've never used ofxopeni, but this is the best mathematical explanation I can give.
You can rotate any set of data from one axis set to another using a TBN matrix (tangent, bitangent, normal), where T, B and N are your new set of axes. So you already have the normal, but you need to find a tangent. I'm not sure whether your Xn3DPlane provides a tangent, but if it does, use that.
The bitangent is given by the cross-product of the normal and the tangent:
B = T x N
A TBN looks like this:
TBN = { Tx ,Ty ,Tz,
Bx, By, Bz,
Nx, Ny, Nz }
This will rotate your data onto the new set of axes, but your plane also has an origin point, so we throw in a translation:
A = |  1   0   0  0 |   | Tx  Ty  Tz  0 |
    |  0   1   0  0 |   | Bx  By  Bz  0 |
    |  0   0   1  0 | x | Nx  Ny  Nz  0 |
    | -Px -Py -Pz 1 |   |  0   0   0  1 |
// Then multiply the vertex, v, to change its coordinate system.
v = v * A
If you can get things to this stage, you can transform all your points into the new coordinate system. One thing to note: the normal is now aligned with the z axis; if you want it aligned with the y axis instead, swap N and B in the TBN matrix.
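Here is a minimal numeric sketch of the TBN idea above, in pure Python rather than openFrameworks. The helper-axis trick for picking a tangent is an assumption on my part; as the answer says, if your plane fit already gives you a tangent, use that instead:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def tbn_rows(normal):
    """Rows T, B, N built from the plane normal alone. The tangent comes
    from crossing a world axis that is not parallel to the normal."""
    n = normalize(normal)
    helper = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(helper, n))
    b = cross(n, t)  # already unit length, since n and t are orthonormal
    return t, b, n

def to_plane_space(p, origin, normal):
    """Translate by -origin, then apply the TBN rotation: the plane
    normal ends up along +z, matching the note above."""
    t, b, n = tbn_rows(normal)
    d = tuple(p[i] - origin[i] for i in range(3))
    return (sum(t[i] * d[i] for i in range(3)),
            sum(b[i] * d[i] for i in range(3)),
            sum(n[i] * d[i] for i in range(3)))
```

As a sanity check, a point one unit along the floor normal from the plane origin should map to (0, 0, 1) in the new coordinates.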
EDIT: Final matrix calculation was slightly wrong. Fixed.