I'm a complete beginner in trigonometry, so my question may seem trivial to many of you.
If my understanding is correct, a degree is defined by dividing the circumference of a circle into 360 equal parts, so that each of those parts is called a degree. Now imagine that you cut the circle open and roll it out on the table to form a straight segment (as if you had drawn a segment on a piece of paper using a ruler). You would then have a straight segment divided into 360 equal parts. What would be the distance, in millimetres, between each degree (= each division) on that segment? The reason I ask is that I was looking at a protractor, as you can see in the picture below:
The bottom of this protractor is an ordinary ruler, and above it we can see the degree markings from 0 to 180. When I visually compare the markings on the ruler at the bottom with the degree markings at the top of the protractor, they seem to coincide, with each degree about 1 millimetre from the next. Is this true? Sorry if the question is trivial; I'm just trying to understand how these units were actually defined.
The circumference of a circle is pi * the diameter, where pi is about 3.14159.
The diameter of your protractor looks to be about 120 mm, so the circumference would be about 377 mm. Dividing by 360, the degree marks would be about 1.05 mm apart -- pretty close to 1 mm.
That's so close that I wouldn't be surprised at all if the diameter of your protractor was actually designed to be 114.6mm, just to space the degree marks out by exactly 1mm.
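The arithmetic can be checked in a few lines (a sketch; the 120 mm and 114.6 mm diameters are the estimates from above):

```python
import math

# Spacing between adjacent degree marks = circumference / 360 = pi * d / 360.
def degree_spacing_mm(diameter_mm):
    return math.pi * diameter_mm / 360

print(round(degree_spacing_mm(120.0), 3))   # 1.047 -- close to 1 mm
print(round(degree_spacing_mm(114.6), 3))   # 1.0   -- almost exactly 1 mm
```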
Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly due to the limited range of brightness values available in 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I developed a recent interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this:
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable, but flat looking images, even when surface normals are included in the calculation.
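The steps above can be sketched in a few dozen lines, assuming a single sphere and a single point light (all names here are illustrative, not from any particular engine):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def intersect_sphere(origin, direction, center, radius):
    # Nearest t > 0 solving |origin + t*direction - center|^2 = radius^2
    # (direction assumed unit length).
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(eye, pixel_dir, center, radius, obj_color, light_pos, light_color):
    t = intersect_sphere(eye, pixel_dir, center, radius)      # collision test
    if t is None:
        return (0.0, 0.0, 0.0)                                # background
    hit = tuple(e + t * d for e, d in zip(eye, pixel_dir))    # collision point
    to_light = norm(sub(light_pos, hit))                      # vector to the light
    n = norm(sub(hit, center))                                # surface normal
    lambert = max(0.0, dot(n, to_light))
    # light color * object color, modulated by the surface-normal term
    return tuple(lc * oc * lambert for lc, oc in zip(light_color, obj_color))

# One ray straight down -z toward a red sphere, with the light at the eye:
color = trace((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0,
              (1.0, 0.0, 0.0), (0, 0, 0), (1.0, 1.0, 1.0))
print(color)  # (1.0, 0.0, 0.0): full red, the light hits the surface head-on
```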
Model Extensions
My interest was in trying to extend this model by including the distance into the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls that form a boundary for the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights, distance of order 10 from the camera.
What I have found is this
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is at distance 100 from the camera, it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
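To make the factor concrete, here is a tiny sketch of a linear intensity-to-pixel map with an inverse-square falloff (the scale constant is arbitrary, chosen so that distance 10 saturates the channel):

```python
# Linear map from 1/d^2 intensity to an 8-bit channel; the scale is chosen so
# that a surface at distance 10 lands at full brightness (illustrative only).
def pixel_value(distance, scale=255 * 10**2):
    intensity = 1.0 / distance**2
    return min(255, int(scale * intensity))

print(pixel_value(10))   # 255
print(pixel_value(100))  # 2 -- a factor of 100 dimmer, nearly invisible
```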
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
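One widely used mapping of this kind is the sRGB transfer function (gamma encoding), which allocates more of the 8-bit range to dark values; a minimal sketch:

```python
def linear_to_srgb(l):
    # Standard sRGB transfer function: roughly a 1/2.4 power curve,
    # with a short linear segment near black.
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1.0 / 2.4) - 0.055

# A 100x drop in physical intensity is nowhere near a 100x drop in encoded value:
print(round(255 * linear_to_srgb(1.0)))   # 255
print(round(255 * linear_to_srgb(0.01)))  # 25 -- still clearly visible
```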
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but radiant flux. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - radiant flux falling off by the square of the distance - applies only to the light from the point source to a surface, not to the light that's then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
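A sketch of that point: the falloff function only ever sees the light-to-surface distance, so the camera position cannot appear in it (names here are illustrative):

```python
def light_arriving(light_pos, surface_pos, light_power):
    # Inverse-square falloff on the light -> surface leg only.
    d2 = sum((l - s) ** 2 for l, s in zip(light_pos, surface_pos))
    return light_power / d2

# A surface 5 units from a light of power 100 receives 100 / 25 = 4,
# no matter where the camera sits.
i = light_arriving((0.0, 5.0, 0.0), (0.0, 0.0, 0.0), 100.0)
print(i)  # 4.0
```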
So ... what exactly are the parameters of body.rotation and body.angularVelocity in Phaser arcade physics?
The documentation for body.rotation just says "the amount the Body is rotated", without specifying units (radians or degrees), the zero vector (X axis?), nor the direction that's positive.
Docs for body.angle says "angle in radians" ... but again doesn't say which axis is the 0 rotation vector, nor which direction is positive.
The documentation for angularVelocity says "angular velocity in pixels per second squared" which doesn't make ANY SENSE AT ALL. You can't measure rotation in pixels.
I'm trying to sync up a phaser front-end with a server-based physics model that has its own coordinate system, so some clarity on the documentation would really make my life easier!
As far as I know, body.rotation is given in radians; if you want to work in degrees you should use body.angle.
As for the rotation direction, a higher value rotates the sprite clockwise. If the angle is 0 and the sprite is pointing up, it will point to the right after setting body.angle = 90.
angularVelocity is not for rotating your sprite directly. As the name says, it sets an angular velocity; it's mainly used when you want the sprite to move in the direction it's facing.
This is based on the question I asked here, but I think I might have asked the question in the wrong way. This is my problem:
I am writing a scientific ray tracer, i.e. not for graphics, although the concepts are identical.
I am firing rays from a horizontal plane toward a parabolic dish with a focus distance of 100m (and perfect specular reflection). I have a Target at the focal point of the dish. The rays are not fired perpendicularly from the plane but are perturbed by a certain angle to emulate the fact that the sun is not a point source but a disc in the sky.
However, the flux coming from the sun is not radially constant across the sun's disc. It's hotter in the middle than at the edges. If you have ever looked at the sun on a hazy day you'll have seen a ring around it.
Because of the parabolic dish, the reflected image on the Target should be the image of the sun, i.e. it should be brighter (hotter, more flux) in the middle than at the edges. This is given by a graph of intensity vs. radial distance from the center.
There are two ways I can simulate this.
Firstly, uniform sampling: each ray is shot out from the plane with an equal (uniform) probability of taking an angle between zero and the angular size of the sun disk. I then scale the flux carried by the ray according to the corresponding flux value at that angle.
Secondly, arbitrary sampling: each ray is shot out from the plane according to the distribution of intensity vs. radial distance. There will therefore be fewer rays toward the outer edges than in the centre. This seems far more efficient to me, but I cannot get it to work. Any suggestions?
This is what I have done:
Uniformly
phi = 2*pi*X_1
alpha = arccos (1-(1-cos(theta))*X_2)
x = sin(alpha)*cos(phi)
y = sin(alpha)*sin(phi)
z = -cos(alpha)
Where X_1 and X_2 are uniform random numbers and theta is the angle subtended by the solar disk.
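For reference, the uniform formulas translate directly into code (using sin(phi) in the y line, which I believe is what is intended); a sketch:

```python
import math, random

def sample_cone(theta):
    # Uniformly distributed direction within a cone of half-angle theta
    # about the -z axis, using the formulas above (X_1, X_2 uniform on [0, 1)).
    x1, x2 = random.random(), random.random()
    phi = 2.0 * math.pi * x1
    alpha = math.acos(1.0 - (1.0 - math.cos(theta)) * x2)
    return (math.sin(alpha) * math.cos(phi),
            math.sin(alpha) * math.sin(phi),
            -math.cos(alpha))

# The sun subtends roughly 0.0093 rad, i.e. a half-angle of about 0.00465 rad.
d = sample_cone(0.00465)
print(abs(sum(c * c for c in d) - 1.0) < 1e-9)  # True: always a unit vector
```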
Arbitrary Sampling
alpha = arccos (1-(1-cos(theta))*B)
Where B is a random number generated from an arbitrary distribution using the algorithm on pg 27 here.
I am desperate to sort this out.
Your function drops to zero at the edge, and since the sun is not a smooth-surfaced object, that is probably wrong. Chances are there are photons emitted from all parts of the sun in all directions.
But: what is your actual QUESTION?
You are looking for Monte-Carlo integration.
The key idea is: although you will sample fewer rays in the outer parts of the disc, you will weight those rays more heavily, so they contribute to the sum with a higher importance.
With uniform sampling you simply sum your intensity values; with non-uniform sampling you divide each intensity by the value of the probability density of the ray directions (for a uniform distribution this value is a constant, so it changes nothing).
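A one-dimensional sketch of the difference, with a toy intensity profile I(r) = 1 - r on [0, 1] (so the true integral is 0.5); the importance-sampled estimator divides by the pdf p(r) = 2(1 - r):

```python
import math, random

def intensity(r):
    # Toy radial profile: brightest at the centre, zero at the rim (r in [0, 1]).
    return 1.0 - r

def estimate_uniform(n):
    # Uniform sampling: a plain average of intensity values.
    return sum(intensity(random.random()) for _ in range(n)) / n

def estimate_importance(n):
    # Sample r from pdf p(r) = 2*(1 - r) via the inverse CDF r = 1 - sqrt(1 - u),
    # then weight each sample by intensity(r) / p(r).
    total = 0.0
    for _ in range(n):
        r = 1.0 - math.sqrt(1.0 - random.random())
        total += intensity(r) / (2.0 * (1.0 - r))
    return total / n

random.seed(0)
print(round(estimate_uniform(10000), 2))  # close to the true integral, 0.5
print(estimate_importance(10000))         # exactly 0.5 here: pdf proportional to intensity
```

In this toy case the pdf is exactly proportional to the intensity, so every weighted sample equals 0.5 and the variance vanishes; in practice the sun profile is only approximately matched, but the same division by the pdf keeps the estimate unbiased.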
Is there any use of Sin(720) or Cos(1440) (angles in degrees), whether in computer programming or in any other situation? In general, is there any use of sine/cosine/tangent of any angle greater than 360? In physics we use dot products and cross products a lot, but even they always involve angles less than 180 degrees.
Hi all, I know how to compute them. I want to know if they are ever useful. When will I ever encounter a situation where I need to compute Sin(440), for example?
Both in math and programming:
Sin(x) = Sin(x % 360)
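In code (degrees reduced with Python's %, which returns a result in [0, 360) even for negative inputs):

```python
import math

def sin_deg(x):
    # Sine of an angle given in degrees.
    return math.sin(math.radians(x))

print(math.isclose(sin_deg(400), sin_deg(400 % 360)))    # True
print(abs(sin_deg(720) - sin_deg(0)) < 1e-12)            # True
print(math.isclose(sin_deg(-100), sin_deg(-100 % 360)))  # True (-100 % 360 == 260)
```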
As another answer pointed out, angles greater than 360 represent one or more full rotations over a circle plus the modulo part. This could have a physical meaning in some circumstances.
Also, when doing trigonometric calculations, you should take this fact into consideration. For example:
sin(a)*cos(a) = (1/2)*sin(2a)
For a>180 you will get the sin of an angle greater than 360.
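This is easy to check numerically, e.g. with a = 200°, so that 2a corresponds to 400°:

```python
import math

a = math.radians(200)          # a > 180 degrees, so 2a corresponds to 400 degrees
lhs = math.sin(a) * math.cos(a)
rhs = 0.5 * math.sin(2 * a)    # the sine of an angle greater than 360 degrees
print(math.isclose(lhs, rhs))  # True
```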
By the way, have a look here.
I have seen such things come up when doing angle arithmetic:
float angleOne = 150;
float angleTwo = 250;
//...
float resultSum = Sin(angleOne + angleTwo);  // Sin(400)
float resultDiff = Sin(angleOne - angleTwo); // Sin(-100)
In this (contrived) example, it seems obvious, but when you are computing an angle based on arbitrary rotations of several objects, you can't always know what kind of numbers you will get. Imagine calculating the position of the player in a 3D game while he is standing on top of a spinning platform, for example.
Any time you're dealing with a user interaction technique, it's entirely possible that they'll push you past 0 degrees or 360 degrees. Imagine that you're making a game with a gun turret. It's currently pointed at 359 degrees and the user yanks the joystick to the right: now it's pointed at 361 degrees. If you implement the angular representation wrong, all of a sudden the gun will rapidly traverse nearly 360 degrees to the left.
I predict that the users will be ... disappointed with that bug.
There are all sorts of issues that come up with Euler angle representations of the frame of reference that are important in games, simulations and real device control. Gimbal lock is a serious problem in actual rotating device control (it was a problem with camera pan / tilt devices in my life). The "rapid rotation" bug was a very nasty issue in a small boat autopilot system once upon a time - imagine wrapping a steel cable very tightly around the wheel house (i.e., you don't want to be standing there).
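The usual fix for that "rapid rotation" bug is to take the shortest signed angular difference before driving the turret (or rudder); a common sketch:

```python
def shortest_turn(current, target):
    # Signed shortest rotation (degrees) from current to target, in [-180, 180).
    return (target - current + 180) % 360 - 180

print(shortest_turn(359, 361))  # 2: a small turn right, not 358 degrees left
print(shortest_turn(10, 350))   # -20: turn left through 0, not +340
```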
There have been times where the normal math means you end up "traversing the circle" one or more times, and if you keep the math simple your angles might be greater than 360. Personally I like to normalize the angles to 0 to 360 or -180 to 180 after such operations, but it doesn't really matter much.
Sometimes the greater number might really represent something. To take a trivial example, imagine instructions to open a classic dial combination safe. You need to spin the dial around a couple of times, so the instructions could be:
turn(800); // Twice around plus another 20 degrees
turn(-500); // Once around the other direction plus 140 degrees
turn(40); // Dial in the last digit
In that context, taking the sin or cos would tell you something about the ultimate position of the dial, but you would lose the information about how many turns were involved.
One rotation around a circle is 360 degrees or 2pi radians.
Trigonometric functions such as sine and cosine will "wrap around" when they reach 360 degrees and act the same way as being at 0 degrees. Basically, the following occurs:
angle_in_unit_circle = angle mod 360
Also, some trigonometric functions such as tangent are not defined at certain angles, such as 90 and 270 degrees, where the tangent tends toward positive or negative infinity.
This "wrapping around" can be seen by representing the sine, cosine and tangent functions using a right triangle inscribed in a unit circle, and this behavior makes those functions periodic: they repeat their patterns over and over again.
Wikipedia has an extensive article on Trigonometric functions, so that might be worth taking a look at.
Usage
In terms of use, I can't quite think of a good example off the top of my head, except perhaps representing the location of a particle at a certain time in a polar coordinate system, where the angle θ depends on time t:
r(θ(t)) = t where θ(t) = t
for values of t from 0 to 720, which could then be represented in a Cartesian coordinate system as:
x(t) = r cos(θ(t)) = t cos(t)
y(t) = r sin(θ(t)) = t sin(t)
The particle will be moving in a spiral type movement, dependent on the time t. In this case, the sine and cosine of angles beyond 360 will be calculated.
(And my math is rusty, so if there are any errors in the equations above, please let me know!)
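For what it's worth, here is the spiral computed with the conventional polar-to-Cartesian mapping x = r·cos θ, y = r·sin θ (with r = θ = t); the distance from the origin grows linearly with t:

```python
import math

def spiral_point(t):
    # r = t, theta = t  =>  x = t*cos(t), y = t*sin(t)
    return (t * math.cos(t), t * math.sin(t))

for t in (0.0, math.pi, 2 * math.pi, 3 * math.pi):
    x, y = spiral_point(t)
    print(round(math.hypot(x, y), 3))  # radius equals t: 0.0, 3.142, 6.283, 9.425
```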
On a sine curve, Sin(720) == Sin(0) (etc), so I'd expect any decent implementations of those functions to handle degrees "greater than" 360. There's any number of reasons for arriving at an angle greater than 360 or less than 0.
Angles outside the range of "principal angles" [-180,180) are essentially aliases of each other (modulo 360 degrees) and have no physical distinction.
From a mathematical/engineering sense, if you have a process where the # of rotations is important and must be kept track of (e.g. a motor that is spinning back and forth), then 0 degrees and 720 degrees are not the same. Sine and cosine are just periodic functions so they have the same value every 360 degrees. If you have a particle undergoing uniform circular motion where x(t) = A cos (ωt + φ) and y(t) = A sin (ωt + φ), then the phase angle θ = (ωt + φ) is going to be whatever it is, whether 0 or 720 degrees or 82144.33 degrees or whatever.
So the functions cos(θ) and sin(θ) just get used to calculate the x and y coordinates, no matter what the value of θ is. It's not like you have a choice in the matter, if θ is 82144.33 degrees then you're going to want to calculate the sine and cosine of that angle.
I play a PC game called Garry's Mod, and there are moments when, while programming in it, I want a simple way to keep an object constantly moving in a circle. To do this I use the sine and cosine of a forever-increasing timer measuring the amount of time since the game launched.
The sine of T (time) gives the orbit path's X value, while the cosine of T gives the orbit path's Y value (X and Y being on a 3-dimensional coordinate plane, with Z not being used at the moment).
Example:
T=1000 ticks
X=sin(T)
Y=cos(T)
So X is about 0.8269 at that moment in time and Y is about 0.5624 (these functions take the angle in radians).
Now let's say the amount of time grows to 1500. X would be -0.9939019569066535 and Y = -0.11026740251372914.
In a nutshell, both values constantly oscillate between -1 and 1, which gives me the opportunity to multiply them by, say, 100, make the coordinate plane local to my character's position, and then tell the programmed expression to move an object along those coordinates; it will move in a constant circular path around me.
Tada. Who says you can't learn from video games?
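The same trick in a minimal, engine-independent sketch (the 100-unit radius and the centre are arbitrary choices):

```python
import math

def orbit_position(center_x, center_y, t, radius=100.0):
    # sin/cos wrap every 2*pi, so an ever-growing timer just keeps circling.
    return (center_x + radius * math.sin(t),
            center_y + radius * math.cos(t))

x, y = orbit_position(0.0, 0.0, 1000.0)
print(round(math.hypot(x, y), 6))  # 100.0 -- always exactly one radius from the centre
```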
Because sin(x) = sin(x mod 360°) and cos(x) = cos(x mod 360°), you can use any value in a calculation, but you could also normalize to the range [0°, 360°) or any other range of width 360°. Whether large angles have a well-defined meaning just depends on the usage.
Processors will likely reduce the calculation to a range of just 90° or even less, and derive all other values from this small range.
When will arguments greater than 360° occur?
They naturally occur in simulations of periodic time- or space-dependent functions.
Your question does not make much sense seeing as you seem to know the difference here:
No - you will never have to "compute" Sin(720), any more than you will need to "compute" Sin(0). Look at the definition of the sine function to fully understand what goes on under the covers - once that is understood, it makes total sense why Sin(0) = Sin(720). There's nothing magical going on, and (logically) no Angle = FullAngle % 360 step; it's all in the definition of what the function is supposed to do.
See Wikipedia.
@dta, I think folks are a little confused. You ask if it's ever "useful." I'd say it doesn't matter, because you just shift the angle into the proper range when performing the calculation. There are certainly cases where you need to know how far from 0 degrees an object has rotated, accounting for multiple rotations. But aside from those cases, it's more convenient to interpret angles in the normal 0-360 range. Most people build up an intuitive feel for which direction corresponds to angles in that range. What direction does 170,234 degrees point? The same as 314 degrees.
As @Chris Arguin and others said, whether the sine of an angle greater than 360° (or, for that matter, less than -360°) is useful to you depends on whether you need the information about rotations (or fractions thereof) that is represented by the difference between angle and angle % 360°.
Also, since you get the same answer, you'll save a little processing time if you call sin(angle) instead of sin(angle%360), especially if you are doing many computations in a loop.
OTOH, @Scottie T makes a good point that if it is important for someone to know where around a circle your angle points, people can generally intuit the position of an angle with an absolute value of 360 or less more easily than they can for larger angles.
There are many circumstances where angles outside of [0,360] are needed. I like the idea of a combination lock. Here one will often see both positive and negative angles outside of the simple [0,360] degree range.
Multiple angle formulas are often important in mathematics. Trig functions are used in places other than just triangles. They appear in a variety of places, Fourier series for example, or image compression schemes, or the solution of differential equations. Computationally, it is true that you can always use mod to reduce the range for a trig function to the default. But it is rarely true that angles will always be provided in that nominal range.
There definitely can be times when you might end up with an angle measure > 360 degrees because of some kind of calculation...but it would be identical to an infinite number of other angle measures, exactly one of which will be between 0 and 360. However, if you are coding a function, you should be able to handle this calculation yourself...not rely on the user to do the mod for you.
I.e. while it is true that sin(370) == sin(10), and the user could do this translation themselves, they may not want to for one reason or another (see the "bolt" example in the comments on the top-rated answer), so the function should be able to handle any value.
Angles higher than 360 degrees are also used e.g. to describe snowboard tricks:
http://en.wikipedia.org/wiki/List_of_snowboard_tricks#Spins
So you see, there are various real-world examples where higher angles are used to describe the rotation of an object.