I used OpenCV's camera calibration function to calibrate my camera. I captured around 50 images at different angles, with the pattern placed near the image borders as well.
The Cx and Cy values in the intrinsic matrix are around 300 px off. Is that alright? My average reprojection error is around 0.08, though.
Cx and Cy are the coordinates (in pixels) of the principal point in your image. Usually a good approximation is (image_width/2, image_height/2).
An average reprojection error of 0.08 pixel seems quite good.
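As a quick sanity check of that (a minimal sketch; cx and cy come from your intrinsic matrix, and width and height are the size of your calibration images):

// How far the estimated principal point is from the image centre, in pixels.
function principalPointOffset(cx, cy, width, height) {
  return { dx: cx - width / 2, dy: cy - height / 2 };
}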
Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to the color values perceived by humans. I have built a ray tracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly to do with the limited range of brightness values available in 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I recently developed an interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this:
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable, but flat looking images, even when surface normals are included in the calculation.
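In code, that loop might look roughly like this (a sketch only: rayThroughPixel, intersectScene, sub, normalize, dot, mulColors, scaleColor and setPixel are hypothetical helpers, and eye, objects, light, width and height are assumed scene variables, none of them from this post):

// For each pixel: trace a ray, find the nearest hit, then shade it with the light.
for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    const ray = rayThroughPixel(eye, x, y);             // ray from the eye through this pixel
    const hit = intersectScene(ray, objects);           // collision test against all objects
    if (!hit) continue;                                 // no collision: leave the background colour
    const toLight = normalize(sub(light.position, hit.point)); // vector from the hit point to the light
    const shade = Math.max(0, dot(hit.normal, toLight));        // optional surface-normal term
    setPixel(x, y, scaleColor(mulColors(light.color, hit.color), shade)); // light colour x object colour
  }
}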
Model Extensions
I was interested in extending this model by including distance in the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which form the boundary of the room; I have placed them at distances of order 10 to 100 from the camera.
There are some lights at a distance of order 10 from the camera.
What I have found is this:
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera, it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but radiant flux. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - radiant flux falling off by the square of the distance - applies only to the light from the point source to a surface, not to the light that's then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
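On the other part of the question (mapping intensities to 8-bit pixel values): displays and image formats are not linear in light intensity, so rendered values are normally exposure-scaled and gamma-encoded before being quantised. A minimal sketch, assuming a simple power-law gamma of 2.2 as a rough stand-in for the sRGB curve and an exposure constant you pick for the scene:

// Attenuate by the inverse square law (light -> surface only, as noted above),
// then gamma-encode before quantising to the 0-255 range.
function toPixel(linearIntensity, exposure, gamma) {
  const v = Math.min(1, Math.max(0, linearIntensity * exposure)); // scale and clamp to [0, 1]
  return Math.round(255 * Math.pow(v, 1 / gamma));                // perceptual (gamma) encoding
}

// e.g. for a light of intensity I at distance d from the surface:
// const received = I / (d * d);
// const pixel = toPixel(received, someExposure, 2.2);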
So ... what exactly are the parameters of body.rotation and body.angularVelocity in Phaser arcade physics?
The documentation for body.rotation just says "the amount the Body is rotated", without specifying the units (radians or degrees), the zero direction (the X axis?), or which direction is positive.
The docs for body.angle say "angle in radians" ... but again don't say which axis is the zero rotation, nor which direction is positive.
The documentation for angularVelocity says "angular velocity in pixels per second squared" which doesn't make ANY SENSE AT ALL. You can't measure rotation in pixels.
I'm trying to sync up a Phaser front-end with a server-based physics model that has its own coordinate system, so some clarity in the documentation would really make my life easier!
As far as I know, body.rotation is given in radians; if you want to work in degrees you should use body.angle.
As for the rotation direction, a higher value rotates the sprite clockwise. If the angle is 0 and the sprite is pointing up, it will point to the right after setting body.angle = 90.
angularVelocity is not for setting your sprite's rotation directly. As the name says, it sets an angular velocity. It's mainly used when you want the sprite to move in the direction it's facing.
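A small sketch of that distinction, assuming a Phaser 2-style setup with arcade physics enabled on sprite (the numbers are just examples):

sprite.body.angularVelocity = 45;   // an angular velocity (continuous turning), not a fixed angle
sprite.angle = 90;                  // sets the facing directly, in degrees

// move the sprite in the direction it is currently facing:
game.physics.arcade.velocityFromAngle(sprite.angle, 200, sprite.body.velocity);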
The picture that I need to use is 240 mm x 240 mm, with 120 pixels in each direction. Apparently this resolution is somehow good enough for an in-depth MRI brain scan, so my college lab is asking me to calculate its resolution. I know that the formula for resolution is meters / number of pixels, but I calculated 2 mm for the resolution, and I do not think that is correct. What am I doing wrong?
edit:
The question asks for the image resolution in terms of pixel size.
You can calculate it as follows:
120 pixels per 240 mm is 0.5 pixels per mm (120/240).
Resolution is usually expressed in dots per inch (DPI).
0.5 pixels per mm is 12.7 pixels per inch (0.5 × 25.4, since there are 25.4 mm in an inch).
So your resolution is 12.7 DPI.
Not much, but apparently good enough for MRI.
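The same arithmetic as a quick snippet (numbers taken from the question):

const sizeMm = 240, pixels = 120;        // 240 mm across, 120 pixels across
const pixelSizeMm = sizeMm / pixels;     // 2 mm per pixel (the value worked out in the question)
const pixelsPerMm = pixels / sizeMm;     // 0.5 pixels per mm
const dpi = pixelsPerMm * 25.4;          // 12.7 pixels per inch
console.log(pixelSizeMm, dpi);           // 2 12.7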
This is based on the question I asked here, but I think I might have asked the question in the wrong way. This is my problem:
I am writing a scientific ray tracer, i.e. not for graphics, although the concepts are identical.
I am firing rays from a horizontal plane toward a parabolic dish with a focus distance of 100m (and perfect specular reflection). I have a Target at the focal point of the dish. The rays are not fired perpendicularly from the plane but are perturbed by a certain angle to emulate the fact that the sun is not a point source but a disc in the sky.
However, the flux coming from the sun is not radially constant across the sun disc. It's hotter in the middle than at the edges. If you have ever looked at the sun on a hazy day you'll see a ring around the sun.
Because of the parabolic dish, the reflected image on the Target should be the image of the sun, i.e. it should be brighter (hotter, more flux) in the middle than at the edges. This is given by a graph of Intensity vs. Radial Distance from the centre.
There are two ways I can simulate this.
Firstly, uniform sampling: each ray is shot out from the plane with an equal (uniform) probability of taking any angle between zero and the size of the sun disc. I then scale the flux carried by the ray according to the corresponding flux value at that angle.
Secondly, arbitrary sampling: each ray is shot out from the plane according to the Intensity vs. Radial Distance distribution, so there will be fewer rays toward the outer edges than near the centre. This seems far more efficient to me, but I cannot get it to work. Any suggestions?
This is what I have done:
Uniform Sampling
phi = 2*pi*X_1
alpha = arccos (1-(1-cos(theta))*X_2)
x = sin(alpha)*cos(phi)
y = sin(alpha)*sin(phi)
z = -cos(alpha)
Where X_1 and X_2 are uniform random numbers and theta is the subtended angle of the solar disk.
Arbitrary Sampling
alpha = arccos(1-(1-cos(theta))*B_1)
Where B_1 is a random number generated from an arbitrary distribution using the algorithm on pg 27 here.
I am desperate to sort this out.
Your function drops to zero, and since the sun is not a smooth-surfaced object, that is probably wrong. Chances are there are photons being emitted from all parts of the sun in all directions.
But: what is your actual QUESTION?
You are looking for Monte-Carlo integration.
The key idea is: although you will sample fewer rays toward the outside of the disc, you will weight these rays more, so they contribute to the sum with a higher importance.
With uniform sampling you just sum your intensity values; with non-uniform sampling you divide each intensity by the value of the probability distribution of the ray that was shot (for a uniform distribution this value is a constant, so it doesn't change anything).
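A sketch of that in code: draw the angle alpha by inverting a tabulated CDF of the sun's radial profile, and return the pdf value so each ray's flux can be divided by it. The profile function (standing in for the Intensity vs. Radial Distance curve), the bin count and the helper names are assumptions, not details from the posts.

// Tabulate a discrete CDF over alpha, proportional to profile(alpha)*sin(alpha)
// (the sin(alpha) factor because alpha is an angle on the solar disc, sampled over solid angle).
function makeSampler(profile, theta, bins) {
  const pdf = [], cdf = [];
  let total = 0;
  for (let i = 0; i < bins; i++) {
    const a = (i + 0.5) * theta / bins;
    const w = profile(a) * Math.sin(a);
    pdf.push(w);
    total += w;
    cdf.push(total);
  }
  const binWidth = theta / bins;
  // Draw alpha by inverting the CDF; return the pdf so the ray's flux can be divided by it.
  return function sample(u) {                      // u: uniform random number in [0, 1)
    const target = u * total;
    let i = 0;
    while (i < bins - 1 && cdf[i] < target) i++;
    const alpha = (i + 0.5) * binWidth;
    const density = pdf[i] / (total * binWidth);   // normalised pdf value at this alpha
    return { alpha, weight: 1 / density };         // multiply the ray's contribution by this weight
  };
}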
I'm working on a small webapp in which I need to rotate shapes. I would like to achieve this by grabbing a point on a circle and dragging it around to rotate the image.
Here's a quick illustration to help explain things:
My main circle can be dragged anywhere on the canvas. I know its radius (r) and where 12 o'clock (p0) will always be (cx, cy - r). What I need to know is what degree p1 will be (0-360º) so I can rotate the contents of the main circle accordingly with Raphael.rotate().
I've run through a bunch of different JavaScript formulations to find this (example), but none seem to give me values between 0-360, and my basic math skills are woefully deficient.
The Color Picker demo (sliding the cursor along the ring on the right) has the behavior I want, but even after poring over the source code I can't seem to replicate it accurately.
Anything to point me in the correct direction would be appreciated.
// Angle of p1 around the circle's center, measured in degrees from the
// positive X axis (horizontal); with screen coordinates (y growing downward) it increases clockwise
( Math.atan2(p1.y-cy,p1.x-cx) * 180/Math.PI + 360 ) % 360
With this formula the 12 o'clock point p0 comes out at 270° (Math.atan2 gives -90°, which the + 360 and % 360 map into the 0-360 range). See Math.atan2 for more details.
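Building on that formula, a small helper that returns the rotation to pass to Raphael.rotate(), measured clockwise from the 12 o'clock point p0 (screen coordinates with y growing downward; the function name is just illustrative):

function rotationFromTwelve(cx, cy, p1) {
  const deg = Math.atan2(p1.y - cy, p1.x - cx) * 180 / Math.PI; // angle from the positive X axis
  return (deg + 90 + 360) % 360;  // shift so 12 o'clock (p0) maps to 0 and keep the result in 0-360
}
// e.g. a point at 3 o'clock gives 90, at 6 o'clock 180, at 9 o'clock 270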