How to convert from one co-ordinate system to another (graphics)

I've been having issues with this for a little while now. I feel like I should know this but I can't for the life of me remember.
How can I map the screen pixels to their respective 'graphical' x,y positions? The co-ordinate systems have been configured to start at the bottom left (0,0) and increase to the top-right.
I want to be able to zoom, so I know that I need to factor the zoom distance into the answer.
Screen
|\ Some Quad
| \--------|\Qx
| \ Z | \
| \ \|Qy
\ |
Sx\ |Sy
\|
I want to know which pixels on my screen will have the quad on it. Obviously as Z decreases, the quad will occupy more of the screen, and as Z increases it will occupy less, but how exactly are these calculated?
For further clarification, I want to know how I can map these screen pixels onto the 'graphical' co-ordinates, factoring the zoom level into the equation.
Thanks for any help.

Use the zoom factor as a multiplier against the coordinates and/or screen size.
For example, if you have a 100x150 pixel rectangle, when zoomed in to 150%, the size of the rectangle should be 150x225.
An equation for this is:
h = height
w = width
z = percent zoom (100% = 1.00)
new width:  W = w * z
new height: H = h * z
To map screen pixels, apply more basic mathematical principles. The relative coordinates depend entirely on the center of the zoom. This is very easy if everything zooms from the exact center; if zooming from elsewhere (e.g. stretching the object from a corner or a non-central coordinate), you must apply an offset to your equation.
Zooming a rectangle from its center point is easy: divide the change in rectangle width by 2, and add it to the left and right coordinate values (the number may be negative). Do the same for the height.
Zooming the rectangle from a coordinate that is NOT its exact center, but is still within the bounds of the rectangle, requires an offset. Simply determine what percentage of the height and width change should be applied to each side of the rectangle. Sides closer to the zoom point receive a smaller share of the change.
When the zoom point lies outside the rectangle, the distance from the zoom point must also be taken into account. This offset moves the entire rectangle in addition to scaling it.
Get a large piece of paper and draw up some visualizations. That always helps. =)
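To make the center/offset discussion concrete, here is a minimal Python sketch (the function and variable names are mine, not from the answer) that scales an axis-aligned rectangle about an arbitrary zoom point; zooming about a non-central point produces the offsets described above automatically:

def zoom_rect(rect, z, cx, cy):
    # Scale an axis-aligned rectangle (left, bottom, right, top)
    # by factor z about the zoom point (cx, cy).
    left, bottom, right, top = rect
    # Each corner moves toward/away from the zoom point in proportion to z.
    return (cx + (left - cx) * z,
            cy + (bottom - cy) * z,
            cx + (right - cx) * z,
            cy + (top - cy) * z)

# Example: a 100x150 rectangle zoomed to 150% about its own center.
print(zoom_rect((0, 0, 100, 150), 1.5, 50, 75))   # (-25.0, -37.5, 125.0, 187.5)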

If (xk, yk) is the center before zooming, the original size is (Sx, Sy), and the zoom factor Z is in (0, 1], the new size will be (Qx, Qy) = (Sx*(1-Z), Sy*(1-Z)), centered on (xk, yk), which means the screen coordinates are:
rectangle: xk - Qx/2, yk - Qy/2, xk + Qx/2, yk + Qy/2
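For example, plugging made-up numbers into this formula (none of these values come from the question):

# Sx, Sy = 200, 100; zoom factor Z = 0.25; center (xk, yk) = (320, 240)
Qx, Qy = 200 * (1 - 0.25), 100 * (1 - 0.25)              # (150.0, 75.0)
rect = (320 - Qx/2, 240 - Qy/2, 320 + Qx/2, 240 + Qy/2)
print(rect)                                              # (245.0, 202.5, 395.0, 277.5)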
Hope that helps.

Related

python3 reportlab drawImage: how to center the image height

from reportlab.lib.units import inch
from reportlab.pdfgen import canvas

c = canvas.Canvas('data.pdf', pagesize=[width*inch, height*inch])
c.drawImage('dataptah', x, y, width, height)
c.save()
I can't center the picture vertically, so I need to know the x and y units, or what to put there.
First of all, you can use units for the x, y, width, and height values in the drawImage call, just as you did for the pagesize. Thus, if you know the aspect ratio of your image, you can calculate these values for the exact centered position.
The reference documentation mentions two other parameters that could be helpful:
preserveAspectRatio=True keeps the aspect ratio of the image even if the box specified with x, y, width, height has a different aspect ratio.
anchor='c' specifies the anchor position of the image, center in this case.
Thus, if you add these two parameters and center the box on the page, then your image should appear centered as well. Here is an example:
c.drawImage('dataptah',
            width/4*inch, height/4*inch,
            width/2*inch, height/2*inch,
            preserveAspectRatio=True, anchor='c')
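Putting it together, a self-contained sketch could look like this (the page dimensions are placeholder values, and 'dataptah' is the image path from the question):

from reportlab.lib.units import inch
from reportlab.pdfgen import canvas

width, height = 8.5, 11                       # page size in inches (placeholders)
c = canvas.Canvas('data.pdf', pagesize=[width*inch, height*inch])
c.drawImage('dataptah',                       # image path from the question
            width/4*inch, height/4*inch,      # lower-left corner of the box
            width/2*inch, height/2*inch,      # box width and height
            preserveAspectRatio=True, anchor='c')
c.save()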

How to clamp the output of CIPerspectiveTransformWithExtent filter?

I'm using a CIPerspectiveTransformWithExtent filter to apply homographies (perspective warp) to images on OS X. So far, so good, and I can get the desired warping applied to my images.
I'm still struggling however with the border behavior. I would like the output of the filter to be clipped outside the original image domain:
I managed to do it for the lower and left borders by shifting the origin of the inputExtent rectangle by the correct amount. For example, if the lower left corner is projected to x = -10, then using extent.origin.x = 10 will correctly clip the left border;
on the other hand, the upper and right borders are always shown in the output image. For example, if the rightmost corner is projected to x = width + 10, setting the extent via extent.origin.x = 0, extent.size.width = width; does not work and the rightmost corner remains visible.
Am I doing anything wrong here? or maybe I'm not trying the right way to achieve my goal?

Why does the projection of an image over 3d points show this distortion?

I have a question regarding the projection of an image over a set of 3D points. The image is given to me as a JPG, together with position and attitude information of the camera relative to a cartesian coordinate system (Xc,Yc,Zc and yaw, pitch, roll), as well as the horizontal and vertical field of view (in degrees).
Points are given using solely their 3d position in the same coordinate system (Xp,Yp,Zp).
In my coordinate system, Z is up. To project the image onto the points, I
compute the vector from camera to each point
Vector3 c2p = (Xp,Yp,Zp)-(Xc,Yc,Zc);
rotate c2p according to my camera's attitude (quaternion):
Vector3 c2pCamFrame = getCamQuaternion().conjugate().rotate(c2p);
compute azimuth and elevation from the camera's "center ray" to the point:
float azimuth = atan2(c2pCamFrame.x(), c2pCamFrame.y());
float elevation = atan2(c2pCamFrame.z(),sqrt(pow(c2pCamFrame.x(),2)+pow(c2pCamFrame.y(),2)));
if azimuth and elevation are within the field of view, I assign the color of the corresponding pixel to the point.
This works almost perfectly, and the "almost" motivates my question. Let me show you:
I cannot figure out why the elevation of the projection is distorted. In the bottom right of the image, you can see that points outside the frustum (exceeding the elevation) actually become colored. This distortion is zero at an azimuth of 0 degrees and peaks at the left and right edges of the image, creating a pincushion ('pillow') distortion.
Why does this distortion appear? I'd love to understand this problem both in geometrical as well as mathematical terms. Thank you!
The field of view angles are only valid on the principal axes. But you can do it the other way around. I.e. calculate the x/y bounds from the angles:
maxX = tan(horizontal_fov / 2)
maxY = tan(vertical_fov / 2)
And check
if(abs(c2pCamFrame.x() / c2pCamFrame.z()) <= maxX
&& abs(c2pCamFrame.y() / c2pCamFrame.z()) <= maxY)
Additionally, you might want to check if the points are in front of the camera:
... && c2pCamFrame.z() > 0
This assumes a left-handed coordinate system.
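Here is a minimal Python sketch of the same check (names and example values are mine; it assumes the camera looks down +z, i.e. the left-handed convention the answer mentions):

import math

def in_frustum(p_cam, hfov_deg, vfov_deg):
    # True if a camera-frame point (x, y, z) lies inside the view frustum.
    x, y, z = p_cam
    if z <= 0:                                   # behind the camera
        return False
    max_x = math.tan(math.radians(hfov_deg) / 2)
    max_y = math.tan(math.radians(vfov_deg) / 2)
    return abs(x / z) <= max_x and abs(y / z) <= max_y

# A point nearly straight ahead passes, a point far off to the side fails.
print(in_frustum((0.1, 0.0, 1.0), 60, 45))   # True
print(in_frustum((2.0, 0.0, 1.0), 60, 45))   # False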

How to find custom shape speicific area?

Please see the following image: the blue rectangle is the custom shape's bounds, and the custom shape is a shoe. I want to find the area of the portion marked in the image, and I want that area in the form of a rectangle.
Is there any path iterator concept for this?
Note
The custom shape is derived from an image of the same size.
I would do it like this:
1. Create a table for all bounding-box-rect perimeter lines.
Each value in it will represent the empty space length from the border line to the shape,
something like this:
The values are found by simple image scanning until the first non-space color is found.
2. Now brute-force find the biggest rectangle area.
x,y = top left corner
for xs = 1 to bounding box width
now scan the max valid height of the rectangle from x to x + xs (x grows to the right)
// it should be the min of y0[x..x+xs]
remember the biggest valid area/size combination
Do this for all 4 combinations (start from the other corners).
I know brute force is slow, but:
you can divide the perimeter lines not by pixels but by some coarser step instead;
also I am sure this can be optimized somehow,
for example by taking the derivative of the perimeter, finding the extremes, and checking backwards from them;
when the size starts shrinking, stop ...
Of course, keep in mind that on complicated shapes this optimization will not work ...
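Here is a rough Python sketch of the brute-force idea, assuming the shape is available as a boolean mask (True = inside the shape); unlike the answer it simply tries every top-left corner instead of starting from the four bounding-box corners:

import numpy as np

def largest_inside_rect(mask):
    # Brute-force the largest axis-aligned rectangle fully inside the shape.
    # Returns (row, col, height, width) of the best rectangle found.
    rows, cols = mask.shape
    # width_right[r, c] = consecutive inside-shape cells from (r, c) to the right
    width_right = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        run = 0
        for c in range(cols - 1, -1, -1):
            run = run + 1 if mask[r, c] else 0
            width_right[r, c] = run
    best = (0, 0, 0, 0)
    for r in range(rows):
        for c in range(cols):
            min_w = cols
            for h in range(1, rows - r + 1):
                min_w = min(min_w, width_right[r + h - 1, c])
                if min_w == 0:
                    break
                if h * min_w > best[2] * best[3]:
                    best = (r, c, h, min_w)
    return best

# Tiny example mask; the best rectangle is 2 rows x 3 columns at (0, 0).
mask = np.array([[1, 1, 1, 0],
                 [1, 1, 1, 1],
                 [1, 1, 0, 0]], dtype=bool)
print(largest_inside_rect(mask))   # (0, 0, 2, 3)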

Find size of inner rect of a circle

I have a circle, say a radius of 10, and I can find the outer bounding rect easily enough since its width and height are equal to the diameter, but what I need is the inner bounding rect. Does anyone know how to calculate the difference in size between the outer and inner bounding rectangles of a circle?
Here's an image to illustrate what I'm talking about. The red rectangle is the outer bounding box of the circle, which I know. The yellow rectangle is the inner bounding rectangle of the circle, which I need to find the difference in size from the outer rectangle.
My first guess at finding the difference is to find one of the four points of the inner rectangle by finding that point along the circumference of the circle, each point being at a 45 degree offset, and then just find the difference between that point and the related point in the larger rect.
EDIT: Based off of the solution given by Steve B. I've come up with the algorithm to get what I want which is the following:
r*2 - sqrt(2)*r
If the radius is r, the outer rectangle's side will be r*2.
The inner rectangle will have a side equal to sqrt(2)*r.
So the diff will be equal to r*2 - sqrt(2)*r.
You know the radius, and you have a triangle with a 90 degree corner at the center of your circle whose other two points are two adjacent corners of your inner square.
Now if you know two sides of a triangle you can use Pythagoras:
x^2 = a^2 + b^2 = 2*r^2
So
x = sqrt(2 * r^2)
With r the radius of the circle, x the side of the square.
It's simple geometry: the outer rectangle has an edge length equal to 2*R, while the inner rectangle has a diagonal equal to 2*R. So the edge of the inner rectangle is equal to sqrt(2)*R. The ratio of the outer rectangle's edge to the inner's is obviously sqrt(2).
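A quick Python check of these formulas, using the radius of 10 from the question:

import math

r = 10                        # radius from the question
outer = 2 * r                 # side of the outer bounding square
inner = math.sqrt(2) * r      # side of the inscribed square (its diagonal is 2*r)
print(outer, inner, outer - inner)   # 20 14.142135623730951 5.857864376269049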
