My matrix math is a bit rusty, so I'm having some trouble figuring out the proper transformation process that I need to apply here.
I have a full-screen quad with coordinates ranging from [-1, 1] in both directions. I texture this quad with a non-square texture and then scale my modelview matrix to resize and preserve the aspect ratio. I want to also rotate the resized quad, but I'm getting stretched/distorted results.
Here's the process that I'm going through:
_gl.viewport(0, 0, _gl.viewportWidth, _gl.viewportHeight); // full-screen viewport
mat4.rotate(_modelview_matrix, degToRad(-1.0 * _desired_rotation), [0, 0, 1]); // rotate around z
mat4.scale(_modelview_matrix, [_shape.width / _gl.viewportWidth, _shape.height / _gl.viewportHeight, 1]); // scale down
Note that this is implemented in WebGL, but the process should be universal.
For simplicity's sake, this is all being done at the origin. I'm pretty sure I'm missing some relationship between the scaling down and the rotation, but I'm not sure what it is.
If I want the size of the quad to be _shape.width, _shape.height and have a rotation by an arbitrary angle, what am I missing?
Thanks!
You can use arbitrary combinations of projection and modelview. So just make your life easy: use a projection that retains the window's aspect ratio, so that modelview coordinates don't get anisotropically distorted. Then just draw the texture onto a quad with the same edge ratio.
This is in C, but the concept should be transferable easily enough.
typedef struct Projection {
    enum { perspective, ortho } type;
    union {
        GLfloat fov;
        GLfloat size;
    };
    GLfloat near;
    GLfloat far;
} Projection;

Projection projection;

GLuint texture_ID;   /* the texture bound in display() */
GLuint tex_width;
GLuint tex_height;
GLuint viewport_width;
GLuint viewport_height;

/*...*/

void display()
{
    GLfloat viewport_aspect;
    if (!viewport_width || !viewport_height)
        return;
    viewport_aspect = (float)viewport_width / (float)viewport_height;

    glViewport(0, 0, viewport_width, viewport_height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    switch (projection.type) {
    case ortho:
        /* widen the horizontal extents by the aspect ratio so one
           modelview unit covers the same number of pixels in x and y */
        glOrtho(-0.5 * viewport_aspect * projection.size,
                 0.5 * viewport_aspect * projection.size,
                -0.5 * projection.size,
                 0.5 * projection.size,
                -projection.near,
                 projection.far);
        break;  /* don't fall through into the perspective case */

    case perspective:
        /* glFrustum requires a positive near plane distance */
        glFrustum(-0.5 * viewport_aspect * projection.near * projection.fov,
                   0.5 * viewport_aspect * projection.near * projection.fov,
                  -0.5 * projection.near * projection.fov,
                   0.5 * projection.near * projection.fov,
                   projection.near,
                   projection.far);
        break;

    default:
        return;
    }

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    {
        /* give the quad the same edge ratio as the texture */
        GLfloat const T = 0.5 * (float)tex_width / (float)tex_height;
        GLfloat quad[4][4] = {
        /*    X     Y    U    V */
            {-T, -0.5, 0.0, 0.0},
            { T, -0.5, 1.0, 0.0},
            { T,  0.5, 1.0, 1.0},
            {-T,  0.5, 0.0, 1.0},
        };
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        /* stride is one row of quad; texcoords start at column 2 */
        glVertexPointer(2, GL_FLOAT, sizeof(quad[0]), &quad[0][0]);
        glTexCoordPointer(2, GL_FLOAT, sizeof(quad[0]), &quad[0][2]);

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture_ID);
        glDrawArrays(GL_QUADS, 0, 4);
    }
}
I've never used WebGL, so no example from me here, but you should never do aspect-ratio correction in your modelview matrix. That's something for the projection matrix to do.
An example on how to do it can be found here.
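Translating that back to the original WebGL setup: keep sizing and rotation in the modelview, and put the aspect compensation into the projection instead. A minimal numpy sketch of the matrix composition (the viewport and shape numbers are made up for illustration):

import numpy as np

def scale2d(sx, sy):
    # homogeneous 2D scale
    return np.diag([sx, sy, 1.0])

def rot2d(theta):
    # homogeneous 2D rotation, counterclockwise
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

viewport_w, viewport_h = 800, 600   # assumed viewport size
shape_w, shape_h = 200, 100         # assumed quad size in pixels

# projection: compensate for the non-square viewport, so that after it
# one unit covers the same number of pixels in x and y
P = scale2d(viewport_h / viewport_w, 1.0)

# modelview: size the quad in those isotropic units, then rotate
S = scale2d(shape_w / viewport_h, shape_h / viewport_h)
R = rot2d(np.radians(30.0))

M = P @ R @ S  # applied right to left: scale, rotate, aspect-correct
print(M @ np.array([1.0, 1.0, 1.0]))  # transformed quad corner (1, 1)

Because P cancels the viewport's anisotropy, the rotation and sizing below it effectively happen in a space where one unit spans the same number of pixels in x and y, so the rotated quad no longer gets sheared.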
I'm trying to create a shader that decides the silhouette of a sprite from a texture (sampler2D) given as a shader parameter, clipping anything outside its bounds.
And so far this is what I've been able to come up with:
shader_type canvas_item;

uniform sampler2D clip_texture;

void fragment(){
    vec4 color = texture(clip_texture, UV);
    if (color.a == 0.0)
        COLOR = vec4(0.0, 0.0, 0.0, 0.0);
    else
        COLOR = texture(TEXTURE, UV);
}
But this approach seems to squeeze the clip texture to the sprite's resolution.
I'm guessing that UV ranging from 0.0 to 1.0 for both textures has something to do with this.
So how do I fix this?
Edit: further explanation on what I'm trying to achieve
Assume an image placed on top of another image (the image placed on top will be passed as a shader parameter).
Now if we crop only the overlapping part:
Now, of course, we should also be able to move the location of the image placed on top (maybe by passing x-y shader parameters?) and get the same effect:
This is what I'm trying to achieve using shaders
The problem was that the 0.0 - 1.0 range is the UV of icon.png, so UV 1.0 maps the extents of the clip texture onto 64x64 pixels, shrinking it.
shader_type canvas_item;

uniform sampler2D clip_texture;
uniform vec2 clip_texture_size;

void fragment(){
    vec2 texture_size = 1.0 / TEXTURE_PIXEL_SIZE;
    vec2 texture_ratio = texture_size / clip_texture_size;
    vec2 clip_UV = UV * texture_ratio;
    vec4 color = texture(clip_texture, clip_UV);
    if (color.a == 0.0)
        COLOR = vec4(0.0, 0.0, 0.0, 0.0);
    else
        COLOR = texture(TEXTURE, UV);
}
For icon.png, TEXTURE_PIXEL_SIZE is equal to vec2(1.0/64.0, 1.0/64.0). The reciprocal of this is the texture size: 1.0 / TEXTURE_PIXEL_SIZE. If you divide the texture size by the clip texture size, this gives the ratio between the texture and the clip texture.
You can determine the size of a texture with textureSize() instead of using a uniform, but it is only available in GLES 3.
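As a quick sanity check of that arithmetic in plain Python (assuming the 64x64 icon.png and a hypothetical 256x256 clip texture):

# assumed sizes: 64x64 sprite texture (icon.png), 256x256 clip texture
texture_pixel_size = (1 / 64, 1 / 64)  # Godot's TEXTURE_PIXEL_SIZE
texture_size = (1 / texture_pixel_size[0], 1 / texture_pixel_size[1])  # (64, 64)
clip_texture_size = (256, 256)

texture_ratio = (texture_size[0] / clip_texture_size[0],
                 texture_size[1] / clip_texture_size[1])  # (0.25, 0.25)

uv = (1.0, 1.0)  # the sprite's far corner
clip_uv = (uv[0] * texture_ratio[0], uv[1] * texture_ratio[1])
print(clip_uv)  # (0.25, 0.25): the sprite spans only a quarter of the
                # clip texture, which keeps its native pixel scale
                # instead of being squeezed down to 64x64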
Here is a version with an offset uniform per your edit:
shader_type canvas_item;

uniform sampler2D clip_texture;
uniform vec2 clip_texture_size;
uniform vec2 clip_texture_offset;

void fragment(){
    vec2 texture_size = 1.0 / TEXTURE_PIXEL_SIZE;
    vec2 texture_ratio = texture_size / clip_texture_size;
    vec2 clip_UV = UV * texture_ratio + clip_texture_offset;
    vec4 color;
    if (clip_UV.x > 1.0 || clip_UV.x < 0.0 || clip_UV.y > 1.0 || clip_UV.y < 0.0) {
        color = vec4(0.0, 0.0, 0.0, 0.0);
    }
    else {
        color = texture(clip_texture, clip_UV);
    }
    if (color.a == 0.0)
        COLOR = vec4(0.0, 0.0, 0.0, 0.0);
    else
        COLOR = texture(TEXTURE, UV);
}
I have a set of points (X and Y coordinates) of a 2D geometry which is rotated at a certain angle. The angle is unknown. I want to reconstruct the geometry without the rotation, but I have no clue how to go about it.
The coordinates of the rotated geometry are below, and the image also shows the rotated geometry:
x = [-2.38, -1.68, -0.97, -0.26, -0.97, -0.26, 0.45, 1.15, 1.86, 2.57, 1.86, 1.15, 0.45, -0.26, -0.97, -1.68]
y = [-1.24, -0.53, 0.18, 0.88, 1.59, 2.3, 1.59, 0.88, 0.18, -0.53, -1.24, -0.53, 0.18, -0.53, -1.24, -1.94]
Geometry's centroid is placed at (0, 0).
I want to make my geometry like in the picture below,
My question:
I want to rotate my geometry so that the part's principal axes align with the axes of the coordinate system at the origin. How can I do that? I could not find any way to undo the rotation. Kindly give me some suggestions and help me with code.
OK, I worked out some code; this seems to be working for me. sbNative is just there for logging purposes; you can install it or delete the affected lines.
By the way, I'm rotating it around the center at point (0, 0). If you wish to change that, do it in the call to rotate; you'll find it.
EDIT: This code is by no means optimized; it is written very explicitly so it's easy to understand, and I didn't bother speeding it up because it's not terrible.
from sbNative.debugtools import log, cleanRepr
from numpy import arctan
import math


def rotate(origin, point, angle):
    """
    https://stackoverflow.com/questions/34372480/rotate-point-about-another-point-in-degrees-python
    Rotate a point counterclockwise by a given angle around a given origin.
    The angle should be given in radians.
    """
    qx = origin.x + math.cos(angle) * (point.x - origin.x) - math.sin(angle) * (point.y - origin.y)
    qy = origin.y + math.sin(angle) * (point.x - origin.x) + math.cos(angle) * (point.y - origin.y)
    return qx, qy


@cleanRepr()
class Point:
    def __init__(self, *pts):
        pt_names = "xyzw"
        for n, v in zip(pt_names[:len(pts)], pts):
            self.__setattr__(n, v)


x = [-2.38, -1.68, -0.97, -0.26, -0.97, -0.26, 0.45, 1.15, 1.86, 2.57, 1.86, 1.15, 0.45, -0.26, -0.97, -1.68]
y = [-1.24, -0.53, 0.18, 0.88, 1.59, 2.3, 1.59, 0.88, 0.18, -0.53, -1.24, -0.53, 0.18, -0.53, -1.24, -1.94]

coordinates = list(zip(x, y))
points = [Point(*coors) for coors in coordinates]

# consecutive point pairs, closing the polygon with (last, first)
point_pairs = [(points[i], points[i + 1]) for i in range(len(points) - 1)] + \
              [(points[-1], points[0])]

# angle of each edge against the y axis (assumes no edge is horizontal,
# which would make the denominator zero)
angles = [abs(arctan((p1.x - p2.x) / (p1.y - p2.y))) for p1, p2 in point_pairs]

# take the most common edge angle as the rotation of the part
angle_amounts = {}
for a in set(angles):
    angle_amounts[a] = angles.count(a)
final_rotation_angle = max(angle_amounts, key=angle_amounts.get)

new_points = [rotate(Point(0, 0), p2, final_rotation_angle) for p2 in points]
log(new_points)

new_x_coors, new_y_coors = zip(*new_points)

import matplotlib.pyplot as plt
plt.scatter(new_x_coors, new_y_coors)
plt.show()
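If the most-common-edge-angle heuristic doesn't suit your data, the "principal axes" wording of the question suggests another route: eigendecompose the covariance matrix of the points and rotate by the angle of the dominant eigenvector. A sketch of that alternative (not the code above), reusing the x and y lists:

import numpy as np

# x, y: the coordinate lists from the question, centroid already at (0, 0)
pts = np.column_stack([x, y])

# principal axes = eigenvectors of the 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))

# angle of the dominant axis; rotate by its negative to align it with x
major = eigvecs[:, np.argmax(eigvals)]
theta = np.arctan2(major[1], major[0])
c, s = np.cos(-theta), np.sin(-theta)
R = np.array([[c, -s], [s, c]])

aligned = pts @ R.T  # rotate all points at once

Note that for near-symmetric shapes like this one the two eigenvalues can be close, so the dominant axis may be ambiguous.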
I want to draw contours around the concentric ellipses shown in the image appended below. I am not getting the expected result.
I have tried the following steps:
Read the Image
Convert Image to Grayscale.
Apply GaussianBlur
Get the Canny edges
Draw the ellipse contour
Here is the source code:
import cv2

target = cv2.imread('./source image.png')
# imread returns a 3-channel BGR image, so convert from BGR, not BGRA
targetgs = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
targetGaussianBlurGreyScale = cv2.GaussianBlur(targetgs, (3, 3), 0)
canny = cv2.Canny(targetGaussianBlurGreyScale, 30, 90)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
_, contours, _ = cv2.findContours(close, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    for c in contours:
        if len(c) >= 50:
            hull = cv2.convexHull(c)
            cv2.ellipse(target, cv2.fitEllipse(hull), (0, 255, 0), 2)
cv2.imshow('mask', target)
cv2.waitKey(0)
cv2.destroyAllWindows()
The image below shows the Expected & Actual result:
Source Image:
The algorithm can be simple:
Convert RGB to HSV, split, and work with the V channel.
Threshold to remove all colored lines.
HoughLinesP to remove the straight (non-colored) lines.
dilate + erode to close the holes in the ellipses.
findContours + fitEllipse.
Result:
With the new image (added black curve) my approach does not work. It seems that you need to use Hough ellipse detection instead of findContours + fitEllipse.
OpenCV doesn't have an implementation, but you can find one here or here.
If you aren't afraid of C++ code (for the OpenCV library, C++ is more expressive), then:
cv::Mat rgbImg = cv::imread("sqOOE.jpg", cv::IMREAD_COLOR);
cv::Mat hsvImg;
cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> chans;
cv::split(hsvImg, chans);

// keep only the dark pixels (the black ellipse outlines)
cv::threshold(255 - chans[2], chans[2], 200, 255, cv::THRESH_BINARY);

// find straight segments and erase them from the mask
std::vector<cv::Vec4i> linesP;
cv::HoughLinesP(chans[2], linesP, 1, CV_PI / 180, 50, chans[2].rows / 4, 10);
for (auto l : linesP)
{
    cv::line(chans[2], cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar::all(0), 3, cv::LINE_AA);
}

// close the holes the erased lines left in the ellipse outlines
cv::dilate(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 4);
cv::erode(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 3);

std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(chans[2], contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++)
{
    if (contours[i].size() > 4) // fitEllipse needs at least 5 points
    {
        cv::ellipse(rgbImg, cv::fitEllipse(contours[i]), cv::Scalar(255, 0, 255), 2);
    }
}
cv::imshow("rgbImg", rgbImg);
cv::waitKey(0);
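Since the question's code is Python, here is a rough translation of the same pipeline (a sketch: thresholds and iteration counts are carried over from the C++ above, the file name is assumed):

import cv2
import numpy as np

rgb_img = cv2.imread('sqOOE.jpg', cv2.IMREAD_COLOR)
hsv_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2HSV)
v = hsv_img[:, :, 2]

# keep only the dark pixels (the black ellipse outlines)
_, mask = cv2.threshold(255 - v, 200, 255, cv2.THRESH_BINARY)

# erase straight segments (the axes) found by HoughLinesP
lines = cv2.HoughLinesP(mask, 1, np.pi / 180, 50,
                        minLineLength=mask.shape[0] // 4, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(mask, (x1, y1), (x2, y2), 0, 3, cv2.LINE_AA)

# close the holes the erased lines left in the outlines
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask = cv2.dilate(mask, kernel, iterations=4)
mask = cv2.erode(mask, kernel, iterations=3)

# [-2] picks the contour list in both OpenCV 3.x and 4.x
contours = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in contours:
    if len(c) > 4:  # fitEllipse needs at least 5 points
        cv2.ellipse(rgb_img, cv2.fitEllipse(c), (255, 0, 255), 2)

cv2.imshow('rgbImg', rgb_img)
cv2.waitKey(0)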
When I use the kNN algorithm in sklearn, I can get the nearest neighbors within a radius I specify, i.e. it returns a circular region of nearest neighbors within that radius. Is there an implementation where you can specify two radius values to return an elliptical region of nearest neighbors?
You can specify a custom distance metric in NearestNeighbors:
import math
from sklearn.neighbors import NearestNeighbors

# aspect of the axes a, b of the ellipse (a and b are your ellipse's semi-axes)
aspect = b / a
dist = lambda p0, p1: math.sqrt((p1[0] - p0[0]) * (p1[0] - p0[0]) + (p1[1] - p0[1]) * (p1[1] - p0[1]) * aspect)
nn = NearestNeighbors(radius=1.0, metric=dist)
Or directly use the KDTree with a custom metric:
import math
import numpy as np
from sklearn.neighbors import KDTree, DistanceMetric

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])

# aspect of the axes a, b of the ellipse (a and b are your ellipse's semi-axes)
aspect = b / a
dist = DistanceMetric.get_metric('pyfunc', func=lambda p0, p1: math.sqrt((p1[0] - p0[0]) * (p1[0] - p0[0]) + (p1[1] - p0[1]) * (p1[1] - p0[1]) * aspect))
kdt = KDTree(X, leaf_size=30, metric=dist)

# now kdt allows queries with ellipses with aspect := b / a
kdt.query([[0.1337, -0.42]], k=6)  # query() expects a 2D array of points
Of course you can choose to apply any affine transformation in your distance metric to get rotation and scaling for oriented ellipses.
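For instance, here is a sketch of such a metric for an ellipse rotated by an angle theta; the parameters a, b, and theta are assumptions for illustration, not part of the code above:

import math

# assumed parameters: semi-axes a, b and orientation theta of the ellipse
a, b, theta = 2.0, 1.0, math.radians(30.0)

def oriented_ellipse_dist(p0, p1):
    # rotate the difference vector into the ellipse's frame...
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    c, s = math.cos(-theta), math.sin(-theta)
    u = c * dx - s * dy
    v = s * dx + c * dy
    # ...then normalize each axis by its semi-axis length; a value <= 1.0
    # means p1 lies inside the rotated ellipse centered on p0
    return math.sqrt((u / a) ** 2 + (v / b) ** 2)

Passing this as metric=oriented_ellipse_dist with radius=1.0 then returns exactly the neighbors inside that rotated ellipse.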
I'm using Cairo to draw figures. I found that Cairo uses an "absolute coordinate" system when drawing. That is a flexible and comfortable way to work, except for specifying the line_width. Because the aspect ratio of the image below is not 1:1, when the "absolute coordinates" are converted to "real coordinates", the widths of the lines are not the same.
WIDTH = 960
HEIGHT = 640
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)
ctx.scale(WIDTH, HEIGHT)
ctx.rectangle(0, 0, 1, 1)
ctx.set_source_rgb(255, 255, 255)
ctx.fill()
ctx.set_source_rgb(0, 0, 0)
ctx.move_to(0.5, 0)
ctx.line_to(0.5, 1)
ctx.move_to(0, 0.5)
ctx.line_to(1, 0.5)
ctx.set_line_width(0.01)
ctx.stroke()
What is the correct way to make the line_width come out the same for both lines in the output image?
Undo your call to ctx.scale() before calling stroke(), for example via:
ctx.save()
ctx.identity_matrix()
ctx.set_line_width(2)
ctx.stroke()
ctx.restore()
(The save()/restore() pair brings all your transformations back afterwards. Cairo interprets the line width in the user space in effect when stroke() is called, so under the identity matrix it is measured in device pixels.)
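Applied to the snippet from the question, that looks like this; the path is still built in the scaled coordinate system, only the stroke happens under the identity matrix (a sketch using the question's 960x640 setup):

import cairo

WIDTH = 960
HEIGHT = 640
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)
ctx.scale(WIDTH, HEIGHT)

# white background
ctx.rectangle(0, 0, 1, 1)
ctx.set_source_rgb(1, 1, 1)
ctx.fill()

# build the cross-hair path in scaled (unit) coordinates
ctx.set_source_rgb(0, 0, 0)
ctx.move_to(0.5, 0)
ctx.line_to(0.5, 1)
ctx.move_to(0, 0.5)
ctx.line_to(1, 0.5)

# stroke under the identity matrix: the width is in device pixels,
# so both lines come out equally thick
ctx.save()
ctx.identity_matrix()
ctx.set_line_width(2)
ctx.stroke()
ctx.restore()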