I'm trying to create a shader that takes a texture (sampler2D) as a shader parameter, uses it to define the silhouette of the sprite, and clips anything outside its bounds
And so far this is what I've been able to come up with:
shader_type canvas_item;
uniform sampler2D clip_texture;
void fragment(){
    vec4 color = texture(clip_texture, UV);
    if (color.a == 0.0)
        COLOR = vec4(0.0, 0.0, 0.0, 0.0);
    else
        COLOR = texture(TEXTURE, UV);
}
But this approach seems to squeeze the clip texture down to the sprite's resolution
I'm guessing that UV ranging from 0.0 to 1.0 for both textures has something to do with this
So how do I fix this?
Edit: further explanation on what I'm trying to achieve
Assume an image placed on top of another image (the image placed on top will be passed as a shader parameter)
Now if we crop only the overlapping part:
Now, of course, we should also be able to move the location of the image placed on top (maybe by passing x-y shader parameters?) and get the same effect:
This is what I'm trying to achieve using shaders
The problem was that the 0.0 - 1.0 UV range belongs to icon.png, so UV 1.0 maps the full extents of the clip texture onto the 64x64 sprite, shrinking it.
shader_type canvas_item;
uniform sampler2D clip_texture;
uniform vec2 clip_texture_size;
void fragment(){
    vec2 texture_size = 1.0 / TEXTURE_PIXEL_SIZE;
    vec2 texture_ratio = texture_size / clip_texture_size;
    vec2 clip_UV = UV * texture_ratio;
    vec4 color = texture(clip_texture, clip_UV);
    if (color.a == 0.0)
        COLOR = vec4(0.0, 0.0, 0.0, 0.0);
    else
        COLOR = texture(TEXTURE, UV);
}
For icon.png, TEXTURE_PIXEL_SIZE is equal to vec2(1.0/64.0, 1.0/64.0). The reciprocal of this is the texture size: 1.0 / TEXTURE_PIXEL_SIZE. If you divide the texture size by the clip texture size, this gives the ratio between the texture and the clip texture.
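To make the arithmetic concrete, here is a quick numeric check in Python (the 256x256 clip texture size is a hypothetical example):
# icon.png is 64x64, so TEXTURE_PIXEL_SIZE = (1/64, 1/64)
texture_size = 64.0                 # 1.0 / TEXTURE_PIXEL_SIZE
clip_texture_size = 256.0           # hypothetical clip texture size
texture_ratio = texture_size / clip_texture_size   # 0.25
# Sprite UV 1.0 now samples clip UV 0.25, i.e. a 64x64 region of the
# clip texture, instead of squeezing all 256 pixels into the sprite.
print(1.0 * texture_ratio)          # 0.25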
You can determine the size of a texture with textureSize() instead of using a uniform, but it is only available in GLES 3.
Here is a version with an offset uniform per your edit:
shader_type canvas_item;
uniform sampler2D clip_texture;
uniform vec2 clip_texture_size;
uniform vec2 clip_texture_offset;
void fragment(){
    vec2 texture_size = 1.0 / TEXTURE_PIXEL_SIZE;
    vec2 texture_ratio = texture_size / clip_texture_size;
    vec2 clip_UV = UV * texture_ratio + clip_texture_offset;
    vec4 color;
    if (clip_UV.x > 1.0 || clip_UV.x < 0.0 || clip_UV.y > 1.0 || clip_UV.y < 0.0) {
        color = vec4(0.0, 0.0, 0.0, 0.0);
    } else {
        color = texture(clip_texture, clip_UV);
    }
    if (color.a == 0.0)
        COLOR = vec4(0.0, 0.0, 0.0, 0.0);
    else
        COLOR = texture(TEXTURE, UV);
}
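Note that clip_texture_offset is expressed in the clip texture's UV units. If you would rather position the clip texture at a pixel offset on the sprite, here is a small Python sketch of the conversion (desired_pixel_pos and both sizes are made-up values for illustration):
# clip_UV = UV * texture_ratio + clip_texture_offset, and for a sprite
# pixel p, UV * texture_ratio = p / clip_texture_size. To make pixel
# desired_pixel_pos land on clip UV 0.0, solve for the offset:
desired_pixel_pos = 32.0                  # in sprite pixels (hypothetical)
clip_texture_size = 256.0                 # hypothetical clip texture width
clip_texture_offset = -desired_pixel_pos / clip_texture_size   # -0.125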
Related
I have 2D geometry with corresponding X and Y coordinates. How can I find the rotation angle (in degrees) of the geometry with respect to the X or Y axis? Suggestions are always welcome.
X = [0.71, 1.41, 2.12, 2.83, 2.12, 2.83, 3.54, 4.24, 4.95, 5.66, 4.95, 4.24, 3.54, 2.83, 2.12, 1.41]
Y = [-0.71, 0.0, 0.71, 1.41, 2.12, 2.83, 2.12, 1.41, 0.71, 0.0, -0.71, 0.0, 0.71, 0.0, -0.71, -1.41]
I have listed the coordinates above. My aim is to align the geometry's principal axis with the X or Y axis.
In order to find the principal axis, I applied Principal Component Analysis (PCA).
I created a dataframe of the X and Y coordinates. Then:
First, I centered the geometry using the mean of the X and Y coordinates.
data_centered1 = df_1.apply(lambda x: x-x.mean())
data_centered1
Then I separated the centered X and Y coordinates into two different lists.
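The variable first_coord used below is the stacked (n_points, 2) data matrix; a minimal sketch of how it could be assembled (the 'X'/'Y' column names are assumed):
import numpy as np

X1_centered = data_centered1['X'].to_numpy()   # assumed column name
Y1_centered = data_centered1['Y'].to_numpy()   # assumed column name
first_coord = np.column_stack([X1_centered, Y1_centered])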
Then to apply PCA, I followed these steps,
(1) Compute the covariance matrix
def create_covariance_matrix(data_matrix):
    # unbiased sample covariance: X^T X / (n - 1) for centered X
    n = data_matrix.shape[0]
    cov_matrix = (1 / (n - 1)) * np.dot(data_matrix.T, data_matrix)
    return cov_matrix

cov_matrix = create_covariance_matrix(first_coord)
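As a sanity check (not in the original), this should agree with numpy's built-in covariance, which also uses the 1/(n-1) normalization:
assert np.allclose(cov_matrix, np.cov(first_coord, rowvar=False))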
(2) Compute eigenvalues and eigenvectors from covariance matrix
eigenvalues, eigenvectors = np.linalg.eig(cov_matrix)
(3) Find the angle of the first principal component
# columns of `eigenvectors` are the eigenvectors; note that
# np.linalg.eig does not sort them by eigenvalue
alpha = np.degrees(np.arctan2(eigenvectors[1][1], eigenvectors[0][1]))  # quadrant-aware form of arctan(y/x)
beta = 90 - alpha
(4) Visualize a plot of the original geometry with the principal component
import matplotlib.pyplot as plt

plt.scatter(X1_coordinate, Y1_coordinate)
plt.plot([0, eigenvectors[0][1]], [0, eigenvectors[1][1]], marker='o')
plt.show()
(5) Rotate all points of the geometry by the angle obtained from the eigenvectors (either alpha or beta). This rotates the geometry so that its principal axis is aligned with the axes of the global reference system.
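No code is shown for step (5) above; here is a minimal sketch, assuming the centered data matrix first_coord and the angle alpha from step (3):
# rotate the centered points by -alpha so the principal axis lands on X
theta = np.radians(-alpha)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated_coord = first_coord @ R.T          # rotate each (x, y) row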
I followed these steps and it works fine!!
I need to extract the clipping norm of tff.model_update_aggregator.dp_aggregator from each iteration, in order to design my own optimizer in TensorFlow. I have checked that the clipping norm of tff.model_update_aggregator.dp_aggregator is built with the following default values:
gaussian_adaptive(
    noise_multiplier: float,
    clients_per_round: float,
    initial_l2_norm_clip: float = 0.1,
    target_unclipped_quantile: float = 0.5,
    learning_rate: float = 0.2,
    clipped_count_stddev: Optional[float] = None
) -> tff.aggregators.UnweightedAggregationFactory
The clipping_norm can be extracted with the following API:
clipping_norm = tff.aggregators.PrivateQuantileEstimationProcess.no_noise(
    initial_estimate=1.0,
    target_quantile=0.8,
    learning_rate=0.2)
In that case, I found that clipped_count_stddev in gaussian_adaptive is None, so how can I extract the clipping norm via tff.aggregators.PrivateQuantileEstimationProcess.no_noise?
I want to draw contours around the concentric ellipses shown in the image appended below. I am not getting the expected result.
I have tried the following steps:
Read the Image
Convert Image to Grayscale.
Apply GaussianBlur
Get the Canny edges
Draw the ellipse contour
Here is the source code:
import cv2
target=cv2.imread('./source image.png')
targetgs = cv2.cvtColor(target,cv2.COLOR_BGRA2GRAY)
targetGaussianBlurGreyScale=cv2.GaussianBlur(targetgs,(3,3),0)
canny=cv2.Canny(targetGaussianBlurGreyScale,30,90)
kernel=cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
close=cv2.morphologyEx(canny,cv2.MORPH_CLOSE,kernel)
_,contours,_=cv2.findContours(close,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE) # OpenCV 3.x; OpenCV 4.x returns only (contours, hierarchy)
if len(contours) != 0:
    for c in contours:
        if len(c) >= 50:
            hull=cv2.convexHull(c)
            cv2.ellipse(target,cv2.fitEllipse(hull),(0,255,0),2)
cv2.imshow('mask',target)
cv2.waitKey(0)
cv2.destroyAllWindows()
The image below shows the Expected & Actual result:
Source Image:
The algorithm can be simple:
Convert RGB to HSV, split, and work with the V channel.
Threshold to delete all the colored lines.
HoughLinesP to delete the non-colored (straight) lines.
dilate + erode to close the holes in the ellipses.
findContours + fitEllipse.
Result:
With the new image (black curve added), my approach does not work. It seems that you need to use Hough ellipse detection instead of "findContours + fitEllipse".
OpenCV doesn't have an implementation, but you can find one here or here.
If you aren't afraid of C++ code (for the OpenCV library, C++ is more expressive), then:
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat rgbImg = cv::imread("sqOOE.jpg", cv::IMREAD_COLOR);
cv::Mat hsvImg;
cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> chans;
cv::split(hsvImg, chans);
cv::threshold(255 - chans[2], chans[2], 200, 255, cv::THRESH_BINARY);
std::vector<cv::Vec4i> linesP;
cv::HoughLinesP(chans[2], linesP, 1, CV_PI/180, 50, chans[2].rows / 4, 10);
for (auto l : linesP)
{
    cv::line(chans[2], cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar::all(0), 3, cv::LINE_AA);
}
cv::dilate(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 4);
cv::erode(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 3);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(chans[2], contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++)
{
    if (contours[i].size() > 4)
    {
        cv::ellipse(rgbImg, cv::fitEllipse(contours[i]), cv::Scalar(255, 0, 255), 2);
    }
}
cv::imshow("rgbImg", rgbImg);
cv::waitKey(0);
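Since the rest of this thread uses Python, here is a rough Python equivalent of the same pipeline, as an untested sketch with the file name and thresholds carried over from the C++ above:
import cv2
import numpy as np

rgb_img = cv2.imread('sqOOE.jpg', cv2.IMREAD_COLOR)
hsv_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2HSV)
v = hsv_img[:, :, 2]

# keep only dark pixels, i.e. delete the colored lines
_, mask = cv2.threshold(255 - v, 200, 255, cv2.THRESH_BINARY)

# erase the straight segments that HoughLinesP finds
lines = cv2.HoughLinesP(mask, 1, np.pi / 180, 50,
                        minLineLength=mask.shape[0] // 4, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(mask, (x1, y1), (x2, y2), 0, 3, cv2.LINE_AA)

# dilate + erode to close the holes left in the ellipses
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask = cv2.dilate(mask, kernel, iterations=4)
mask = cv2.erode(mask, kernel, iterations=3)

contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
for c in contours:
    if len(c) > 4:  # fitEllipse needs at least 5 points
        cv2.ellipse(rgb_img, cv2.fitEllipse(c), (255, 0, 255), 2)

cv2.imshow('rgbImg', rgb_img)
cv2.waitKey(0)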
I am working on displaying a DICOM image, but I need to flip the incoming DICOM image about the x = y line. In other words, I want to rotate the image by 180 degrees about the x = y axis.
I have found setFlipOverOrigin() from vtkImageFlip, but it doesn't seem to work. Could anyone suggest a method, or how to use setFlipOverOrigin() correctly if it helps?
Thanks in advance.
Try using the vtkTransform class and apply a 180 degree rotation around the axis (1, 1, 0) => x = y = 1; z = 0:
void vtkTransform::RotateWXYZ (double angle, double x, double y, double z );
Create a rotation matrix and concatenate it with the current
transformation according to PreMultiply or PostMultiply semantics. The
angle is in degrees, and (x,y,z) specifies the axis that the rotation
will be performed around.
vtkSmartPointer<vtkTransform> rotation = vtkSmartPointer<vtkTransform>::New();
rotation->RotateWXYZ(180, 1.0, 1.0, 0.0);
// Note: vtkTransform is not a pipeline filter, so it has no input connection;
// pass it to a filter such as vtkImageReslice to apply it to your image.
rotation->Update();
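Since vtkTransform alone doesn't touch the image data, here is a rough Python/VTK sketch of applying it through vtkImageReslice (the reader and its path are placeholders; conveniently, a 180 degree rotation is its own inverse, so the direction of the reslice transform doesn't matter here):
import vtk

reader = vtk.vtkDICOMImageReader()           # stands in for your DICOM source
reader.SetDirectoryName('/path/to/dicom')    # hypothetical path

# 180 degrees around (1, 1, 0) swaps the x and y axes, flipping about x = y
transform = vtk.vtkTransform()
transform.RotateWXYZ(180, 1.0, 1.0, 0.0)

reslice = vtk.vtkImageReslice()
reslice.SetInputConnection(reader.GetOutputPort())
reslice.SetResliceTransform(transform)
reslice.SetInterpolationModeToLinear()
reslice.Update()                             # flipped image on GetOutput()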
My matrix math is a bit rusty, so I'm having some trouble figuring out the proper transformation process that I need to apply here.
I have a full-screen quad with coordinates ranging from [-1, 1] in both directions. I texture this quad with a non-square texture and then scale my modelview matrix to resize and preserve the aspect ratio. I want to also rotate the resized quad, but I'm getting stretched/distorted results.
Here's the process that I'm going through:
_gl.viewport(0, 0, _gl.viewportWidth, _gl.viewportHeight); // full-screen viewport
mat4.rotate(_modelview_matrix, degToRad(-1.0 * _desired_rotation), [0, 0, 1]); // rotate around z
mat4.scale(_modelview_matrix, [_shape.width / _gl.viewportWidth, _shape.height / _gl.viewportHeight, 1]); // scale down
Note that this is implemented in WebGL, but the process should be universal.
For simplicity's sake, this is all being done at the origin. I'm pretty sure I'm missing some relationship between the scaling down and the rotation, but I'm not sure what it is.
If I want the size of the quad to be _shape.width, _shape.height and have a rotation by an arbitrary angle, what am I missing?
Thanks!
You can use arbitrary combinations of projection and modelview. So just make your life easy: use a projection that retains the window's aspect ratio, so that modelview coordinates don't get anisotropically distorted, and then draw the texture onto a quad with the same edge ratio.
This is in C, but the concept should transfer easily enough.
typedef struct Projection {
    enum{perspective, ortho} type;
    union {
        GLfloat fov;
        GLfloat size;
    };
    GLfloat near;
    GLfloat far;
} Projection;
Projection projection;
GLuint tex_width;
GLuint tex_height;
GLuint viewport_width;
GLuint viewport_height;
/*...*/
void display()
{
    GLfloat viewport_aspect;
    if(!viewport_width || !viewport_height)
        return;
    viewport_aspect = (float)viewport_width/(float)viewport_height;
    glViewport(0, 0, viewport_width, viewport_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    switch(projection.type) {
    case ortho: {
        glOrtho(-0.5 * viewport_aspect * projection.size,
                 0.5 * viewport_aspect * projection.size,
                -0.5 * projection.size,
                 0.5 * projection.size,
                -projection.near,
                 projection.far );
        break; /* the missing break made ortho fall through into perspective */
    }
    case perspective: {
        glFrustum(-0.5 * viewport_aspect * projection.near * projection.fov,
                   0.5 * viewport_aspect * projection.near * projection.fov,
                  -0.5 * projection.near * projection.fov,
                   0.5 * projection.near * projection.fov,
                   projection.near, /* glFrustum needs a positive near plane */
                   projection.far );
        break;
    }
    default:
        return;
    }
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    {
        GLfloat const T = 0.5*(float)tex_width/(float)tex_height;
        GLfloat quad[4][4] = {
            /*  X     Y    U    V */
            {-T, -0.5, 0.0, 0.0},
            { T, -0.5, 1.0, 0.0},
            { T,  0.5, 1.0, 1.0},
            {-T,  0.5, 0.0, 1.0},
        };
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        /* stride is one whole vertex row; positions start at X, texcoords at U */
        glVertexPointer(2, GL_FLOAT, sizeof(quad[0]), &quad[0][0]);
        glTexCoordPointer(2, GL_FLOAT, sizeof(quad[0]), &quad[0][2]);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture_ID);
        glDrawArrays(GL_QUADS, 0, 4);
    }
}
I've never used WebGL, so no example from me here, but you should never do aspect ratio correction in your modelview matrix. That's something for the projection matrix to do.
An example on how to do it can be found here.
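To spell the advice out in matrix terms, here is a small numpy sketch (not WebGL code; all sizes are hypothetical) of the aspect correction living in the projection matrix while rotation and sizing live in the modelview matrix, so the rotation stays angle-preserving:
import numpy as np

viewport_w, viewport_h = 1280, 720   # hypothetical viewport size
shape_w, shape_h = 200, 100          # hypothetical quad size in pixels
aspect = viewport_w / viewport_h

# Orthographic projection spanning [-aspect, aspect] x [-1, 1]: it absorbs
# the viewport's anisotropy, so modelview space keeps square units.
projection = np.diag([1.0 / aspect, 1.0, 1.0, 1.0])

def rot_z(deg):
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Modelview: scale the [-1, 1] unit quad to the desired size, then rotate.
# One world unit spans viewport_h / 2 pixels along both axes here.
scale = np.diag([shape_w / viewport_h, shape_h / viewport_h, 1.0, 1.0])
modelview = rot_z(-30.0) @ scale
mvp = projection @ modelview   # rotated quad stays undistorted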