Affine warp of rectangle - graphics

I need to warp an imaginary rectangle lying on an image.
So I think I need to:
Detect which pixels of the image belong to the rectangle (something like rasterization?).
Warp the pixels and somehow interpolate between pixels inside the rectangle (I don't know how).
How do I deal with border pixels that belong to different rectangles?
Generally I am trying to do something like this

For warping the images, the following procedure can be applied.
Assuming that you have the displacements of each of the points on the lattice, you need to do a B-spline interpolation (based on the displacements of the neighboring lattice points) to deform the source image.
For obtaining the optimal displacement of each lattice point, you can use a label set corresponding to the displacement of the lattice point in the x-y direction and compute the SSD between patches in the source and the target image for different labels. For a smooth solution, a regularization prior needs to be added so that neighboring points on the lattice have a similar displacement. This joint optimization problem can be solved using MRFs.
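Not part of the original answer, but here is a minimal Python sketch of the lattice-displacement warp described above, assuming a grayscale image and a coarse grid of per-lattice-point x/y displacements whose size divides the image size evenly; names like lattice_dx are illustrative:

import numpy as np
from scipy.ndimage import zoom, map_coordinates

def warp_with_lattice(img, lattice_dx, lattice_dy):
    # img: 2D grayscale array; lattice_dx/dy: coarse displacement grids.
    h, w = img.shape
    gh, gw = lattice_dx.shape
    # Upsample the coarse lattice to a dense per-pixel displacement
    # field with cubic (order-3) spline interpolation.
    dx = zoom(lattice_dx, (h / gh, w / gw), order=3)
    dy = zoom(lattice_dy, (h / gh, w / gw), order=3)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Backward mapping: each output pixel samples the source image
    # at its displaced position, with bilinear interpolation.
    return map_coordinates(img, [ys + dy, xs + dx], order=1, mode='nearest')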

Related

skimage project an image's 3D plane to fronto-parallel view

I'm working on implementing Ankush Gupta's synthetic data generation dataset (http://www.robots.ox.ac.uk/~vgg/data/scenetext/gupta16.pdf). In his work, he used a convolutional neural network to extract a point cloud from a 2-dimensional scenery image, segmented the point cloud to isolate different planes, used RANSAC to fit a 3D plane to each point cloud segment, and then warped the pixels of the segment, given the 3D plane, to a fronto-parallel view.
I'm stuck on this last part: warping my extracted 3D plane to a fronto-parallel view. I have X, Y, and Z vectors as well as a normal vector. I'm thinking what I need to do is perform some type of perspective transform or rotation that would bring all the pixels on the plane to zero on the Z axis while the X and Y remain the same. I could be wrong about this; it's been a long time since I've had any formal training in geometry or linear algebra.
It looks like skimage's perspective transform requires me to know the dimensions of the final segment coordinates in 2D space, and AffineTransform requires me to know the rotation. All I have at this point is my X, Y, Z and normal vector, and the suspicion that I may know my destination plane by just setting the Z axis to all zeros. I'm not sure if my assumption is correct, but I need to be able to warp all the pixels in the segment of interest to fronto-parallel, fit a bounding box, place text inside of it, then warp the final segment back to the original perspective in 3D space.
Any help with how to think about this or implement it would be massively useful.
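Not an answer from the thread, but a minimal sketch of the rotation idea described above: build the rotation that takes the plane's unit normal onto the +Z axis (Rodrigues' formula) and apply it to the plane's points, after which the plane is fronto-parallel (its Z coordinate is approximately constant and can be subtracted off). All names here are illustrative:

import numpy as np

def rotation_to_z(n):
    # Rotation matrix taking unit vector n onto [0, 0, 1] (Rodrigues).
    n = n / np.linalg.norm(n)
    target = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, target)            # rotation axis (unnormalized)
    c = float(np.dot(n, target))       # cosine of the rotation angle
    if np.isclose(c, -1.0):            # normal points along -Z: flip 180 deg
        return np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

# pts: (N, 3) array of X, Y, Z samples on the fitted plane
# flat = pts @ rotation_to_z(normal).T   # Z column is now ~constant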

Algorithm to calculate and display a ribbon on a 3D triangle mesh

I am looking for an algorithm for the following problem:
Given:
A 3D triangle mesh. The mesh represents a part of the surface of the earth.
A polyline (a connected series of line segments) whose vertices are always on an edge or on a vertex of a triangle of the mesh. The polyline represents the centerline of a road on the surface of the earth.
I need to calculate and display the road, i.e. add half of the road's width on each side of the centerline, calculate the resulting vertices in the corresponding triangles of the mesh, fill the area of the road, and outline the sides of the road.
What is the simplest and/or most effective strategy to do this? How do I store the data of the road most efficiently?
I see 2 options here:
render thick polyline with road texture
While rendering the polyline you need a TBN matrix, so use:
the polyline tangent as the tangent
the surface normal as the normal
binormal = tangent x normal
Then shift the actual point position p to
p0 = p + d*binormal
p1 = p - d*binormal
and render the textured line (p0,p1). This approach is not a precise match to the surface mesh, so you need to disable depth testing or use some sort of blending. Also on sharp turns it could miss some parts of the curve (in that case you can render a rectangle or disc instead of a line). The sketch after this option shows the point-shifting step.
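Not from the original answer; a minimal numpy sketch of the shift above, assuming a sampled polyline and per-point surface normals (variable names are illustrative):

import numpy as np

def road_borders(points, normals, d):
    # points, normals: (N, 3) arrays; d: half of the road width.
    # Tangent: forward difference along the polyline (last one repeated).
    t = np.diff(points, axis=0)
    t = np.vstack([t, t[-1]])
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    b = np.cross(t, n)                      # binormal = tangent x normal
    return points + d * b, points - d * b   # p0, p1 for each point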
create the mesh by shifting the polyline to the sides by half the road width
This produces a road mesh that accurately fits the surface, but due to your constraints the shape of the road can be very distorted in some cases unless the mesh is re-triangulated. I see it like this:
for each segment of the road, cast 2 lines shifted by half of the road width (green, brown)
find their intersections (aqua dots) with the mesh edge shared with the current road control point (red dot)
obtain the average point (magenta dot) of the intersections and use that as a road mesh vertex. In case one of the points is outside the shared edge, ignore it. In case both intersections are outside the shared edge, find the closest intersection with a different edge.
As you can see, this can lead to serious road-thickness distortions in some cases (big differences between the intersection points, or one of the intersection points lying outside a surface mesh edge).
If you need accurate road thickness, then use the intersection of the cast lines as the road control point instead. To make this possible, either use blending or disable depth testing while rendering, or add this point to the surface mesh by re-triangulating it. Of course such an action will also affect the road mesh, so you need to iterate a few times ...
Another way is to use a blended texture for the road (like sprites) and compute the texture coordinates for the control points. If the road is too thick, then thin it by shifting the texture coordinates ... To make this work you need to select the farthest intersection point instead of the average ... Compute the real half-width of the road and from that compute the texture coordinate.
If you get rid of the constraint (for the road mesh) that road vertex points lie on surface mesh edges or vertices, then you can simply use the intersection of the shifted lines alone. That will get rid of the thickness artifacts and simplify things a lot. A small sketch of the line-intersection step follows.
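Not from the original answer; a minimal 2D line-intersection helper for the "cast lines vs. shared edge" step, assuming both the offset road line and the mesh edge have been projected into a common 2D plane (names are illustrative):

import numpy as np

def line_intersection(p, r, q, s):
    # Intersection of lines p + t*r and q + u*s in 2D,
    # or None if they are (nearly) parallel.
    denom = r[0] * s[1] - r[1] * s[0]      # 2D cross product r x s
    if abs(denom) < 1e-12:
        return None
    t = ((q[0] - p[0]) * s[1] - (q[1] - p[1]) * s[0]) / denom
    return p + t * r

# Example: horizontal offset road line vs. a vertical shared edge.
hit = line_intersection(np.array([0.0, 1.0]), np.array([4.0, 0.0]),
                        np.array([2.0, -1.0]), np.array([0.0, 3.0]))
# hit == [2.0, 1.0]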

How can I create an image morpher inside a graphics shader?

Image morphing is mostly a graphic-design SFX to adapt one picture into another using some points decided by the artist, who has to match the eyes and some key zones of one portrait with another; then some kind of algorithm adapts the entire picture to change from one to the other.
I would like to do something a bit similar with a shader, which can load any 2 graphics, automatically choose zones of the most similar colors in the same kinds of zones of the pictures, and automatically morph the two pictures in real-time processing. Perhaps a shader-based version would logically be a lot faster at the task? Except I don't even understand how it works at all.
If you know, please don't worry about a complete reply about the process; it would be great if you could give some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images are the same resolution, or bilinear interpolation if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution, something like this:
// linear cross-dissolve: blend corresponding pixels by parameter t
for (y=0;y<img1.height;y++)
 for (x=0;x<img1.width;x++)
  img.pixel[x][y]=(1.0-t)*img1.pixel[x][y] + t*img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to cast to avoid integer rounding problems, or use the scale t=<0,256> and correct the result by a bit shift right by 8 bits or by /256. For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first.
All this can be done very easily in a fragment shader. Just bind img1, img2 to texture units 0, 1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass the texture and vertex coordinates through, and on the main program side you just draw a single quad covering the final image output...
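As a CPU reference (not shader code), this Python sketch computes the same thing the fragment shader would: sample both images at normalized coordinates and blend by t; nearest-neighbor sampling is used for brevity where GLSL would give you bilinear filtering for free:

import numpy as np

def cross_dissolve(img1, img2, t, out_h, out_w):
    v, u = np.mgrid[0:out_h, 0:out_w]
    def sample(img):
        h, w = img.shape[:2]
        # map the normalized output grid to this image's own resolution
        return img[v * h // out_h, u * w // out_w]
    # the blend a fragment shader would do with mix(c1, c2, t)
    return (1.0 - t) * sample(img1) + t * sample(img2)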
morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the 2. For example something like this: Morph a cube to coil. This is suited for vector graphics. You just need to have point correspondence, and then the interpolation is similar to #1.
// move each point along a straight line between its two positions
for (i=0;i<points;i++)
 {
 p(i).x=(1.0-t)*p1(i).x + t*p2(i).x;
 p(i).y=(1.0-t)*p1(i).y + t*p2(i).y;
 }
where p1(i), p2(i) are the i-th points from each input geometry set and p(i) is the corresponding point in the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like Bezier curves) so the morph looks cooler. For example see
Path generation for non-intersecting disc movement on a plane
To accomplish this you need to use a geometry shader (or maybe even a tessellation shader). You would need to pass both polygons as a single primitive; the geometry shader should then interpolate the actual polygon and pass it down the pipeline. A sketch of the Bezier-trajectory variant follows.
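Not from the original answer; a small sketch of the geometry morph using a quadratic Bezier trajectory instead of a straight line, assuming matched (N, 2) point arrays. The sideways control-point offset (bulge) is an illustrative choice:

import numpy as np

def bezier_morph(p1, p2, t, bulge=0.25):
    mid = 0.5 * (p1 + p2)
    d = p2 - p1
    side = np.stack([-d[:, 1], d[:, 0]], axis=1)  # perpendicular to the move
    ctrl = mid + bulge * side                     # per-point control point
    # quadratic Bezier: B(t) = (1-t)^2 p1 + 2(1-t)t ctrl + t^2 p2
    return (1 - t) ** 2 * p1 + 2 * (1 - t) * t * ctrl + t ** 2 * p2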
morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then handle each pixel as a particle and create its path from its position in img1 to its position in img2 with parameter t. It is the same as #2, but instead of polygon areas you have just points. A particle has a color and a position, and you interpolate both, because there is a very slim chance you will get exact color matches with matching counts (i.e. identical histograms), which is improbable.
hybrid morphing
It is any combination of #1,#2,#3
I am sure there are more methods for morphing; these are just the ones I know of. Also the morphing can be done not only in the spatial domain...

Generating density map for tree growth rings

I was just wondering if someone knows of any papers or resources on generating synthetic images of growth rings in trees. I'm thinking of 2D scalar fields or some other data representation which can then be used to render growth-ring-like images :)
Thanks!
I have never done or heard about this ...
If you need a simulation, then search biology/botany sites instead.
If you need just visually close results, then I would:
make a polygon covering the cut (a circle/oval-like shape)
Start with a circle, and when everything works, try adding some random distortion or use an ellipse.
create a 1D texture with the density
It will be used to fill the polygon via a triangle fan. So first find an image of the type of tree you want to generate, for example this:
Analyze the color and intensity as a function of diameter, so extract a pie-like piece (or a thin rectangle)
and plot a graph of the R,G,B values to see how the rings are shaped.
Then create a function that approximates that (or use piecewise interpolation) and create your own texture as a function of tree age. You can interpolate in this way both the color and the density of the rings.
My example shows that for this tree the color is the same, so only its intensity changes. In this case you do not need to approximate all 3 functions. The bumps are a bit noisy due to another texture layer (ignore this at the start). You can use:
intensity = A*|cos(pi*t)| as a start
A is the brightness
t is the age in years/cycles (and also the (scaled) x coordinate in your 1D texture)
So take the base color R,G,B, multiply it by A for each t, and fill the texture pixel with this color. You can add some randomness to the ring period (pi*t), and the scale can also be matched more closely. This is linear growth..., so you can use exponential growth instead, or interpolate to match bumps per length as affected by age (distance from t=0)... A sketch of this texture build follows.
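Not from the original answer; a minimal Python sketch of that 1D texture build, using the A*|cos(pi*t)| profile with a bit of period jitter. The base color and jitter amount are illustrative guesses:

import numpy as np

def ring_texture(length, base_rgb, rings, jitter=0.05, seed=0):
    # length: number of texels; rings: number of year cycles across it.
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, rings, length)
    t += rng.normal(0.0, jitter, length)     # randomness in the ring period
    a = np.abs(np.cos(np.pi * t))            # intensity = A*|cos(pi*t)|, A=1
    return a[:, None] * np.asarray(base_rgb, float)  # (length, 3) RGB texels

tex = ring_texture(512, [0.75, 0.55, 0.35], rings=40)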
now just render the polygon
The mid point is the t=0 coordinate in the texture; each vertex of the polygon is the t=full_age coordinate in the texture. So render the triangle fan with these texture coordinates. If you need a closer match (the rings are not the same thickness along the perimeter), then you can convert this to a 2D texture.
[Notes]
You can also do this incrementally, doing just one ring per iteration. The next ring polygon is the last one enlarged or scaled by scale>1 plus some randomness, but this needs to be rendered as a QUAD STRIP. You can have a static texture for a single ring and interpolate just the density and overall brightness:
radius(i)=radius(i-1)+ring_width=radius(i-1)*scale
so:
scale=(radius(i-1)+ring_width)/radius(i-1)

How to draw the heightmap onto the screen?

I'm using DirectX10 to simulate a water surface, and I now have a height map, which is a 2D array of the heights (y) at the points (x,z). But to draw it on the screen, I must turn it into a mesh or have an index buffer to draw triangle topology.
But the data is too large to do this manually. Are there any methods for me to draw it on the screen? I hope it's easy to implement. If there is a function included in DirectX10 which can do it, then that's the best option for me.
Create a mesh that forms a grid of squares (each made of two triangles) and set all vertex y values to 0. In the vertex shader, sample the heightmap and add the stored value to the y of each vertex. The sketch below shows how such a grid and its index buffer can be generated.
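Not from the original answer; a minimal sketch of generating the flat grid's vertices and triangle indices (shown in Python for brevity; in a real DirectX10 app you would fill the vertex and index buffers with the same values):

def grid_mesh(nx, nz, spacing=1.0):
    # Flat grid of (nx-1)*(nz-1) squares, two triangles each, y = 0.
    verts = [(x * spacing, 0.0, z * spacing)
             for z in range(nz) for x in range(nx)]
    idx = []
    for z in range(nz - 1):
        for x in range(nx - 1):
            i = z * nx + x
            idx += [i, i + nx, i + 1,           # first triangle
                    i + 1, i + nx, i + nx + 1]  # second triangle
    return verts, idx

verts, idx = grid_mesh(512, 512)   # e.g. for a 512x512 heightmap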
This might help you.
P.S.: If the area you want to cover is too big, you should take a look at terrain LOD techniques (they should work the same for water).
I'm sure you can make a mesh out of it. I doubt you can generate a heightmap for a water surface that is too large to "meshify".
Why are you looking at Diamond-Square? For a 512x512 heightmap, all you need to do is define a set of points and then generate the triangles for them. It's really very simple.
