Image morphing is mostly a graphic design SFX used to turn one picture into another. The artist picks matching key points (the eyes and other key zones) on both portraits, and an algorithm then deforms the whole picture so it changes from one into the other.
I would like to do something similar with a shader that can load any two graphics, automatically pick zones of the most similar colors in roughly the same regions of each picture, and morph between the two pictures in real time. Perhaps a shader-based version would logically be a lot faster at the task? Except I don't really understand how it works at all.
If you know, please don't worry about writing a complete explanation of the process; it would be great if you could just share some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
1. morph by interpolation
You have 2 data sets with similar properties (for example 2 images that are both 2D) and interpolate between them by some parameter. For 2D images you can use plain linear interpolation per pixel if both images are the same resolution, or combine it with bilinear resampling if they are not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution, something like this:
// cross-dissolve of two same-sized images for parameter t in <0,1>
for (int y=0; y<img1.height; y++)
 for (int x=0; x<img1.width; x++)
  img.pixel[x][y] = (1.0-t)*img1.pixel[x][y] + t*img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to typecast to avoid integer rounding problems, or use the scale t=<0,256> and correct the result with a bit shift right by 8 bits (or /256). For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first.
All this can be done very easily in a fragment shader. Just bind img1, img2 to texture units 0, 1, pick the texel from each, interpolate and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass the texture and vertex coordinates through, and on the main program side you just draw a single quad covering the final image output...
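If it helps to see the whole thing end to end on the CPU first, here is a minimal sketch of the same cross-dissolve using normalized coordinates, so the two images may differ in resolution. The Image struct and the sample() helper are illustrative only (not from any particular library); the bilinear lookup in sample() is exactly what the GPU's texture fetch does for you:

// CPU-side sketch of the cross-dissolve a fragment shader would do per pixel
#include <vector>
#include <algorithm>

struct RGB { float r, g, b; };

struct Image
{
    int width = 0, height = 0;
    std::vector<RGB> data;                       // row-major, width*height
    RGB at(int x, int y) const { return data[y * width + x]; }
};

// bilinear sample at normalized coordinates u,v in <0,1>
RGB sample(const Image &img, float u, float v)
{
    float fx = u * (img.width  - 1);
    float fy = v * (img.height - 1);
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = std::min(x0 + 1, img.width  - 1);
    int y1 = std::min(y0 + 1, img.height - 1);
    float ax = fx - x0, ay = fy - y0;
    auto lerp = [](RGB a, RGB b, float t)
    { return RGB{ a.r + t*(b.r - a.r), a.g + t*(b.g - a.g), a.b + t*(b.b - a.b) }; };
    return lerp(lerp(img.at(x0, y0), img.at(x1, y0), ax),
                lerp(img.at(x0, y1), img.at(x1, y1), ax), ay);
}

// blend img1 and img2 (possibly different resolutions) into img for parameter t in <0,1>
// img must already be allocated to the desired output resolution
void crossDissolve(const Image &img1, const Image &img2, Image &img, float t)
{
    for (int y = 0; y < img.height; y++)
        for (int x = 0; x < img.width; x++)
        {
            float u = float(x) / (img.width  - 1);
            float v = float(y) / (img.height - 1);
            RGB a = sample(img1, u, v);
            RGB b = sample(img2, u, v);
            img.data[y * img.width + x] =
                RGB{ (1.0f - t)*a.r + t*b.r,
                     (1.0f - t)*a.g + t*b.g,
                     (1.0f - t)*a.b + t*b.b };
        }
}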
2. morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the two. For example something like this: Morph a cube to coil. This is suited for vector graphics. You just need point correspondence, and then the interpolation is similar to #1.
for (int i=0; i<points; i++)
{
 p[i].x = (1.0-t)*p1[i].x + t*p2[i].x;
 p[i].y = (1.0-t)*p1[i].y + t*p2[i].y;
}
where p1[i], p2[i] are the i-th points from each input geometry set and p[i] is the point in the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like BEZIER curves) so the morph looks cooler. For example see
Path generation for non-intersecting disc movement on a plane
To accomplish this on the GPU you could use a geometry shader (or maybe even a tessellation shader). You would need to pass both polygons as a single primitive; the geometry shader then interpolates the actual polygon and emits it for rasterization.
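Just as a rough sketch (the struct, function name and the choice of control point are illustrative only), a quadratic Bezier trajectory per point could look like this on the CPU:

// move each point along a quadratic Bezier instead of a straight line;
// the control point c is whatever trajectory you want, e.g. the midpoint pushed sideways
struct Point { float x, y; };

Point bezierMorph(Point a, Point b, Point c, float t)
{
    // quadratic Bezier: (1-t)^2 * a + 2(1-t)t * c + t^2 * b
    float u = 1.0f - t;
    return Point{ u*u*a.x + 2.0f*u*t*c.x + t*t*b.x,
                  u*u*a.y + 2.0f*u*t*c.y + t*t*b.y };
}

// usage for each corresponding point pair p1[i], p2[i]:
// Point c = { 0.5f*(p1[i].x + p2[i].x) + offset_x, 0.5f*(p1[i].y + p2[i].y) + offset_y };
// p[i] = bezierMorph(p1[i], p2[i], c, t);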
3. morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then you handle each pixel as a particle and create its path from its position in img1 to its position in img2 with parameter t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position and you interpolate both, because there is a very slim chance you will get exact color matches with the same counts (the histograms would have to be identical), which is improbable.
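Here is a very crude CPU sketch of that idea; it assumes both particle sets have the same count and reduces the color matching to a simple sort by brightness, which is only a cheap stand-in for a real matching step:

// turn every pixel into a particle, pair particles between the two images by
// similar brightness, then interpolate position and color like in #2
#include <vector>
#include <algorithm>

struct Particle { float x, y, r, g, b; };

static float luma(const Particle &p) { return 0.299f*p.r + 0.587f*p.g + 0.114f*p.b; }

void particleMorph(std::vector<Particle> a,           // particles from img1
                   std::vector<Particle> b,           // particles from img2 (same count assumed)
                   std::vector<Particle> &out, float t)
{
    // sort both sets by brightness so the i-th particles have roughly similar colors
    auto byLuma = [](const Particle &p, const Particle &q) { return luma(p) < luma(q); };
    std::sort(a.begin(), a.end(), byLuma);
    std::sort(b.begin(), b.end(), byLuma);

    out.resize(a.size());
    for (size_t i = 0; i < a.size(); i++)
    {
        // interpolate both position and color, exactly like the geometry morph in #2
        out[i].x = (1.0f - t)*a[i].x + t*b[i].x;
        out[i].y = (1.0f - t)*a[i].y + t*b[i].y;
        out[i].r = (1.0f - t)*a[i].r + t*b[i].r;
        out[i].g = (1.0f - t)*a[i].g + t*b[i].g;
        out[i].b = (1.0f - t)*a[i].b + t*b[i].b;
    }
}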
4. hybrid morphing
It is any combination of #1,#2,#3
I am sure there are more methods for morphing; these are just the ones I know of. Also, the morphing can be done not only in the spatial domain...
I have an SVG <path> with points in "model" coordinate system. For simplicity let my path consist of x, sin(x) pairs - note the lack of any scaling and offsets.
To render it on screen I calculated a SVGMatrix and put it into SVGTransformList of my path element. Also I use CSS vector-effect: non-scaling-stroke.
Now I want to pan my sine chart using a mouse, so I got the shift vector in SVG screen coordinates.
My idea is to put one more matrix in my SVGTransformList and calculate it from the screen shift vector.
Should I put this new matrix before or after my original matrix? What is considered a good style? (I know that the coefficients of the second matrix will be different in the two cases)
Also, to transform my shift vector to model coordinates, I transform two SVGPoints back: the origin and a point with the coordinates of my delta vector, and then manually subtract the resulting points coordinate-wise. Is this the right way to transform vectors, or is there a better math or API approach?
I am trying my hand at writing a 3d graphics engine, but I am having some trouble with drawing the shapes in the correct order.
When I translate the points of triangles into window space, i.e. the 2-dimensional space that directly corresponds to position on the screen, I assign each point a depth variable in addition to its x and y position, storing how far away from the viewer that point was in 3D space.
At the moment, the only shapes I am rendering are triangles. My current render order algorithm sorts the triangles by the average depth of their 3 points. I knew when I started it that it would not be perfect, but I wanted a placeholder for testing.
For testing purposes, I constructed a square box with an open top, each side being a different color and made from 2 triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is being drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?
The standard method to draw 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also store its interpolated depth in the z (depth) buffer. Whenever you are about to paint the next pixel, you first check that z-buffer to see whether the new pixel is in front of the already painted pixel.
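A minimal sketch of that per-pixel test (the buffer layout and names are illustrative, not tied to any particular API):

#include <vector>
#include <limits>

struct Framebuffer
{
    int width, height;
    std::vector<unsigned> color;   // packed RGBA per pixel
    std::vector<float>    depth;   // one depth value per pixel

    Framebuffer(int w, int h)
        : width(w), height(h), color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // called for every pixel of every rasterized triangle, in any order
    void plot(int x, int y, float z, unsigned rgba)
    {
        int i = y * width + x;
        if (z < depth[i])          // keep the pixel only if it is closer than what is already there
        {
            depth[i] = z;
            color[i] = rgba;
        }
    }
};

With this in place, the triangle sorting mentioned below only affects how often plot() overwrites a pixel, not the correctness of the result.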
On top of that you can add various sorts of optimizations, such as sorting triangles in order to minimize the number of times you actually paint the color buffer.
On the other hand, it's sometimes required to do the exact opposite in order to properly handle transparency or other "advanced" effects.
I was just wondering if someone knows of any papers or resources on generating synthetic images of growth rings in trees. I'm thinking of 2D scalar fields or some other data representation which can then be used to render growth-ring-like images :)
Thanks!
never done or heard about this ...
If you need simulation then search for biology/botanist sites instead.
If you need just visually close results then I would:
make a polygon covering the cut (circle/oval-like shape)
start with a circle, and when everything is working try to add some random distortion or use an ellipse
create 1D texture with the density
It will be used to fill the polygon via a triangle fan. So first find an image of the tree type you want to generate, for example this:
Analyze the color and intensity as a function of the distance from the center: extract a pie-like piece (or a thin rectangle)
and plot a graph of the R,G,B values to see how the rings are shaped.
Then create a function that approximates that (or use piecewise interpolation) and create your own texture as a function of tree age. You can interpolate both the color and the density of rings this way.
My example shows that for this tree the color stays the same, only its intensity changes. In this case you do not need to approximate all 3 functions. The bumps are a bit noisy due to another texture layer (ignore this at the start). You can use:
intensity=A*|cos(pi*t)| as a start
A is brightness
t is age in years/cycles (and also the x coordinate (scaled) in your 1D texture)
So take the base color R,G,B, multiply it by the computed intensity for each t, and fill the texture pixel with this color. You can add some randomness to the ring period (pi*t), and the scale can also be matched more closely. This is linear growth, so you can use exponential instead, or interpolate to match bumps per length affected by age (distance from t=0)...
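A small sketch of that texture fill using the |cos| profile above; the texture size, base color and amount of jitter are just example values:

// fill a 1D ring texture as a function of age using intensity = A*|cos(pi*t)|
#include <vector>
#include <cmath>
#include <cstdlib>

struct RGB8 { unsigned char r, g, b; };

std::vector<RGB8> makeRingTexture(int size, float fullAge)
{
    const float PI = 3.14159265f;
    const float A = 1.0f;                                      // overall brightness
    const float baseR = 0.85f, baseG = 0.65f, baseB = 0.45f;   // wood-like base color (example)
    std::vector<RGB8> tex(size);
    for (int i = 0; i < size; i++)
    {
        float t = fullAge * float(i) / float(size - 1);        // age at this texel
        // small random jitter of the ring period so rings are not perfectly regular
        float jitter = 0.05f * (float(rand()) / RAND_MAX - 0.5f);
        float intensity = A * std::fabs(std::cos(PI * (t + jitter)));
        tex[i] = RGB8{ (unsigned char)(255.0f * baseR * intensity),
                       (unsigned char)(255.0f * baseG * intensity),
                       (unsigned char)(255.0f * baseB * intensity) };
    }
    return tex;
}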
now just render the polygon
The midpoint is the t=0 coordinate in the texture, and each vertex of the polygon is the t=full_age coordinate. So render the triangle fan with these texture coordinates. If you need a closer match (rings are not the same thickness along the perimeter) then you can convert this to a 2D texture.
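Building the fan vertices could look roughly like this; the names and the normalization of full_age to texture coordinate 1.0 are assumptions of the sketch:

// build a triangle fan for the cut: the center maps to t=0 in the ring texture,
// the rim vertices map to t=full_age (here normalized to texture coordinate 1.0)
#include <vector>
#include <cmath>

struct FanVertex { float x, y; float t; };   // position + 1D texture coordinate

std::vector<FanVertex> makeCutFan(float cx, float cy, float radius, int segments)
{
    const float PI = 3.14159265f;
    std::vector<FanVertex> fan;
    fan.push_back({ cx, cy, 0.0f });                       // center of the cut, t = 0
    for (int i = 0; i <= segments; i++)                    // rim, t = 1 (= full age)
    {
        float a = 2.0f * PI * float(i) / float(segments);
        // add some per-vertex radius noise here if you want irregular rings
        fan.push_back({ cx + radius * std::cos(a), cy + radius * std::sin(a), 1.0f });
    }
    return fan;                                            // draw as a triangle fan
}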
[Notes]
You can also do this incrementally, one ring per iteration. The next ring polygon is the last one enlarged or scaled by scale>1 plus some randomness, but this needs to be rendered as a QUAD STRIP. You can have a static texture for a single ring and interpolate just the density and overall brightness:
radius(i)=radius(i-1)+ring_width=radius(i-1)*scale
so:
scale=(radius(i-1)+ring_width)/radius(i-1)
I need to warp an imaginary rectangle lying on the image.
So I think I need:
Detect which pixels of the image belong to the rectangle (something like rasterization?).
Warp the pixels and somehow interpolate between pixels inside the rectangle (I don't know how).
How do I deal with border pixels belonging to different rectangles?
Generally, I am trying to do something like this
For warping the images, the following procedure can be applied.
Assuming that you have the displacements of each of the points on the lattice, you need to do a B-spline interpolation (based on the displacements of the neighboring lattice points) to deform the source image.
For obtaining the optimal displacement of each lattice point, you can use a label set corresponding to the displacements of the lattice point in the x and y directions and compute the SSD between patches in the source and the target image for the different labels. For a smooth solution, a regularization prior needs to be added so that neighboring points on the lattice have similar displacements. This joint optimization problem can be solved using MRFs.
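As a much-simplified sketch of the deformation step only: it uses plain bilinear weights between lattice points instead of proper B-spline basis functions, assumes the displacements were already optimized as described above, and all names are illustrative.

#include <vector>
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

struct Lattice
{
    int nx, ny;                     // lattice points in x and y
    float spacing;                  // pixel distance between lattice points
    std::vector<Vec2> disp;         // displacement per lattice point, row-major
    Vec2 at(int i, int j) const { return disp[j * nx + i]; }
};

// displacement at an arbitrary pixel, interpolated from its 4 nearest lattice points
Vec2 displacementAt(const Lattice &lat, float px, float py)
{
    float gx = px / lat.spacing, gy = py / lat.spacing;
    int i = std::min((int)gx, lat.nx - 2);
    int j = std::min((int)gy, lat.ny - 2);
    float ax = gx - i, ay = gy - j;
    Vec2 d00 = lat.at(i, j),     d10 = lat.at(i + 1, j);
    Vec2 d01 = lat.at(i, j + 1), d11 = lat.at(i + 1, j + 1);
    return Vec2{
        (1-ax)*(1-ay)*d00.x + ax*(1-ay)*d10.x + (1-ax)*ay*d01.x + ax*ay*d11.x,
        (1-ax)*(1-ay)*d00.y + ax*(1-ay)*d10.y + (1-ax)*ay*d01.y + ax*ay*d11.y };
}

// backward warp: for every output pixel, look up where it came from in the source image
void warp(const std::vector<float> &src, std::vector<float> &dst,
          int width, int height, const Lattice &lat)
{
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            Vec2 d = displacementAt(lat, (float)x, (float)y);
            int sx = std::clamp((int)std::lround(x - d.x), 0, width  - 1);
            int sy = std::clamp((int)std::lround(y - d.y), 0, height - 1);
            dst[y * width + x] = src[sy * width + sx];   // nearest-neighbor fetch to keep it short
        }
}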
I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill with a color, not a file-based texture.
For an example, take a rounded rectangle whose vertices are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color -- i.e. divide the bound area into four vertical bands and have each a different color. You don't have to provide me the code, pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but have failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I'm wanting to also learn what should and shouldn't be done on top of what can and can't be done.
Not too much but here's a start:
The color fill is accomplished by using a shader. Reimer's XNA Tutorials on pixel shaders is a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To give different triangles different colors, you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing the buffer to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.