I recently downloaded a 3D triangle mesh (.obj format) from TurboSquid that came with a 2D JPEG image as a texture. I plan on using this mesh in a program I am developing, where I am writing my own code from scratch to parse the .obj file and then texture and render the mesh.
My program currently handles this just fine in most cases, but there are a couple of things about this particular .obj file that I don't know how to handle.
1) The UV coordinates are not in the range [0,1]. 0 is still the minimum value, but there seems to be no upper bound. I assume this is meant to indicate that the texture wraps around the mesh more than once, so I've decided to extract the fractional part of each coordinate and use that. So for each coordinate I'm currently doing the following:
double u = ReadInValue();   // raw texture coordinates read from the .obj file
double v = ReadInValue();
u = u - (int)u;             // keep only the fractional (decimal) part
v = v - (int)v;
So a UV coordinate that's [1.35, 3.29] becomes [0.35, 0.29]. The texture still looks a bit off when applied, so I'm not sure if this is the right thing to be doing.
2) There is an extra W coordinate. I realize that if I were dealing with a 3D volumetric texture, the W coordinate would function in the same way as the U and V coordinates and would simply be used to look up the value in the third dimension. However, the texture file I am given is two-dimensional. So what do I do with this extra W coordinate? Can I simply ignore it? Do I have to divide the UV coordinates by the W term (as if it's a homogeneous coordinate)? I'm not quite sure what to do.
1) You can't extract just the fractional part of the texture coordinates and expect it to work. This will break for triangles that go over [1,1] or below [0,0]. For example, an edge with vertex UVs [0,0] and [5,5] should mean the texture wraps 5 times along it, but with your computation both vertices get [0,0]. GPUs don't have problems with UVs bigger than 1.0 or even negative ones, so just use what you have (see the sketch after point 2 for how the wrap can be applied per sampled texel in a software rasterizer).
2) We don't know what exactly is in your model; it could have been made for a 3D texture, so it has 3D coordinates, but since you said it came with a 2D texture I don't think that's the case. I'd suggest following the first point, using just the u,v coordinates, and see what you get.
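Since you are writing your own rasterizer, the wrapping has to happen per sampled texel, after the UVs have been interpolated across the triangle, rather than per vertex. A minimal sketch of a REPEAT-style lookup, assuming the texture is stored as a simple array of texels (the names here are illustrative, not from your code):

#include <cmath>

struct Color { unsigned char r, g, b; };

// REPEAT-style lookup: wrap the interpolated (u,v) into [0,1) at sampling time,
// so coordinates like 1.35 or -0.2 still tile correctly across a triangle.
Color sampleRepeat(const Color* texels, int width, int height, double u, double v)
{
    u -= std::floor(u);                     // fractional part, also correct for negatives
    v -= std::floor(v);
    int x = (int)(u * width);
    int y = (int)(v * height);
    if (x >= width)  x = width - 1;         // guard against floating-point edge cases
    if (y >= height) y = height - 1;
    return texels[y * width + x];
}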
Related
I am having trouble trying to make a geometry node script in Blender. The purpose of the script is to have a simple vertical mesh control the extent on the z axis of a 2D shape mesh. That is, each edge of this vertical mesh will spawn an extruded instance of the 2D shape mesh with the same z extent.
For instance: if the height adjustment mesh has just one edge with a z value going from 0 to 2, the resulting mesh will be the 2D shape mesh placed at z = 0 and extruded 2 units up.
The image below shows the result I am aiming for.
The 2d shape mesh on the left.
The height adjustment mesh with 5 edges (highlighted in yellow).
The resulting mesh (bevel applied to show its 5 mesh parts vertically).
I am fairly new to geometry nodes, but having played around with this script for days, I am still at a loss how to solve it.
My main concern is how do I get the length of an edge for each run of the Instance on Points node?
I know of the Edge Vertices node, but no matter what I do, I can't get it to output anything useful to the Extrude Mesh node. I would really like to see if this script is at all possible.
The link below shows how far I have come with the script. It now places the 2D shape at the z value of the lowest vertex of each edge, and runs the Instance on Points node once for each edge. All that is missing is how to get the edge length connected in the right place.
The Height Adjustment script so far
Image morphing is mostly a graphic-design effect used to adapt one picture into another using points chosen by the artist, who has to match the eyes and other key zones of one portrait with those of another; some kind of algorithm then adapts the entire picture to change from one into the other.
I would like to do something a bit similar with a shader, which could load any 2 images, automatically choose zones of the most similar colors in roughly the same regions of each picture, and morph the two pictures in real time. Perhaps a shader-based version would be a lot faster at the task? Except I don't even understand how it works at all.
If you know, please don't worry about giving a complete description of the process; even some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader would be great.
There are more morphing methods out there; the one you are describing is based on geometry.
1. morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images are the same resolution, or trilinear interpolation if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t = <0,1>. For the same resolution, something like this:
// cross-dissolve two equally sized images into img for a given t in <0,1>
for (int y = 0; y < img1.height; y++)
    for (int x = 0; x < img1.width; x++)
        img.pixel[x][y] = (1.0 - t) * img1.pixel[x][y] + t * img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to cast appropriately to avoid integer rounding problems, or use an integer scale t = <0,256> and correct the result by a right shift of 8 bits (or by /256). For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first.
All this can be done very easily in a fragment shader. Just bind img1 and img2 to texture units 0 and 1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass the texture and vertex coordinates through, and on the main-program side you just draw a single quad covering the final image output...
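The main-program side of a frame might look roughly like this; it is only a sketch, assuming a shader program prog has already been compiled with sampler uniforms named img1 and img2 and a float uniform t, and that quadVao holds a fullscreen quad (all of these names are illustrative):

#include <GL/glew.h>   // any GL function loader will do; GLEW is just one choice

void drawMorphFrame(GLuint prog, GLuint tex1, GLuint tex2, GLuint quadVao, float t)
{
    glUseProgram(prog);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex1);                   // img1 -> texture unit 0
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex2);                   // img2 -> texture unit 1
    glUniform1i(glGetUniformLocation(prog, "img1"), 0);
    glUniform1i(glGetUniformLocation(prog, "img2"), 1);
    glUniform1f(glGetUniformLocation(prog, "t"), t);      // morph parameter in <0,1>
    glBindVertexArray(quadVao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                // single quad covering the output
}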
2. morph by geometry
You have 2 polygons (or matching sets of points) and interpolate their positions between the two. For example, something like this: Morph a cube to coil. This is suited to vector graphics; you just need point correspondence, and then the interpolation is similar to #1.
for (int i = 0; i < points; i++)
{
    // blend each matched point pair between the two shapes
    p(i).x = (1.0 - t) * p1(i).x + t * p2(i).x;
    p(i).y = (1.0 - t) * p1(i).y + t * p2(i).y;
}
where p1(i), p2(i) are the i-th points from each input geometry set and p(i) is the corresponding point of the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like Bezier curves) so the morph looks nicer. For example, see
Path generation for non-intersecting disc movement on a plane
To accomplish this on the GPU you would need a geometry shader (or maybe even a tessellation shader): pass both polygons in as a single primitive, and let the geometry shader interpolate the actual polygon and emit it to the rest of the pipeline.
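As an illustration of swapping the linear blend for a curved trajectory, here is a minimal sketch using a quadratic Bezier per matched point pair; the control point c is my own illustrative addition (in practice it would come from the path-planning approach linked above or be chosen by hand):

struct Point { float x, y; };

// quadratic Bezier between matched points p1 and p2 with control point c:
// b(t) = (1-t)^2*p1 + 2*(1-t)*t*c + t^2*p2, for t in <0,1>
Point bezierMorph(Point p1, Point c, Point p2, float t)
{
    float a = (1.0f - t) * (1.0f - t);
    float b = 2.0f * (1.0f - t) * t;
    float d = t * t;
    return { a * p1.x + b * c.x + d * p2.x,
             a * p1.y + b * c.y + d * p2.y };
}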
3. morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then you handle each pixel as a particle and create its path from its position in img1 to its position in img2, driven by parameter t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position, and you interpolate both. Beware that there is only a very slim chance you will get exact color matches and matching pixel counts (the histograms would have to be identical), which is improbable, so the matching has to tolerate some error.
4. hybrid morphing
It is any combination of #1, #2, and #3.
I am sure there are more methods for morphing; these are just the ones I know of. Also, morphing can be done not only in the spatial domain...
Here is an excerpt from Peter Shirley's Fundamentals of Computer Graphics:
11.1.2 Texture Arrays
We will assume the two dimensions to be mapped are called u and v. We also assume we have an nx × ny image that we use as the texture. Somehow we need every (u,v) to have an associated color found from the image. A fairly standard way to make texturing work for (u,v) is to first remove the integer portion of (u,v) so that it lies in the unit square. This has the effect of "tiling" the entire uv plane with copies of the now-square texture. We then use one of the three interpolation strategies to compute the image color for the coordinates.
My question is: what is the integer portion of (u,v)? I thought u and v satisfied 0 <= u,v <= 1.0. If there is an integer portion, shouldn't we be dividing u and v by the texture image width and height to get the normalized u,v values?
UV values can be less than 0 or greater than 1. The reason for dropping the integer portion is that UV values use the fractional part when indexing textures, where (0,0), (0,1), (1,0) and (1,1) correspond to the texture's corners. Allowing UV values to go beyond 0 and 1 is what enables the "tiling" effect to work.
For example, if you have a rectangle whose corners are indexed with the UV points (0,0), (0,2), (2,0), (2,2), and assuming the texture is set to tile the rectangle, then four copies of the texture will be drawn on that rectangle.
The meaning of a UV value's integer part depends on the wrapping mode. In OpenGL, for example, there are at least three wrapping modes:
GL_REPEAT - The integer part is ignored and has no meaning. This is what allows textures to tile when UV values go beyond 0 and 1.
GL_MIRRORED_REPEAT - The fractional part is mirrored if the integer part is odd.
GL_CLAMP_TO_EDGE - Values greater than 1 are clamped to 1, and values less than 0 are clamped to 0.
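For intuition, here is a minimal CPU-side sketch (not the actual GL implementation, and ignoring details such as GL's half-texel clamping) of how each mode maps a coordinate into [0,1] before the lookup:

#include <algorithm>
#include <cmath>

double wrapRepeat(double u)            // GL_REPEAT: keep only the fractional part
{
    return u - std::floor(u);
}

double wrapMirroredRepeat(double u)    // GL_MIRRORED_REPEAT: mirror on odd tiles
{
    double f = u - std::floor(u);
    long long tile = (long long)std::floor(u);
    return (tile % 2 == 0) ? f : 1.0 - f;
}

double wrapClampToEdge(double u)       // GL_CLAMP_TO_EDGE: clamp into [0,1]
{
    return std::min(1.0, std::max(0.0, u));
}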
Peter O's answer is excellent. I want to add the high-level point that the coordinate systems used in graphics are a convention that people just stick to as a de facto standard; there's no law of nature here and it is arbitrary (but a decent standard, thank goodness). I think one reason texture mapping is often confusing is that the arbitrariness of this standard isn't obvious. The convention is that the image has a de facto coordinate system on the unit square [0,1]^2. Give me a (u,v) on the unit square and I will tell you a point in the image (for example, (0.2, 0.3) is 20% to the right and 30% up from the bottom-left corner of the image). But what if you give me a (u,v) that is outside [0,1]^2, like (22.7, -13.4)? Some rule is used to map that onto [0,1]^2, and the GL modes described above are just various useful hacks to deal with that case.
I was just wondering if someone knows of any papers or resources on generating synthetic images of growth rings in trees. I'm thinking of 2D scalar fields or some other data representation which can then be used to render growth-ring-like images :)
Thanks!
I have never done or heard about this...
If you need a simulation, then search biology/botany sites instead.
If you just need visually close results, then I would:
make a polygon covering the cut (a circle/oval-like shape)
start with a circle, and when everything works, try adding some random distortion or use an ellipse
create a 1D texture with the density
It will be used to fill the polygon via a triangle fan. So first find an image of the tree type you want to generate, for example this:
Analyze the color and intensity as a function of the diameter, so extract a pie-like piece (or a thin rectangle)
and plot a graph of R,G,B values to see how the rings are shaped
Then create a function that approximates that (or use piecewise interpolation) and create your own texture as a function of tree age. In this way you can interpolate both the color and the density of the rings.
My example shows that for this tree the color stays the same and only its intensity changes, so in this case you do not need to approximate all 3 channels. The bumps are a bit noisy due to another texture layer (ignore this at the start). You can use:
intensity=A*|cos(pi*t)| as a start
A is brightness
t is age in years/cycles (and also the x coordinate (scaled) in your 1D texture)
So take the base color R,G,B, multiply it by A for each t, and fill the texture pixel with this color. You can add some randomness to the ring period (pi*t), and the scale can also be matched more closely. This is linear growth, so you could use an exponential instead, or interpolate so that the number of bumps per length is affected by age (distance from t=0)... A small code sketch of this texture generation appears right after the rendering step below.
now just render the polygon
The midpoint gets the t=0 coordinate in the texture and each vertex of the polygon gets the t=full_age coordinate. So render the triangle fan with these texture coordinates. If you need a closer match (the rings are not the same thickness along the whole perimeter) then you can convert this to a 2D texture.
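Here is a minimal sketch of the 1D ring texture generation described above, using intensity = A*|cos(pi*t)| with A assumed to be in <0,1>; the texture size, base color and the amount of phase jitter are my own illustrative choices:

#include <cmath>
#include <cstdlib>
#include <vector>

struct RGB { unsigned char r, g, b; };

// build a 1D ring texture: t runs from 0 (center) to `rings` (bark) along the radius
std::vector<RGB> makeRingTexture(int size, double rings, RGB base, double A)
{
    const double pi = 3.14159265358979;
    std::vector<RGB> tex(size);
    for (int x = 0; x < size; x++)
    {
        double t = rings * x / (double)size;                             // age along the radius
        double jitter = 0.1 * ((std::rand() / (double)RAND_MAX) - 0.5);  // small random phase shift
        double i = A * std::fabs(std::cos(pi * (t + jitter)));           // ring intensity
        tex[x] = { (unsigned char)(base.r * i),
                   (unsigned char)(base.g * i),
                   (unsigned char)(base.b * i) };
    }
    return tex;
}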
[Notes]
You can also do this incrementally, one ring per iteration. The next ring polygon is the last one enlarged or scaled by scale > 1 with some added randomness, but this needs to be rendered as a QUAD STRIP. You can have a static texture for a single ring and interpolate just the density and overall brightness:
radius(i)=radius(i-1)+ring_width=radius(i-1)*scale
so:
scale=(radius(i-1)+ring_width)/radius(i-1)
I am building a 3D flower mesh through a series of extrusions (working in a Unity 4 environment). For each flower there are splines that define each branch, the petals, leaves, etc. Around each spline there is a shape that is extruded along that spline, and a thickness curve that defines, at each point on the spline, how thick the shape should be.
I am trying to animate this mesh using parameters (i.e. look towards sun, bend with the wind).
I have been working on this for nearly two months, and my algorithm boiled down to basically reconstructing the flower every frame by iterating over the shape for each spline point, transforming it into the current space and calculating new vertex positions out of it, a version of parallel transport frames.
for each vertex:
v = p + q * e * w
where
p = point on the path
e = vertex position in the local space of shape being extruded
q = quaternion that transforms into the local space of p (oriented by its direction towards the next path point)
w = width at point p
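For reference, here is a minimal C++ sketch of that per-vertex step; the small Vec3/Quat helpers are only illustrative stand-ins for Unity's Vector3/Quaternion types:

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };   // unit quaternion

Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
Vec3 cross(Vec3 a, Vec3 b)  { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }

// rotate v by unit quaternion q: v' = v + q.w*t + cross(u, t), where t = 2*cross(u, v)
Vec3 rotate(Quat q, Vec3 v)
{
    Vec3 u = { q.x, q.y, q.z };
    Vec3 t = scale(cross(u, v), 2.0f);
    return add(add(v, scale(t, q.w)), cross(u, t));
}

// v = p + q * (e * w): place the width-scaled cross-section vertex into the frame at p
Vec3 extrudedVertex(Vec3 p, Quat q, Vec3 e, float w)
{
    return add(p, rotate(q, scale(e, w)));
}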
I believe this is as small of a step as it gets to extrude the model. I need to do this once for each vertex in the model basically.
I have hit the point where this is too slow for my current scene. I need to have 5-6 flowers with around 60k vertices in total, and I concluded that this is the bottleneck.
I can see that this is a similar problem to skeletal animation, where each joint would control a cross-section of the extruded shape. It is not a direct question but I'm wondering if someone can elaborate on whether I can steal some techniques from skeletal animation to speed up the process. In my current perspective, I just don't see how I can avoid at least one calculation per vertex, and in that case, I just choose to rebuild the mesh every frame.