Blender: Using the edges of a mesh to control the extrusion of a 2d shape (Geometry Nodes) - geometry

I am having trouble making a geometry node script in Blender. The purpose of the script is to have a simple vertical mesh control the extent on the z axis of a 2D shape mesh. That is, each edge of this vertical mesh will spawn an extruded instance of the 2D shape mesh, having the same z extent.
For instance: if the height adjustment mesh has just one edge spanning z = 0 to z = 2, the resulting mesh will be the 2D shape mesh placed at z = 0 and extruded 2 units up.
The image below shows the result I am aiming for:
The 2d shape mesh on the left.
The height adjustment mesh with 5 edges (highlighted in yellow).
The resulting mesh (bevel applied to show its 5 mesh parts vertically).
I am fairly new to geometry nodes, and having played around with this script for days, I am still at a loss as to how to solve it.
My main concern is: how do I get the length of an edge for each run of the Instance on Points node?
I know of the Edge Vertices node, but no matter what I do, I can't get it to output anything useful to the Extrude Mesh node. I would really like to see if this script is at all possible.
The link below shows how far I have come with the script. It now places the 2D shape at the z value of the lowest vertex of each edge, and runs the Instance on Points node once for each edge. All that is missing is how to get the edge length connected in the right place.
The Height Adjustment script so far
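For reference, a quick way to check the per-edge values the node tree needs (the lower z for placement, and the z extent for the extrusion offset) is a few lines of Python in Blender's console. This is only a sanity-check sketch, not the node-based answer itself, and the object name "HeightMesh" is an assumption:

import bpy

# Sanity check: for each edge of the height-adjustment mesh, print the z value
# to place the 2D shape at and the z extent to extrude it by.
mesh = bpy.data.objects["HeightMesh"].data  # assumed object name
for edge in mesh.edges:
    z0, z1 = (mesh.vertices[i].co.z for i in edge.vertices)
    print("place at z =", min(z0, z1), "extrude by", abs(z1 - z0))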

Related

Live camera to shape distance calculation based on computer vision

The goal is to detect the walls live and export the distance to the wall. The setup: an enclosure of four walls, with one unique, ideal shape on each wall (triangle, square, ...). A robot with a camera will roam inside the walls and use computer vision. The robot should detect the shape and export the distance between the camera and the wall (or that shape).
I have implemented this with OpenCV: shape detection (cv2.approxPolyDP) and distance calculation (perimeter calculation and edge counting, then conversion of pixel length to real distance).
It works perfectly at a 90-degree straight-on angle, but is not effective at other angles.
Is there a better way of doing this?
Thanks
import cv2

# `contours` is assumed to come from cv2.findContours() on the thresholded frame
for cnt in contours[1:]:
    # start from index 1: in practice the whole frame is often detected as a contour
    area = cv2.contourArea(cnt)  # area of the detected contour
    # approximate the pixel contour with a polygon (the detected shape)
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    # first vertex of the polygon, used to place the shape-type label text
    x = approx.ravel()[0]
    y = approx.ravel()[1]
    perimeter = cv2.arcLength(cnt, True)  # perimeter, used for the distance estimate
At other angles you get a perspective view of the shapes.
You must use geometric transformations to neutralize the perspective effect (using an object of known shape, or the known camera angle).
Also note that working on rectified images is highly recommended; see Camera Calibration.
Edit:
Let's assume you have a square on the wall. When the camera captures an image from a non-90-degree, non-straight-on view of the object, the square is not aligned and looks out of shape, and this causes measurement error.
But you can use cv2.getPerspectiveTransform(). This function calculates the 3x3 matrix M of a perspective transform.
After that, use warped = cv2.warpPerspective(img, M, (w, h)) to apply the perspective transformation to the image. Now the square (in the warped image) looks like a 90-degree straight-on view, and your current code works well on the output (warped) image.
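As a rough sketch of those two calls (the corner coordinates and the output size below are made-up values, and img is assumed to be the current camera frame):

import cv2
import numpy as np

# src_pts: the four detected corners of the square in the camera image
# (e.g. taken from approxPolyDP), ordered to match dst_pts.
# dst_pts: where those corners should land in a 90-degree straight-on view.
side = 200  # assumed side length of the rectified square, in pixels
src_pts = np.float32([[210, 120], [430, 150], [420, 380], [200, 350]])
dst_pts = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

M = cv2.getPerspectiveTransform(src_pts, dst_pts)   # 3x3 perspective matrix
warped = cv2.warpPerspective(img, M, (side, side))  # fronto-parallel view
# now run the existing perimeter / edge-count measurement on `warped`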
Excuse my explanation if it is unclear; maybe these blog posts can help you:
4 Point OpenCV getPerspective Transform Example
Find distance from camera to object/marker using Python and OpenCV

Generating density map for tree growth rings

I was just wondering if someone knows of any papers or resources on generating synthetic images of growth rings in trees. I'm thinking of 2D scalar fields or some other data representation which can then be used to render growth-ring-like images :)
Thanks!
I've never done or heard about this...
If you need a simulation, then search biology/botany sites instead.
If you just need visually close results, then I would:
make a polygon covering the cut (a circle/oval-like shape)
start with a circle and, when everything works, try adding some random distortion or use an ellipse
create a 1D texture with the density
it will be used to fill the polygon via a triangle fan. So first find an image of the tree type you want to generate.
Analyze the color and intensity as a function of diameter: extract a pie-like slice (or a thin rectangle)
and plot a graph of the R,G,B values to see how the rings are shaped.
Then create a function that approximates that (or use piecewise interpolation) and create your own texture as a function of tree age. In this way you can interpolate both the color and the density of the rings.
My example shows that for this tree the color stays the same and only its intensity changes. In that case you do not need to approximate all 3 channels. The bumps are a bit noisy due to another texture layer (ignore this at the start). You can use:
intensity = A*|cos(pi*t)| as a start
A is brightness
t is age in years/cycles (and also the scaled x coordinate in your 1D texture)
So take the base color R,G,B, multiply it by the intensity for each t, and fill the texture pixel with this color. You can add some randomness to the ring period (pi*t), and the scale can be matched more closely. This is linear growth, so you can use exponential growth instead, or interpolate so the bumps per unit length are affected by age (distance from t=0). A small sketch of this texture follows the notes below.
now just render the polygon
The midpoint is the t=0 coordinate in the texture; each vertex of the polygon is the t=full_age coordinate in the texture. So render the triangle fan with these texture coordinates. If you need a closer match (rings are not the same thickness along the perimeter), then you can convert this to a 2D texture.
[Notes]
You can also do this incrementally, one ring per iteration. The next ring polygon is the last one enlarged or scaled by scale > 1 (plus some randomness), but this needs to be rendered as a QUAD STRIP. You can have a static texture for a single ring and interpolate just the density and overall brightness:
radius(i)=radius(i-1)+ring_width=radius(i-1)*scale
so:
scale=(radius(i-1)+ring_width)/radius(i-1)
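Here is a small sketch of the 1D texture idea, assuming NumPy; the base color, ring count, and jitter amount are made-up parameters, and the intensity follows the A*|cos(pi*t)| form with a little randomness added to the ring period:

import numpy as np

def ring_texture(base_rgb=(150, 105, 60), years=20, samples_per_year=32,
                 brightness=1.0, jitter=0.05, seed=0):
    # t is the age coordinate in years; it doubles as the x coordinate of the texture
    rng = np.random.default_rng(seed)
    n = years * samples_per_year
    t = np.linspace(0.0, years, n)
    # accumulated phase noise so the ring period is not perfectly regular
    t = t + rng.normal(0.0, jitter, size=n).cumsum() / samples_per_year
    intensity = brightness * np.abs(np.cos(np.pi * t))   # intensity = A*|cos(pi*t)|
    texture = intensity[:, None] * np.array(base_rgb)    # N x 3 RGB strip
    return np.clip(texture, 0, 255).astype(np.uint8)

tex = ring_texture()  # sample this radially (t=0 at the center) when filling the polygon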

UVW mapping for triangle mesh with 2d jpeg texture

I recently downloaded a 3d triangle mesh (.obj format) off of Turbosquid that came with a 2D jpeg image as a texture. I plan on using this mesh in a program I am developing where I am writing my own code from scratch to parse the .obj file and then texture and render the mesh.
My program can currently handle doing this just fine in most cases but there are a couple of things off with this particular .obj file that I don't know how to handle.
1) The UV coordinates are not in the range [0,1]. 0 is still the minimum value but there seems to be no upper bound. I assume this is meant to indicate that the texture wraps around the mesh more than once, so I've decided to extract the decimal value for each coordinate and use that. So for each coordinate I'm currently doing the following:
double u = ReadInValue();
double v = ReadInValue();
u = u - (int)u;
v = v - (int)v;
So a UV coord that's [1.35, 3.29] becomes [0.35, 0.29]. The texture still looks a bit off when applied so I'm not sure if this is the right thing to be doing.
2) There is an extra W coordinate. I realize that if I were dealing with a 3D volumetric texture file, the W coordinate would function in the same way as the UV coordinates and would simply be used to look up the value in the 3rd dimension. However, the texture file I am given is two-dimensional. So what do I do with this extra W coordinate? Can I simply ignore it? Do I have to divide the UV coordinates by the W term (as if it were a homogeneous coordinate)? I'm not quite sure what to do.
1) You can't extract just the fractional part of the texture coordinates and expect it to work. This will break for triangles that go over [1,1] or below [0,0]. For example, an edge with vertex UVs [0,0] and [5,5] should mean that the texture wraps 5 times, but in your computation both vertices end up at [0,0]. GPUs have no problem with UVs bigger than 1.0 or even negative ones (that is what the repeat wrap mode is for), so just use the values you have.
2) We don't know exactly what is in your model; it could have been made for a 3D texture, which would explain the 3D coordinates, but since you said it came with a 2D texture I don't think that's the case. I'd suggest following the first answer: use just the u,v coordinates and see what you get.
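To make point 1 concrete, here is a minimal sketch in Python/NumPy of a software texture lookup (names are illustrative, not from the question): the wrap belongs at sample time, after the UVs have been interpolated across the triangle, not at the vertices.

import numpy as np

def sample_repeat(texture, u, v):
    # texture: H x W x 3 array; u, v are the UVs interpolated at the pixel
    h, w = texture.shape[:2]
    uf = u - np.floor(u)          # fractional part; also correct for negative UVs
    vf = v - np.floor(v)
    x = min(int(uf * w), w - 1)
    y = min(int(vf * h), h - 1)
    return texture[y, x]

# Taking the fractional part at the vertices instead (u - (int)u before
# interpolation) maps an edge with UVs [0,0] -> [5,5] to [0,0] -> [0,0],
# losing the intended 5x repeat.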

Fast vertex computation for near skeletal animation

I am building a 3D flower mesh through a series of extrusions (working in a Unity 4 environment). For each flower there are splines that define each branch, the petals, leaves, etc. Around each spline there is a shape that is extruded along that spline, and a thickness curve that defines, at each point on the spline, how thick the shape should be.
I am trying to animate this mesh using parameters (i.e. look towards sun, bend with the wind).
I have been working on this for nearly two months, and my algorithm boiled down to basically reconstructing the flower every frame by iterating over the shape for each spline point, transforming it into the current space and calculating new vertex positions out of it (a version of parallel transport frames).
for each vertex:
v = p + q * e * w
where
p = point on the path
e = vertex position in the local space of shape being extruded
q = quaternion to transform into the local space of p (oriented by its direction towards the next path point)
w = width at point p
I believe this is as small a step as it gets for extruding the model. I basically need to do this once for each vertex in the model.
I have hit the point where this is too slow for my current scene. I need 5-6 flowers with a total of around 60k vertices, and I concluded that this is the bottleneck.
I can see that this is a similar problem to skeletal animation, where each joint would control a cross-section of the extruded shape. It is not a very direct question, but I'm wondering if someone can elaborate on whether I can borrow some techniques from skeletal animation to speed up the process. From my current perspective, I just don't see how I can avoid at least one calculation per vertex, and in that case I might as well just rebuild the mesh every frame.

How to map points in a 3D-plane into screen plane

I have been given an assignment to project an object in 3D space onto a 2D plane using simple graphics in C. A cube is placed at a fixed position in 3D space, and there is a camera placed at a position with coordinates x, y, z, looking at the origin, i.e. (0,0,0). Now we have to project the cube's vertices onto the camera plane.
I am proceeding with the following steps
Step 1: I find the equation of the plane aX+bY+cZ+d=0 which is perpendicular to the line drawn from the camera position to the origin.
Step 2: I find the projection of each vertex of the cube to the plane which is obtained in the above step.
Now I want to map those vertex positions, which I got in step 2 by projecting onto the plane aX+bY+cZ+d=0, onto my screen plane.
I don't think that simply setting the z coordinate to zero will give me the actual mapping, so any help figuring this out is appreciated.
Thanks
You can do that in two simple steps:
Translate the cube's coordinates to the camera's system (using rotation), such that the camera's own coordinates in that system are x = y = z = 0 and the cube's translated z's are > 0.
Project the translated cube's coordinates onto a 2D plane by dividing its x's and y's by their respective z's (you may need to apply a constant scaling factor here for the coordinates to be reasonable for the screen, e.g. not too small and within +/- half the screen's height in pixels). This will create the perspective effect. You can now draw pixels using these divided x's and y's on the screen, assuming x = y = 0 is the center of the screen.
This is pretty much how it is done in 3D games. If you use the cube's vertex coordinates, then you get projections of its sides onto the screen. You may then solid-fill the resulting 2D shapes or texture-map them. But for that you'll first have to figure out which sides are not obscured by others (unless, of course, you use a technique called z-buffering). You don't need that for a simple wire-frame demo, though; just draw straight lines between the projected vertices.
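A compact sketch of those two steps in Python/NumPy (the scaling constant and camera position are made-up, and the real assignment is in C, so treat this only as the math written out):

import numpy as np

def look_at_origin(cam_pos):
    # build a camera basis whose z axis points from the camera toward the origin
    # (assumes the camera is not placed directly above the origin)
    fwd = -cam_pos / np.linalg.norm(cam_pos)
    right = np.cross([0.0, 0.0, 1.0], fwd)
    right /= np.linalg.norm(right)
    up = np.cross(fwd, right)
    return np.vstack([right, up, fwd])        # world -> camera rotation

def project(points, cam_pos, scale=500.0):
    cam_pos = np.asarray(cam_pos, float)
    R = look_at_origin(cam_pos)
    cam = (np.asarray(points, float) - cam_pos) @ R.T   # step 1: into camera space, z > 0
    return scale * cam[:, :2] / cam[:, 2:3]             # step 2: divide x and y by z

cube = [[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
print(project(cube, cam_pos=[4.0, 3.0, 2.0]))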
