The structure here is
osg::MatrixTransform
|
osg::Geode
|
several drawables
How can I get an AABB bounding box from an osg::MatrixTransform?
There's no direct method: MatrixTransform only exposes a getter for the bounding sphere, while a bounding box getter is available only on the Drawable class and its derivatives.
With your scene graph structure you could collect all the drawables and expand a bounding box to include every drawable's BB using osg::BoundingBox::expandBy().
This will give you a BB which includes all of the others, in the drawables' coordinates. If you need world coordinates, you'll have to apply the MatrixTransform (and any other transformations along the node path to the root of the graph).
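A minimal sketch of that approach, assuming geode points at the osg::Geode in your graph (on older OSG versions the drawable getter is getBound() instead of getBoundingBox()):
#include <osg/Geode>
#include <osg/BoundingBox>

osg::BoundingBox bb;
for (unsigned int i = 0; i < geode->getNumDrawables(); ++i)
    bb.expandBy(geode->getDrawable(i)->getBoundingBox());
// bb now encloses all drawables, in the drawables' coordinates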
To calculate the bounding box in the coordinate frame of an osg::MatrixTransform t:
#include <osg/ComputeBoundsVisitor>
osg::ComputeBoundsVisitor cbv;
t->accept(cbv);
osg::BoundingBox bb = cbv.getBoundingBox(); // in local coords.
You can then, for example, get the size and center in local coordinates:
osg::Vec3 size(bb.xMax() - bb.xMin(), bb.yMax() - bb.yMin(), bb.zMax() - bb.zMin());
osg::Vec3 center(bb.xMin() + size.x()/2.0f, bb.yMin() + size.y()/2.0f, bb.zMin() + size.z()/2.0f);
To convert to world coordinates, you need the localToWorld transform for the node path from t to the root; then you can transform any of the coordinates into the world reference frame:
osg::Matrix localToWorld = osg::computeLocalToWorld(t->getParentalNodePaths().front());
osg::Vec3 centerWorld = center * localToWorld;
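Note that transforming the center only gives you a point. If you need a world-space AABB, one sketch is to expand a fresh box by all eight transformed corners of bb, since a rotated box's axis-aligned extents change:
osg::BoundingBox bbWorld;
for (unsigned int i = 0; i < 8; ++i)
    bbWorld.expandBy(bb.corner(i) * localToWorld); // corner() enumerates the 8 box corners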
point[0] = (0,1,1)
point[1] = (1,1,1)
point[2] = (0,0,1)
point[3] = (1,0,1)
Each point above maps to an index in the visualization below:
0----------1
| |
| |
| |
3----------2
How can I determine whether these points are ordered clockwise (CW) or counter-clockwise (CCW)?
You can't.
If the points are not coplanar, it is even impossible to define an orientation.
If the points are coplanar, you can look at their plane from both sides.
If you want this information with respect to an observer, project the vertices to the viewing plane (to reduce to 2D) and compute the algebraic area by the shoelace formula. The sign tells you the orientation.
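A minimal sketch of the 2D step after projection (assuming the polygon is simple and stored as x/y pairs):
#include <cstddef>
#include <vector>
struct Vec2 { double x, y; };
// Shoelace formula: positive result = CCW, negative = CW (y-up convention)
double signedArea(const std::vector<Vec2>& p)
{
    double a = 0.0;
    for (std::size_t i = 0, j = p.size() - 1; i < p.size(); j = i++)
        a += p[j].x * p[i].y - p[i].x * p[j].y;
    return 0.5 * a;
}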
You can, but only with respect to some direction ...
Taking your example: if you are looking at it as-is, it's CW; however, if you look at it from behind, it's CCW ... and if you look from the side (perpendicularly, so the face is projected to a line) we cannot tell.
So the usual approach is to take the cross product of two edge vectors. This gives you the normal vector of the face, whose direction depends on the CW/CCW winding. Then compare the result to a reference direction with a dot product. So:
vec3 p0,p1,p2; // 3 vertices of your face, not on a single line
vec3 dir;      // reference direction
float winding = dot( cross( p1-p0 , p2-p1 ) , dir );
Now the sign of winding tells you whether the face is CW or CCW with respect to dir. Which is which depends on your notation. However, this works only for convex polygons (or in a convex part of a concave one)!!!
In computer graphics the reference direction is usually the camera view direction. So once in the camera's local coordinate system the direction is the z axis, and inspecting the z coordinate of the cross product is enough. This is the basis of face culling (skipping polygons with the wrong winding; in OpenGL it is enabled with GL_CULL_FACE)...
You can look at the reference dir as an axis of rotation around which you are determining whether the points are CW or CCW ...
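A self-contained sketch of the same test in C++ (the vec3 helpers are minimal stand-ins, and the sample points follow the indices of the visualization above):
#include <cstdio>
struct vec3 { float x, y, z; };
vec3 operator-(vec3 a, vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
vec3 cross(vec3 a, vec3 b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
int main()
{
    vec3 p0{0,1,1}, p1{1,1,1}, p2{1,0,1}; // top-left, top-right, bottom-right corners
    vec3 dir{0,0,1};                      // reference: pointing from the face toward the viewer
    float winding = dot(cross(p1-p0, p2-p1), dir);
    std::printf("%s\n", winding < 0.0f ? "CW" : "CCW"); // prints CW for this ordering
    return 0;
}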
I am trying various visualizations for an igraph in R (version 3.3.1).
Currently my visualization is as shown below: two groups of nodes (blue and green) in a circular layout.
Circular Layout
visNetwork(data$nodes,data$edges) %>% visIgraphLayout(layout="layout_in_circle")
Now I want a semicircle structure instead of the full circle in the picture: all blue nodes form one semicircle and the green nodes another, with the two semicircles separated by a small distance. How can I achieve this? I found that the grid package has an option for a semicircle, but I couldn't make it work with igraph. Please provide some pointers.
The layout argument accepts an arbitrary matrix with two columns and N rows if your graph has N vertices, so all you need to do is create a matrix of coordinates that correspond to a semicircle. You can make use of the fact that a vertex at angle alpha on a circle with radius r centered at (0, 0) is found at (r * cos(alpha), r * sin(alpha)). Since R's trigonometric functions work in radians, alpha should be given in radians, spaced evenly between 0 and pi (which corresponds to 180 degrees).
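A minimal sketch of the idea in plain igraph (an example ring graph, with r = 1):
library(igraph)
g <- make_ring(10)                          # example graph
alpha <- seq(0, pi, length.out = vcount(g)) # angles spaced evenly over 0..180 degrees
coords <- cbind(cos(alpha), sin(alpha))     # (r*cos(alpha), r*sin(alpha)) with r = 1
plot(g, layout = coords)                    # every vertex lands on a semicircle
For the two separated semicircles, build one angle sequence per group (e.g. 0..pi-gap for the blue nodes and pi+gap..2*pi for the green ones) and stack the two coordinate blocks in vertex order. With visNetwork you can pass the resulting N-by-2 matrix through visIgraphLayout(layout = "layout.norm", layoutMatrix = coords), assuming your visNetwork version supports the layoutMatrix argument.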
I want to write a transformer for converting SVG basic types into WorldWind shapes like polylines, polygons, etc.
Since SVG gives coordinates on a canvas and I need to convert them to a Position, I am looking for a method in the API which can do this.
I see there is Vec4 for a point, but I am not sure how it relates to canvas coordinates.
Will it be a correct representation if, for say point x=100, y=100, I do the following:
Vec4 vec = new Vec4(x, y, 0.0f);
Globe g = view.getGlobe();
Position p = g.computePositionFromPoint(vec);
Will the resulting Position correspond to the point (x=100, y=100) on the screen? That is, if I bring my mouse to x=100, y=100 in the current view, the position there should be p.
Globe.computePositionFromPoint(vec) takes Cartesian coordinates as input, not screen coordinates. To go from screen coordinates to position you want something like this:
Vec4 screenCoords = new Vec4(x, y);
Vec4 cartesian = view.unProject(screenCoords);
Globe g = view.getGlobe();
Position p = g.computePositionFromPoint(cartesian);
A better way to do this would be the following (this snippet presumably comes from a drag handler, where a DragContext is available):
Point screenPoint = dragContext.getPoint();
View view = dragContext.getView();
// compute the position once and reuse it, instead of computing it twice
Position picked = view.computePositionFromScreenPoint(screenPoint.getX(), screenPoint.getY());
double latitude = picked.getLatitude().degrees;
double longitude = picked.getLongitude().degrees;
Position objectPosition = new Position(LatLon.fromDegrees(latitude, longitude), 0);
Then you can set the position of the object to objectPosition.
If you want to go from canvas to 3D, you're probably looking for View#computeRayFromScreenPoint(double, double). This will give you a ray (in Vec4 format) from the eye through the given pixel on the canvas. You'll have to intersect this ray with something to generate a meaningful 3D point, since each pixel is an infinite line in space.
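For example, a sketch that intersects the pick ray with the globe at altitude 0 (Line and Intersection are the standard WorldWind types, but treat the exact calls as assumptions):
Line ray = view.computeRayFromScreenPoint(x, y);
Intersection[] hits = view.getGlobe().intersect(ray, 0.0); // against the surface at altitude 0
if (hits != null && hits.length > 0)
{
    Vec4 point = hits[0].getIntersectionPoint();
    Position pos = view.getGlobe().computePositionFromPoint(point);
}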
About Globe#computePointFromPosition:
Position - A latitude, longitude, altitude position, with the altitude being in MSL (alt above sea level)
Vec4 - Just a directional vector (often used for Cartesian coordinates). For the Cartesian coordinate system, the z-axis comes out of the earth through 0deg/0deg lat/lon, x-axis through 0deg/90deg, and y-axis through the north pole.
As Chris mentioned, Globe#computePointFromPosition() and Globe#computePositionFromPoint() switch from a Position to a Cartesian Vec4 and vice versa, using your globe as the reference frame.
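A quick round-trip sketch of the pair (coordinates chosen arbitrarily for illustration):
Position pos = Position.fromDegrees(40.0, -105.0, 0.0);    // lat, lon, meters above MSL
Vec4 cartesian = globe.computePointFromPosition(pos);      // Position -> Cartesian Vec4
Position back = globe.computePositionFromPoint(cartesian); // ... and back again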
I have a problem with creating 3D cylinders (without OpenGL). I understand that a mesh is used to create the cylinder surface and triangle fans are used to create the top and bottom caps. I have already implemented the mesh but not the planar triangle fans, so currently my 3D object looks like a cylinder without the bottom and top cap.
I believe this is what I need to do in order to create the bottom and top caps. First, find the center point of the cap circle. Second, find the vertices of the mesh's boundary ring. Third, using the center point and two adjacent vertices, create a triangle. Fourth, repeat the steps until a planar circle is created.
Are the above steps a sufficient way of creating the caps or is there a better way? And how do I find the vertices of the mesh so I can create the triangle fans?
First some notes:
you did not specify your platform
gfx interface
language
not enough info about your cylinder either
is it axis aligned?
what coordinate system (Cartesian/orthogonal/orthonormal)?
need additional dimensions like color or texture coordinates?
So I can provide just generic info then
Axis aligned cylinder
choose the granularity N
number of points along your cap's circle
usually 20-36 is OK but if you need higher precision then sometimes you need even 1000 points or more
all depends on the purpose, zoom, angle and distance of view ...
and performance issues
for now let N=32
you need BR (boundary representation)
you did not specify gfx interface but your text implies BR model (surface polygons)
also no pivot point position, so I will choose the middle point of the cylinder to be (0,0,0)
the z axis will be the height of the cylinder
and the caps will be coplanar with the xy plane
so for the cylinder a set of 2 rings (the cap circles) is enough
so the points can be defined in C++ like this:
#include <cmath>          // cos, sin, M_PI

const int N=32;           // mesh complexity (points per ring)
double p0[N][3],p1[N][3]; // top and bottom rings
double a,da,c,s,r,h2;     // some temp variables
int i;
r =50.0;                  // cylinder radius
h2=100.0*0.5;             // half height of cylinder
da=2.0*M_PI/double(N-1);  // full circle; the last point repeats the first
for (a=0.0,i=0;i<N;i++,a+=da)
{
    c=r*cos(a);
    s=r*sin(a);
    p0[i][0]=c;           // top ring at z=+h2
    p0[i][1]=s;
    p0[i][2]=+h2;
    p1[i][0]=c;           // bottom ring at z=-h2
    p1[i][1]=s;
    p1[i][2]=-h2;
}
the ring points form a closed loop (p0[0]==p0[N-1])
so you do not need extra code to handle the wrap-around...
now how to draw
I can't write code for an unknown API, but
'mesh' is something like QUAD_STRIP I assume
so just add points to it in this order:
QUAD_STRIP = { p0[0],p1[0],p0[1],p1[1],...p0[N-1],p1[N-1] };
if you have an inverted-normals problem then swap p0/p1
now for the fans
you do not need the middle point (unless you have interpolation aliasing issues)
so similar:
TRIANGLE_FAN0 = { p0[0],p0[1],...p0[N-1] };
TRIANGLE_FAN1 = { p1[0],p1[1],...p1[N-1] };
if you still want the middle point then:
TRIANGLE_FAN0 = { (0.0,0.0,+h2),p0[0],p0[1],...p0[N-1] };
TRIANGLE_FAN1 = { (0.0,0.0,-h2),p1[0],p1[1],...p1[N-1] };
if you have an inverted-normals problem then reverse the point order (the middle point stays where it is)
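If your renderer has no TRIANGLE_FAN primitive (you mentioned no OpenGL), here is a sketch that emits the caps as explicit triangles instead; emit_triangle() is a placeholder for whatever your rasterizer consumes:
// fan from the first ring point; p0[N-1]==p0[0] is a duplicate, so it is skipped
for (int i=1;i+2<N;i++) emit_triangle(p0[0],p0[i],p0[i+1]); // top cap
for (int i=1;i+2<N;i++) emit_triangle(p1[0],p1[i+1],p1[i]); // bottom cap, reversed winding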
Not an axis aligned cylinder?
just use a transform matrix on your p0[],p1[] point lists to translate/rotate them to the desired position, as sketched below
the rest stays the same
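A sketch of that step (R and t are placeholders for your own rotation matrix and translation):
double R[3][3]; // your 3x3 rotation (row-major) - placeholder
double t[3];    // your translation - placeholder
for (int i=0;i<N;i++) // transform the top ring in place
{
    double q[3]={ p0[i][0], p0[i][1], p0[i][2] };
    for (int j=0;j<3;j++)
        p0[i][j] = R[j][0]*q[0] + R[j][1]*q[1] + R[j][2]*q[2] + t[j];
}
// repeat the same loop for p1[]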
Is there any example out there of an HLSL-written .fx file that splats a tiled texture with different tiles? Like this: http://messy-mind.net/blog/wp-content/uploads/2007/10/transitions.jpg. You can see there's a different tile type in each square, and there's a little blurring between them to make a smoother transition, but right now I just need to find a way to draw the tiles onto a texture. I have a 2D array of integers; each integer equals a corresponding tile type (0 = grass, 1 = stone, 2 = sand). I opened up a few HLSL examples and they were really confusing. Everything is running fine on the C++ side, but HLSL is proving to be difficult.
You can use a technique called 'texture splatting'. It mixes several textures (color maps) using another texture which contains alpha values for each color map. The texture with alpha values is an equivalent of your 2D array. You can create a 3-channel RGB texture and use each channel for a different color map (in your case: R - grass, G - stone, B - sand). Every pixel of this texture tells us how to mix the color maps (for example R=0 means 'no grass', G=1 means 'full stone', B=0.5 means 'sand, half intensity').
Let's say you have four RGB textures: tex1 - grass, tex2 - stone, tex3 - sand, alpha - mixing texture. In your .fx file, you create a simple vertex shader which just calculates the position and passes the texture coordinate on. The whole thing is done in pixel shader, which should look like this:
float tiling_factor = 10; // number of texture repetitions; you can also
                          // specify a separate factor for each texture

// samplers referenced below (declarations added so the snippet compiles)
sampler alpha_sampler;    // mixing texture (R = grass, G = stone, B = sand)
sampler tex1_sampler;     // grass color map
sampler tex2_sampler;     // stone color map
sampler tex3_sampler;     // sand color map

float4 PS_TexSplatting(float2 tex_coord : TEXCOORD0) : COLOR0
{
    float3 color = float3(0, 0, 0);
    float3 mix = tex2D(alpha_sampler, tex_coord).rgb;
    color += tex2D(tex1_sampler, tex_coord * tiling_factor).rgb * mix.r;
    color += tex2D(tex2_sampler, tex_coord * tiling_factor).rgb * mix.g;
    color += tex2D(tex3_sampler, tex_coord * tiling_factor).rgb * mix.b;
    return float4(color, 1);
}
If your application supports multi-pass rendering you should use it.
You should use a multi-pass shader approach where you render the base object with the tiled stone texture in the first pass, and on top render the decal passes with different shaders and different detail textures with separate transparent alpha maps.
(The transparency map could also be stored in your detail texture, but keeping it separate allows different tile levels and more flexibility in reusing it.)
Additionally you can use a different texture coordinate channel for each decal pass so that you do not need to hardcode your tile level.
So at minimum you need two shaders, where shader 2 is used once per decal:
Shader to render the tiled base texture
Shader to render one tiled detail texture using a separate transparency map
If you have multiple decals, z-fighting can occur and you should offset your polygons a little (very similar to basic fur rendering).
Otherwise you need a single shader which takes multiple textures and lays them on top of the base tiled texture. This solution is less flexible, but you can use one texture for the mix between the textures (the equivalent of your 2D array).