When I get the meshes from SpatialAwarenessMeshObserver, I find that Mesh.Colors is null. Can I get the color of the real world through MRTK?
The triangle meshes provided by Spatial Awareness only represent the geometry of the environment; the actual color of the real world cannot be sampled from them.
It looks like you can pass multiple things into the vertex buffer in addition to the position, such as color. What attributes are used by production game systems in complex environments? What is a good example? Some things that come to mind:
velocity/torque
mass/density
temp/energy
emission/absorption
Is there a common set of things?
There is no common set beyond positions, texture coordinates, and normals, plus maybe vertex colors, binormals, and tangents. Everything else is game specific.
Most games don't use shaders for physics, so velocity, torque, mass, density, temperature, energy, emission, and absorption are not common inputs to a shader.
Though per-vertex attributes are very game specific, I am listing a few below, grouped by category (a minimal vertex-layout sketch follows after the list).
Geometric data
Position
Normals
Texture coordinates (multiple, based on number of textures)
Tangent, Bitangent (for normal map calculations)
Joint weight (joint id, weight)
Joint transform matrix (transformation matrix for joints)
Level of detail (tessellation)
Material data
Vertex color
reflection value
refraction value
Various light info (Emissive, ambient and other methods)
Physics data
mass / density
force
velocity
Particle data
index
age
lifetime
size
velocity
angular velocity
Please feel free to keep updating this space.
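To make these categories concrete, here is a minimal sketch (C++, with hypothetical field names) of the kind of interleaved vertex layout many engines use; in practice a material only packs the attributes it actually needs, and skinning or LOD data may live in separate streams.

#include <cstdint>

// Hypothetical interleaved vertex layout; real engines pick and pack per material.
struct Vertex
{
    float    position[3]; // object-space position
    float    normal[3];   // shading normal
    float    tangent[4];  // xyz tangent, w = handedness for reconstructing the bitangent
    float    uv0[2];      // base/albedo texture coordinates
    float    uv1[2];      // lightmap or detail texture coordinates
    uint8_t  color[4];    // per-vertex color (RGBA)
    uint8_t  joints[4];   // skinning: joint indices
    uint8_t  weights[4];  // skinning: normalized joint weights
};

The bitangent is usually not stored at all but reconstructed in the shader as cross(normal, tangent.xyz) * tangent.w.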
I am looking for an algorithm for the following problem:
Given:
A 3D triangle mesh. The mesh represents a part of the surface of the earth.
A polyline (a connected series of line segments) whose vertices are always on an edge or on a vertex of a triangle of the mesh. The polyline represents the centerline of a road on the surface of the earth.
I need to calculate and display the road, i.e. add half of the road's width on each side of the centerline, calculate the resulting vertices in the corresponding triangles of the mesh, fill the area of the road, and outline the sides of the road.
What is the simplest and/or most effective strategy to do this? How do I store the data of the road most efficiently?
I see 2 options here:
render thick polyline with road texture
While rendering the polyline you need a TBN matrix, so use:
polyline tangent as tangent
surface normal as normal
binormal=tangent x normal
shift the actual point p to (where d is half of the road width)
p0=p+d*binormal
p1=p-d*binormal
and render the textured line (p0,p1); a minimal sketch of this offsetting follows below. This approach is not a precise match to the surface mesh, so you need to disable depth testing or use some sort of blending. Also, on sharp turns it can miss some parts of the curve (in that case you can render a rectangle or disc instead of a line).
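A minimal sketch of this offsetting, assuming a vec3 type with the usual cross/normalize helpers (all names here are hypothetical) and d = half of the road width:

// emit the left/right road-edge points for the control point cur
void road_slice(const vec3 &prev, const vec3 &cur, const vec3 &next,
                const vec3 &surface_normal, float d,
                vec3 &p0, vec3 &p1)
{
    vec3 tangent  = normalize(next - prev);                      // polyline tangent at cur
    vec3 binormal = normalize(cross(tangent, surface_normal));   // sideways direction
    p0 = cur + binormal * d;                                     // left road edge
    p1 = cur - binormal * d;                                     // right road edge
}

Consecutive (p0,p1) pairs can then be rendered as a textured triangle strip; a small depth bias (or the disabled depth test mentioned above) keeps it from z-fighting with the terrain.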
create the mesh by shifting the polyline to each side by half the road width
This produces a road mesh that accurately fits the surface, but due to your constraints the shape of the road can be very distorted in some cases without re-triangulating the mesh. I see it like this:
for each segment of the road, cast 2 lines shifted by half the road width (green, brown)
find their intersections (aqua dots) with the mesh edge shared with the current road control point (red dot)
obtain the average point (magenta dot) of the intersections and use that as the road mesh vertex. If one of the points lies outside the shared edge, ignore it. If both intersections are outside the shared edge, find the closest intersection with a different edge.
As you can see, this can lead to serious road-thickness distortions in some cases (big differences between the intersection points, or one of the intersection points lying outside the surface mesh edge).
If you need accurate road thickness, then use the intersection of the cast lines as the road control point instead. To make that possible, either use blending or disable depth testing while rendering, or add this point to the surface mesh by re-triangulating it. Of course such an action will also affect the road mesh, so you need to iterate a few times ...
Another way is to use a blended texture for the road (like sprites) and compute the texture coordinates for the control points. If the road is too thick, then thin it by shifting the texture coordinates ... To make this work you need to select the farthest intersection point instead of the average ... Compute the real half-width of the road and from that compute the texture coordinate.
If you get rid of the limitation (for the road mesh) that road vertex points lie on surface mesh edges or vertices, then you can simply use the intersection of the shifted lines alone. That gets rid of the thickness artifacts and simplifies things a lot.
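Without that limitation, the road vertex at each control point is just the intersection of the two neighbouring offset segments (a miter join). A 2D sketch of the intersection, assuming the segments have been projected into the local tangent plane first (vec2 and the field names are hypothetical):

#include <cmath>

// Intersection of the lines a + s*da and b + t*db; returns false if they are parallel.
bool line_intersect(vec2 a, vec2 da, vec2 b, vec2 db, vec2 &out)
{
    float denom = da.x * db.y - da.y * db.x;     // 2D cross product of the directions
    if (std::fabs(denom) < 1e-9f) return false;  // parallel offset segments
    float s = ((b.x - a.x) * db.y - (b.y - a.y) * db.x) / denom;
    out.x = a.x + s * da.x;
    out.y = a.y + s * da.y;
    return true;
}

For each control point, offset the incoming and the outgoing segment by half the road width to one side, intersect them with this routine to get the left road vertex, then repeat with the opposite offset for the right one; if the segments are parallel, the offset point itself is the answer.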
Image morphing is mostly a graphic-design effect that adapts one picture into another using points chosen by the artist, who matches key zones (the eyes, for example) of one portrait with the corresponding zones of the other; some kind of algorithm then adapts the entire picture to change from one into the other.
I would like to do something similar with a shader that can load any 2 images, automatically choose zones of the most similar colors in corresponding regions of the pictures, and morph the two pictures in real time. Perhaps a shader-based version would be a lot faster at the task? Except I don't understand how it works at all.
Please don't worry about giving a complete explanation of the process; it would be great if you have even some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images have the same resolution, or trilinear interpolation (bilinear in space plus linear in the blend parameter) if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution, something like this:
// linear cross-fade of two same-size images; t=<0,1>, img is the output
for (int y=0;y<img1.height;y++)
 for (int x=0;x<img1.width;x++)
  img.pixel[x][y]=(1.0-t)*img1.pixel[x][y] + t*img2.pixel[x][y]; // per-pixel blend
where img1,img2 are the input images and img is the output. Beware that t is a float, so you need to cast (typecast) to avoid integer rounding problems, or use the scale t=<0,256> and correct the result by a right shift of 8 bits or by /256. For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first.
All this can be done very easily in a fragment shader. Just bind img1,img2 to texture units 0,1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass through the texture and vertex coordinates, and on the main-program side you just draw a single quad covering the final image output... (a CPU-side sketch of the different-resolution case follows below).
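For the different-resolution case this is what the hardware sampler does for you; on the CPU it would look roughly like this sketch (image, rgbaf and lerp are hypothetical helpers):

// bilinearly sample img at normalized coordinates u,v = <0,1>
rgbaf sample(const image &img, float u, float v)
{
    float x = u * (img.width - 1), y = v * (img.height - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = (x0 + 1 < img.width ) ? x0 + 1 : x0;
    int y1 = (y0 + 1 < img.height) ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;
    rgbaf top    = lerp(img.pixel[x0][y0], img.pixel[x1][y0], fx);
    rgbaf bottom = lerp(img.pixel[x0][y1], img.pixel[x1][y1], fx);
    return lerp(top, bottom, fy);
}

// cross-fade into an output image img of its own resolution
for (int y = 0; y < img.height; y++)
 for (int x = 0; x < img.width; x++)
 {
    float u = x / float(img.width - 1), v = y / float(img.height - 1);
    img.pixel[x][y] = lerp(sample(img1, u, v), sample(img2, u, v), t);
 }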
morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the two. For example something like this: Morph a cube to coil. This is suited for vector graphics; you just need point correspondence and then the interpolation is similar to #1.
for (int i=0;i<points;i++)
 {
 // interpolate the i-th point between the two input shapes
 p[i].x=(1.0-t)*p1[i].x + t*p2[i].x;
 p[i].y=(1.0-t)*p1[i].y + t*p2[i].y;
 }
where p1[i],p2[i] are the i-th points from each input geometry set and p[i] is the i-th point of the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like Bezier curves) so the morph looks nicer. For example see
Path generation for non-intersecting disc movement on a plane
To accomplish this on the GPU you could use a geometry shader (or maybe even a tessellation shader): pass both polygons as a single primitive, and the geometry shader interpolates the actual polygon and emits it for the rest of the pipeline.
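A sketch of the trajectory idea: run each point along a quadratic Bezier instead of a straight line, where the middle control point c[i] is chosen per point (for example pushed sideways off the straight path); c is the only name here that does not appear above.

// quadratic Bezier between matching points p1[i] and p2[i] through control point c[i]
for (int i = 0; i < points; i++)
{
    float s = 1.0f - t;
    p[i].x = s*s*p1[i].x + 2.0f*s*t*c[i].x + t*t*p2[i].x;
    p[i].y = s*s*p1[i].y + 2.0f*s*t*c[i].y + t*t*p2[i].y;
}

At t=0 this reproduces the first shape and at t=1 the second, exactly like the linear version, but the in-between frames bow toward c[i].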
morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then handle each pixel as a particle and create its path from its position in img1 to its position in img2 with parameter t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position and you interpolate both, because there is only a very slim chance that you will get exact color matches in matching counts (the histograms of the two images would have to be identical, which is improbable).
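One crude way to get the correspondence is to turn every pixel of each image into a particle and pair the particles by rank after sorting on some color key such as luminance; a sketch, with image, rgbaf and luminance() as hypothetical helpers:

#include <vector>
#include <algorithm>

struct particle { float x, y; rgbaf color; };

// one particle per pixel, sorted by a color key
std::vector<particle> swarm(const image &img)
{
    std::vector<particle> s;
    for (int y = 0; y < img.height; y++)
        for (int x = 0; x < img.width; x++)
            s.push_back({ float(x), float(y), img.pixel[x][y] });
    std::sort(s.begin(), s.end(), [](const particle &a, const particle &b)
              { return luminance(a.color) < luminance(b.color); });
    return s;
}

Build one swarm per image (resampling so both have the same pixel count), pair the i-th particles of the two swarms, and at render time interpolate their positions and colors with the same (1.0-t)/t weights as in #1.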
hybrid morphing
It is any combination of #1,#2,#3
I am sure there are more methods for morphing; these are just the ones I know of. Also, morphing can be done not only in the spatial domain...
I understand the difference between Lambert and Phong in general computer graphics. I also understand how we can change and create our own materials using three.js. But I cannot work out the difference between MeshLambertMaterial and MeshPhongMaterial in their default states.
I have tried switching between them in a scene with one directional light source and 125 spheres, and I cannot see any difference whatsoever. Three.js is being used in a chapter of my book, so I need to make sure all information is accurate and precise.
Thanks,
Shane
Shane, it's not your fault that you're confused.
Lambert is an illumination model (with a physical basis) for the light reflected off a surface, expressed in terms of the incoming illumination's direction with respect to the surface normal at the point of incidence.
Phong is a more nuanced shading model (albeit a more hacky one) which says that light is composed of ambient + diffuse + specular components. It treats the ambient component as constant everywhere (hack!), the diffuse component using the Lambertian model above, and the specular component using a power-law falloff (which is a clever hack, roughly approximating actual BRDFs).
The word "Phong" is also an interpolation method (when used in the context of modern triangle-based rendering pipelines). When computing the illumination at a pixel in the interior of a triangle, you have two choices:
Gouraud shading: Compute the color at the three vertices and interpolate in the interior, using barycentric coordinates, or
Phong shading: Using the normal at the three vertices, interpolate the normal in the interior and compute the shading using this interpolated normal at each pixel.
This is why (as @RayToal pointed out), if your specular "highlight" falls in the interior of a triangle, none of the vertices will be bright, but Phong shading will interpolate the normal and there will be a bright spot in the interior of your rendered triangle.
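For reference, here is the classical Phong reflection model described above as a minimal per-point sketch; vec3, dot, reflect, normalize, max and pow are assumed to behave as in GLSL, and N, L, V are unit vectors pointing away from the shaded point (the normal, toward the light, toward the viewer).

// ambient + Lambert diffuse + Phong specular for a single light
vec3 phong(vec3 N, vec3 L, vec3 V,
           vec3 ambient, vec3 diffuse, vec3 specular, float shininess)
{
    float ndotl = max(dot(N, L), 0.0f);                  // Lambert term
    vec3  R     = reflect(-L, N);                        // mirror direction of the light
    float spec  = (ndotl > 0.0f) ? pow(max(dot(R, V), 0.0f), shininess) : 0.0f;
    return ambient + diffuse * ndotl + specular * spec;  // the three Phong components
}

Evaluating this once per vertex and interpolating the resulting color across the triangle is Gouraud shading; interpolating N (and V) and evaluating it per pixel is Phong shading.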
I am assuming you want the exact difference between MeshLambertMaterial and MeshPhongMaterial as implemented in three.js.
You have to differentiate between the shading model and the illumination model. Three.js does not implement 'pure' Phong or Lambert models.
For MeshLambertMaterial, the illumination calculation is performed at each vertex, and the resulting color is interpolated across the face of the polygon. ( Gouraud shading; (generalized) Lambert illumination model )
For MeshPhongMaterial, vertex normals are interpolated across the surface of the polygon, and the illumination calculation is performed per fragment. ( Phong shading; (generalized) Phong illumination model )
You will see a clear difference when you have a pointLight that is close to a face -- especially if the light's attenuation distance is less than the distance to the face's vertices.
For both materials, in the case of FlatShading, the face normal replaces each vertex normal.
three.js.r.66
In computer graphics it is very common to confuse the Phong reflection model with Phong shading. While the former is a model of the local illumination of points, like the Lambertian model, the latter is an interpolation method, like Gouraud shading. In case you find it hard to differentiate between them, here is a list of detailed articles on each of these topics.
http://en.wikipedia.org/wiki/List_of_common_shading_algorithms
If you know a little GLSL, I think the best thing for you to do is to look at the vertex/fragment shaders generated in both cases and look for the differences. You can use http://benvanik.github.com/WebGL-Inspector/ to get the code of the programs, or put a console.log() at the right place in the three.js sources (look for buildProgram; you should output prefix_fragment + fragmentShader and prefix_vertex + vertexShader to see the program code).
Also, you can have a look to the building blocks used to create both shaders:
Lambert: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2036
Phong: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2157
It may be more readable than the generated program code.
I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill with a color, not a file-based texture.
For an example, take a rounded rectangle whose vertices are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color, i.e. divide the bounded area into four vertical bands and give each a different color. You don't have to provide the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I'm wanting to also learn what should and shouldn't be done on top of what can and can't be done.
Not too much but here's a start:
The color fill is accomplished by using a shader. Reimer's XNA Tutorials on pixel shaders is a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To add different colors to different triangles you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing it to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.
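As a rough, language-agnostic sketch of that last option: a vertex format that carries its own color, so each band's triangles can be tinted without swapping buffers or effect parameters. XNA's built-in VertexPositionColor plays this role; the C-style layout below is only illustrative.

#include <cstdint>

struct ColoredVertex
{
    float   position[3]; // x, y on the plane; z kept constant for the "2D" case
    uint8_t color[4];    // RGBA assigned per vertex; the rasterizer interpolates it
};

// Fill idea: triangulate the area (the quarter-circle fans plus the inner quads),
// give every vertex of a triangle the color of the band it falls in, upload a
// single vertex buffer, and draw with a shader that simply outputs the
// interpolated vertex color.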