We are programming a 2D game in XNA. Our level elements are defined by polygons, which are triangulated so that we can render them easily. Now I would like to write a shader that renders the polygons as outlined textures: in the middle of the polygon you would see the texture, and toward the border it should somehow glow.
My first idea was to walk along the polygon and draw a quad on each line segment with a specific texture. This works, but it looks strange at tight corners, where the textures are forced to overlap.
My second approach was to mark all border vertices with some kind of normal pointing out of the polygon. Passing this to the shader would interpolate the normals across the edges of the triangulation, and I could use the interpolated "normal" as a value for shading. I have not been able to test it yet, but would that work? A special property of the triangulation is that all vertices are on the border, so there are no vertices inside the polygon.
Do you have a better idea for what I want to achieve?
Here is a picture of what it looks like right now with the quad solution:
You could render your object twice: a bigger, stretched version behind the first one. This is not ideal, though, since a complex object cannot be stretched uniformly to create a border.
If you have access to your screen buffer, you can render your glow components into a render target, align a full-screen quad to your viewport, and apply a full-screen 2D silhouette filter to it.
This way you gain perfect control over the edge by defining its radius, colour and blur. With additional output values, such as the RGB values from the object render pass, you can even produce different, more advanced glows.
I think RenderMonkey had some examples in its shader editor. It's definitely a good starting point to work with and try things out.
Probably you want to calculate a new list of border vertices (then the border is easy to fill, for example with a triangle strip interleaved with the original vertices). If you use a constant border width and a convex polygon, it is just:
B_new = B - (BtoA.normalised() + BtoC.normalised()).normalised() * width;
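For the convex, constant-width case, a minimal sketch of applying that formula around the whole polygon could look like this (my own illustration, assuming an ordered vertex list and a Vector2 type with a non-mutating normalised(), similar to the class used below):

#include <vector>

// Offsets every vertex of a convex polygon along the corner bisector to build the border loop.
std::vector<Vector2> calculateConvexBorder(const std::vector<Vector2>& verts, float width)
{
    const size_t n = verts.size();
    std::vector<Vector2> border(n);
    for (size_t i = 0; i < n; ++i)
    {
        const Vector2& A = verts[(i + n - 1) % n]; // previous vertex
        const Vector2& B = verts[i];               // current vertex
        const Vector2& C = verts[(i + 1) % n];     // next vertex
        Vector2 BtoA = (A - B).normalised();
        Vector2 BtoC = (C - B).normalised();
        border[i] = B - (BtoA + BtoC).normalised() * width;
    }
    return border;
}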
If the polygon is not convex, it gets more complicated. Here is my old but fairly universal solution:
// Helper function. To work correctly, v1 must come before v2 in the vertex list
// and all vertices must be wound consistently (clockwise or anti-clockwise).
float vectorAngle(Vector2 v1, Vector2 v2){
    if (!v1.isNormalised())
        v1.normalise();
    if (!v2.isNormalised())
        v2.normalise();
    float alfa = v1.dotProduct(v2);
    // rotate v1 by -90 degrees; its dot product with v2 then gives the sign of the angle
    float help = v1.x;
    v1.x = v1.y;
    v1.y = -help;
    float angle = Math::ACos(alfa);
    if (v1.dotProduct(v2) < 0){
        angle = -angle;
    }
    return angle;
}
// Normally you don't use this directly!
Vector2 calculateBorderPoint(Vector2 vec1, Vector2 vec2, float width1, float width2){
    vec1.normalise();
    vec2.normalise();
    float cos = vec1.dotProduct(vec2); // cosine of the angle between the two (normalised) vectors
    float csc = 1.0f / Math::sqrt(1.0f - cos*cos); // cosecant of that angle. NOTE: this returns NaN if the angle is 180 degrees!
    // and the rest of the magic
    Vector2 difference = (vec1 * csc * width2) + (vec2 * csc * width1);
    // If you only use convex polygons (all angles < 180, exactly 180 not allowed in this case), just return the value.
    // If not, you need some more magic.
    // Both of the following fixes need an ordered vertex list!
    // The output vector always points to the inside of the angle, so flip it for reflex angles:
    if (Math::vectorAngle(vec1, vec2) > 180.0f) // this check can only detect an angle over 180 if you use ordered vertices (all wound consistently); it also assumes vectorAngle returns degrees
        difference = -difference;
    // OK, and if the angle was exactly 180...
    // Note that this fix, too, only works if you use ordered vertices.
    if (difference.isNaN()){
        float width = (width1 + width2) / 2.0f; // if the angle is 180 and the border widths differ, you cannot get a perfect answer ;)
        difference = vec1 * width;
        // just turn the vector -90 degrees
        float swapHelp = difference.y;
        difference.y = -difference.x;
        difference.x = swapHelp;
    }
    // If you want the output outside the old polygon instead of inside, just "return -difference;"
    return difference;
}

// Use this =)
Vector2 calculateBorderPoint(Vector2 A, Vector2 B, Vector2 C, float widthA, float widthB){
    return B + calculateBorderPoint(A-B, C-B, widthA, widthB);
}
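To actually fill the border once you have these points, you can interleave the original vertices with the computed border points into a triangle strip, as mentioned at the top. A rough sketch (again my own illustration, assuming an ordered, closed vertex list and the calculateBorderPoint wrapper above):

#include <vector>

// Builds a triangle strip alternating original and border vertices: v0, b0, v1, b1, ..., closed back to the start.
std::vector<Vector2> buildBorderStrip(const std::vector<Vector2>& verts, float width)
{
    std::vector<Vector2> strip;
    const size_t n = verts.size();
    for (size_t i = 0; i <= n; ++i) // i <= n so the strip closes on itself
    {
        const Vector2& A = verts[(i + n - 1) % n];
        const Vector2& B = verts[i % n];
        const Vector2& C = verts[(i + 1) % n];
        strip.push_back(B);                                           // original polygon vertex
        strip.push_back(calculateBorderPoint(A, B, C, width, width)); // offset border vertex (negate inside calculateBorderPoint if you want it outside, as noted above)
    }
    return strip;
}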
Your second approach should be possible...
Mark the outer vertices (on the border) with 1 and the inner vertices (inside) with 0.
In the pixel shader you can then choose to highlight fragments whose interpolated value is greater than 0.9 or 0.8.
It should work.
I'm trying to draw simple scaled points in my custom graphics engine. The points are scaled in pixel space, and the radius of the points are in pixels, but the position of the points fed to the draw function are in world coordinates.
So far, everything is working great, except for a depth clipping issue. The points are of constant size, regardless of how far away they are, which is done by offsetting the vertices in projected/clip space. However, when they are close to surfaces, they partially intersect them in the depth buffer.
Since these points represent world coordinates, I want them to use the depth buffer, and be hidden behind objects that are in front of them. However, when the point is close to a surface, I want to push it toward the camera, so it doesn't partially intersect it. I think it is easier to just always do this push, regardless of the point being close to a surface. What makes the most sense to me is to just push it by its radius, so that all of its vertices are exactly far enough away to avoid clipping into nearby surfaces.
The easiest way I've found to do this is to simply subtract from the Z value in the vertex shader, after transforming into view-projection space. However, I'm having some trouble converting my pixel radius into a depth offset. Regardless of the math I use, what works close up never seems to work far away. I'm thinking maybe this is due to how the z buffer is non-linear, but could be wrong.
Currently, the closest I've been to solving this is the following:
proj_vertex_pos.z -= point_pixel_radius / proj_vertex_pos.w * 100.0
I'm honestly not sure why 100.0 helps make this work yet. I added it simply because dividing the radius by w was too small of a value. Can anyone point me in the right direction? How do I convert my pixel distance into a depth distance? Especially if the depth distance changes scale depending on which depth you are at? Or am I just way off?
The solution was to convert my pixel-space radius into world-space units, since the depth value I am offsetting (the clip-space z before the perspective divide) is still in world-space units, even after transforming by the view-projection transform. This can be done by converting pixels into a factor (factor = pixels / screen_size), then converting the factor into world-space units, which was a little more involved: I had to calculate the world-space size of the screen at the given distance, then multiply the factor by that to get world units. I can post the related code if anyone needs it. There's probably a simpler way to calculate it, but my brain always goes straight for factors.
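In case it is useful, here is a minimal CPU-side sketch of that conversion (the names and the explicit horizontal field-of-view parameter are mine, not the engine's actual API; it should mirror what the shader below does with FieldFactor):

#include <cmath>

// Converts a radius given in pixels into world units at a given distance from the camera.
// screenWidthPx: horizontal resolution, fovX: horizontal field of view in radians,
// camZDist: distance from the camera to the vertex along the camera's forward axis.
float pixelRadiusToWorld(float pixelRadius, float screenWidthPx, float fovX, float camZDist)
{
    float factor = pixelRadius / screenWidthPx;                    // fraction of the screen covered by the radius
    float frustumWidth = 2.0f * camZDist * std::tan(fovX * 0.5f); // world-space width of the view at that distance
    return factor * frustumWidth;
}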
The reason I was getting different results at different distances was mainly because I was only offsetting the z component of the clip position by the result. It's also necessary to offset the w component to make the depth offset work at any distance (linearly). However, in order to offset the w component, you first have to divide xy by w, modify w as needed, then multiply xy by the new w. This made the math fairly involved, so I changed the strategy to offset the vertex before clip space, which requires calculating the vertex's Z distance to the camera manually, but it honestly ended up being about the same amount of math either way.
Here is the final vertex shader at the moment. Hopefully the global values make sense. I did not modify this to post it, so please forgive any silliness in my comments. EDIT: I had to make some edits to this, because I was accidentally moving the vertex along the camera-Z direction instead of directly toward the camera:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // compute offset from vertex to camera
    float3 to_cam_offset = Scene.CamPos - vin.Position.xyz;
    // compute the Z distance of the camera from the vertex
    float cam_z_dist = -dot( Scene.CamZ, to_cam_offset );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space-width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * cam_z_dist * 2.0;
    // finally, push the vertex toward the camera by the world radius
    // + note: moving by radius will only work with surfaces facing the camera, since we are moving toward the camera, rather than away from the surface
    // + because of this, we also multiply by another 4, to compensate for nearby surface angles, but there is no scale that would work for every angle
    float3 offset = normalize(to_cam_offset) * (radius_world * -4.0);
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz + offset, 1.0) );
    // calculate radius of point, in clip space, from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top left, 1,-1 = bottom right, 1,1 = top right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (part of clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Here is the other version that offsets z & w instead of changing things in world space. After the edits above, this is probably the better solution:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0) );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space-width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance (ClipPos.w), which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * pin.ClipPos.w * 2.0;
    // offset depth by our world radius
    // + we scale this extra to compensate for surfaces with high angles relative to the camera (since we are moving directly at it)
    // + notice we have to make the perspective divide before modifying w, then re-apply it after, or xy will be off
    pin.ClipPos.xy /= pin.ClipPos.w;
    pin.ClipPos.z -= radius_world * 10.0;
    pin.ClipPos.w -= radius_world * 10.0;
    pin.ClipPos.xy *= pin.ClipPos.w;
    // calculate radius of point, in clip space, from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top left, 1,-1 = bottom right, 1,1 = top right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (part of clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
I've got a function to return any points at which a line segment intersects a circle (up to two results, but potentially zero):
bool Math::GetLineCircleIntersections(Point theCenter, float theRadius, Point theLineA, Point theLineB, Array<Point>& theResults)
{
    theResults.Reset();
    Point aBA = theLineB - theLineA;              // segment direction
    Point aCA = theCenter - theLineA;             // from segment start to circle center
    float aA = aBA.mX*aBA.mX + aBA.mY*aBA.mY;     // |BA|^2
    float aBBy2 = aBA.mX*aCA.mX + aBA.mY*aCA.mY;  // BA . CA
    float aC = aCA.mX*aCA.mX + aCA.mY*aCA.mY - theRadius*theRadius;
    float aPBy2 = aBBy2/aA;
    float aQ = aC/aA;
    float aDisc = aPBy2*aPBy2 - aQ;               // discriminant of the normalised quadratic
    if (aDisc < 0) return false;                  // the infinite line misses the circle entirely
    float aTmpSqrt = (float)sqrt(aDisc);
    float aABScalingFactor1 = -aPBy2 + aTmpSqrt;
    float aABScalingFactor2 = -aPBy2 - aTmpSqrt;
    int aRSpot = 0;
    // a scaling factor in [-1, 0] means the intersection lies within the segment
    if (aABScalingFactor1 <= 0.0f && aABScalingFactor1 >= -1.0f) theResults[aRSpot++] = Point(theLineA.mX - aBA.mX*aABScalingFactor1, theLineA.mY - aBA.mY*aABScalingFactor1);
    if (aDisc == 0) return true;                  // tangent: only one intersection point
    if (aABScalingFactor2 <= 0.0f && aABScalingFactor2 >= -1.0f) theResults[aRSpot++] = Point(theLineA.mX - aBA.mX*aABScalingFactor2, theLineA.mY - aBA.mY*aABScalingFactor2);
    return true;
}
I want to convert this to a 3D line, with an infinite cylinder-- with the added complication that the 3D cylinder has a tilt axis. I understand that what I'm really doing is intersecting with a sphere that is centered on the cylinder center where the plane of the line cuts it... but... how do I do that? How do I choose the best point to center the sphere, and then having done that, what's my change to turn line->circle intersections into line->sphere?
(I have a vector class that is exactly like the point class)
(Edit) I did manage to convert to a sphere function, only to discover that duh, no, a sphere won't work because a line that's tilted will not enter and exit the same way it would enter and exit a cylinder.
So, question is the same-- how can I convert this to collide with an infinite cylinder given an origin and axis for the cylinder?
I do not think a sphere is usable for this...
However, why not convert your 3D line into 2D by projecting it onto a plane parallel with the cylinder base?
So you have a 3D line in the form of two endpoints p0, p1, and a cylinder in the form of any point on its axis p, its radius r, and its axis unit direction vector d.
You need two unit basis vectors u, v describing the cylinder base,
so exploit the cross product and the cylinder axis, for example:
// set u to any unit vector that is not parallel to d
u = (1,0,0)
if (abs(dot(u,d)) > 0.75) u = (0,1,0)
// set v perpendicular to u and d (normalise it, since u and d are not perpendicular)
v = normalize(cross(d,u))
// and make u perpendicular to v and d too (already unit length, since v and d are)
u = cross(v,d)
Project the problem into 2D:
p0' = vec2( dot(p0,u) , dot(p0,v) )
p1' = vec2( dot(p1,u) , dot(p1,v) )
p'  = vec2( dot(p ,u) , dot(p ,v) )
Solve the problem:
now you just use the 2D points p0', p1', p' and solve your problem with the function you already have...
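To make that concrete, here is a rough sketch of the whole thing in the style of the code above (the 3D vector type Vec3 and its Dot/Cross/Normalize helpers are hypothetical names; adjust them to your own classes):

// Intersects the segment p0-p1 with an infinite cylinder (point p on its axis, unit axis d, radius r)
// by projecting everything onto the plane of the cylinder's base and reusing the 2D routine.
bool GetLineCylinderIntersections(Vec3 p, Vec3 d, float r, Vec3 p0, Vec3 p1, Array<Point>& the2DResults)
{
    // build two unit basis vectors u, v spanning the base plane
    Vec3 u(1, 0, 0);
    if (fabs(Dot(u, d)) > 0.75f) u = Vec3(0, 1, 0); // pick a vector that is not parallel to d
    Vec3 v = Normalize(Cross(d, u));                // perpendicular to d
    u = Cross(v, d);                                // perpendicular to both, already unit length
    // project the segment endpoints and the axis point into 2D
    Point a0(Dot(p0, u), Dot(p0, v));
    Point a1(Dot(p1, u), Dot(p1, v));
    Point c (Dot(p,  u), Dot(p,  v));
    // in this plane the cylinder is just a circle of radius r around c
    // to lift a 2D hit back to 3D, recover its parameter t along a0-a1 and evaluate p0 + (p1 - p0) * t
    return Math::GetLineCircleIntersections(c, r, a0, a1, the2DResults);
}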
So I'm trying to "outline" 3D objects. Standard problem, for which the answer is meant to be that you copy the mesh, color it the outline color, scale it up, and then set it to only render faces that are "pointed in the wrong direction" - for us that means setting side:THREE.BackSide in the material. Eg here https://stemkoski.github.io/Three.js/Outline.html
But see what happens for me
Here's what I'd like to make
I have a bunch of objects that are close together - they get "inside" one another's outline.
Any advice on what I should do? What I want is that everywhere on the rendered frame where these shapes touch the background or each other, there is an outline.
What do you want to happen? Is that one mesh in your example, or is it a bunch of intersecting meshes? If it's a bunch of intersecting meshes, do you want them to have one outline? What about other meshes? My point is you need some way to define which "groups" of meshes get a single outline if you're using multiple meshes.
For multiple meshes and one outline, a common solution is to draw all the meshes in a single group to a render target to generate a silhouette, then post-process the silhouette to expand it. Finally, apply the silhouette to the scene. I don't know of a three.js example, but the concept is explained here and there are also many references here.
Another solution that might work: it should be possible to move the outline shell back in Z so it doesn't intersect, either all the way back (Z = 1 in clip space) or back by some settable amount. Drawing with groups, so that a collection of objects in front has an outline that blocks a group behind, would be harder.
For example if I take this sample that prisoner849 linked to
And change the vertexShaderChunk in OutlineEffect.js to this
var vertexShaderChunk = `
    #include <fog_pars_vertex>
    uniform float outlineThickness;
    vec4 calculateOutline( vec4 pos, vec3 objectNormal, vec4 skinned ) {
        float thickness = outlineThickness;
        const float ratio = 1.0; // TODO: support outline thickness ratio for each vertex
        vec4 pos2 = projectionMatrix * modelViewMatrix * vec4( skinned.xyz + objectNormal, 1.0 );
        // NOTE: subtract pos2 from pos because BackSide objectNormal is negative
        vec4 norm = normalize( pos - pos2 );
        // ----[ added ] ----
        // compute a clip-space value
        vec4 pos3 = pos + norm * thickness * pos.w * ratio;
        // do the perspective divide in the shader
        pos3.xyz /= pos3.w;
        // just return the screen 2D values at the back of the clip space
        return vec4(pos3.xy, 1, 1);
    }
`;
It's easier to see if you remove all references to reflectionCube and set the clear color to white: renderer.setClearColor( 0xFFFFFF );
Original:
After:
I want to write a transformer for converting SVG basic types into WorldWind shapes like Polyline, Polygon, etc.
Since SVG gives coordinates on the canvas and I need to convert them to a Position, I am looking for a method in the API which can do this.
I see there is Vec4 for a point, but I am not sure how it relates to canvas coordinates.
Would it be a correct representation if, for say the point x=100, y=100, I do the following:
Vec4 vec=new Vec4(x,y,0.0f);
Globe g=view.getGlobe();
Position p=g.computePositionFromPoint(vec);
Will this corresponding position be the position at point (x=100, y=100) on the screen? If I bring my mouse to x=100, y=100 for the current view, the position should be p.
Globe.computePositionFromPoint(vec) takes Cartesian coordinates as input, not screen coordinates. To go from screen coordinates to position you want something like this:
Vec4 screenCoords = new Vec4(x,y);
Vec4 cartesian = view.unProject(screenCoords);
Globe g=view.getGlobe();
Position p=g.computePositionFromPoint(cartesian);
A better way to do this would be:
Point screenPoint = dragContext.getPoint();
View view = dragContext.getView();
double latitude = view.computePositionFromScreenPoint(screenPoint.getX(), screenPoint.getY()).getLatitude().degrees;
double longitude = view.computePositionFromScreenPoint(screenPoint.getX(), screenPoint.getY()).getLongitude().degrees;
Position objectPosition = new Position(LatLon.fromDegrees(latitude, longitude), 0);
Then you can set the position of the object to objectPosition.
If you want to go from canvas to 3D, you're probably looking for View#computeRayFromScreenPoint(double, double). This will give you a ray (in Vec4 format) from the eye through the given pixel on the canvas. You'll have to intersect this ray with something to generate a meaningful 3D point, since each pixel is an infinite line in space.
About Globe#computePointFromPosition:
Position - A latitude, longitude, altitude position, with the altitude being in MSL (alt above sea level)
Vec4 - Just a directional vector (often used for Cartesian coordinates). For the Cartesian coordinate system, the z-axis comes out of the earth through 0deg/0deg lat/lon, x-axis through 0deg/90deg, and y-axis through the north pole.
As Chris mentioned, the Globe#computePointFromPosition() and Globe#computePositionFromPoint() methods switch from a Position to a Cartesian Vec4 and vice versa, using your globe as the reference frame.
For a project we are trying to make a circle into a line (and back again) while it is rotating along a linear path, much like a tire rotates and translates when rolling on a road, or a curled forefinger is extended and recurled into the palm.
In this Fiddle, I have a static SVG (the top circle) that rotates along the linear black path (which is above the circle, to mimic a finger extending) that is defined in the HTML.
I also use d3 to generate a "circle" that is made up of connected points (and can unfurl if you click on/in the circle, thanks to #ChrisJamesC here), and is translated and rotated in the function moveAlongLine when you click on the purple Line:
function moveAlongLine() {
    circle.data([lineData])
        .attr("transform", "translate(78.5,0) rotate(-90, 257.08 70) ")
        .duration(1000)
    circle.on("click", transitionToCircle)
}
The first problem is that the .duration(1000) is not recognized and throws an Uncaught TypeError: Object [object Array] has no method 'duration' in the console, so there is a difference between the static definition of dur in SVG and dynamically setting it in JS/D3, but this is minor.
The other question is: should the transform attributes be abstracted from one another, like in the static circle? In the static circle, the translate is one animation and the rotation is another; they just have the same start and duration, so they animate together. How would you apply both in d3?
The challenge that I cannot figure out is how to let it unroll upwards (and also re-roll back), with the static point being the top center of the circle, which is also the leftmost point on the line.
These seem better:
Should I try to get the unfurl animation to occur while also rotating? This seems like it would need to be stepwise/sequential...
Or consider an octagon (defined as a path) and rotate 7 of the sides, then 6, then 5... Do this for a rather large number of points on the polygon? (The circle only needs to be around 50 or so pixels, so 100 points would be more than enough.) This is the middle example in the fiddle. Maybe do this programmatically?
Or, this makes me think of a different way: in the case of the octagon, I could have 8 line paths (with no Z, just an additional closing point) and transition between them? Like this fiddle.
Or anything to do with keyframes? I have made an animation in Synfig, but am unsure how to get it to SVG. The Synfig file is at http://specialorange.org/filedrop/unroll.sifz if you can convert it to SVG, but the xsl file here doesn't correctly convert it for me using xsltproc.
This seems really complicated but has potential:
Define a path (likely a Bézier curve with the same number of reference points) that the points follow, and have the reference points dynamically translate as well... see this for a concept example.
This seems complicated and clunky:
Make a real circle roll by, with a growing mask in front of it, all while a line grows in length.
A couple of notes:
The number of points in the d3 circle can be adjusted in the JS, it is currently set low so that you can see a bit of a point in the rendering to verify the rotation has occurred (much like the gradient is in the top circle).
This is to help students learn what is conserved between a number line and a circle, specifically to help learn fractions. For a concept application, take a look at compthink.cs.vt.edu:3000 to see our prototype; this will help with switching representations and give you a better idea...
I ended up using the same function that generates the circle as in the question, and after a bit of thinking it seemed like I wanted an animation that looked like a finger unrolling, like this fiddle. This led me to the math and idea needed to make it happen in this fiddle.
The answer is an array of arrays, with each nested array being the path in a different state, and then animating by interpolating between the points.
var circleStates = [];
for (i = 0; i < totalPoints; i++) {
    //circle portion
    var circleState = $.map(Array(numberOfPoints), function (d, j) {
        var x = marginleft + radius + lineDivision*i + radius * Math.sin(2 * j * Math.PI / (numberOfPoints - 1));
        var y = margintop + radius - radius * Math.cos(2 * j * Math.PI / (numberOfPoints - 1));
        return { x: x, y: y};
    })
    circleState.splice(numberOfPoints-i);

    //line portion
    var lineState = $.map(Array(numberOfPoints), function (d, j) {
        var x = marginleft + radius + lineDivision*j;
        var y = margintop;
        return { x: x, y: y};
    })
    lineState.splice(i);

    //together
    var individualState = lineState.concat(circleState);
    circleStates.push(individualState);
}
and the animation(s)
function all() {
    for (i = 0; i < numberOfPoints; i++) {
        circle.data([circleStates[i]])
            .transition()
            .delay(dur*i)
            .duration(dur)
            .ease("linear")
            .attr('d', pathFunction)
    }
}

function reverse() {
    for (i = 0; i < numberOfPoints; i++) {
        circle.data([circleStates[numberOfPoints-1-i]])
            .transition()
            .delay(dur*i)
            .duration(dur)
            .ease("linear")
            .attr('d', pathFunction)
    }
}
(Note: this should be in a comment, but there is not enough space.)
Circle Animation
Try the radial wipe from SO. You need to tweak it so the angle starts at 180 and ends back at the same place (lines #4-6, 19) and moves along the X-axis (line #11) on each iteration. Change the <path... attribute to suit your taste.
Line Animation: Grow a line from a single point to the length (perimeter) of the circle.
Sync both animations so that it looks good on all browsers (a major headache!).