No Intersection Area for geographic coordinates with Boost.Geometry

I need to check whether two points (GPS coordinates) q, m are on the same side, on opposite sides, or collinear with respect to a line (great circle segment) (p, t). I know that q is not collinear with (p, t). I didn't find any direct function for this in the Boost.Geometry library, so I tried to calculate it a different way.
I construct two triangles (p, q, t) and (p, m, t), then intersect the two and check the area of the intersection polygon. Following is my code.
typedef boost::geometry::model::point<
    double, 2, boost::geometry::cs::spherical_equatorial<boost::geometry::degree>
> geo_point;
typedef boost::geometry::model::polygon<geo_point> geo_polygon;

geo_point p(110.48316, 29.05043);
geo_point q(110.48416, 29.05104);
geo_point t(110.48416, 29.05228);
geo_point m(110.48408, 29.05079);

geo_polygon ut, es;

// Triangle (p, q, t)
boost::geometry::append(ut.outer(), p);
boost::geometry::append(ut.outer(), q);
boost::geometry::append(ut.outer(), t);
boost::geometry::append(ut.outer(), p);

// Triangle (p, m, t)
boost::geometry::append(es.outer(), p);
boost::geometry::append(es.outer(), m);
boost::geometry::append(es.outer(), t);
boost::geometry::append(es.outer(), p);

// Intersect once collecting points, once collecting polygons
std::list<geo_point> intersection_points;
boost::geometry::intersection(ut, es, intersection_points);
std::cout << intersection_points.size() << std::endl;

std::vector<geo_polygon> intersection_polygons;
boost::geometry::intersection(ut, es, intersection_polygons);
std::cout << intersection_polygons.size() << std::endl;
Live at cpp.sh
If we plot these two triangles, we can clearly see that they intersect, yielding another triangle in the overlapping region.
The above code correctly returns the number of intersection points, but it doesn't return any intersection polygon:
3
0
I have tried the geographic coordinate system instead of spherical_equatorial, but got the same results. Am I missing something, or is this a problem in Boost.Geometry?

To ensure that the polygons are closed and oriented the way Boost.Geometry expects, apply boost::geometry::correct. Inserting
boost::geometry::correct(ut);
boost::geometry::correct(es);
before the intersection calls gives the expected result of 1 polygon for your test.
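For reference, a minimal self-contained sketch with the fix applied (same types and coordinates as in the question; only the two correct calls are new):

#include <boost/geometry.hpp>
#include <iostream>
#include <vector>

namespace bg = boost::geometry;

typedef bg::model::point<
    double, 2, bg::cs::spherical_equatorial<bg::degree>
> geo_point;
typedef bg::model::polygon<geo_point> geo_polygon;

int main()
{
    geo_point p(110.48316, 29.05043);
    geo_point q(110.48416, 29.05104);
    geo_point t(110.48416, 29.05228);
    geo_point m(110.48408, 29.05079);

    geo_polygon ut, es;
    bg::append(ut.outer(), p);
    bg::append(ut.outer(), q);
    bg::append(ut.outer(), t);
    bg::append(ut.outer(), p);
    bg::append(es.outer(), p);
    bg::append(es.outer(), m);
    bg::append(es.outer(), t);
    bg::append(es.outer(), p);

    // Fix ring closure and orientation before intersecting.
    bg::correct(ut);
    bg::correct(es);

    std::vector<geo_polygon> intersection_polygons;
    bg::intersection(ut, es, intersection_polygons);
    std::cout << intersection_polygons.size() << std::endl; // prints 1
}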

Related

Making a metric between colors (perception model) with "difference"

Take a look at the image below. If you're not color blind, you should see some A's and B's. There are 3 A's and 3 B's in the image, and they all have one thing in common: Their color is the background + 10% of value, saturation and hue, in that order. For most people, the center letters are very hard to see - saturation doesn't do much, it seems!
This is a bit troublesome though because I'm making some character recognition software, and I'm filtering the image based on known foreground and background colors. But sometimes these are quite close, while the image is noisy. In order to decide whether a pixel belongs to a letter or to the background, my program checks Euclidean RGB distance:
(r-fr)*(r-fr) + (g-fg)*(g-fg) + (b-fb)*(b-fb) <
(r-br)*(r-br) + (g-bg)*(g-bg) + (b-bb)*(b-bb)
This works okay, but when the background and foreground are close it sometimes performs quite badly.
Are there better metrics to use? I've looked into color perception models, but those mostly model brightness rather than the perceptual difference I'm looking for. Maybe one that treats saturation as less significant, and certain hue differences too? Any pointers to interesting metrics would be very useful.
As was mentioned in the comments, the answer is using a perceptual color space, but I thought I'd throw together a visual example of how the edge detection behaves in the two color spaces. (Code is at the end.) In both cases, the Sobel edge detection is performed on the 3-channel color image, and then the result is flattened to gray scale.
RGB space:
L*a*b space (image is logarithmic, as the edges on the third letters are much more significant than the edges on the first letters, which are more significant than the edges on the second letters):
OpenCV C++ code:
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "iostream"
using namespace cv;
using namespace std;
void show(const char *name, Mat &img, int dolog=0)
{
double minVal, maxVal;
minMaxLoc(img, &minVal, &maxVal);
cout << name << " " << "minVal : " << minVal << endl << "maxVal : " << maxVal << endl;
Mat draw;
if(dolog) {
Mat shifted, tmp;
add(img, minVal, shifted);
log(shifted, tmp);
minMaxLoc(tmp, &minVal, &maxVal);
tmp.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
} else {
img.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
}
namedWindow(name, CV_WINDOW_AUTOSIZE);
imshow(name, draw);
imwrite(name, draw);
}
int main( )
{
Mat src;
src = imread("AAABBB.png", CV_LOAD_IMAGE_COLOR);
namedWindow( "Original image", CV_WINDOW_AUTOSIZE );
imshow( "Original image", src );
Mat lab, gray;
cvtColor(src, lab, CV_BGR2Lab);
Mat sobel_lab, sobel_bgr;
Sobel(lab, sobel_lab, CV_32F, 1, 0);
Sobel(src, sobel_bgr, CV_32F, 1, 0);
Mat bgr_sobel_lab, gray_sobel_lab;
cvtColor(sobel_lab, bgr_sobel_lab, CV_Lab2BGR);
show("lab->bgr edges.png", bgr_sobel_lab, 1);
cvtColor(bgr_sobel_lab, gray_sobel_lab, CV_BGR2GRAY);
Mat gray_sobel_bgr;
cvtColor(sobel_bgr, gray_sobel_bgr, CV_BGR2GRAY);
show("lab edges.png", gray_sobel_lab, 1);
show("bgr edges.png", gray_sobel_bgr);
waitKey(0);
return 0;
}
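Back to the original pixel-classification problem, the same idea applies directly: convert the pixel and the two reference colors to L*a*b* and compare Euclidean distances there. Euclidean distance in L*a*b* is the CIE76 Delta E metric, which is designed so that equal distances are roughly equally perceptible. A minimal sketch (the helper names are hypothetical; 8-bit BGR colors assumed, for which OpenCV scales L to 0..255):

#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;

// Hypothetical helper: convert a single BGR color to Lab via a 1x1 Mat.
static Vec3f bgrToLab(const Vec3b &bgr)
{
    Mat m(1, 1, CV_8UC3, Scalar(bgr[0], bgr[1], bgr[2])), lab;
    cvtColor(m, lab, CV_BGR2Lab);
    Vec3b v = lab.at<Vec3b>(0, 0);
    return Vec3f(v[0], v[1], v[2]);
}

// Classify a pixel by comparing CIE76 Delta E (Euclidean distance in Lab)
// to the known foreground and background colors.
static bool isForeground(const Vec3b &pixel, const Vec3b &fg, const Vec3b &bg)
{
    Vec3f p = bgrToLab(pixel);
    return norm(p - bgrToLab(fg)) < norm(p - bgrToLab(bg));
}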

GLSL practice midterm

I have this problem on a practice midterm that I don't understand.
void main(void)
{
    int i;
    for (i = 0; i < gl_VerticesIn; i++) {
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();

    for (i = 0; i < gl_VerticesIn; i++) {
        gl_Position = gl_PositionIn[i];
        gl_Position.xy = gl_Position.yx;
        EmitVertex();
    }
    EndPrimitive();
}
I have been reading documentation, and I think this is part of a geometry shader, and that it is swapping the x and y coordinates of each point, but I don't have any way to verify this. I tried it in a program and it made slight differences in the coloring of the scene, but it didn't seem to change the geometry at all. If someone could help explain this, that would be awesome. Thanks!
This is indeed part of a geometry shader.
The first part of the shader (ending with the first EndPrimitive()) is the simplest possible pass-through geometry shader; it does absolutely nothing to the geometry.
The second part is almost the same, except for the xy swizzle. It duplicates the geometry but swaps the x and y coordinates, so it effectively mirrors the image across the line connecting the lower-left and upper-right corners of the screen.
So the geometry is duplicated and mirrored across the diagonal of the screen.
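To make that concrete: the swizzle maps each clip-space point (x, y) to (y, x). Every point with x = y stays fixed, and the two coordinates trade places otherwise, which is exactly a reflection across the line y = x, the diagonal running from the lower-left to the upper-right corner in normalized device coordinates.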

2D Screen coordinate to 3D position Directx 9 / Box Select

I am trying to implement box select in a 3D world. Basically: click, hold the mouse button, drag, release, get a box, then select everything in it. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking (it yields an origin and a direction), but it is not getting the right coordinates: it keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f);
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
And it gets the same thing: the coordinates are very close to each other no matter what the input coordinates are (and they are wrong). It's almost as if it returns the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9.0c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need 3 steps:
Generate the picking ray
Transform the picking ray and the model you want to pick into the same space
Do an intersection test between the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say at point s), a model is selected if its projection onto the projection window surrounds s.
So, in order to generate the picking ray from the given screen coordinates (x, y), we first need to transform (x, y) back to the projection window; this is the inverse of the viewport transformation. Also, the point on the projection window was scaled by the projection matrix, so we must divide by the scale factors proj(0, 0) and proj(1, 1).
In view space the camera sits at the origin, so the picking ray starts from the origin, and the projection window lies on the near clip plane (z = 1). This is what the code below does.
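The Ray type is not shown in the answer; a minimal definition consistent with the code below would be:

// Assumed helper type (not part of D3DX; not shown in the original answer).
struct Ray
{
    D3DXVECTOR3 _origin;
    D3DXVECTOR3 _direction;
};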
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
    float px = 0.0f;
    float py = 0.0f;

    // Get viewport
    D3DVIEWPORT9 vp;
    Device->GetViewport(&vp);

    // Get projection matrix
    D3DXMATRIX proj;
    Device->GetTransform(D3DTS_PROJECTION, &proj);

    // Inverse viewport transform, then undo the projection scaling
    px = ((( 2.0f * screen_x) / vp.Width)  - 1.0f) / proj(0, 0);
    py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);

    Ray ray;
    ray._origin    = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
    ray._direction = D3DXVECTOR3(px, py, 1.0f);
    return ray;
}
Transform the picking ray and model into the same space.
We usually do this by transforming the picking ray into world space: simply take the inverse of your view matrix, then apply that inverse matrix to your picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
    // transform the ray's origin, w = 1.
    D3DXVec3TransformCoord(
        &ray->_origin,
        &ray->_origin,
        invertViewMatrix);

    // transform the ray's direction, w = 0.
    D3DXVec3TransformNormal(
        &ray->_direction,
        &ray->_direction,
        invertViewMatrix);

    // normalize the direction
    D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do intersection test
If everything above went well, you can do the intersection test now. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. To see whether the picking really works, change the visual appearance of your box when it is hit; for example, set the fill mode to solid or wire-frame when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response of WM_LBUTTONDOWN.
case WM_LBUTTONDOWN:
{
    // Get screen point
    int iMouseX = (short)LOWORD(lParam);
    int iMouseY = (short)HIWORD(lParam);

    // Calculate the picking ray
    Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY);

    // Transform the ray from view space to world space:
    // get the view matrix
    D3DXMATRIX view;
    g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);

    // invert it
    D3DXMATRIX viewInverse;
    D3DXMatrixInverse(&viewInverse, 0, &view);

    // apply it to the ray
    TransformRay(&ray, &viewInverse);

    // collision detection (ray vs. box)
    if (D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint,
                          &ray._origin, &ray._direction))
    {
        g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
    }
    break;
}
It turns out I was handling the problem the wrong way around. Turning 2D into 3D didn't make sense for this in the end; converting the vertices from 3D to 2D and then testing whether they fall inside the 2D box was the right answer!
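For completeness, a minimal sketch of that 3D-to-2D approach using D3DXVec3Project. The RECT box (built from the drag start/end points) and the world-space vertex are placeholders for whatever the application actually uses:

// Hedged sketch of the 3D -> 2D box select described above.
bool IsInsideSelectionBox(LPDIRECT3DDEVICE9 device,
                          const D3DXVECTOR3 &worldPos,
                          const RECT &box)
{
    D3DVIEWPORT9 vp;
    device->GetViewport(&vp);

    D3DXMATRIX view, proj, world;
    device->GetTransform(D3DTS_VIEW, &view);
    device->GetTransform(D3DTS_PROJECTION, &proj);
    D3DXMatrixIdentity(&world); // vertex assumed to be in world space already

    // Project from world space to screen space.
    D3DXVECTOR3 screen;
    D3DXVec3Project(&screen, &worldPos, &vp, &proj, &view, &world);

    // Ignore points outside the depth range of the view frustum.
    if (screen.z < 0.0f || screen.z > 1.0f)
        return false;

    return screen.x >= box.left && screen.x <= box.right &&
           screen.y >= box.top  && screen.y <= box.bottom;
}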

Why is the transitive closure of matrix multiplication not working in this vertex shader?

This can probably be filed under premature optimization, but since the vertex shader executes on each vertex for each frame, it seems like something worth doing (I have a lot of vars I need to multiply before going to the pixel shader).
Essentially, the vertex shader performs this operation to convert a vector to projected space, like so:
// Transform the vertex position into projected space.
pos = mul(pos, model);
pos = mul(pos, view);
pos = mul(pos, projection);
output.pos = pos;
Since I'm doing this operation to multiple vectors in the shader, it made sense to combine those matrices into a cumulative matrix on the CPU and then flush that to the GPU for calculation, like so:
// VertexShader.hlsl
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
    matrix cummulative;
    float3 eyePosition;
};
...
// Transform the vertex position into projected space.
pos = mul(pos, cummulative);
output.pos = pos;
And on the CPU:
// Renderer.cpp
// now is also the time to update the cummulative matrix
m_constantMatrixBufferData->cummulative =
    m_constantMatrixBufferData->model *
    m_constantMatrixBufferData->view *
    m_constantMatrixBufferData->projection;
// NOTE: each of the above vars is an XMMATRIX
My intuition was that there was some mismatch of row-major/column-major, but XMMATRIX is a row-major struct (and all of its operators treat it as such) and mul(...) interprets its matrix parameter as row-major. So that doesn't seem to be the problem but perhaps it still is in a way that I don't understand.
I've also checked the contents of the cumulative matrix and they appear correct, further adding to the confusion.
Thanks for reading, I'll really appreciate any hints you can give me.
EDIT (additional information requested in comments):
This is the struct that I am using as my matrix constant buffer:
// a constant buffer that contains the 3 matrices needed to
// transform points so that they're rendered correctly
struct ModelViewProjectionConstantBuffer
{
    DirectX::XMMATRIX model;
    DirectX::XMMATRIX view;
    DirectX::XMMATRIX projection;
    DirectX::XMMATRIX cummulative;
    DirectX::XMFLOAT3 eyePosition;
    // and padding to make the size divisible by 16
    float padding;
};
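As a quick check of that padding comment: the four XMMATRIX members take 4 * 64 = 256 bytes, eyePosition adds 12, and the final float brings the total to 272 bytes, a multiple of 16 as Direct3D 11 requires for constant buffer sizes.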
I create the matrix stack in CreateDeviceResources (along with my other constant buffers), like so:
void ModelRenderer::CreateDeviceResources()
{
    Direct3DBase::CreateDeviceResources();

    // Let's take this moment to create some constant buffers
    ... // creation of other constant buffers

    // and lastly, the all mighty matrix buffer
    CD3D11_BUFFER_DESC constantMatrixBufferDesc(
        sizeof(ModelViewProjectionConstantBuffer),
        D3D11_BIND_CONSTANT_BUFFER);
    DX::ThrowIfFailed(
        m_d3dDevice->CreateBuffer(
            &constantMatrixBufferDesc,
            nullptr,
            &m_constantMatrixBuffer
        )
    );

    ... // and the rest of the initialization (reading in the shaders, loading assets, etc)
}
I write into the matrix buffer inside a matrix stack class I created. The client of the class calls Update() once they are done modifying the matrices:
void MatrixStack::Update()
{
    // then update the buffers
    m_constantMatrixBufferData->model = model.front();
    m_constantMatrixBufferData->view = view.front();
    m_constantMatrixBufferData->projection = projection.front();
    // NOTE: the eye position has no stack, as it's kept updated by the trackball

    // now is also the time to update the cummulative matrix
    m_constantMatrixBufferData->cummulative =
        m_constantMatrixBufferData->model *
        m_constantMatrixBufferData->view *
        m_constantMatrixBufferData->projection;

    // and flush
    m_d3dContext->UpdateSubresource(
        m_constantMatrixBuffer.Get(),
        0,
        NULL,
        m_constantMatrixBufferData,
        0,
        0
    );
}
Given your code snippets, it should work.
Possible causes of your problem:
Have you tried the reversed multiplication order: projection * view * model?
Are you sure you set cummulative correctly in your constant buffer (offset 192, i.e. constant register 12)?
Same for eyePosition (offset 256, i.e. constant register 16)?
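A note on the first hint, in case it helps: if the matrices in the buffer are stored transposed for HLSL's default column-major constant-buffer packing (an assumption; the question doesn't show whether any transposing happens), then the combined matrix has to be built in the reversed order, because transposition reverses products: (A*B*C)^T = C^T * B^T * A^T. A minimal DirectXMath sketch of that idea:

#include <DirectXMath.h>
using namespace DirectX;

// Assumption: modelT, viewT, projT are transpose(model), transpose(view),
// transpose(projection), i.e. already laid out for the shader.
// The shader then needs transpose(model * view * projection) in 'cummulative',
// which by the identity above equals projT * viewT * modelT:
XMMATRIX XM_CALLCONV BuildCumulative(FXMMATRIX modelT, CXMMATRIX viewT, CXMMATRIX projT)
{
    return projT * viewT * modelT; // reversed order of the transposed factors
}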

GLSL - Front vs. Back faces of polygons

I made some simple shading of a checkerboard in GLSL:
f(P) = [ floor(Px)+floor(Py)+floor(Pz) ] mod 2
It seems to work well except that I can see the interior of the objects, and I want to see only the front faces.
Any ideas how to fix this? Thanks!
Teapot (glutSolidTeapot()):
Cube (glutSolidCube):
The vertex shader file is:
varying float x, y, z;

void main() {
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    x = gl_Position.x;
    y = gl_Position.y;
    z = gl_Position.z;
}
And the fragment shader file is:
varying float x, y, z;

void main() {
    float _x = floor(x);
    float _y = floor(y);
    float _z = floor(z);
    float sum = mod(_x + _y + _z, 2.0);
    gl_FragColor = vec4(sum, sum, sum, 1.0);
}
The shaders are not the problem; the face culling is.
You should either disable face culling (not recommended, since it costs performance):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, i.e.:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
Which argument glFrontFace needs depends on your application's conventions, i.e. the vertex winding order and matrix handedness.
