DirectX11 problem with spacing while drawing lines - graphics

I am facing an issue with blank spaces in my shapes (e.g. triangles and circles). How should I deal with this problem?
Could it be caused by missing antialiasing (and if so, how do I properly enable it for drawing such shapes)? Or is there a problem in my drawing method itself?
void graphics::draw_line_internal(int x0, int y0, int x1, int y1, rgba color)
{
    if (this->initilized == false)
        return;

    // Query the current viewport so the pixel coordinates can be converted to NDC.
    UINT viewportNumber = 1;
    D3D11_VIEWPORT vp;
    this->device_context->RSGetViewports(&viewportNumber, &vp);

    float xx0 = 2.0f * (x0 - 0.5f) / vp.Width - 1.0f;
    float yy0 = 1.0f - 2.0f * (y0 - 0.5f) / vp.Height;
    float xx1 = 2.0f * (x1 - 0.5f) / vp.Width - 1.0f;
    float yy1 = 1.0f - 2.0f * (y1 - 0.5f) / vp.Height;

    // Write the two endpoints into the dynamic vertex buffer.
    D3D11_MAPPED_SUBRESOURCE map_data;
    this->device_context->Map(this->vertex_buffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &map_data);
    COLOR_VERTEX* v = (COLOR_VERTEX*)map_data.pData;
    D3DXCOLOR cashed_color = color.convert_to_D3DXCOLOR();
    v[0] = { D3DXVECTOR3(xx0, yy0, 0), cashed_color };
    v[1] = { D3DXVECTOR3(xx1, yy1, 0), cashed_color };
    this->device_context->Unmap(this->vertex_buffer.Get(), 0);

    // Set up the pipeline state and issue the draw call.
    const float blend_factor[4] = { 0.f, 0.f, 0.f, 0.f };
    this->device_context->OMSetBlendState(this->blend_state.Get(), blend_factor, 0xffffffff);
    this->device_context->OMSetDepthStencilState(this->depth_stencil_state.Get(), 0);
    this->device_context->IASetInputLayout(this->input_layout.Get());
    UINT stride = sizeof(COLOR_VERTEX);
    UINT offset = 0;
    this->device_context->IASetVertexBuffers(0, 1, this->vertex_buffer.GetAddressOf(), &stride, &offset);
    this->device_context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP);
    this->device_context->VSSetShader(this->vertex_shader.Get(), NULL, 0);
    this->device_context->PSSetShader(this->pixel_shader.Get(), NULL, 0);
    this->device_context->OMSetRenderTargets(1, this->render_target_view.GetAddressOf(), NULL);
    this->device_context->Draw(2, 0);
}
Line points from the screenshot are (20, 150) and (70, 20).

The most likely cause of your problem is incorrect input data: for a connected curve, the (xx1, yy1) of one segment must be exactly equal to the (xx0, yy0) of the next segment. However, if I understand correctly, the first image you provided was produced with a single draw call, which seems to rule out this possibility.
In general, black spots can be caused by garbage values in the depth stencil, but you are not using one :D . Additionally, some of the segments in your images are offset, so this possibility is ruled out as well.
By the way, the whole point of D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP is to draw the entire curve at once. Generate all the points up front (without duplication), put them in a buffer, copy it to the GPU, and call Draw once; no funny business (see the sketch below).
At this point I am quite lost myself. The only thing I can recommend is to disable everything but the essentials - shaders, input layout, vertex buffer and a render target - and look for a minimal repro.
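To make that concrete, here is a minimal sketch of such a batched draw. It is not taken from your code base: it reuses the members your draw_line_internal already touches (device_context, vertex_buffer, shaders, and so on), assumes a hypothetical POINT-like struct with x/y pixel coordinates, and assumes vertex_buffer is a dynamic buffer large enough to hold points.size() vertices.
void graphics::draw_polyline(const std::vector<POINT>& points, rgba color)
{
    if (this->initilized == false || points.size() < 2)
        return;

    UINT viewportNumber = 1;
    D3D11_VIEWPORT vp;
    this->device_context->RSGetViewports(&viewportNumber, &vp);

    // Upload every vertex of the curve in one Map/Unmap.
    D3D11_MAPPED_SUBRESOURCE map_data;
    if (FAILED(this->device_context->Map(this->vertex_buffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &map_data)))
        return;
    COLOR_VERTEX* v = (COLOR_VERTEX*)map_data.pData;
    D3DXCOLOR cashed_color = color.convert_to_D3DXCOLOR();
    for (size_t i = 0; i < points.size(); ++i)
    {
        // Same pixel-to-NDC conversion as in draw_line_internal.
        float x = 2.0f * (points[i].x - 0.5f) / vp.Width - 1.0f;
        float y = 1.0f - 2.0f * (points[i].y - 0.5f) / vp.Height;
        v[i] = { D3DXVECTOR3(x, y, 0), cashed_color };
    }
    this->device_context->Unmap(this->vertex_buffer.Get(), 0);

    UINT stride = sizeof(COLOR_VERTEX);
    UINT offset = 0;
    this->device_context->IASetInputLayout(this->input_layout.Get());
    this->device_context->IASetVertexBuffers(0, 1, this->vertex_buffer.GetAddressOf(), &stride, &offset);
    this->device_context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP);
    this->device_context->VSSetShader(this->vertex_shader.Get(), NULL, 0);
    this->device_context->PSSetShader(this->pixel_shader.Get(), NULL, 0);
    this->device_context->OMSetRenderTargets(1, this->render_target_view.GetAddressOf(), NULL);

    // One draw call for the whole strip: N vertices give N-1 connected segments.
    this->device_context->Draw((UINT)points.size(), 0);
}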

Related

Simulate virtual camera which preserves color information

I have a virtual scanner that generates a 2.5-D view of a point cloud (i.e. a 2D projection of a 3D point cloud) depending on the camera position. I'm using vtkCamera.GetProjectionTransformMatrix() to get the transformation matrix from world/global to camera coordinates.
However, if the input point cloud has color information for the points, I would like to preserve it.
Here are the relevant lines:
boost::shared_ptr<pcl::visualization::PCLVisualizer> vis; // camera location, viewpoint and up direction for vis were already defined before
vtkSmartPointer<vtkRendererCollection> rens = vis->getRendererCollection();
vtkSmartPointer<vtkRenderWindow> win = vis->getRenderWindow();
win->SetSize(xres, yres); // xres and yres are predefined resolutions
win->Render();

float dwidth  = 2.0f / float(xres),
      dheight = 2.0f / float(yres);

float *depth = new float[xres * yres];
win->GetZbufferData(0, 0, xres - 1, yres - 1, &(depth[0]));

vtkRenderer *ren = rens->GetFirstRenderer();
vtkCamera *camera = ren->GetActiveCamera();
vtkSmartPointer<vtkMatrix4x4> projection_transform = camera->GetProjectionTransformMatrix(ren->GetTiledAspectRatio(), 0, 1);

Eigen::Matrix4f mat1;
for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 4; ++j)
        mat1(i, j) = static_cast<float>(projection_transform->Element[i][j]);
mat1 = mat1.inverse().eval();
Now, mat1 (the inverted projection transform) is used to map the rendered pixels back to camera-view coordinates:
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud; // output cloud, assumed to be allocated with xres * yres points
int ptr = 0;
for (int y = 0; y < yres; ++y)
{
    for (int x = 0; x < xres; ++x, ++ptr)
    {
        pcl::PointXYZ &pt = (*cloud)[ptr];
        if (depth[ptr] == 1.0)
        {
            // depth == 1.0 is the far plane, i.e. no geometry at this pixel
            pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN();
            continue;
        }
        Eigen::Vector4f world_coords(dwidth * float(x) - 1.0f,
                                     dheight * float(y) - 1.0f,
                                     depth[ptr],
                                     1.0f);
        world_coords = mat1 * world_coords;
        float w3 = 1.0f / world_coords[3];
        world_coords[0] *= w3;
        world_coords[1] *= w3;
        world_coords[2] *= w3;
        pt.x = world_coords[0];
        pt.y = world_coords[1];
        pt.z = world_coords[2];
    }
}
I want the virtual scanner to return a pcl::PointXYZRGB point cloud with the color information.
Any help on how to implement this from someone experienced with VTK would save me some time.
It's possible that I missed a relevant question already asked here - in that case, please point me to it. Thanks.
If I understand correctly that you want to get the color in which each point was rendered into the win RenderWindow, you should be able to get the data from the rendering buffer by calling
float* pixels = win->GetRGBAPixelData(0, 0, xres - 1, yres - 1, 0/1);
This should give you each pixel of the rendering buffer as an array in the format [R0, G0, B0, A0, R1, G1, B1, A1, R2, ...]. The last parameter, which I wrote as 0/1, selects whether the data is taken from the front or the back OpenGL buffer. I presume double buffering is on by default, so you would want to read from the back buffer (use 1), but I am not sure.
Once you have that, you can get the color in your second loop for all pixels that belong to points (depth[ptr] != 1.0). Note that pcl::PointXYZRGB stores the channels as 8-bit r, g, b fields while the buffer gives you floats (typically in [0, 1]), so something like:
pt.r = (uint8_t)(pixels[4 * ptr] * 255);
pt.g = (uint8_t)(pixels[4 * ptr + 1] * 255);
pt.b = (uint8_t)(pixels[4 * ptr + 2] * 255);
You should call win->ReleaseRGBAPixelData(pixels) once you're done with it.
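If it helps, here is a hedged sketch (my own combination of your unprojection loop and the buffer read above, untested) of the second loop rewritten to fill a colored cloud. It assumes the floats returned by GetRGBAPixelData are in [0, 1] and that depth, mat1, dwidth and dheight are the same variables as in your code.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr colored(new pcl::PointCloud<pcl::PointXYZRGB>);
colored->resize(static_cast<size_t>(xres) * yres);

float* pixels = win->GetRGBAPixelData(0, 0, xres - 1, yres - 1, 1 /* back buffer */);

int ptr = 0;
for (int y = 0; y < yres; ++y)
{
    for (int x = 0; x < xres; ++x, ++ptr)
    {
        pcl::PointXYZRGB &pt = (*colored)[ptr];
        if (depth[ptr] == 1.0f)
        {
            // Far-plane depth means no geometry was rendered at this pixel.
            pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN();
            continue;
        }

        // Same unprojection as in the question.
        Eigen::Vector4f world_coords(dwidth * float(x) - 1.0f,
                                     dheight * float(y) - 1.0f,
                                     depth[ptr],
                                     1.0f);
        world_coords = mat1 * world_coords;
        world_coords /= world_coords[3];
        pt.x = world_coords[0];
        pt.y = world_coords[1];
        pt.z = world_coords[2];

        // Color from the rendering buffer, scaled from [0, 1] floats to 8-bit channels.
        pt.r = static_cast<uint8_t>(pixels[4 * ptr] * 255.0f);
        pt.g = static_cast<uint8_t>(pixels[4 * ptr + 1] * 255.0f);
        pt.b = static_cast<uint8_t>(pixels[4 * ptr + 2] * 255.0f);
    }
}

win->ReleaseRGBAPixelData(pixels);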

Processing fft crash

Weird thing: I keep getting Processing or Java to crash with this code, which is based on sample code from the Processing website.
On PC it doesn't work at all; on one Mac it works for about 5 seconds until it crashes, and on another Mac it crashes immediately and gives me this:
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: RtApiCore::probeDeviceOpen: the device (2) does not support the requested channel count.
Could not run the sketch (Target VM failed to initialize).
Do you think it's a problem with the library or with the code?
If it's a problem with the library, could you recommend the best sound library to do something like this?
Thank you :)
import processing.sound.*;

FFT fft;
AudioIn in;
int bands = 512;
float[] spectrum = new float[bands];

void setup() {
  size(900, 600);
  background(255);
  // Create an Input stream which is routed into the Amplitude analyzer
  fft = new FFT(this, bands);
  in = new AudioIn(this, 0);
  // start the Audio Input
  in.start();
  // patch the AudioIn
  fft.input(in);
}

void draw() {
  background(255);
  int midPointW = width/2;
  int midPointH = height/2;
  float angle = 1;
  fft.analyze(spectrum);
  //float radius = 200;
  for (int i = 0; i < bands; i++) {
    // create the actions for placing points on a circle
    float radius = spectrum[i]*height*10;
    //float radius = 10;
    float endX = midPointW+sin(angle) * radius*10;
    float endY = midPointH+cos(angle) * radius*10;
    float startX = midPointW+sin(angle) * radius*5;
    float startY = midPointH+cos(angle) * radius*5;
    // The result of the FFT is normalized
    // draw the line for frequency band i scaling it up by 5 to get more amplitude.
    line(startX, startY, endX, endY);
    angle = angle + angle;
    println(endX, "", endY);
    //if (angle > 360) {
    //  angle = 0;
    //}
  }
}
If you print the values you use, like angle and the start/end x, y, you'll notice that:
the start/end x, y values become NaN (not a number - invalid)
angle quickly goes to Infinity (but not beyond)
One of the main issues is this line:
angle = angle + angle;
You're exponentially increasing this value, which you probably don't want.
Additionally, bear in mind that trigonometric functions such as sin() and cos() use radians, not degrees, so the values tend to be small. You can constrain the values to 360 degrees or TWO_PI radians using the modulo operator (%) or the constrain() function:
angle = (angle + 0.01) % TWO_PI;
You were very close though, as your angle > 360 check shows. I'm not sure why you left it commented out.
Here's your code with the tweak and comments:
import processing.sound.*;

FFT fft;
AudioIn in;
int bands = 512;
float[] spectrum = new float[bands];

void setup() {
  size(900, 600);
  background(255);
  // Create an Input stream which is routed into the Amplitude analyzer
  fft = new FFT(this, bands);
  in = new AudioIn(this, 0);
  // start the Audio Input
  in.start();
  // patch the AudioIn
  fft.input(in);
}

void draw() {
  background(255);
  int midPointW = width/2;
  int midPointH = height/2;
  float angle = 1;
  fft.analyze(spectrum);
  //float radius = 200;
  for (int i = 0; i < bands; i++) {
    // create the actions for placing points on a circle
    float radius = spectrum[i] * height * 10;
    //float radius = 10;
    float endX = midPointW + (sin(angle) * radius * 10);
    float endY = midPointH + (cos(angle) * radius * 10);
    float startX = midPointW + (sin(angle) * radius * 5);
    float startY = midPointH + (cos(angle) * radius * 5);
    // The result of the FFT is normalized
    // draw the line for frequency band i scaling it up by 5 to get more amplitude.
    line(startX, startY, endX, endY);
    //angle = angle + angle;
    angle = (angle + 0.01) % TWO_PI; //linearly increase the angle and constrain it to 360 degrees (2 * PI)
  }
}

void exit() {
  in.stop(); //try to cleanly stop the audio input
  super.exit();
}
The sketch ran for more than 5 minutes, but when closing it I still encountered JVM crashes on OSX.
I haven't used this sound library much and haven't looked into its internals, but it might be a bug.
If this still causes problems, for pragmatic reasons I'd recommend installing a different Processing library for FFT sound analysis via the Contribution Manager.
Here are a couple of libraries:
Minim - provides some nice linear and logarithmic averaging functions that can help with visualisations
Beads - feature-rich, but with a more Java-like syntax. There's also a free book on it: Sonifying Processing
Both libraries provide FFT examples.

Smoothing pixel-by-pixel drawing in Processing

I picked up Processing today and wrote a program to generate a double-slit interference pattern. After tweaking the values a little it works, but the pattern generated is fuzzier than what is possible in some other programs. Here's a screenshot:
As you can see, the fringes are not as smooth at the edges as I believe is possible. I expect them to look like this or this.
This is my code:
// All quantities in mm
float slit_separation = 0.005;
float screen_dist = 50;
float wavelength = 5e-4f;
PVector slit1, slit2;
float scale = 1e+1f;

void setup() {
  size(500, 500);
  colorMode(HSB, 360, 100, 1);
  noLoop();
  background(255);
  slit_separation *= scale;
  screen_dist *= scale;
  wavelength *= scale;
  slit1 = new PVector(-slit_separation / 2, 0, -screen_dist);
  slit2 = new PVector(slit_separation / 2, 0, -screen_dist);
}

void draw() {
  translate(width / 2, height / 2);
  for (float x = -width / 2; x < width / 2; x++) {
    for (float y = -height / 2; y < height / 2; y++) {
      PVector pos = new PVector(x, y, 0);
      float path_diff = abs(PVector.sub(slit1, pos).mag() - PVector.sub(slit2, pos).mag());
      float parameter = map(path_diff % wavelength, 0, wavelength, 0, 2 * PI);
      stroke(100, 100, pow(cos(parameter), 2));
      point(x, y);
    }
  }
}
My code is mathematically correct, so I am wondering if there's something wrong I am doing in transforming the physical values to pixels on screen.
I'm not totally sure what you're asking - what exactly do you expect it to look like? Would it be possible to narrow this down to a single line that's misbehaving instead of the nested for loop?
But just taking a guess at what you're talking about: keep in mind that Processing enables anti-aliasing by default. To disable it, you have to call the noSmooth() function. You can call it in your setup() function:
void setup() {
  size(500, 500);
  noSmooth();
  //rest of your code
It's pretty obvious if you compare them side-by-side:
If that's not what you're talking about, please post an MCVE of just one or two lines instead of a nested for loop. It would also be helpful to include a mockup of what you'd expect versus what you're getting. Good luck!

Is there a faked antialiasing algorithm using the depth buffer?

Lately I implemented the FXAA algorithm in my OpenGL application. I don't fully understand the algorithm yet, but I know that it uses contrast data from the final image to selectively apply blurring. As a post-processing effect that makes sense. But since I use deferred shading in my application, I already have a depth texture of the scene. Using that, it might be much easier and more precise to find the edges where blurring should be applied.
So is there a known antialiasing algorithm that uses the depth texture instead of the final image to find the edges? By faked I mean an antialiasing algorithm that works on a per-pixel basis instead of a per-vertex basis.
After some research I found out that my idea is already widely used in deferred renderers. I decided to post this answer because I came up with my own implementation, which I want to share with the community.
Blurring is applied to a pixel based on the gradient change of the depth and the angle change of the normals.
// GLSL fragment shader
#version 330

in vec2 coord;
out vec4 image;

uniform sampler2D image_tex;
uniform sampler2D position_tex;
uniform sampler2D normal_tex;
uniform vec2 frameBufSize;

void depth(out float value, in vec2 offset)
{
    value = texture2D(position_tex, coord + offset / frameBufSize).z / 1000.0f;
}

void normal(out vec3 value, in vec2 offset)
{
    value = texture2D(normal_tex, coord + offset / frameBufSize).xyz;
}

void main()
{
    // depth
    float dc, dn, ds, de, dw;
    depth(dc, vec2( 0,  0));
    depth(dn, vec2( 0, +1));
    depth(ds, vec2( 0, -1));
    depth(de, vec2(+1,  0));
    depth(dw, vec2(-1,  0));

    float dvertical   = abs(dc - ((dn + ds) / 2));
    float dhorizontal = abs(dc - ((de + dw) / 2));
    float damount     = 1000 * (dvertical + dhorizontal);

    // normals
    vec3 nc, nn, ns, ne, nw;
    normal(nc, vec2( 0,  0));
    normal(nn, vec2( 0, +1));
    normal(ns, vec2( 0, -1));
    normal(ne, vec2(+1,  0));
    normal(nw, vec2(-1,  0));

    float nvertical   = dot(vec3(1), abs(nc - ((nn + ns) / 2.0)));
    float nhorizontal = dot(vec3(1), abs(nc - ((ne + nw) / 2.0)));
    float namount     = 50 * (nvertical + nhorizontal);

    // blur
    const int radius = 1;
    vec3 blur = vec3(0);
    int n = 0;
    for(float u = -radius; u <= +radius; ++u)
        for(float v = -radius; v <= +radius; ++v)
        {
            blur += texture2D(image_tex, coord + vec2(u, v) / frameBufSize).rgb;
            n++;
        }
    blur /= n;

    // result
    float amount = mix(damount, namount, 0.5);
    vec3 color = texture2D(image_tex, coord).rgb;
    image = vec4(mix(color, blur, min(amount, 0.75)), 1.0);
}
For comparison, this is the scene without any anti-aliasing.
This is the result with anti-aliasing applied.
You may need to view the images at their full resolution to judge the effect. In my view the result is adequate for such a simple implementation, and the best part is that there are almost no jagged artifacts when the camera moves.
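For what it's worth, here is a minimal sketch (my own addition, not part of the implementation above) of how this post-processing pass might be driven from the C++ side with plain OpenGL 3.3. The names are assumptions: program stands for the compiled shader above, colorTex/positionTex/normalTex for the G-buffer attachments written by the deferred pass, and fullscreenVao for a VAO that emits a fullscreen triangle; an OpenGL loader such as glad or GLEW is assumed to be initialised.
void drawDepthNormalAAPass(GLuint program,
                           GLuint colorTex, GLuint positionTex, GLuint normalTex,
                           GLuint fullscreenVao, int width, int height)
{
    glUseProgram(program);

    // Bind the three input textures to fixed texture units.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, positionTex);
    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, normalTex);

    // Point the samplers at those units and pass the framebuffer size
    // the shader uses to convert pixel offsets into texture coordinates.
    glUniform1i(glGetUniformLocation(program, "image_tex"), 0);
    glUniform1i(glGetUniformLocation(program, "position_tex"), 1);
    glUniform1i(glGetUniformLocation(program, "normal_tex"), 2);
    glUniform2f(glGetUniformLocation(program, "frameBufSize"),
                (float)width, (float)height);

    // Draw a fullscreen triangle; the vertex shader is assumed to
    // produce the 'coord' varying (e.g. from gl_VertexID).
    glBindVertexArray(fullscreenVao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0);
}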

How to draw partial-ellipse in CF? (Graphics.DrawArc in full framework)

I hope there's an easy answer; oftentimes something stripped out of the Compact Framework can still be done in a seemingly roundabout manner that works just as well as the full framework (or can even be made more efficient).
Simply put, I wish to be able to do a function similar to System.Drawing.Graphics.DrawArc(...) in Compact Framework 2.0.
It is for a UserControl's OnPaint override, where an arc is being drawn inside an ellipse I already filled.
Essentially (close pseudo code, please ignore imperfections in parameters):
FillEllipse(ellipseFillBrush, largeEllipseRegion);
DrawArc(arcPen, innerEllipseRegion, startAngle, endAngle); //not available in CF
I am only drawing arcs in 90-degree spans, e.g. the bottom-right corner of the ellipse's arc, or the top-left. If the answer for ANY angle is really roundabout, difficult, or inefficient, while there's an easy solution for drawing just a corner of an ellipse, I'm fine with the latter, though the former would help anyone else who has a similar question.
I use this code, then use FillPolygon or DrawPolygon with the output points:
private Point[] CreateArc(float StartAngle, float SweepAngle, int PointsInArc, int Radius, int xOffset, int yOffset, int LineWidth)
{
    if (PointsInArc < 0)
        PointsInArc = 0;
    if (PointsInArc > 360)
        PointsInArc = 360;

    Point[] points = new Point[PointsInArc * 2];
    int xo;
    int yo;
    int xi;
    int yi;
    float degs;
    double rads;

    for (int p = 0; p < PointsInArc; p++)
    {
        degs = StartAngle + ((SweepAngle / PointsInArc) * p);
        rads = (degs * (Math.PI / 180));
        xo = (int)(Radius * Math.Sin(rads));
        yo = (int)(Radius * Math.Cos(rads));
        xi = (int)((Radius - LineWidth) * Math.Sin(rads));
        yi = (int)((Radius - LineWidth) * Math.Cos(rads));
        xo += (Radius + xOffset);
        yo = Radius - yo + yOffset;
        xi += (Radius + xOffset);
        yi = Radius - yi + yOffset;
        points[p] = new Point(xo, yo);
        points[(PointsInArc * 2) - (p + 1)] = new Point(xi, yi);
    }
    return points;
}
I had exactly this problem, and my team and I solved it by creating an extension method for the Compact Framework Graphics class.
I hope this helps someone, because it took a lot of work to get to this solution.
Mauricio de Sousa Coelho
Embedded Software Engineer
public static class GraphicsExtension
{
    // Implements the native Graphics.DrawArc as an extension
    public static void DrawArc(this Graphics g, Pen pen, float x, float y, float width, float height, float startAngle, float sweepAngle)
    {
        //Configures the number of degrees for each line in the arc
        int degreesForNewLine = 5;
        //Calculates the number of points in the arc based on the degrees-for-new-line configuration
        int pointsInArc = Convert.ToInt32(Math.Ceiling(sweepAngle / degreesForNewLine)) + 1;
        //Minimum points for an arc is 3
        pointsInArc = pointsInArc < 3 ? 3 : pointsInArc;
        float centerX = (x + width) / 2;
        float centerY = (y + height) / 2;
        Point previousPoint = GetEllipsePoint(x, y, width, height, startAngle);
        //Floating point precision error occurs here
        double angleStep = sweepAngle / pointsInArc;
        Point nextPoint;
        for (int i = 1; i < pointsInArc; i++)
        {
            //Increments the angle and gets the ellipse point associated with the incremented angle
            nextPoint = GetEllipsePoint(x, y, width, height, (float)(startAngle + angleStep * i));
            //Connects the two points with a straight line
            g.DrawLine(pen, previousPoint.X, previousPoint.Y, nextPoint.X, nextPoint.Y);
            previousPoint = nextPoint;
        }
        //Guarantees connection with the last point so that accumulated errors cannot
        //cause discontinuities in the drawing
        nextPoint = GetEllipsePoint(x, y, width, height, startAngle + sweepAngle);
        g.DrawLine(pen, previousPoint.X, previousPoint.Y, nextPoint.X, nextPoint.Y);
    }

    // Retrieves the point of the ellipse inscribed in the given bounding box at the given angle
    private static Point GetEllipsePoint(float x, float y, float width, float height, float angle)
    {
        return new Point(Convert.ToInt32(((Math.Cos(ToRadians(angle)) * width + 2 * x + width) / 2)), Convert.ToInt32(((Math.Sin(ToRadians(angle)) * height + 2 * y + height) / 2)));
    }

    // Converts an angle in degrees to the same angle in radians.
    private static float ToRadians(float angleInDegrees)
    {
        return (float)(angleInDegrees * Math.PI / 180);
    }
}
Following up on @ctacke's response, which creates an arc-shaped polygon for a circle (height == width), I edited it further and created a function that returns a Point array for a curved line, as opposed to a polygon, and for any ellipse.
Note: StartAngle here is the NOON position, and 90 degrees is the 3 o'clock position, so StartAngle = 0 and SweepAngle = 90 makes an arc from noon to the 3 o'clock position.
The original DrawArc method has 3 o'clock as 0 degrees, and 90 degrees is the 6 o'clock position. Just something to note when replacing DrawArc with CreateArc followed by DrawLines on the resulting Point[] array.
I'd play with this further to change that, but why break something that's working?
private Point[] CreateArc(float StartAngle, float SweepAngle, int PointsInArc, int ellipseWidth, int ellipseHeight, int xOffset, int yOffset)
{
    if (PointsInArc < 0)
        PointsInArc = 0;
    if (PointsInArc > 360)
        PointsInArc = 360;

    Point[] points = new Point[PointsInArc];
    int xo;
    int yo;
    float degs;
    double rads;

    //could have WidthRadius and HeightRadius be parameters, but easier
    // for maintenance to have the diameters sent in instead, matching closer
    // to DrawEllipse and similar methods
    double radiusW = (double)ellipseWidth / 2.0;
    double radiusH = (double)ellipseHeight / 2.0;

    for (int p = 0; p < PointsInArc; p++)
    {
        degs = StartAngle + ((SweepAngle / PointsInArc) * p);
        rads = (degs * (Math.PI / 180));
        xo = (int)Math.Round(radiusW * Math.Sin(rads), 0);
        yo = (int)Math.Round(radiusH * Math.Cos(rads), 0);
        xo += (int)Math.Round(radiusW, 0) + xOffset;
        yo = (int)Math.Round(radiusH, 0) - yo + yOffset;
        points[p] = new Point(xo, yo);
    }
    return points;
}
