Draw Sphere - order of vertices

I would like to draw a sphere in pure OpenGL ES 2.0, without any engines. I wrote the following code:
int GenerateSphere (int Slices, float radius, GLfloat **vertices, GLfloat **colors) {
    srand(time(NULL));
    int i = 0, j = 0;
    int Parallels = Slices;
    float tempColor = 0.0f;
    int VerticesCount = ( Parallels + 1 ) * ( Slices + 1 );
    float angleStep = (2.0f * M_PI) / ((float) Slices);
    // Allocate memory for buffers
    if ( vertices != NULL ) {
        *vertices = malloc ( sizeof(GLfloat) * 3 * VerticesCount );
    }
    if ( colors != NULL) {
        *colors = malloc ( sizeof(GLfloat) * 4 * VerticesCount );
    }
    for ( i = 0; i < Parallels + 1; i++ ) {
        for ( j = 0; j < Slices + 1; j++ ) {
            int vertex = ( i * (Slices + 1) + j ) * 3;
            (*vertices)[vertex + 0] = radius * sinf ( angleStep * (float)i ) *
                                      sinf ( angleStep * (float)j );
            (*vertices)[vertex + 1] = radius * cosf ( angleStep * (float)i );
            (*vertices)[vertex + 2] = radius * sinf ( angleStep * (float)i ) *
                                      cosf ( angleStep * (float)j );
            if ( colors ) {
                int colorIndex = ( i * (Slices + 1) + j ) * 4;
                tempColor = (float)(rand() % 100) / 100.0f;
                (*colors)[colorIndex + 0] = 0.0f;
                (*colors)[colorIndex + 1] = 0.0f;
                (*colors)[colorIndex + 2] = 0.0f;
                (*colors)[colorIndex + (rand() % 4)] = tempColor;
                (*colors)[colorIndex + 3] = 1.0f;
            }
        }
    }
    return VerticesCount;
}
I draw it using the following call:
    glDrawArrays(GL_TRIANGLE_STRIP, 0, userData->numVertices);
where userData->numVertices is the VerticesCount returned by GenerateSphere.
But what appears on screen is a series of triangles that is not a sphere approximation!
I think I need to enumerate the vertices and use the OpenGL ES 2.0 function glDrawElements() (with an array containing the vertex indices). But the series of triangles drawn on the screen is still not a sphere approximation.
How can I draw a sphere approximation? How do I specify the order of the vertices (the indices, in OpenGL ES 2.0 terms)?

Before you start with anything in OpenGL ES, here is some advice:
Avoid bloating CPU/GPU performance.
Removing intense calculation cycles by modeling the shapes offline in another program will surely help. Such programs also provide additional details about the shapes/meshes besides exporting the resulting collection of [x, y, z] points that make them up.
I went through all this pain a while back, because I kept searching for algorithms to render spheres and then trying to optimize them. I just want to save you that time in the future. Just use Blender, and then your favorite programming language to parse the obj files it exports; I use Perl. Here are the steps to render a sphere (use glDrawElements, because the obj file contains the array of indices):
1) Download and install Blender.
2) From the menu, add a sphere, then reduce the number of rings and segments.
3) Select the entire shape and triangulate it.
4) Export an obj file and parse it for the meshes.
You should be able to grasp the logic to render a sphere from this file: http://pastebin.com/4esQdVPP. It is for Android, but the concepts are the same.
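Alternatively, if you would rather keep generating the grid in code, here is a minimal, untested sketch of how an index buffer for the (Parallels + 1) x (Slices + 1) vertex grid from the question could be built, two triangles per quad (the function name and the GLushort choice are just for illustration):

    /* Sketch: build GL_TRIANGLES indices for the vertex grid above.
       Returns the index count. */
    int GenerateSphereIndices(int Slices, GLushort **indices)
    {
        int Parallels = Slices;
        int IndexCount = Parallels * Slices * 6; /* 6 indices (2 triangles) per quad */
        *indices = malloc(sizeof(GLushort) * IndexCount);
        int k = 0;
        for (int i = 0; i < Parallels; i++) {
            for (int j = 0; j < Slices; j++) {
                /* corners of the current quad in the vertex grid */
                GLushort row0 = (GLushort)(i * (Slices + 1) + j);
                GLushort row1 = (GLushort)((i + 1) * (Slices + 1) + j);
                /* first triangle of the quad */
                (*indices)[k++] = row0;
                (*indices)[k++] = row1;
                (*indices)[k++] = row0 + 1;
                /* second triangle of the quad */
                (*indices)[k++] = row0 + 1;
                (*indices)[k++] = row1;
                (*indices)[k++] = row1 + 1;
            }
        }
        return IndexCount;
    }

You would then draw with glDrawElements(GL_TRIANGLES, IndexCount, GL_UNSIGNED_SHORT, indices) instead of glDrawArrays.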
Hope this helps.

I struggled with spheres and other geometric shapes. I worked at it a while and created an Objective-C class to create coordinates, normals, and texture coordinates, using both indexed and non-indexed mechanisms; the class is here:
http://www.whynotsometime.com/Why_Not_Sometime/Code_Snippets.html
What is interesting, to see the resulting triangles representing the geometry, is to reduce the resolution (set the resolution property before generating the coordinates). Also, you can use GL_LINE_STRIP instead of GL_TRIANGLES to see a bit more.
I agree with wimp's comment that since calculating the coordinates generally happens once, not many CPU cycles are used. Also, sometimes one does want to draw only a ball or a world or...

Related

Select a screen-space point uniformly at random

I am working on implementing Alchemy AO and reading through their paper, where they describe sampling each point as follows: consider a disk of radius r and center C that is parallel to the image plane, select a screen-space point Q uniformly at random on its projection, and then read a depth or position buffer to find the camera-space scene point P = (xP, yP, z(Q)) on that ray.
I am wondering how you would go about selecting a screen-space point in this manner? I have made an attempt below, but since my result appears quite incorrect, I think it's the wrong approach.
    vec3 Position = depthToPosition(uvCoords);
    int turns = 16;
    float screen_radius = (sampleRadius * 100.0 / Position.z); // ball around the point
    const float disk = (2.0 * PI) / turns;
    ivec2 px = ivec2(gl_FragCoord.xy);
    float phi = (30u * px.x ^ px.y + 10u * px.x * px.y); // per-pixel hash for a random rotation angle at each pixel
    for (int i = 0; i < samples; ++i)
    {
        float theta = disk * float(i + 1) + phi;
        vec2 samplepoint = vec2(cos(theta), sin(theta));
    }
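For reference, the loop above computes a unit direction (samplepoint) but never scales it by a per-sample radius or reads the buffer at the offset position. A minimal C-style sketch of Alchemy/SAO-style spiral sampling over the disk (illustrative only; the function name and the linear radius ramp are my assumptions, not the paper's exact code):

    #include <math.h>

    /* Fill offsets[] with screen-space offsets from the disk center.
       alpha grows from ~0 to 1 over the taps; the angle winds `turns`
       times around while the radius ramps up, so the taps spiral
       outward over the disk. The caller reads the depth/position
       buffer at center + offsets[i]. */
    void disk_samples(float screen_radius, float phi, int samples,
                      int turns, float offsets[][2])
    {
        const float TWO_PI = 6.28318530718f;
        for (int i = 0; i < samples; i++) {
            float alpha = ((float)i + 0.5f) / (float)samples;
            float theta = alpha * (float)turns * TWO_PI + phi;
            float r = alpha * screen_radius; /* sqrt(alpha) would be uniform by area */
            offsets[i][0] = r * cosf(theta);
            offsets[i][1] = r * sinf(theta);
        }
    }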

What is the best algorithm for a non-antialiased line and an antialiased line?

I'm new to shaders and I have been messing about with the website Shadertoy. I'm trying to understand graphics (and the graphics pipeline), such as drawing lines, interpolation, rasterization, etc. I've written two line functions that return a color if the pixel being processed is on the line. This is the Shadertoy code, using fragment shaders:
struct Vertex {
    vec2 p;
    vec4 c;
};

vec4 overlay(vec4 c1, vec4 c2) {
    return vec4((1.0 - c2.w) * c1.xyz + c2.w * c2.xyz, 1.0);
}

vec4 drawLineA(Vertex v1, Vertex v2, vec2 pos) {
    vec2 a = v1.p;
    vec2 b = v2.p;
    vec2 r = floor(pos);
    vec2 diff = b - a;
    if (abs(diff.y) < abs(diff.x)) {
        if (diff.x < 0.0) {
            Vertex temp1 = v1;
            Vertex temp2 = v2;
            v1 = temp2;
            v2 = temp1;
            a = v1.p;
            b = v2.p;
            diff = b - a;
        }
        float m = diff.y / diff.x;
        float q = r.x - a.x;
        if (floor(m * q + a.y) == r.y && a.x <= r.x && r.x <= b.x) {
            float h = q / diff.x;
            return vec4((1.0 - h) * v1.c + h * v2.c);
        }
    } else {
        if (diff.y < 0.0) {
            Vertex temp1 = v1;
            Vertex temp2 = v2;
            v1 = temp2;
            v2 = temp1;
            a = v1.p;
            b = v2.p;
            diff = b - a;
        }
        float m = diff.x / diff.y;
        float q = r.y - a.y;
        if (floor(m * q + a.x) == r.x && a.y <= r.y && r.y <= b.y) {
            float h = q / diff.y;
            return vec4((1.0 - h) * v1.c + h * v2.c);
        }
    }
    return vec4(0, 0, 0, 0);
}

vec4 drawLineB(Vertex v1, Vertex v2, vec2 pos) {
    vec2 a = v1.p;
    vec2 b = v2.p;
    vec2 l = b - a;
    vec2 r = pos - a;
    float h = dot(l, r) / dot(l, l);
    vec2 eC = a + h * l;
    if (floor(pos) == floor(eC) && 0.0 <= h && h <= 1.0) {
        return vec4((1.0 - h) * v1.c + h * v2.c);
    }
    return vec4(0, 0, 0, 0);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float t = iTime;
    float r = 300.0;
    Vertex v1 = Vertex(vec2(400, 225), vec4(1, 0, 0, 1));
    Vertex v2 = Vertex(vec2(400.0 + r * cos(t), 225.0 + r * sin(t)), vec4(0, 1, 0, 1));
    vec4 col = vec4(0, 0, 0, 1);
    col = overlay(col, drawLineA(v1, v2, fragCoord));
    col = overlay(col, drawLineB(v1, v2, fragCoord));
    // Output to screen
    fragColor = col;
}
However, the lines I have been drawing are neither fast nor antialiased. What is the fastest algorithm for both antialiased and non-antialiased lines, and how should I implement it? Thanks.
A fragment shader is really not the right approach for this. A lot of what is on Shadertoy is really just a toy / code golf, showing solutions that overcome the limitations of the platform but are terribly inefficient in real-world scenarios.
All graphics APIs provide dedicated interfaces for drawing line segments; just search for "API_NAME draw line", e.g. "webgl draw line". In cases where those do not suffice, triangle strips with either MSAA or custom in-shader AA are used.
If you're really just looking for an efficient algorithm, the Wikipedia page has you covered.
As the other answer says, shaders are not very good for this.
Line rasterization is done behind the scenes with hardware interpolators on the graphics card these days. The fragment shader is invoked for each pixel of the rendered primitive, which in your case means it is called for every pixel of the screen, and all of that is invoked again for each line you render, which is massively slower than the native path.
If you truly want to learn rasterization, do it on the CPU side instead. The best algorithm for lines depends on the hardware architecture you are targeting.
For sequential processing it is:
DDA, which has subpixel precision.
In the past Bresenham was faster, but IIRC that has not been true since the 386...
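For illustration, a minimal DDA sketch in C with 16.16 fixed-point subpixel stepping (it assumes some put_pixel helper exists, and only handles the shallow |dx| >= |dy| case; the steep case is symmetric with x and y swapped):

    extern void put_pixel(int x, int y); /* assumed to exist */

    /* DDA for the shallow case: step x by 1, carry y in 16.16 fixed point. */
    void dda_line(int x0, int y0, int x1, int y1)
    {
        if (x0 > x1) { int t = x0; x0 = x1; x1 = t; t = y0; y0 = y1; y1 = t; }
        int dx = x1 - x0;
        long long y = (long long)y0 * 65536 + 32768;              /* +0.5 to round */
        long long step = dx ? ((long long)(y1 - y0) * 65536) / dx : 0;
        for (int x = x0; x <= x1; x++) {
            put_pixel(x, (int)(y >> 16));
            y += step;                                            /* y increment per unit x */
        }
    }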
For parallel processing you just compute the distance of each pixel to the line (more or less like you do now).
So if you insist on using shaders for this, you can speed things up by using a geometry shader and processing only the fragments (pixels) that are near your line. See:
cubic curves rendering in GLSL
Simply put, you create an OOBB around your line and render it by emitting 2 triangles per line; then in the fragment shader you compute the distance to the line and set the color accordingly...
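In C-like form, the per-fragment quantity is just the distance from the pixel to the line segment; a small sketch:

    #include <math.h>

    /* Distance from point p to segment a-b; the fragment shader would
       compute this and compare it against the line half-width w. */
    float dist_point_segment(float px, float py,
                             float ax, float ay, float bx, float by)
    {
        float lx = bx - ax, ly = by - ay;
        float t = (lx * (px - ax) + ly * (py - ay)) / (lx * lx + ly * ly);
        if (t < 0.0f) t = 0.0f; /* clamp to the segment's endpoints */
        if (t > 1.0f) t = 1.0f;
        float cx = ax + t * lx - px;
        float cy = ay + t * ly - py;
        return sqrtf(cx * cx + cy * cy);
    }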
For antialiasing you simply change the color of pixels within the last pixel of distance from the edge. So if your line has half-width w and the distance of a fragment from the line is d, then:

    if (d > w) discard;                      // fragment too far
    d = (w - d) / pixel_size;                // distance from the edge in pixels
    frag_color = vec4(r, g, b, min(1.0, d)); // use transparency/blending

As you can see, antialiasing is just rendering with blending modulated by the subpixel position/distance of the pixel relative to the rasterized object. The same technique can be used with DDA.
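Note that the transparency output assumes standard alpha blending is enabled on the GL side, e.g.:

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);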
There are also ray-tracing methods of rendering lines, but they are pretty much the same as finding the distance to a line; however, instead of a 2D pixel position you are checking against a 3D ray, which slightly complicates the math.

Can I do random writes from a kernel without worrying about synchronization issues?

Consider a simple depth-of-field filter (my actual use case is similar). It loops over the image and scatters every pixel over a circular neighborhood of it. The radius of the neighborhood depends on the depth of the pixel: the closer it is to the focal plane, the smaller the radius.
Note that I said "scatters" and not "gathers". In simpler image-processing applications, you normally use the "gather" technique to perform a uniform Gaussian blur. IOW, you loop over the neighborhood of each pixel and "gather" the nearby values into a weighted average. That works fine in that case, but if you make the blur kernel vary between pixels while still "gathering", you'll get a somewhat unrealistic effect. Such "space-variant filtering" scenarios are where "scattering" differs from "gathering".
To be clear: the scatter algorithm is something like this:

    init resultImage to black
    loop over sourceImage
        var c = fetch current pixel from sourceImage
        var toAdd = c * weight // weight < 1
        loop over circular neighbourhood of current source pixel
            add toAdd to current neighbor in resultImage
My question is: if I do a direct translation of this pseudocode to OpenCL, will there be synchronization issues due to different work items simultaneously writing to the same output pixel?
Does the answer vary depending on whether I'm using Buffers or Images?
The course I'm reading suggests that there will be synchronization issues. But OTOH I read the source of Mandelbulber 1.21-2, which does a straightforward OpenCL DOF just like my pseudocode above, and it seems to work fine.
(the relevant code is in mandelbulber-opencl-1.21-2.orig/usr/share/cl/cl_DOF.cl and it's as follows)
//*********************************************************
// MANDELBULBER
// kernel for DOF effect
//
//
// author: Krzysztof Marczak
// contact: buddhi1980#gmail.com
// licence: GNU GPL v3.0
//
//*********************************************************
typedef struct
{
    int width;
    int height;
    float focus;
    float radius;
} sParamsDOF;

typedef struct
{
    float z;
    int i;
} sSortZ;

//------------------ MAIN RENDER FUNCTION --------------------
kernel void DOF(__global ushort4 *in_image, __global ushort4 *out_image, __global sSortZ *zBuffer, sParamsDOF p)
{
    const unsigned int i = get_global_id(0);
    uint index = p.height * p.width - i - 1;
    int ii = zBuffer[index].i;
    int2 scr = (int2){ii % p.width, ii / p.width};
    float z = zBuffer[index].z;
    float blur = fabs(z - p.focus) / z * p.radius;
    blur = min(blur, 500.0f);
    float4 center = convert_float4(in_image[scr.x + scr.y * p.width]);
    float factor = blur * blur * sqrt(blur) * M_PI_F / 3.0f;
    int blurInt = (int)blur;
    int2 scr2;
    int2 start = (int2){scr.x - blurInt, scr.y - blurInt};
    start = max(start, 0);
    int2 end = (int2){scr.x + blurInt, scr.y + blurInt};
    end = min(end, (int2){p.width - 1, p.height - 1});
    for (scr2.y = start.y; scr2.y <= end.y; scr2.y++)
    {
        for (scr2.x = start.x; scr2.x <= end.x; scr2.x++)
        {
            float2 d = scr - scr2;
            float r = length(d);
            float op = (blur - r) / factor;
            op = clamp(op, 0.0f, 1.0f);
            float opN = 1.0f - op;
            uint address = scr2.x + scr2.y * p.width;
            float4 old = convert_float4(out_image[address]);
            out_image[address] = convert_ushort4(opN * old + op * center);
        }
    }
}
No, you can't without worrying about synchronization. If two work items scatter to the same location without synchronization, you have a race condition and won't get correct results. The same goes for both buffers and images. With buffers you could use atomics, but they can slow down your code, especially when there is contention (and even when there is not). AFAIK, read/write images don't have atomic operations.
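For completeness, here is a minimal sketch of what an atomics-based scatter could look like for integer data (illustrative only, not code from Mandelbulber; it assumes a zero-initialized 16.16 fixed-point accumulation buffer and atomic_add on 32-bit global integers, which is core since OpenCL 1.1):

    kernel void scatter_add(__global const float *src,
                            volatile __global int *acc,
                            int width, int height, int radius)
    {
        int x = get_global_id(0);
        int y = get_global_id(1);
        if (x >= width || y >= height) return;

        float side = (float)(2 * radius + 1);
        float w = src[y * width + x] / (side * side); // spread energy evenly
        int add = (int)(w * 65536.0f);                // to 16.16 fixed point

        for (int dy = -radius; dy <= radius; dy++) {
            for (int dx = -radius; dx <= radius; dx++) {
                int nx = clamp(x + dx, 0, width - 1);
                int ny = clamp(y + dy, 0, height - 1);
                atomic_add(&acc[ny * width + nx], add); // race-free accumulation
            }
        }
    }

A separate pass (or the host) then converts the fixed-point sums back to pixel values. For float accumulation you would need a compare-and-swap loop instead, which is considerably slower under contention.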

How to draw partial-ellipse in CF? (Graphics.DrawArc in full framework)

I hope there will be an easy answer, as oftentimes something stripped out of the Compact Framework can be performed in a seemingly roundabout manner, yet works just as well as the full framework (or can be made more efficient).
Simply put, I wish to be able to do something similar to System.Drawing.Graphics.DrawArc(...) in Compact Framework 2.0.
It is for a UserControl's OnPaint override, where an arc is being drawn inside an ellipse I already filled.
Essentially (close pseudo code, please ignore imperfections in parameters):
    FillEllipse(ellipseFillBrush, largeEllipseRegion);
    DrawArc(arcPen, innerEllipseRegion, startAngle, endAngle); // not available in CF
I am only drawing arcs in 90-degree spans, e.g. the bottom-right corner of the ellipse's arc, or the top-left. If the answer for ANY angle is really roundabout, difficult, or inefficient, while there's an easy solution for drawing just a corner of an ellipse, I'm fine with the latter, though the former would help anyone else who has a similar question.
I use this code, then use FillPolygon or DrawPolygon with the output points:
private Point[] CreateArc(float StartAngle, float SweepAngle, int PointsInArc, int Radius, int xOffset, int yOffset, int LineWidth)
{
    if (PointsInArc < 0)
        PointsInArc = 0;
    if (PointsInArc > 360)
        PointsInArc = 360;
    Point[] points = new Point[PointsInArc * 2];
    int xo;
    int yo;
    int xi;
    int yi;
    float degs;
    double rads;
    for (int p = 0; p < PointsInArc; p++)
    {
        degs = StartAngle + ((SweepAngle / PointsInArc) * p);
        rads = (degs * (Math.PI / 180));
        xo = (int)(Radius * Math.Sin(rads));
        yo = (int)(Radius * Math.Cos(rads));
        xi = (int)((Radius - LineWidth) * Math.Sin(rads));
        yi = (int)((Radius - LineWidth) * Math.Cos(rads));
        xo += (Radius + xOffset);
        yo = Radius - yo + yOffset;
        xi += (Radius + xOffset);
        yi = Radius - yi + yOffset;
        points[p] = new Point(xo, yo);
        points[(PointsInArc * 2) - (p + 1)] = new Point(xi, yi);
    }
    return points;
}
I had exactly this problem, and my team and I solved it by creating an extension method for the Compact Framework Graphics class.
I hope I can help someone, because I spent a lot of work to get to this nice solution.
Mauricio de Sousa Coelho
Embedded Software Engineer
public static class GraphicsExtension
{
    // Implements the native Graphics.DrawArc as an extension
    public static void DrawArc(this Graphics g, Pen pen, float x, float y, float width, float height, float startAngle, float sweepAngle)
    {
        // Configures the number of degrees for each line in the arc
        int degreesForNewLine = 5;
        // Calculates the number of points in the arc based on the degrees-per-line configuration
        int pointsInArc = Convert.ToInt32(Math.Ceiling(sweepAngle / degreesForNewLine)) + 1;
        // Minimum points for an arc is 3
        pointsInArc = pointsInArc < 3 ? 3 : pointsInArc;
        Point previousPoint = GetEllipsePoint(x, y, width, height, startAngle);
        // Floating-point precision error occurs here
        double angleStep = sweepAngle / pointsInArc;
        Point nextPoint;
        for (int i = 1; i < pointsInArc; i++)
        {
            // Increments the angle and gets the ellipse point associated with the incremented angle
            nextPoint = GetEllipsePoint(x, y, width, height, (float)(startAngle + angleStep * i));
            // Connects the two points with a straight line
            g.DrawLine(pen, previousPoint.X, previousPoint.Y, nextPoint.X, nextPoint.Y);
            previousPoint = nextPoint;
        }
        // Guarantees connection with the last point so that accumulated errors cannot
        // cause discontinuities in the drawing
        nextPoint = GetEllipsePoint(x, y, width, height, startAngle + sweepAngle);
        g.DrawLine(pen, previousPoint.X, previousPoint.Y, nextPoint.X, nextPoint.Y);
    }

    // Retrieves a point of an ellipse given its bounding box and an angle
    private static Point GetEllipsePoint(float x, float y, float width, float height, float angle)
    {
        return new Point(Convert.ToInt32(((Math.Cos(ToRadians(angle)) * width + 2 * x + width) / 2)), Convert.ToInt32(((Math.Sin(ToRadians(angle)) * height + 2 * y + height) / 2)));
    }

    // Converts an angle in degrees to the same angle in radians
    private static float ToRadians(float angleInDegrees)
    {
        return (float)(angleInDegrees * Math.PI / 180);
    }
}
Following up on @ctacke's response, which created an arc-shaped polygon for a circle (height == width), I edited it further and created a function for building a Point array for a curved line (as opposed to a polygon), and for any ellipse.
Note: StartAngle here is the NOON position, and 90 degrees is the 3 o'clock position, so StartAngle = 0 and SweepAngle = 90 makes an arc from noon to the 3 o'clock position. The original DrawArc method has 3 o'clock as 0 degrees and 90 degrees as the 6 o'clock position. Just a note when replacing DrawArc with CreateArc followed by DrawLines with the resulting Point[] array.
I'd play with this further to change that, but why break something that's working?
private Point[] CreateArc(float StartAngle, float SweepAngle, int PointsInArc, int ellipseWidth, int ellipseHeight, int xOffset, int yOffset)
{
    if (PointsInArc < 0)
        PointsInArc = 0;
    if (PointsInArc > 360)
        PointsInArc = 360;
    Point[] points = new Point[PointsInArc];
    int xo;
    int yo;
    float degs;
    double rads;
    // Could have WidthRadius and HeightRadius be parameters, but it is easier
    // for maintenance to have the diameters sent in instead, matching closer
    // to DrawEllipse and similar methods
    double radiusW = (double)ellipseWidth / 2.0;
    double radiusH = (double)ellipseHeight / 2.0;
    for (int p = 0; p < PointsInArc; p++)
    {
        degs = StartAngle + ((SweepAngle / PointsInArc) * p);
        rads = (degs * (Math.PI / 180));
        xo = (int)Math.Round(radiusW * Math.Sin(rads), 0);
        yo = (int)Math.Round(radiusH * Math.Cos(rads), 0);
        xo += (int)Math.Round(radiusW, 0) + xOffset;
        yo = (int)Math.Round(radiusH, 0) - yo + yOffset;
        points[p] = new Point(xo, yo);
    }
    return points;
}

Algorithm to create a "scruffy" paper effect for UML Diagrams?

I like the scruffy paper effect of http://yuml.me UML diagrams. Is there an algorithm for that, preferably not in Ruby but in PHP, Java, or C#? I would like to see if it's easy to do the same thing in Rebol:
http://reboltutorial.com/blog/easy-yuml-dialect-for-mere-mortals2/
The effect combines:
a diagonal gradient fill
a drop shadow
lines which, rather than being straight, have some small, apparently random deviations in them, which gives a 'scruffy' feel.
You can seed your random number generator with a hash of the input so you get the same image each time.
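For example (illustrative only, assuming a 64-bit seed), an FNV-1a hash of the diagram source gives a stable seed, so the jitter doesn't change between renders:

    #include <stdint.h>

    /* FNV-1a: hash the diagram text into a deterministic RNG seed,
       so the same input always produces the same scruffy jitter. */
    uint64_t seed_from_text(const char *text)
    {
        uint64_t h = 0xcbf29ce484222325ULL;   /* FNV offset basis */
        while (*text) {
            h ^= (unsigned char)*text++;
            h *= 0x100000001b3ULL;            /* FNV prime */
        }
        return h;
    }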
This seems to work OK for scruffing up lines:
public class ScruffyLines {
    static final double WOBBLE_SIZE = 0.5;
    static final double WOBBLE_INTERVAL = 16.0;

    Random random;

    ScruffyLines ( long seed ) {
        random = new Random(seed);
    }

    public Point2D.Double[] scruffUpPolygon ( Point2D.Double[] polygon ) {
        ArrayList<Point2D.Double> points = new ArrayList<Point2D.Double>();
        Point2D.Double prev = polygon[0];

        points.add ( prev ); // no wobble on first point

        for ( int index = 1; index < polygon.length; ++index ) {
            final Point2D.Double point = polygon[index];
            final double dist = prev.distance ( point );

            // interpolate between prev and current point if they are more
            // than a certain distance apart, adding in extra points to make
            // longer lines wobbly
            if ( dist > WOBBLE_INTERVAL ) {
                int stepCount = ( int ) Math.floor ( dist / WOBBLE_INTERVAL );
                double step = dist / stepCount;
                double x = prev.x;
                double y = prev.y;
                double dx = ( point.x - prev.x ) / stepCount;
                double dy = ( point.y - prev.y ) / stepCount;

                for ( int count = 1; count < stepCount; ++count ) {
                    x += dx;
                    y += dy;
                    points.add ( perturb ( x, y ) );
                }
            }

            points.add ( perturb ( point.x, point.y ) );
            prev = point;
        }

        return points.toArray ( new Point2D.Double[ points.size() ] );
    }

    Point2D.Double perturb ( double x, double y ) {
        return new Point2D.Double (
            x + random.nextGaussian() * WOBBLE_SIZE,
            y + random.nextGaussian() * WOBBLE_SIZE );
    }
}
Example of a scruffed-up rectangle: http://img34.imageshack.us/img34/4743/screenshotgh.png
