I am trying to draw a concave circle, e.g. something like a soup bowl, to form the "pips" on a die in a Yahtzee-like program I am working on in C#.
It seems to me that in graphics "concave" has a different, and quite complex, meaning from what I am trying to achieve - simply the opposite of convex.
I can draw quite a realistic ball using a PathGradientBrush, with white at a position one third of the way across and one third of the way down, and SurroundColors set to the ball colour. I had hoped that by moving the white spot to two thirds across and down I might get what I want, but sadly not.
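For reference, this is roughly how the ball is drawn - sketched here in Java2D with RadialGradientPaint, since it behaves like GDI+'s PathGradientBrush with a CenterPoint and SurroundColors; all values are illustrative:

```java
import java.awt.*;
import java.awt.geom.*;

// Off-centre radial gradient that gives the "realistic ball" look.
// Java2D analogue of the GDI+ PathGradientBrush setup described above;
// colours and sizes are guesses.
class BallSketch {
    static void drawBall(Graphics2D g, float x, float y, float d) {
        Point2D centre = new Point2D.Float(x + d / 2, y + d / 2);
        Point2D highlight = new Point2D.Float(x + d / 3, y + d / 3); // one third across and down
        RadialGradientPaint paint = new RadialGradientPaint(
                centre, d / 2, highlight,
                new float[] { 0f, 1f },
                new Color[] { Color.WHITE, new Color(200, 0, 0) },   // white spot fading to the ball colour
                MultipleGradientPaint.CycleMethod.NO_CYCLE);
        g.setPaint(paint);
        g.fill(new Ellipse2D.Float(x, y, d, d));
    }
}
```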
Can anybody give me a steer please? Even a pointer to a better search term would be a good start.
Many thanks.
I'm building a 2D game where the player can only see things that are not blocked by other objects. Consider this example of how it looks now:
I've implemented a raytracing algorithm for this, and it seems to work just fine (I've reduced the boundaries in the demo to make all edges visible).
As you can see, the lighter area is built from a bunch of triangles, each having a common point at the player's position; each pair of neighbouring triangles shares two points.
However, I want to calculate the bounds of the external part of the polygon, so I can fill it with black triangles "hiding" what the player cannot see.
One way to do it is to "mask" the black rectangle with the current polygon, but I'm afraid that's very inefficient.
Any ideas about an efficient algorithm to achieve this?
Thanks!
A non-analytical, rough solution (a code sketch of the sweep follows after the caveats):
Cast rays with gradually increasing polar angle
Record when a ray first hits an object (and the point where it hits)
Keep going until it no longer hits the same object (and record where it last hit)
Using the two recorded points, construct a trapezoid that extends to infinity (or wherever)
Caveats:
Doesn't work too well with concavities - need to include all points in-between as well. May need Delaunay triangulation etc... messy!
May need extra states to account for objects tucked in behind each other.
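A minimal sketch of the sweep described above, in Java; Hit and Caster are hypothetical stand-ins for whatever the engine's raycast actually returns:

```java
import java.util.*;

// Cast rays at increasing polar angles and record the points where the
// first-hit object changes; consecutive recorded pairs bound one
// "trapezoid to infinity". Hit and Caster are hypothetical types.
class ShadowSweep {
    static class Hit { Object obstacle; double x, y; }

    interface Caster { Hit castRay(double px, double py, double angle); } // null if nothing is hit

    static List<double[]> transitions(Caster caster, double px, double py, int rayCount) {
        List<double[]> pts = new ArrayList<>();
        Hit prev = null;
        for (int i = 0; i <= rayCount; i++) {
            double a = 2 * Math.PI * i / rayCount;
            Hit h = caster.castRay(px, py, a);
            boolean changed = (prev == null) != (h == null)
                    || (prev != null && h != null && prev.obstacle != h.obstacle);
            if (changed) {
                if (prev != null) pts.add(new double[] { prev.x, prev.y }); // last point on the old object
                if (h != null)    pts.add(new double[] { h.x, h.y });       // first point on the new object
            }
            prev = h;
        }
        return pts;
    }
}
```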
I am using Java to write a very primitive 3D graphics engine based on The Black Art of 3D Game Programming from 1995. I have gotten to the point where I can draw single-color polygons to the screen and move the camera around the "scene". I even have a Z-buffer that handles translucent objects properly by sorting those pixels by Z, as long as I don't show too many translucent pixels at once. I am at the point where I want to add lighting. I want to keep it simple: ambient light seems simple enough, and directional light should be fairly simple too. But I really want point lighting with the ability to move the light source around and cast very primitive shadows (mostly, I don't want light shining through walls).
My problem is that I don't know the best way to approach this. I imagine a point light source casting rays at regular angles; if a ray intersects a polygon, it lights that polygon and stops moving forward. However, when I think about a scene with multiple light sources and multiple polygons, with all those rays, I imagine it will get very slow. I also don't know how to handle the case where a polygon is far enough away from a light source that it falls in between two rays. I would give each light source a maximum distance, and if I gave it enough rays, there should be no point within that distance where two adjacent rays are far enough apart to miss a polygon - but that only increases my problem with the number of calculations to perform.
My question to you is: is there some trick to point light sources to speed them up, or just to organize them better? I'm afraid I'll just get a nightmare of nested for loops. I can't use OpenGL or Direct3D or any other cheats, because I want to write my own.
If you want to see my results so far, here is a youtube video. I have already fixed the bad camera rotation. http://www.youtube.com/watch?v=_XYj113Le58&feature=plcp
Lighting for real-time 3D applications is (or rather, has in the past generally been) done with very simple approximations - see http://en.wikipedia.org/wiki/Shading. Shadows are expensive; in rasterizing 3D engines they have generally been accomplished via shadow maps and shadow volumes. Point lights make shadows even more expensive.
Dynamic real-time light sources have only recently become a common feature in games, simply because they place such a heavy burden on the rendering system - and those games leverage dedicated graphics cards. So I think you may struggle to get good performance out of your engine if you decide to include dynamic, shadow-casting point lights.
Today it is commonplace for lighting to be applied in two ways:
Traditionally this has been "forward rendering". In this method, for every vertex (if you are lighting per vertex) or fragment (if you are lighting per pixel) you calculate the contribution of each light source (see the loop sketch after this list).
More recently, "deferred" lighting has become popular, wherein the geometry and extra data like normals and colour info are all rendered to intermediate buffers, which are then used to calculate the lighting contributions. This way, the lighting calculations are not dependent on the geometry count. It does, however, have a lot of other overhead.
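For illustration, a minimal sketch of the forward-rendering accumulation described in the first point; all type and field names here are hypothetical, not from any particular engine:

```java
import java.util.*;

// Hedged sketch of per-vertex forward lighting: every vertex accumulates
// the contribution of every light, starting from the ambient term.
class ForwardLighting {
    static class Light  { double r, g, b; double[] pos; }
    static class Vertex { double[] pos, normal; double litR, litG, litB; }

    static void light(List<Vertex> vertices, List<Light> lights, double ambient) {
        for (Vertex v : vertices) {
            double r = ambient, g = ambient, b = ambient;   // start from the ambient term
            for (Light l : lights) {
                double c = contribution(l, v);              // e.g. a Lambertian term (sketched further down)
                r += l.r * c; g += l.g * c; b += l.b * c;
            }
            v.litR = Math.min(r, 1); v.litG = Math.min(g, 1); v.litB = Math.min(b, 1); // clamp
        }
    }

    // Placeholder illumination model; swap in something real (see below).
    static double contribution(Light l, Vertex v) { return 1.0; }
}
```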
There are a lot of options. Implementing anything much more complex than some of the basic models that have been used by dedicated graphics cards over the past couple of years is going to be challenging, however!
My suggestion would be to start out with something simple - basic lighting without shadows. From there you can extend and optimize.
What are you doing the ray-triangle intersection test for? Are you trying to light only triangles that the light would reach? Ray-triangle intersections for every light with every poly are going to be very expensive, I think. For lighting without shadows, you would typically just iterate through every face (or, if you are doing it per vertex, through every vertex) and calculate and add the lighting contribution per light; you would do this just before you start rasterizing, as you have to pass through all polys in any case.
You can calculate the lighting using any illumination model, even something very simple like Lambertian reflectance, which shades the surface based on the dot product of the surface normal and the direction vector from the surface to the light. Make sure your vectors are in the same space! This is possibly why you are getting the strange results that you are: if your surface normal is in world space, be sure to calculate the light vector in world space as well. There are advantages to calculating lighting in certain spaces; you can look at that later on - for now I suggest you just get the basics up and running. Also have a look at Blinn-Phong - this is the shading model graphics cards used for many years.
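A minimal sketch of that Lambertian term, assuming the normal and the light position are already in the same space:

```java
// Lambertian diffuse term: cosine of the angle between the surface
// normal and the direction from the surface to the light.
class Lambert {
    static double diffuse(double[] n, double[] surface, double[] lightPos) {
        double lx = lightPos[0] - surface[0],
               ly = lightPos[1] - surface[1],
               lz = lightPos[2] - surface[2];
        double len = Math.sqrt(lx * lx + ly * ly + lz * lz);
        lx /= len; ly /= len; lz /= len;                  // normalise the light direction
        double dot = n[0] * lx + n[1] * ly + n[2] * lz;   // assumes n is unit length
        return Math.max(0.0, dot);                        // surfaces facing away get no light
    }
}
```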
For lighting with shadows - look into the links I posted. They were developed because realistic lighting is so expensive to calculate.
By the way, LaMothe had a follow-up book called Tricks of the 3D Game Programming Gurus: Advanced 3D Graphics and Rasterization. It takes you through every step of programming a 3D engine. I am not sure what the Black Art book covers.
So, I'm currently developing a puzzle game of sorts, and I came upon something I'm not sure how to approach.
As you can see from the screenshot below, the text on the sides next to the main square is distorted along the diagonal of the quadrilateral. That's because this is not a screenshot of a 3D environment, but rather of a 2D environment where the squares have been stretched in such a way that it looks like 3D.
I have tried using 3D perspective and changing depths, and while that solves the issue of the distorted sides, I was wondering if it's possible to fix this without using 3D perspective. Mainly because the current mesh transformation scheme took a while to get right, and converting it to something that works in 3D space is extra effort that might be avoidable.
I have a feeling this is unavoidable, but I'm curious if anyone knows a solution. I'm currently using OpenGL ES 1.
Probably not the answer you wanted, but I'd go with the 3D transformation, because it will not only save you this distortion, but will also simplify many other things down the road and give you opportunities for nice effects.
What you are lacking in this scene is "perspective-correct interpolation", which is slightly non-linear, and is done automatically when you provide coordinates with depth information.
It may be possible to emulate it another way (though your options are limited, since you do not have shaders available), but any alternative will likely be less efficient than using the dedicated functionality of your GPU. I recommend that you switch to using 3D coordinates.
Actually, I just found the answer. Turns out there's a Q coordinate which you can use to play around with trapezoidal texture distortion:
texture mapping a trapezoid with a square texture in OpenGL
http://www.xyzw.us/~cass/qcoord/
http://hacksoflife.blogspot.com.au/2008/08/perspective-correct-texturing-in-opengl.html
It looks like it won't be as correct as doing it in 3D, but I suppose it will be easier for my use right now.
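For anyone curious, a rough sketch of the trick from those links, assuming Android's OpenGL ES 1.x bindings (GL10); the taper factor q and the vertex order here are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

// Q-coordinate trick: for a trapezoid whose top edge is q times the
// width of the bottom edge, pre-multiply the top edge's texture
// coordinates by q and pass q as the 4th component, so the per-pixel
// division restores perspective-correct interpolation.
class TrapezoidTexCoords {
    static FloatBuffer build(float q) {
        float[] tc = {
            0, 0, 0, 1,     // bottom-left  (s, t, r, q)
            1, 0, 0, 1,     // bottom-right
            0, q, 0, q,     // top-left  -> (s*q, t*q, 0, q)
            q, q, 0, q,     // top-right
        };
        FloatBuffer buf = ByteBuffer.allocateDirect(tc.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        buf.put(tc).position(0);
        return buf;
    }

    static void bind(GL10 gl, FloatBuffer buf) {
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glTexCoordPointer(4, GL10.GL_FLOAT, 0, buf); // 4 components instead of the usual 2
    }
}
```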
Crayon Physics Deluxe is a commercial game that came out recently. Watch the video on the main link to get an idea of what I'm talking about.
It allows you to draw shapes and have them react with proper physics. The goal is to move a ball to a star across the screen using contraptions and shapes you build.
While the game is basically a wrapper for the popular Box2D physics engine, it does have one feature whose implementation I'm curious about.
Its drawing looks very much like a crayon. You can see the texture of the crayon, and as it draws, the line varies in thickness and darkness just like an actual crayon drawing would.
(source: kloonigames.com)
The background texture is freely available here.
What kind of algorithm would be used to render those lines in a way that looks like a crayon? Is it a simple texture applied with random thickness and darkness, or is there something more going on?
I remember reading (a long time ago) a short description of an algorithm to do so:
for the general form of the line, you split the segment in two at a random point, and move this point slightly away from its position (the variation depending on the distance of the point to the extremity). Repeat recursively/randomly. This way, your lines are not "perfect" straight lines (see the sketch after this list)
for a given segment, you can "overshoot" a little bit by extending one extremity or the other (or both). This way, you don't have perfect joints. If I remember correctly, it was best to extend the original extremities, but you can do this for the sub-segments too if you want to visibly split them
draw the lines with a pattern/stamp
there was also the (already mentioned) possibility of drawing with different starting and ending opacity (to mimic the tendency to release the pen at the end of a stroke)
you can use a different size for the stamp at the beginning and the end of the line (also to mimic the tendency to release the pen at the end of a stroke). For a similar effect, you can also draw the line twice, with a small variation for one of the extremities (be careful with the alpha in this case, as the line will be drawn twice)
last, for a given line, you can apply the previous modifications several times (i.e. draw the line twice, with different variations): humans tend to go over a line again if they make a mistake.
Regards
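A minimal sketch of the subdivision step from the first bullet; the split range and displacement scale are guesses:

```java
import java.util.*;

// Midpoint-displacement sketch: split a segment at a random interior
// point, nudge that point perpendicular to the segment, and recurse.
class WobblyLine {
    static final Random RNG = new Random();

    static List<double[]> wobble(double[] a, double[] b, int depth) {
        List<double[]> pts = new ArrayList<>();
        pts.add(a);
        subdivide(a, b, depth, pts);
        pts.add(b);
        return pts;
    }

    private static void subdivide(double[] a, double[] b, int depth, List<double[]> out) {
        if (depth == 0) return;
        double t = 0.3 + 0.4 * RNG.nextDouble();           // split somewhere near the middle
        double mx = a[0] + t * (b[0] - a[0]);
        double my = a[1] + t * (b[1] - a[1]);
        double len = Math.hypot(b[0] - a[0], b[1] - a[1]);
        double off = (RNG.nextDouble() - 0.5) * len * 0.1; // displacement scales with segment length
        double px = -(b[1] - a[1]) / len, py = (b[0] - a[0]) / len; // unit perpendicular
        double[] m = { mx + px * off, my + py * off };
        subdivide(a, m, depth - 1, out);
        out.add(m);
        subdivide(m, b, depth - 1, out);
    }
}
```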
If you blow the image up, you can see a repeating stamp pattern... there's probably a small assortment of stamps that it uses as it moves from A to B - it might even rotate them.
The wavering of the line can't be all that difficult to do. Divide it into a bunch of random segments, pick positions slightly away from the direct route, and draw splines.
Here's a paper that uses a lot of math to simulate the deposition of wax on paper using a model of friction. But I think your best bet is to just use a repeating pattern, as another reader mentioned, and vary the opacity according to pressure.
For the imperfect line drawing parts, I have a blog entry describing how to do it using bezier curves.
You can base darkness on speed. That just means measuring the distance traveled by the cursor between this frame and the last frame (remember the Pythagorean theorem), and then, when you go to draw that line segment on screen, adjusting the alpha (opacity) according to the distance you measured.
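A minimal sketch of that idea, assuming a Java2D Graphics2D and per-frame cursor positions; the constants are guesses:

```java
import java.awt.*;

// The farther the cursor travelled since the last frame, the fainter
// the segment - mimicking how a fast crayon stroke deposits less wax.
class SpeedStroke {
    static void drawSegment(Graphics2D g, double lastX, double lastY, double x, double y) {
        double dist = Math.hypot(x - lastX, y - lastY);   // Pythagorean distance per frame
        int alpha = (int) Math.max(30, 255 - dist * 8);   // fast stroke -> low opacity
        g.setColor(new Color(0, 0, 0, alpha));
        g.drawLine((int) lastX, (int) lastY, (int) x, (int) y);
    }
}
```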
There is a paper available called Mimicking Hand Drawn Pencil Lines which covers a bit of what you are after. Although it doesn't present a very detailed view of the algorithm, the authors do cover the basics of the steps that they used.
This paper includes high level descriptions of how they generated the lines, as well as how they generated the textures for the lines, and they get results which are similar to what you want.
This article on rendering chart series to look like XKCD comics has an algorithm for perturbing lines which may be relevant. It doesn't cover calculating the texture of a crayon drawn line, but it does offer an approach to making a straight line look imperfect in a human-like way.
I believe the easiest way would simply be to use a texture with random darkness throughout (some gradients, maybe), and to set the size randomly.
I know this is more high-school math (wow, it's been a long time since I was there), but I am trying to solve this programmatically, so I am reaching out to the collective knowledge of Stack Overflow.
Given this layout:
The midpoint is my reference point, and in an array I have the vectors of all the other points (P).
I can get to the state shown, with the light blue area, by breaking the space into four quadrants and doing a lame bubble sort to find the largest (y) or lowest (x) value in each quadrant.
I need to find only the quadrants whose outer border fully hits red, with no white space. For example, the lower-left and upper-right quadrants don't have any white space touching the light blue rectangle.
I am sure my terminology is all off here, and I'm not looking for any specific code, but could someone point me to a more optimized solution for this problem, or to the next step from what I already have?
Thank you
I might do a BFI (brute force and ignorance) solution first, then perhaps look to generalize it, or at least reduce it to a table-driven loop.
So, if it's exactly these shapes, and not a general solution, I think you should proceed sort of like this:
1. Derive the coordinates of the blue rectangle. I suspect one thing that's confusing you is that you have each individual x and y for the blue rect, but you can't easily loop through them.
2. Derive the coordinates of the midpoint of each rectangle edge. You are going to need these because you care about quadrants. This will be trivial once you have done step 1.
3. Write separate code for each half of each rectangle edge. There is no doubt a more clever way, but this will get you working code.
4. Make it more elegant now if you care. I bet you can reduce the rules to an 8-row table full of things like 1, -1, or something like that.
First, you can't define the red area by a single vector, since it's disjoint. You need as many vectors as there are distinct red regions.
Second, do we assume that different red figures neither intersect nor share a border? In the next point I do.
Third, under the assumption in point 2, a quadrant will have an entirely red outer side iff there exists a contiguous red figure that intersects both of its bounding axes (i.e. rays). To determine this for all quadrants, you only need to traverse the (P) points in the order they're given. This takes linear time and solves the problem.
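A rough sketch of that test, assuming each contiguous red figure is given as a closed list of points relative to the midpoint; this is one possible reading of the approach, not a definitive implementation:

```java
import java.util.*;

// A quadrant's outer border is fully red iff a single contiguous red
// figure crosses both of the quadrant's bounding rays.
class QuadrantCover {
    // rays: 0 = +x, 1 = +y, 2 = -x, 3 = -y; quadrant q is bounded by rays q and (q+1)%4
    static boolean[] coveredQuadrants(List<List<double[]>> figures) {
        boolean[] covered = new boolean[4];
        for (List<double[]> fig : figures) {
            boolean[] crosses = new boolean[4];
            int n = fig.size();
            for (int i = 0; i < n; i++) {
                double[] a = fig.get(i), b = fig.get((i + 1) % n);
                if (a[1] * b[1] < 0) {                 // edge crosses the x-axis
                    double x = a[0] + (b[0] - a[0]) * (-a[1] / (b[1] - a[1]));
                    crosses[x >= 0 ? 0 : 2] = true;
                }
                if (a[0] * b[0] < 0) {                 // edge crosses the y-axis
                    double y = a[1] + (b[1] - a[1]) * (-a[0] / (b[0] - a[0]));
                    crosses[y >= 0 ? 1 : 3] = true;
                }
            }
            for (int q = 0; q < 4; q++)
                if (crosses[q] && crosses[(q + 1) % 4]) covered[q] = true;
        }
        return covered;
    }
}
```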