Canvas for Rust's Amethyst?

Is there any way to draw primitives on some kind of canvas in the Amethyst game engine? Say I want to draw pixel points, lines, circles, and so on. I did not find anything straightforward for that, though I admit my task could be done with a simple one-pixel black or white texture treated as a sprite. But that doesn't seem like the right solution.
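For reference, that workaround could look roughly like this: plot points and lines into a CPU-side RGBA buffer and upload it as a texture each frame. This is only a sketch — the Canvas type below is made up for illustration and is not part of Amethyst's API, and the per-frame texture upload is engine-specific and not shown.

```rust
// A minimal CPU-side "canvas": an RGBA8 pixel buffer you can plot into and
// then hand to the engine as a texture. Purely illustrative.
struct Canvas {
    width: usize,
    height: usize,
    pixels: Vec<u8>, // RGBA8, row-major
}

impl Canvas {
    fn new(width: usize, height: usize) -> Self {
        Canvas { width, height, pixels: vec![0; width * height * 4] }
    }

    fn set_pixel(&mut self, x: i32, y: i32, rgba: [u8; 4]) {
        if x < 0 || y < 0 || x as usize >= self.width || y as usize >= self.height {
            return; // silently ignore out-of-bounds plots
        }
        let i = (y as usize * self.width + x as usize) * 4;
        self.pixels[i..i + 4].copy_from_slice(&rgba);
    }

    /// Bresenham line between two points.
    fn line(&mut self, mut x0: i32, mut y0: i32, x1: i32, y1: i32, rgba: [u8; 4]) {
        let dx = (x1 - x0).abs();
        let dy = -(y1 - y0).abs();
        let sx = if x0 < x1 { 1 } else { -1 };
        let sy = if y0 < y1 { 1 } else { -1 };
        let mut err = dx + dy;
        loop {
            self.set_pixel(x0, y0, rgba);
            if x0 == x1 && y0 == y1 { break; }
            let e2 = 2 * err;
            if e2 >= dy { err += dy; x0 += sx; }
            if e2 <= dx { err += dx; y0 += sy; }
        }
    }
}
```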

Related

How does Skia or Direct2D render lines or polygons with GPU?

This is a question to understand the principles of GPU-accelerated rendering of 2D vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to follow a policy of minimizing draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bezier path, Skia will try to use a tessellation shader if possible, which needs a new draw call if the previous element you were rendering was a rectangle, since the pipeline state changes.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer. It switches draw calls only when you change brush type (if you change from one solid color brush to another it can keep the same shaders, so it doesn't switch), when the buffer is full, or when you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bezier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on some complex self-intersecting paths).
In the case of a rounded rectangle, it does the same: it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, and also allows anti-aliasing on a non-MSAA surface (this is done at the mesh level, with some small triangles carrying alpha). The downside is that it doesn't make much use of hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers triangle strips over triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call several draw-text functions in a row, it will try to merge them into a single call as well.
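As a rough illustration of the CPU-side tessellation mentioned above (a sketch only, not D2D's actual code): a curved path segment such as a quadratic Bezier is first flattened into short line segments, and a triangulator then turns those into triangles that are appended to the shared vertex buffer.

```rust
/// Evaluate a quadratic Bezier at parameter t in [0, 1].
fn quad_bezier(p0: (f32, f32), p1: (f32, f32), p2: (f32, f32), t: f32) -> (f32, f32) {
    let u = 1.0 - t;
    (
        u * u * p0.0 + 2.0 * u * t * p1.0 + t * t * p2.0,
        u * u * p0.1 + 2.0 * u * t * p1.1 + t * t * p2.1,
    )
}

/// Flatten the curve into `segments` straight line segments. A real
/// tessellator would pick the segment count adaptively from curvature
/// and the current transform's scale instead of using a fixed count.
fn flatten_quad(
    p0: (f32, f32),
    p1: (f32, f32),
    p2: (f32, f32),
    segments: usize,
) -> Vec<(f32, f32)> {
    (0..=segments)
        .map(|i| quad_bezier(p0, p1, p2, i as f32 / segments as f32))
        .collect()
}
```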

Is there an equivalent of soft pen in GDI+?

I need to draw a soft wide outline for my GDI+ GraphicsPath.
Something like this:
A path edge is shown in red. I'd like to use a wide pen that is smooth, and I also need the ability to control the smoothness of the pen.
I tried to use a gradient brush with the pen but couldn't find a solution that works.
I can achieve the desired result by drawing an outline with a black solid pen and applying a Gaussian smoothing filter on top of the result image, but I want to avoid this because it's slow when I have to process the whole image which could be quite large.
Is there a way to draw a smooth path outline?
There is no standard way in GDI+ that provides this functionality, so you will have to create it yourself.
You could track the line segments and draw a fuzzy, filled circle along them. By drawing the fuzzy circle once to a bitmap, it should be fairly easy and fast to blit it repeatedly. By blending it slowly over time onto the canvas you can also create a very nice effect, and it would allow the user to control the intensity and maybe the size of the circle.
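As a rough sketch of what that fuzzy circle could look like (illustrative only — GDI+ has no such built-in helper; you would render this alpha mask once to a bitmap, tint it with the pen colour, and then blit it at short intervals along each segment):

```rust
/// Build a square alpha mask for a "fuzzy" circle: fully opaque in the core,
/// fading linearly to transparent across `softness` pixels at the rim.
/// Stamping this mask along a path gives a soft, wide stroke.
fn fuzzy_circle_mask(radius: f32, softness: f32) -> Vec<u8> {
    let size = (2.0 * radius).ceil() as usize;
    let mut mask = vec![0u8; size * size];
    let center = radius;
    for y in 0..size {
        for x in 0..size {
            let dx = x as f32 + 0.5 - center;
            let dy = y as f32 + 0.5 - center;
            let d = (dx * dx + dy * dy).sqrt();
            // 1.0 inside the hard core, 0.0 outside, linear ramp in between.
            let a = ((radius - d) / softness).clamp(0.0, 1.0);
            mask[y * size + x] = (a * 255.0) as u8;
        }
    }
    mask
}
```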

rounded edges/corners in DirectX (D3D9)

I created my own little 2D engine with DirectX (okay, it should be more like a GUI in the end) and tried to create rounded edges for a simple rectangle. Since I have never done this with a graphics framework before, I had no idea how to approach it.
For now, I just overlapped 5 rectangles and 4 circles (the circles are used for the rounded edges). It works with opaque colors, but if I add alpha to the rectangles, the circles cause problems. (Shown in the image below - I should have chosen different colors...)
[Image: the overlapping rectangles and circles, showing the artifacts that appear once alpha is added]
I can't find a solution myself (I googled it and, to my surprise, found nothing about rounded edges in DirectX), and I believe there is a much more powerful and faster method of doing this. So my final question is: what is the common algorithm for creating a rectangle with rounded edges in Direct3D 9?
The common way to draw rounded quads is to use textures with an alpha channel. It's very easy, and most GUIs use images to achieve a specific look. If you draw only single-colored boxes, they may look very generic after a while (even if they have fancy rounded corners ;) ).
But if you want to draw rounded quads directly, I would suggest generating custom geometry that fits the desired area exactly without overlapping (which is what alpha blending needs). In your case it would be something like this:
The more triangles you use for the corner, the smoother it will look.
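For illustration, the corner geometry could be generated roughly like this (a sketch in Rust for brevity; the idea carries over directly to a D3D9 vertex buffer — vertex format, indices and colour are left out):

```rust
/// Generate a triangle fan approximating one rounded corner. `center` is the
/// corner circle's center, `radius` its radius, and the arc spans 90 degrees
/// starting at `start_angle` (radians). Returns (x, y) positions only.
fn rounded_corner_fan(
    center: (f32, f32),
    radius: f32,
    start_angle: f32,
    segments: usize,
) -> Vec<(f32, f32)> {
    let mut verts = Vec::with_capacity(segments + 2);
    verts.push(center); // fan center
    for i in 0..=segments {
        let a = start_angle + std::f32::consts::FRAC_PI_2 * (i as f32 / segments as f32);
        verts.push((center.0 + radius * a.cos(), center.1 + radius * a.sin()));
    }
    verts
}
```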

In 3D graphics, why is antialiasing not more often achieved using textures?

Commonly, techniques such as supersampling or multisampling are used to produce high fidelity images.
I've been messing around on mobile devices with CSS3 3D lately and this trick does a fantastic job of obtaining high quality non-aliased edges on quads.
The way the trick works is that the texture for the quad gains two extra pixels in each dimension, forming a transparent one-pixel-wide outline outside the border. Thanks to texture sampling interpolation, as long as the transformation does not put the camera too close to an edge, the effect is not unlike a pre-filtered antialiased rendering approach.
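For reference, preparing such a texture could look roughly like this (a sketch only): copy the original image into a buffer that is one pixel larger on every side and leave the outer ring fully transparent, so bilinear sampling produces the soft edge at render time.

```rust
/// Wrap an RGBA8 image in a one-pixel transparent border: the result is
/// (w + 2) x (h + 2), with the original pixels copied into the interior and
/// the outer ring left at zero alpha.
fn add_transparent_border(src: &[u8], w: usize, h: usize) -> Vec<u8> {
    assert_eq!(src.len(), w * h * 4);
    let nw = w + 2;
    let nh = h + 2;
    let mut dst = vec![0u8; nw * nh * 4]; // zero alpha everywhere
    for y in 0..h {
        let s = y * w * 4;
        let d = ((y + 1) * nw + 1) * 4; // shifted down and right by one pixel
        dst[d..d + w * 4].copy_from_slice(&src[s..s + w * 4]);
    }
    dst
}
```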
What are the conceptual and technical limitations of taking this sort of approach to render a 3D model, for example?
I think I already have one point that precludes using this kind of trick in the general case: whenever the geometry is not rectangular, it does nothing to reduce aliasing. The result with a transparent 1px outline border is smooth for HTML5 with CSS3 only because those elements are rectangular, so they rasterize neatly into the pixel grid.
The trick you linked to doesn't seem to have anything to do with texture interpolation. The CSS added a border that is drawn as a line; the rasterizer in the browser draws polygons without antialiasing but draws lines with antialiasing.
To answer your question: the reason you wouldn't want to blend into transparency over a one-pixel border is that transparency is very difficult to draw correctly and can lead to artifacts when polygons are not drawn from back to front. You either need to pre-sort your polygons by distance, or use opaque polygons whose occlusion you check with a depth buffer and multisampling.

How to implement an eraser tool in a simple drawing app?

I have a prototype of a simple drawing application. When the user drags a finger across the screen, I record the points along the way and draw a series of lines between them. In other words, a drawing is a list of “paths” and each path is a list of points to connect. This is easy, it works and it’s efficient.
The problem is I’d like to implement an eraser tool. In a regular bitmap editor the eraser simply erases pixels, but in my drawing there are no pixels to erase – all pixels are created dynamically by stroking the paths. I could do a simple eraser by “drawing” using the background colour, overlaying the already painted paths. But I’d like to draw on a textured background, so that’s a no-go.
How would you do this? (Short of the obvious solution of representing the drawing as a bitmap where the eraser is simple.)
You can't implement an eraser in the traditional sense; what you describe with recording the paths and drawing them dynamically is vector graphics. The concept of an eraser comes from raster graphics (a bitmap, basically). With vector graphics, the user generally selects an item or an area of items to delete.
If you really wanted to do this, you'd basically have to do collision detection between all of the paths in your graphic and the rectangle (or whatever shape) of the eraser. When contact occurs, you'd have to cut the colliding graphic object on either side of the eraser by using the slope of the line(s) in contact with the eraser and the point of intersection.
You could probably find the intersections of your existing paths and the deleted area, split the existing paths up, and create new points at the intersections (which would become start/end points of the newly split paths).
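A rough sketch of that idea, assuming the path is a simple polyline and the eraser is a circle (computing the exact segment/circle intersection points is left out; here points under the eraser are simply dropped and the path is split at those gaps):

```rust
/// Split a polyline path wherever it passes under a circular eraser: points
/// inside the circle are removed and the path is cut into separate sub-paths.
fn erase_from_path(
    path: &[(f32, f32)],
    center: (f32, f32),
    radius: f32,
) -> Vec<Vec<(f32, f32)>> {
    let mut result = Vec::new();
    let mut current: Vec<(f32, f32)> = Vec::new();
    for &p in path {
        let (dx, dy) = (p.0 - center.0, p.1 - center.1);
        if dx * dx + dy * dy <= radius * radius {
            // Point is under the eraser: close off the current sub-path,
            // keeping it only if it still has at least two points.
            if current.len() > 1 {
                result.push(std::mem::take(&mut current));
            } else {
                current.clear();
            }
        } else {
            current.push(p);
        }
    }
    if current.len() > 1 {
        result.push(current);
    }
    result
}
```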
I could do a simple eraser by “drawing” using the background colour, overlaying the already painted paths. But I’d like to draw on a textured background, so that’s a no-go.
Can't you do an "eraser by drawing", except that instead of a single colour you use the background itself as the colour? I mean, for a given path to erase, you take each pixel one by one and colour it with the background pixel at the same coordinates.
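As a sketch, assuming both the drawing layer and the background are RGBA8 buffers of the same size, that would amount to something like:

```rust
/// "Erase by drawing": for each pixel covered by the eraser stroke, copy the
/// corresponding pixel from the background texture into the drawing layer.
fn erase_with_background(
    drawing: &mut [u8],
    background: &[u8],
    width: usize,
    covered: &[(usize, usize)], // pixel coordinates touched by the eraser
) {
    for &(x, y) in covered {
        let i = (y * width + x) * 4;
        drawing[i..i + 4].copy_from_slice(&background[i..i + 4]);
    }
}
```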
