How to do pixel art in Processing via scaling

I'd rather not recreate every draw function to use rectangles, as this seems unnecessary. I may end up using more complex draw functions like arc(), and porting them to use rectangles seems like a waste of time.
Using scale() works ok, but it renders the result of the point() function as a circle. I tried putting noSmooth() in the setup() but it didn't seem to have any effect.
Ideally I'd like to have a grid of 64 X 48 pixels rendered at 640 X 480 so that each pixel is ten times as large.

You wouldn't have to use rect() for functions like arc(). You'd just use rect() instead of point().
But if it really bothers you that much, consider this approach:
Draw everything to a 64x48 PGraphics buffer using the standard scale. This would allow you to use the point() function.
Then scale that PGraphics instance using the scale() or image() functions to draw it to the screen.
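For illustration, a minimal sketch of that approach could look like this (the point and arc drawn into the buffer are just placeholder shapes):

PGraphics buffer;

void setup() {
  size(640, 480);
  noSmooth();                          // keep the upscale crisp on the main surface
  buffer = createGraphics(64, 48);
  buffer.noSmooth();                   // no anti-aliasing inside the buffer either
}

void draw() {
  buffer.beginDraw();
  buffer.background(0);
  buffer.stroke(255);
  buffer.point(10, 10);                // one logical pixel
  buffer.arc(32, 24, 20, 20, 0, PI);   // curves still work at the low resolution
  buffer.endDraw();

  image(buffer, 0, 0, width, height);  // stretch 64x48 up to 640x480
}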
More info is available in the reference.

Related

How does Skia or Direct2D render lines or polygons with GPU?

This is a question to understand the principles of GPU-accelerated rendering of 2D vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples that could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D, there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to minimize draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bezier path, Skia will try to use a tessellation shader if possible, which will need a new draw call if the previous element you were rendering was a rectangle, since you change pipeline state.
D2D, on the other hand, tends to tessellate on the CPU and push to a vertex buffer, and it switches draw calls only if you change brush type (if you change from one solid color brush to another it can keep the same shaders, so it doesn't switch), when the buffer is full, or when you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bezier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on some complex self-intersecting paths).
In the case of a rounded rectangle, it does the same: it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, as well as allowing anti-aliasing on a non-MSAA surface (this is done at the mesh level, with some small triangles with alpha). The downside is that it doesn't use many hardware features, and the geometry emitted can be quite large, even for seemingly simple shapes.
Since D2D prefers to use triangle strips instead of triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call several draw-text functions in a row, it will try to merge them into a single draw call as well.

How is the GPU "instructed" to render an image?

If this question is off, please let me know as I don't want to clutter the platform with off-topic questions!
Anyways, I'm having a hard time finding information about what's actually going on when an image is rendered because of some code I've written.
Say I wanted to add the numbers 5 and 3. The CPU would write 5 to one register and 3 to another one. The ALU would take care of the calculation and output 8. That's fine, the CPU uses MOVE and ADD to produce a result.
What I can't find any information on, however, is what's going on when I want to draw a rectangle. There are importable frameworks for most programming languages that let you do this. In SpriteKit (Swift & ObjC), for example, you would write something like
let node = SKSpriteNode(color: .white, size: CGSize(width: 200, height: 300))
and add node to an SKScene (just a scene containing childNodes), and a white rectangle would "magically" get rendered. What I would like to know is what goes on under the hood. Why does this exact framework let you draw a rectangle? What is the assembly code (say, for an Intel Core M) that makes the GPU calculate what this rectangle will look like? And how does SpriteKit build on the basics of Swift/Objective-C to actually do this (and could I do it myself)?
Maybe a weird question, but I feel like I have to know (yes, sometimes I'm too curious). Thank you.
P.S. I would love a really detailed answer, not "the CPU 'tells' the GPU to draw a rectangle" - CPUs can't talk!
There are many ways to render a convex polygon. The most commonly used in the past was the scanline algorithm, where you simply rasterize all the edges of the outline into left/right buffers and then just render using horizontal lines, interpolating the other coordinates along the way (like z, r, g, b, tx, ty, nx, ny, nz, ...). This was suited to single-threaded, CPU-based software rendering.
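As a rough sketch of that scanline idea (plain Java, flat color only; interpolating z, color, or texture coordinates along the same edges would work the same way, and the names here are just for illustration):

public class ScanlineFill {
    // verts: convex polygon vertices as {x, y}; pixels: a w*h framebuffer.
    static void fillConvex(int[] pixels, int w, int h, double[][] verts, int color) {
        int[] left = new int[h], right = new int[h];
        java.util.Arrays.fill(left, Integer.MAX_VALUE);
        java.util.Arrays.fill(right, Integer.MIN_VALUE);

        // rasterize every edge, recording the leftmost/rightmost x hit on each scanline
        for (int i = 0; i < verts.length; i++) {
            double[] a = verts[i], b = verts[(i + 1) % verts.length];
            if (a[1] > b[1]) { double[] tmp = a; a = b; b = tmp; }   // walk top to bottom
            int y0 = (int) Math.ceil(a[1]), y1 = (int) Math.floor(b[1]);
            for (int y = Math.max(0, y0); y <= Math.min(h - 1, y1); y++) {
                double t = (b[1] == a[1]) ? 0 : (y - a[1]) / (b[1] - a[1]);
                int x = (int) Math.round(a[0] + t * (b[0] - a[0]));
                left[y] = Math.min(left[y], x);
                right[y] = Math.max(right[y], x);
            }
        }

        // fill horizontal spans between the buffered left and right edges
        for (int y = 0; y < h; y++) {
            for (int x = Math.max(0, left[y]); x <= Math.min(w - 1, right[y]); x++) {
                pixels[y * w + x] = color;
            }
        }
    }
}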
With parallelization (like on a GPU), a different approach became more popular. It renders only triangles (so you need to triangulate your polygons), and it renders them like this:
compute AABB
simply the min/max of the x,y coordinates of the triangle's vertices.
loop through the AABB
this is done in parallel by the GPU's interpolators. Each interpolated (looped) "pixel" is called a fragment (as it usually contains more than just a color).
for each fragment
compute the barycentric coordinates and, from the result, decide whether the fragment is inside (s+t<=1) or outside (s+t>1) the triangle. If it is inside, invoke the fragment shader.
All of this happens just before the fragment shader stage, and usually all of it (or the majority of it) is implemented in hardware, so there is no code for it; a CPU-side sketch of the loop is shown below.
Nowadays GPU rendering is done by passing the geometry to the graphics driver itself. What the driver does under the hood is guesswork for us, but most likely it also just passes the geometry and configuration settings to the right places on the GPU (memory, registers, ...).
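To make that loop concrete, here is a rough CPU-side sketch in plain Java of the same idea (the GPU does this in fixed-function hardware; class and parameter names are just for illustration):

public class TriangleRaster {
    static void fillTriangle(int[] pixels, int w, int h,
                             double ax, double ay, double bx, double by,
                             double cx, double cy, int color) {
        // 1. compute the AABB of the triangle, clamped to the framebuffer
        int minX = (int) Math.max(0, Math.floor(Math.min(ax, Math.min(bx, cx))));
        int maxX = (int) Math.min(w - 1, Math.ceil(Math.max(ax, Math.max(bx, cx))));
        int minY = (int) Math.max(0, Math.floor(Math.min(ay, Math.min(by, cy))));
        int maxY = (int) Math.min(h - 1, Math.ceil(Math.max(ay, Math.max(by, cy))));

        double denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy);
        if (denom == 0) return;                        // degenerate triangle

        // 2. loop through the AABB; each visited "pixel" is a fragment
        for (int y = minY; y <= maxY; y++) {
            for (int x = minX; x <= maxX; x++) {
                // 3. barycentric coordinates of the fragment
                double s = ((by - cy) * (x - cx) + (cx - bx) * (y - cy)) / denom;
                double t = ((cy - ay) * (x - cx) + (ax - cx) * (y - cy)) / denom;
                // 4. inside only if s >= 0, t >= 0 and s + t <= 1
                if (s >= 0 && t >= 0 && s + t <= 1) {
                    pixels[y * w + x] = color;         // "fragment shader": flat color
                }
            }
        }
    }
}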

rounded edges/corners in DirectX (D3D9)

I created my own little 2D engine with DirectX (okay, it should be more like a GUI in the end) and tried to create rounded edges for a simple rectangle. Since I have never done this with a graphics framework before, I had no idea how to approach it.
For now, I just overlapped 5 rectangles and 4 circles (the circles are used for the rounded edges). It does work with opaque colors, but if I add alpha to the rectangles, the circles cause problems. (Shown in the image below - I should have chosen other colors...)
[image: the overlapping rectangles and circles, showing the alpha-blending problem]
I can't find a solution myself (I googled and was surprised to find nothing about rounded edges in DirectX), and I do believe there is a much more powerful and faster method for doing this. So my final question is: what is the common algorithm for creating a rectangle with rounded edges in Direct3D 9?
The common way to draw rounded quads is to use textures with an alpha channel. It's very easy, and most GUIs use images to achieve a specific look. If you draw only single-colored boxes, it may look very generic after a while (even if they have fancy rounded corners ;) ).
But if you want to draw rounded quads directly, I would suggest generating custom geometry that fits the desired area exactly, without overlapping (which is necessary for alpha blending to work). In your case it would be something like a fan of triangles filling each rounded corner.
The more triangles you use for each corner, the smoother it will look.
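As a rough illustration of generating such geometry (plain Java rather than actual D3D9 code; the class name and layout are just for the sketch), one could compute the outline points of the rounded rectangle and feed them to a triangle fan around the rectangle's center:

import java.util.ArrayList;
import java.util.List;

public class RoundedRect {
    // Outline points of a rounded rectangle (x, y = top-left corner, y grows downward).
    // A triangle fan from the rectangle's center to these points fills the shape with
    // no overlapping geometry, so alpha blending stays correct.
    static List<float[]> outline(float x, float y, float w, float h,
                                 float radius, int segmentsPerCorner) {
        List<float[]> pts = new ArrayList<>();
        // one arc per corner: {center x, center y, start angle in degrees}
        float[][] corners = {
            { x + w - radius, y + radius,     -90f },  // top-right:    up -> right
            { x + w - radius, y + h - radius,   0f },  // bottom-right: right -> down
            { x + radius,     y + h - radius,  90f },  // bottom-left:  down -> left
            { x + radius,     y + radius,     180f }   // top-left:     left -> up
        };
        for (float[] c : corners) {
            for (int i = 0; i <= segmentsPerCorner; i++) {
                double a = Math.toRadians(c[2] + 90.0 * i / segmentsPerCorner);
                pts.add(new float[] {
                    (float) (c[0] + radius * Math.cos(a)),
                    (float) (c[1] + radius * Math.sin(a))
                });
            }
        }
        return pts;
    }
}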

Turning off antialiasing in Löve2D

I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that some antialias filter is automatically applied to your sprites when you draw them at non-integer positions.
love.graphics.draw( sprite, x, y )
So when x or y is not a whole number (for example, x=100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x,y) points to the center of the sprite. For example, a sprite that is 31x30 pixels will appear blurred again, because its pixels are painted at non-integer positions.
Since I am using pixel art, I want to avoid this entirely, because otherwise the art is destroyed by this effect. The workaround I am using so far is to force the coordinates to be round by littering the code with calls to math.floor(), and to force all the sprites to have even sizes by adding a row or column of transparent pixels in the paint program, if needed.
Is there some command to deactivate the antialiasing I can call at program startup?
If you turn off anti-aliasing you will just get aliasing, hence the name! Why are you drawing at non-integral positions, and what do you want it to do about those fractional parts? (Round them to the nearest value? Truncate them? What about if they're negative?)
Personally I would leave the low level graphics alone and alter your code to use accessors for x and y that perform the rounding or truncation that you require. This guarantees your pixel art ends up drawn on integer boundaries while keeping the anti-aliasing on that you might need later.
Another cheap workaround may be to use math.floor() to round your coordinates to integers.
In case anyone is interested, I've been asking in other places and found out that what I am asking is already requested as feature: http://love2d.org/forum/tracker.php?p=2&t=7
So, the current version of Löve that I'm using (0.5.0) still doesn't allow disabling the antialias filter, but the feature is already in the SVN version of the engine.
You can turn off anti-aliasing by adding love.graphics.setDefaultFilter("nearest", "nearest", 1) to love.load().

Antialiased composition by coverage?

Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, of blending partially covered pixels. So if two lines would occupy the same half of a pixel, the antialiasing blends it to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
I know Anti-Grain Geometry has algorithms for calculating blends that cater for lines which abut, and that oversampling might work, but are there any other approaches?
Handling this form of line composition well is going to be slow (you have to consider all the lines that impinge upon each pixel using a deferred rendering approach). I doubt that there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic and cost-effective solution in your case), which will work with virtually any drawing library, is to supersample it: draw to an offscreen bitmap at a much higher resolution (e.g. 4 times wider and taller, with lines 4 pixels wide; disable antialiasing when drawing this, as it will only slow it down) and then scale the result down with bilinear filtering. The main downside is that it uses a lot of memory for the offscreen bitmap.
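As a rough Java2D sketch of that approach (the 4x factor and the line list are just placeholders; a real implementation would reuse the offscreen bitmap between frames):

import java.awt.*;
import java.awt.image.BufferedImage;

public class Supersample {
    // lines: each entry is {x1, y1, x2, y2} in final-image coordinates.
    static BufferedImage drawLinesSupersampled(int width, int height, int[][] lines) {
        int factor = 4;
        BufferedImage big = new BufferedImage(width * factor, height * factor,
                                              BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = big.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_OFF);   // no AA at this stage
        g.setStroke(new BasicStroke(factor));                     // 1px lines become 4px wide
        g.setColor(Color.BLACK);
        for (int[] l : lines) {
            g.drawLine(l[0] * factor, l[1] * factor, l[2] * factor, l[3] * factor);
        }
        g.dispose();

        // scale the result back down with bilinear filtering
        BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2 = out.createGraphics();
        g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g2.drawImage(big, 0, 0, width, height, null);
        g2.dispose();
        return out;
    }
}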
If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
It sounds like you want a premade drawing library, which I do not know of.
However, to answer your question of knowing any approach that would work, you can consider a pixel to be a square. You can then approximate any shape that you draw as a polygon that intersects the pixel box. By clipping these polygons against the box of the pixel and against each other, you can get a very good estimate of the areas associated with each color that intersects the pixel for accurate antialiasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
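For illustration, here is a small Java sketch of that idea: clip a polygon against one pixel's square (Sutherland-Hodgman) and take the clipped area, via the shoelace formula, as that shape's coverage of the pixel. It is purely didactic and, as noted above, far too slow for interactive drawing:

import java.util.ArrayList;
import java.util.List;

public class PixelCoverage {
    // Clip a polygon against an axis-aligned half-plane: keep points whose coordinate
    // on the given axis (0 = x, 1 = y) is <= limit (keepBelow) or >= limit (!keepBelow).
    static List<double[]> clipAxis(List<double[]> poly, int axis, double limit, boolean keepBelow) {
        List<double[]> out = new ArrayList<>();
        int n = poly.size();
        for (int i = 0; i < n; i++) {
            double[] a = poly.get((i + n - 1) % n), b = poly.get(i);
            boolean aIn = keepBelow ? a[axis] <= limit : a[axis] >= limit;
            boolean bIn = keepBelow ? b[axis] <= limit : b[axis] >= limit;
            if (aIn != bIn) {                          // the edge crosses the boundary
                double t = (limit - a[axis]) / (b[axis] - a[axis]);
                out.add(new double[] { a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]) });
            }
            if (bIn) out.add(b);
        }
        return out;
    }

    // Fraction of the unit pixel (px, py)..(px+1, py+1) covered by the polygon.
    static double coverage(List<double[]> polygon, int px, int py) {
        List<double[]> p = polygon;
        p = clipAxis(p, 0, px, false);                 // keep x >= px
        p = clipAxis(p, 0, px + 1.0, true);            // keep x <= px + 1
        p = clipAxis(p, 1, py, false);                 // keep y >= py
        p = clipAxis(p, 1, py + 1.0, true);            // keep y <= py + 1
        double area = 0;                               // shoelace formula
        for (int i = 0; i < p.size(); i++) {
            double[] a = p.get(i), b = p.get((i + 1) % p.size());
            area += a[0] * b[1] - b[0] * a[1];
        }
        return Math.abs(area) / 2.0;                   // the pixel has area 1
    }
}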
