Avoiding lines between adjacent SVG rectangles - svg

Although there are some standardized options for hinting the browser about anti-aliasing in SVG, none of them seems to work in my case: my rectangles have rounded corners, so I can't afford to turn anti-aliasing off.
Although my rectangles are sized to leave no vertical space between them, a thin line shows between them due to the effects of anti-aliasing. E.g. my SVG has one rectangle end at pixel 80 and the next one start at 81, but a thin line of background still shows between them.
There's no way to force current browsers to avoid anti-aliasing for straight edges (crispEdges doesn't force that for my rounded rectangles).
I have read a bit about tweaking by adding 0.5 of a pixel to the y values, and about tweaking only even or only odd y values (I believed this was related to most contemporary LCD screens having two hardware vertical pixels per software-exposed pixel). I am unsure how exactly this mitigates the problem, and would like a definite account of why it makes sense and what the most correct/solid tweaking approach is.

two hardware vertical pixels per software exposed pixel
No, that's wrong.
When you specify a coordinate like "81" in an SVG, that coordinate falls on the imaginary line between pixel 80 and pixel 81. If your line has width 1, the renderer will attempt to draw it by placing 50% of the colour on pixel 80 and 50% on pixel 81. This is anti-aliasing. If you want the one-pixel line not to be anti-aliased like that, give it the coordinate 81.5. That way the whole line falls within pixel 81.
However, if your line has width 2 (or any other even width), you should not use 81.5 but stay with 81, because the renderer will then place 50% (i.e. one full pixel) in pixel 80 and 50% (one full pixel) in pixel 81, resulting in no visible anti-aliasing.
This applies to both horizontal and vertical lines, and it holds whether you are on an LCD or an old CRT.
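For example, here is a minimal sketch (the coordinates are illustrative): the first line below straddles pixel columns 80 and 81 and comes out grey and blurred, while the second falls entirely within pixel column 81 and comes out crisp.

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <!-- Blurred: the 1px stroke is centred on the boundary between
       pixel columns 80 and 81, so each column gets 50% coverage. -->
  <line x1="81" y1="10" x2="81" y2="40" stroke="black" stroke-width="1"/>
  <!-- Crisp: shifted by 0.5, the whole stroke lands in column 81. -->
  <line x1="81.5" y1="50" x2="81.5" y2="80" stroke="black" stroke-width="1"/>
</svg>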
Does this explanation make sense now?

Related

Interaction of 3d objects in 4d space in Unity

I have an idea for realizing the Hogwarts fanfic "Harry Potter and the Methods of Rationality". It is never stated explicitly, but from the descriptions of overlapping spaces, and of passages that lead to different places despite showing no visible difference in direction or length, one can infer that Hogwarts sits in a pseudo-4D space created by magic. But I have run into a problem: I do not know how to implement interaction between 3D objects in Unity when they are placed in a 4D space.
The figure depicts 2D characters in a pseudo-3D space built from 2D layers. When they walk into one of the passages they do not notice that the height changes along the way, because they see only in 2D. If one of the characters looks from height 0.5 towards height 1 or 0, he will see what is there, because the difference in height is small enough; but if he looks from height 0 towards height 1, he cannot see what is there (it is too high). The same thing happens in the pseudo-4D space of Hogwarts, except that the corridors change the w coordinate instead of the height, and since we are 3D creatures we do not notice this change.
In the image, the height varies from 0 to 1 and is displayed as a colour ramp from black to white.

How to draw shapes in the proper order when rendering?

I am trying my hand at writing a 3D graphics engine, but I am having some trouble with drawing shapes in the correct order.
When I translate the points of triangles into window space, i.e. the 2-dimensional space that maps directly to position on the screen, I also assign each point, in addition to its x and y position, a depth value that stores how far away from the viewer that point was in 3D space.
At the moment, the only shapes I am rendering are triangles. My current render-order algorithm sorts the triangles by the average depth of their three points. I knew when I started that it would not be perfect, but I wanted a placeholder for testing.
For testing purposes, I constructed a square box with an open top, each side being a different color and made from two triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?
The standard method of drawing 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also set its interpolated depth in the Z (depth) buffer. Whenever you are about to paint the next pixel, you first check that Z-buffer to verify that the new pixel is in front of the already-painted one.
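A minimal sketch of that per-pixel test in C (the buffer layout, dimensions and float-depth convention are illustrative assumptions, not a specific engine's API):

#include <float.h>
#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480

static uint32_t color[WIDTH * HEIGHT]; /* color buffer */
static float    zbuf[WIDTH * HEIGHT];  /* depth buffer */

/* Clear the depth buffer to "infinitely far" before each frame, so the
   first write to every pixel always succeeds. */
void clear_zbuf(void)
{
    for (int i = 0; i < WIDTH * HEIGHT; i++)
        zbuf[i] = FLT_MAX;
}

/* Plot a pixel only if its interpolated depth is nearer than whatever
   has already been painted there. */
void put_pixel(int x, int y, float depth, uint32_t rgba)
{
    int i = y * WIDTH + x;
    if (depth < zbuf[i]) {
        zbuf[i] = depth;
        color[i] = rgba;
    }
}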
On top of that you can add various optimizations, such as sorting triangles front to back to minimize the number of times you actually paint the color buffer.
On the other hand, it is sometimes necessary to do the exact opposite (draw back to front) in order to properly handle transparency or other "advanced" effects.

Prominent lines not detected by Hough Transform

After running the Canny edge detector on an image I'm getting clear lines. But the Hough line function seems to miss some pretty prominent lines when run on the Canny edge map.
I'm keeping only vertical and horizontal Hough lines (with a tolerance of 15 degrees). Lots of extra lines come up, but the clearly visible lines bounding the rectangles are not being picked up.
Here's the snippet:
cvCanny( img, canny, 0, 100, 3 );
/* rho = 1, theta = 1 degree, accumulator threshold = 35 votes,
   minimum line length = 20, maximum gap = 10 */
lines = cvHoughLines2( canny, storage, CV_HOUGH_PROBABILISTIC, 1, CV_PI/180, 35, 20, 10 );
The main intention is to detect the rectangular boxes that denote the nodes of a linked list. However, the squares.c sample program detects only perfect rectangles, and fails when an arrowhead touches the rectangle boundary.
Could you please explain what sort of changes to the Hough line function call would help me get Hough lines corresponding to the clearly visible lines in the Canny edge image?
(Added: a preprocessing step, suggested by shernshiou.)
Preprocessing steps:
Threshold the image.
Run connected-component labelling.
From the connected-component results, detect and remove the small objects: the sets of four digits below and in the middle of each box.
(Remark: the thresholding step is simply a preprocessing step required by connected-component labelling.)
If you want to detect only perfectly horizontal and vertical lines, my suggestion is to perform horizontal and vertical edge enhancement (via convolution) before the Hough transform.
This will make the true lines more likely to "peak" in the Hough projection, and increases the chance of the lines being picked up by OpenCV.
The steps would be as follows (a code sketch is given after the list):
Compute Canny edge image from input
Apply horizontal Sobel filtering on Canny edge image
Apply Hough line detection on horizontally-enhanced edge image.
Apply vertical Sobel filtering on Canny edge image. (Note: use step 1's result, not step 2's)
Apply Hough line detection on vertically-enhanced edge image.
Combine the horizontal and vertical lines and present the result.
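A rough sketch of that pipeline, using the same legacy OpenCV C API as the question (buffer names, the Sobel aperture and the Hough parameters are illustrative, not tuned):

IplImage *canny = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
IplImage *deriv = cvCreateImage( cvGetSize(img), IPL_DEPTH_16S, 1 );
IplImage *enh   = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );

/* Step 1: Canny edge image. */
cvCanny( img, canny, 0, 100, 3 );

/* Steps 2-3: horizontal enhancement (derivative in y), then Hough. */
cvSobel( canny, deriv, 0, 1, 3 );
cvConvertScaleAbs( deriv, enh, 1, 0 );
CvSeq *hlines = cvHoughLines2( enh, storage, CV_HOUGH_PROBABILISTIC,
                               1, CV_PI/180, 20, 20, 5 );

/* Steps 4-5: vertical enhancement (derivative in x) of the *same*
   Canny image from step 1, then Hough again. */
cvSobel( canny, deriv, 1, 0, 3 );
cvConvertScaleAbs( deriv, enh, 1, 0 );
CvSeq *vlines = cvHoughLines2( enh, storage, CV_HOUGH_PROBABILISTIC,
                               1, CV_PI/180, 20, 20, 5 );

/* Step 6: merge hlines and vlines and present the result. */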
You did read the documentation, didn't you?
I have a few options for you:
1. The lines you miss (most notably the leftmost vertical line of the rightmost box in the image) are rather short. Try lowering the threshold (the 5th input variable of cvHoughLines2). This threshold is just the number of pixels that must lie on the line; from the image I'd guess that the lines you miss have fewer than 35 pixels on them.
2. The 6th input variable indicates the minimum line length. I assume this is in pixels, so with the 5th parameter you require 35 pixels on a line, yet you search for lines 20 pixels or longer; the way you have set it, this variable does nothing. Lower the 5th variable, and raise this one if you are finding too many useless short lines.
3. Lower the 7th parameter to disallow large gaps in your lines. This will eliminate some of the slanted lines.
In short, try again with different values for parameters 5, 6 and 7.
I'd try lower values for parameters 5 and 7, and a similar or slightly higher value for 6. Because of point 2 above, parameter 5 should always be lower than or equal to parameter 6 to have an effect, and parameter 7 should be at least the difference between 5 and 6 when 5 is lower.
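Concretely (the values are guesses to be tuned against your image), that advice might turn the original call into something like:

/* 5th lowered from 35 to 15, 6th kept at 20, 7th lowered from 10 to 5 */
lines = cvHoughLines2( canny, storage, CV_HOUGH_PROBABILISTIC,
                       1, CV_PI/180, 15, 20, 5 );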
Normally people do not use the Hough line transform straight out of the box. Normal practice is to pre-process the image first (e.g. change the luminance, change the colours, sharpen the image, and so on).

Differentiate table line from big letters

I'm doing some graphics processing: I have a bitmap of edges, and I want to disregard all table edges while keeping the letters. E.g.:
0000000000
0111111110
0100000010
0102220010
0100200010
0100200010
0100000010
0111111110
0000000000
0 - background color
1 - ignored edges
2 - edges I need
My logic is quite simple: if a run of continuous edge pixels exceeds a certain threshold, e.g. 20 pixels of continuous edges, I consider it a line and disregard it.
My problem is that at big font sizes, letters such as H and T will definitely exceed the threshold. Please advise whether there is a better way, or additional logic I need to implement, to separate table lines from letters.
[update] Additional consideration: performance. This logic will be used during touch movement (dragging), so it will be called many times and needs to be fast.
If the table lines are guaranteed to be thin, then you can ignore thick lines. However, if the lines in your application are produced by edge detection (whose output is always one pixel thin), then connected-component analysis will be needed.
Basically, "thickness" here refers to the thickness measured from an edge profile:
00000000100000000 This line has thickness 1
00000011111000000 This line has thickness 5. However, this cannot occur in the output of edge detection, because edge detection algorithms are specifically designed to remove this condition.
00000000111111111 This is a transition from black to white.
Table lines usually have small thickness. Large fonts usually show up as a black-to-white transition, because their stroke width is larger than the edge-profile window.
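As a minimal sketch of measuring that thickness (assuming a binary edge row where 0 is background and non-zero is an edge pixel; the names are illustrative):

/* Length of the run of consecutive edge pixels starting at x. A result
   of 1 matches the thin table-line profile; a long run that never falls
   back to 0 inside the window is a black-to-white transition. */
int edge_thickness(const unsigned char *row, int width, int x)
{
    int t = 0;
    while (x < width && row[x] != 0) {
        t++;
        x++;
    }
    return t;
}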

Antialiased composition by coverage?

Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, by blending partially covered pixels. So if two lines would each occupy the same half of a pixel, the anti-aliasing blends them to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
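The arithmetic of that clamping, as a tiny sketch (repeatedly compositing a 50%-coverage fragment of the same colour with the usual "over" blend):

/* coverage after each 50% fragment: 0.5, 0.75, 0.875, ... -> 1.0 */
double c = 0.0;
for (int i = 0; i < 8; i++)
    c = 0.5 + 0.5 * c;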
I know Anti-Grain Geometry has algorithms for calculating blends which cater for lines that abut, and that oversampling might work, but are there any other approaches?
Handling this form of line composition well is going to be slow: you have to consider all the lines that impinge upon each pixel, using a deferred rendering approach. I doubt there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic, cost-effective solution in your case), which will work with virtually any drawing library, is to supersample: draw to an offscreen bitmap at a much higher resolution (e.g. 4 times wider and higher, with lines 4 pixels wide; disable anti-aliasing when drawing this, as it will only slow things down), then scale the result down with bilinear filtering. The main downside is that the offscreen bitmap uses a lot of memory.
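A minimal sketch of the resolve step (grayscale, and a plain 4x4 box average instead of bilinear filtering, for brevity; buffer names are illustrative):

/* Average each 4x4 block of the high-resolution buffer hi (w x h) into
   one pixel of the low-resolution buffer lo (w/4 x h/4). */
void downsample4x(const unsigned char *hi, int w, int h, unsigned char *lo)
{
    for (int y = 0; y < h / 4; y++) {
        for (int x = 0; x < w / 4; x++) {
            int sum = 0;
            for (int dy = 0; dy < 4; dy++)
                for (int dx = 0; dx < 4; dx++)
                    sum += hi[(4 * y + dy) * w + (4 * x + dx)];
            lo[y * (w / 4) + x] = (unsigned char)(sum / 16);
        }
    }
}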
If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
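The sampling idea itself is easy to sketch (this is the general technique, not RenderMan's API; covered is a hypothetical predicate returning 1 if any primitive covers the point):

#include <stdlib.h>

/* Estimate how much of pixel (px, py) is obscured by testing random
   points inside it; with enough samples the hit fraction approaches the
   true coverage, e.g. roughly 0.5 when half the pixel is obscured. */
double pixel_coverage(int px, int py, int nsamples,
                      int (*covered)(double x, double y))
{
    int hits = 0;
    for (int i = 0; i < nsamples; i++) {
        double sx = px + rand() / (double)RAND_MAX;
        double sy = py + rand() / (double)RAND_MAX;
        hits += covered(sx, sy);
    }
    return (double)hits / nsamples;
}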
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
It sounds like you want a premade drawing library, and I do not know of one that does this.
However, to answer your question about approaches that would work: you can consider a pixel to be a square, and approximate any shape you draw as a polygon that intersects the pixel box. By clipping these polygons against the pixel box and against each other, you can get a very good estimate of the area associated with each colour intersecting the pixel, and hence accurate anti-aliasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
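A sketch of the polygon-versus-pixel part of that idea in C (Sutherland-Hodgman clipping against the pixel square plus the shoelace area formula; clipping the shapes against each other, which the full approach also needs, is not shown):

typedef struct { double x, y; } Pt;

/* Clip polygon in[] against one axis-aligned half-plane; returns the new
   vertex count. use_y selects the axis, keep_below the side to keep. */
static int clip_half(const Pt *in, int n, Pt *out,
                     int use_y, double bound, int keep_below)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Pt a = in[i], b = in[(i + 1) % n];
        double va = use_y ? a.y : a.x, vb = use_y ? b.y : b.x;
        int ka = keep_below ? (va <= bound) : (va >= bound);
        int kb = keep_below ? (vb <= bound) : (vb >= bound);
        if (ka) out[m++] = a;
        if (ka != kb) {             /* edge crosses the boundary */
            double t = (bound - va) / (vb - va);
            out[m].x = a.x + t * (b.x - a.x);
            out[m].y = a.y + t * (b.y - a.y);
            m++;
        }
    }
    return m;
}

/* Fraction of pixel (px, py) covered by a small convex polygon. */
double coverage(const Pt *poly, int n, int px, int py)
{
    Pt a[16], b[16];                             /* n + 4 vertices max */
    int m = clip_half(poly, n, a, 0, px,     0); /* keep x >= px   */
    m     = clip_half(a, m,    b, 0, px + 1, 1); /* keep x <= px+1 */
    m     = clip_half(b, m,    a, 1, py,     0); /* keep y >= py   */
    m     = clip_half(a, m,    b, 1, py + 1, 1); /* keep y <= py+1 */
    double area = 0;                             /* shoelace formula */
    for (int i = 0; i < m; i++)
        area += b[i].x * b[(i + 1) % m].y - b[(i + 1) % m].x * b[i].y;
    return 0.5 * (area < 0 ? -area : area);
}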
