Differentiate table lines from big letters - graphics

I'm doing some graphics processing: I have a bitmap of edges, and I want to disregard all table edges while keeping the letter edges. E.g.:
0000000000
0111111110
0100000010
0102220010
0100200010
0100200010
0100000010
0111111110
0000000000
0 - background color
1 - ignored edges
2 - edges I need
My logic is simple: if a run of continuous edge pixels exceeds a certain threshold, e.g. 20 pixels, it is considered a line and disregarded (see the sketch after the update below).
My problem is that at big font sizes, letters such as H and T will definitely exceed the threshold. Please advise: is there a better way, or additional logic I need to implement, to separate table lines from letters?
[Update] Additional consideration: performance. This logic will run during touch movement (dragging), so it will be called many times and needs to be fast.
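For illustration, a minimal sketch of the run-length rule described above (hypothetical helper name, assuming the bitmap is a 2D list of 0/1 edge flags):

def mark_long_runs(edges, threshold=20):
    # edges: 2D list of 0/1 edge flags. Returns a same-sized mask where
    # 1 marks pixels belonging to a horizontal run longer than `threshold`.
    rows, cols = len(edges), len(edges[0])
    mask = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        x = 0
        while x < cols:
            if edges[y][x]:
                start = x
                while x < cols and edges[y][x]:
                    x += 1
                if x - start >= threshold:  # long run: treat as table line
                    for i in range(start, x):
                        mask[y][i] = 1
            else:
                x += 1
    return mask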

If table lines are guaranteed to be thin, then ignore thick lines. However, if the lines in your application are generated by edge detection (whose output is always 1 pixel thin), then connected-component analysis will be needed.
Basically, the "thickness" refers to thickness measured from an edge profile:
00000000100000000 This line has thickness 1
00000011111000000 This line has thickness 5. However, this cannot occur in the output of edge detection, because edge detection algorithms are specifically designed to remove this condition.
00000000111111111 This is a transition from black to white.
Table lines usually have small thickness. Large fonts usually produce black-to-white transitions because their stroke thickness is larger than the edge profile window.
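For illustration, a minimal sketch of measuring run thickness along a 1-D edge profile (hypothetical helper; the profile is assumed to be a list of 0/1 samples):

def run_thicknesses(profile):
    # Returns (start_index, thickness) for each maximal run of 1s.
    runs, i = [], 0
    while i < len(profile):
        if profile[i]:
            start = i
            while i < len(profile) and profile[i]:
                i += 1
            runs.append((start, i - start))
        else:
            i += 1
    return runs

print(run_thicknesses([0, 0, 0, 1, 0, 0]))           # [(3, 1)]  thin table line
print(run_thicknesses([0, 0, 1, 1, 1, 1, 1, 1, 1]))  # [(2, 7)]  run reaches the window edge: a transition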

Related

Segmenting License Plate Characters

For a university project I have to segment characters from a license plate using Python. This sounds reasonably simple. However, the catch is that we are not allowed to use any sophisticated library functions such as cv2.findContours(). The basics such as cv2.imread(), cv2.resize(), and cv2.rectangle() are allowed.
I have written a function that localizes a license plate in an image and outputs a result as can be seen in the images Output 1 and Output 2. These are binary images.
As one can see, sometimes the output of this function is relatively clean (Output 2). However, it is often noisy (Output 1).
For a clean image (Output 2) I have tried finding the columns that contain fewer than x black pixels in order to segment the characters. However, this only works when the image is clean, which is often not the case, and changing the x parameter does not make a significant improvement.
Does anybody have suggestions on how I can approach this problem?
For an elementary solution, you can form a profile by counting the black pixels on all vertical lines. Then look for maxima and minima of the average count in a sliding interval on this profile. The interval length should be a fraction of the expected width of a character. Only the extrema with sufficient contrast should be considered.
To avoid the effect of surrounding features in rotated plates, you can restrict the counting to just a slice of the image.
Once you have approximate vertical limits between the characters, you can repeat a similar processing to get the bottom and top limits of the characters (the sliding interval is no longer necessary).
Finally, you can refine the boxing by finding the horizontal limits in the rectangles so formed.
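For illustration, a minimal sketch of the vertical-profile step above (hypothetical function and parameter names; `binary` is assumed to be a 2-D NumPy array with black characters as 0 on a white background):

import numpy as np

def segment_columns(binary, char_width=20, contrast=0.3):
    black_per_col = np.sum(binary == 0, axis=0)  # vertical projection profile
    win = max(1, char_width // 4)                # interval: fraction of char width
    smooth = np.convolve(black_per_col, np.ones(win) / win, mode="same")
    cuts = []
    for x in range(1, len(smooth) - 1):
        # Local minimum of the averaged count...
        if smooth[x] <= smooth[x - 1] and smooth[x] <= smooth[x + 1]:
            lo, hi = max(0, x - char_width), min(len(smooth), x + char_width)
            # ...kept only if it has sufficient contrast to its neighborhood.
            if smooth[x] < (1 - contrast) * smooth[lo:hi].max():
                cuts.append(x)                   # candidate gap between characters
    return cuts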

Create offset line contours from a line set with arbitrary topology

Example image:
Given a set of connected lines (see the thick black lines in the example image), how can you generate a set of offset contour lines that form loops (see the thin blue lines)? The offset is constant across all lines, and the contours are always parallel to their associated lines.
The input line topology is arbitrary: i.e. it may contain cycles. Note that the number of contour loops is equal to the number of cycles plus one. A solution that deals only with tree topologies (no cycles) would also be of interest.
Any papers or relevant algorithms out there that tackle this problem?
The basic method is to construct the bisector of each angle (on the right side) and mark off along it a length that achieves the desired offset (a little trigonometry), then link these points in loop traversal order. Different capping rules can be used at free endpoints.
For this to be possible, you need a representation of the geometry as a planar graph (quad-edge for instance). Maybe have a look here: https://mathoverflow.net/q/23811.
Anyway, this method will not avoid the overlaps that can arise, nor self-intersecting offsets. These are much more difficult problems that require a global approach, and are similar to the polygon union problem.
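For illustration, a minimal sketch of the bisector construction (hypothetical function; y-up coordinates, offset on the right-hand side; degenerate 180° reversals are not handled):

import math

def offset_joint(A, B, C, d):
    # Offset point at joint B between segments A->B and B->C, at distance d.
    def norm(x, y):
        l = math.hypot(x, y)
        return (x / l, y / l)
    ux, uy = norm(B[0] - A[0], B[1] - A[1])
    vx, vy = norm(C[0] - B[0], C[1] - B[1])
    n1 = (uy, -ux)                               # right-hand normal of A->B
    n2 = (vy, -vx)                               # right-hand normal of B->C
    mx, my = norm(n1[0] + n2[0], n1[1] + n2[1])  # bisector direction
    # Scale so the perpendicular distance to both segments equals d
    # (the "little trigonometry": d / cos(half-angle)).
    scale = d / (mx * n1[0] + my * n1[1])
    return (B[0] + mx * scale, B[1] + my * scale)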

Preventing pixelshader overdraw for a single ERG

Background
Using gluTess to build a triangle list in Direct3D9 from a GDI+ DrawString(..) path:
A pixel shader (v3.0) is then used to fill in the shape. When painting with opaque values, everything looks fine:
The problem
At certain font sizes, if the color has an alpha component (i.e. ARGB #55FFFFFF) we begin to see these nasty tessellation artifacts where triangles may overlap ever so slightly:
At larger font sizes the problem is sometimes not present:
Using Intel's excellent GPA Frame Analyzer Pixel History tool, we can see that in areas where the artifacts occur, the pixel has been "touched" 3 times by the single Erg.
I'm trying to figure out how I can stop my pixel shader from touching the same pixel more than once.
Other solutions relating to overdraw prevention seem to be all about z-buffer strategies; however, this problem has more to do with the painting of a single 2D triangle list within a single pixel shader pass.
I'm at a bit of a loss trying to come up with a solution on this one. I was hoping that HLSL might have some sort of "touch each pixel only once" flag, but I've been unable to find anything like that. The closest I've found was to set the BLENDOP to MAX instead of ADD. But the output is not correct when blending over other colors in the scene.
I also have SRCBLEND = ONE, DSTBLEND = INVSRCALPHA, the only combination of flags which produces correct output (albeit with overdraw artifacts).
I have played with SEPARATEALPHABLENDENABLE in the GPA frame analyzer, which sounded like almost exactly what I need here: set blending to MAX, but only on the alpha channel. However, from what I can determine, that setting (and the corresponding BLENDOPALPHA) affects nothing at all.
One final thing I thought of was to bake the text as opaque onto a texture, and then repaint that texture into the scene with the appropriate alpha value applied. However, this doesn't actually work in this project because I also support gradient brushes, where stop values may contain alpha; either the artifacts would still be seen, or the final output would be plain wrong if we stripped the alpha away from the stop values prior to baking to a texture. The whole endeavor would also be hideously expensive.
Any hints or pointers would be appreciated. Thanks for reading.
The problem you're seeing shouldn't happen.
If two of your triangles are overlapping it's because you've placed the vertices in such a way that when the adjacent triangles are drawn, they overlap. What's probably happening is that these two adjacent triangles share two vertices, but each triangle has its own copy of each vertex that's been calculated to be in a very, very slightly different position.
The solution to the problem isn't to try and make the pixel shader touch the pixel only once it's to use an index buffer (if you aren't already) and have the shared vertices between each triangle actually share the same vertex and not use one that's ever-so-slightly not in the same place as the one used by the adjacent triangle.
If you aren't in control of the tessellation algorithm being used, you may have to run a pass over the vertex buffer after it's been generated to detect and merge vertices that are within some very small tolerance of one another. Even without an index buffer, a naive solution would be this:
For each vertex in the vertex buffer, compare its position to every other vertex in the rest of the vertex buffer.
If two vertices are within some small tolerance of another, replace the second vertex's position with the position of the one you are comparing it against.
This should have the effect of pairing up the positions of two vertices if they are close enough that you deem them to be the same.
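A minimal sketch of that naive merge (hypothetical helper; 2-D positions for brevity):

def weld_vertices(vertices, tol=1e-5):
    # vertices: list of (x, y) tuples; returns a copy where vertices
    # within `tol` of an earlier vertex are snapped to its position.
    welded = list(vertices)
    for i in range(len(welded)):
        for j in range(i + 1, len(welded)):
            dx = welded[j][0] - welded[i][0]
            dy = welded[j][1] - welded[i][1]
            if dx * dx + dy * dy <= tol * tol:
                welded[j] = welded[i]  # snap to the first occurrence
    return welded

This is O(n^2); for large buffers, a spatial hash over quantized positions brings the merge close to O(n).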
You now shouldn't have any problem with overlapping triangles. In everyday rendering, two triangles share edges with each other all the time and you won't ever get the effect where they appear to ever-so-slightly overlap. The hardware guarantees that a sample point is either on one side of the line or the other, but never both at the same time, no matter how close the point is to the line (even if it's mathematically on the line, it still falls on one side or the other).

Prominent lines not detected by Hough Transform

After running the Canny edge detector on an image, I'm getting clear lines. But the Hough line function seems to be missing pretty prominent lines when run on the Canny edge map.
I'm keeping only vertical and horizontal Hough lines (a tolerance of 15 degrees). Lots of extra lines are coming up, but clearly visible lines bounding the rectangles are not being picked up.
Here's the snippet:
cvCanny( img, canny, 0, 100, 3 );  /* thresholds 0 and 100, aperture 3 */
lines = cvHoughLines2( canny, storage, CV_HOUGH_PROBABILISTIC, 1, CV_PI/180,
                       35 /* threshold */, 20 /* min length */, 10 /* max gap */ );
The main intention is to detect the rectangular boxes that denote the nodes of the linked list. However, the squares.c sample program detects only perfect rectangles, not ones where an arrowhead touches the rectangle boundary.
Could you please explain what sort of changes to the Hough line function would help me get Hough lines corresponding to the clearly visible lines in the Canny edge image?
(Added: a preprocessing step, suggested by shernshiou.)
Preprocessing steps:
Threshold the image,
Run connected-component labeling,
From the connected-component results, detect and remove the small objects: the sets of four digits below and in the middle of each box.
(Remark: the thresholding step is simply a preprocessing step required by connected-component labeling.)
If you want to detect only perfectly horizontal and vertical lines, my suggestion is to perform horizontal and vertical edge enhancement (via convolution) before Hough transform.
This will make the true lines more likely to "peak" in the Hough-projection, and increases the chance of the line being picked up by OpenCV.
The steps would be:
Compute Canny edge image from input
Apply horizontal Sobel filtering on Canny edge image
Apply Hough line detection on horizontally-enhanced edge image.
Apply vertical Sobel filtering on Canny edge image. (Note: use step 1's result, not step 2's)
Apply Hough line detection on vertically-enhanced edge image.
Combine the horizontal and vertical lines and present the result.
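A hedged sketch of these steps using the modern cv2 Python API (the question uses the legacy C API; the file name is hypothetical):

import cv2
import numpy as np

img = cv2.imread("diagram.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 0, 100, apertureSize=3)                    # step 1

# Steps 2-3: dy-Sobel responds strongly to horizontal structures.
sobel_h = cv2.convertScaleAbs(cv2.Sobel(edges, cv2.CV_16S, 0, 1, ksize=3))
lines_h = cv2.HoughLinesP(sobel_h, 1, np.pi / 180, 35,
                          minLineLength=20, maxLineGap=10)

# Steps 4-5: dx-Sobel on step 1's result for vertical structures.
sobel_v = cv2.convertScaleAbs(cv2.Sobel(edges, cv2.CV_16S, 1, 0, ksize=3))
lines_v = cv2.HoughLinesP(sobel_v, 1, np.pi / 180, 35,
                          minLineLength=20, maxLineGap=10)

# Step 6: combine lines_h and lines_v and present the result.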
You did read the documentation, didn't you?
I have a few options for you:
The lines you miss (most notably the leftmost vertical line on the rightmost box in the image) are rather short. Try lowering the threshold (5th input variable of cvHoughLines2). This threshold is just the number of pixels that must lie on the line. From the image I'd guess that there are indeed fewer than 35 pixels on the lines you miss.
The 6th input variable indicates the minimum line length. I assume this is in pixels, so with the 5th parameter you require 35 pixels on the line, yet you search for lines 20 pixels or longer. The way you have set this variable, it is non-functional. Lower the 5th variable, and raise this one if you are finding too many useless short lines.
Lower the 7th parameter to disallow large gaps in your lines. This will eliminate some of the slanted lines.
In short, try it again with different values for parameters 5, 6, and 7.
I'd try some lower values for parameters 5 and 7, and a similar or slightly higher value for 6. Because of point 2 above, parameter 5 should always be lower than or equal to parameter 6 to have an effect; parameter 7 should be at least the difference between 5 and 6 if 5 is lower.
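For example, in the cv2 Python API (where parameters 5, 6, and 7 of cvHoughLines2 correspond to threshold, minLineLength, and maxLineGap), a retuned call might look like this; the values are illustrative, not tested:

import cv2
import numpy as np

canny = cv2.Canny(cv2.imread("diagram.png", cv2.IMREAD_GRAYSCALE), 0, 100)
lines = cv2.HoughLinesP(canny, 1, np.pi / 180,
                        threshold=15,      # parameter 5: lowered from 35
                        minLineLength=20,  # parameter 6: now above the threshold
                        maxLineGap=5)      # parameter 7: lowered from 10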
Normally people do not use Hough lines straight out of the box. The normal practice involves pre-processing the image (e.g. changing the luminance, changing the colour, sharpening the image...).

How to produce Photoshop stroke effect?

I'm looking for a way to programmatically recreate the following effect:
Give an input image:
input http://www.shiny.co.il/shooshx/ConeCarv/q_input.png
I want to iteratively apply the "stroke" effect.
The first step looks like this:
step 1 http://www.shiny.co.il/shooshx/ConeCarv/q_step1.png
The second step like this:
step 2 http://www.shiny.co.il/shooshx/ConeCarv/q_step2.png
And so on.
I assume this will involve some kind of edge detection and then tracing the edge somehow.
Is there a known algorithm to do this in an efficient and robust way?
Basically, a custom algorithm would be, according to this thread:
Take the 3x3 neighborhood around a pixel, threshold the alpha channel, and then see if any of the 8 pixels around the pixel has a different alpha value from it. If so, paint a circle of a given radius with center at the pixel. To do inside/outside, modulate by the thresholded alpha channel (negate to do the other side). You'll have to threshold a larger neighborhood if the circle radius is larger than a pixel (which it probably is).
This is implemented using gray-scale morphological operations. This is also the same technique used to expand/contract selections. Basically, to stroke the center of a selection (or an alpha channel), what one would do is to first make two separate copies of the selection. The first selection would be expanded by the radius of the stroke, whereas the second would be contracted. The opacity of the stroke would then be obtained by subtracting the second selection from the first.
In order to do inside and outside strokes you would contract/expand by twice the radius and subtract the parts that intersect with the original selection.
It should be noted that the most general morphological algorithm requires O(m*n) operations, where m is the number of pixels of the image and n is the number of elements in the "structuring element". However, for certain special cases, this can be optimized to O(m) operations (e.g. if the structuring element is a rectangle or a diamond).
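A minimal sketch of the expand/contract technique using OpenCV morphology (the function name and the `alpha` input are assumptions):

import cv2

def centered_stroke(alpha, radius):
    # alpha: 8-bit mask, 255 inside the shape; returns the stroke opacity.
    size = 2 * radius + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    expanded = cv2.dilate(alpha, kernel)       # first copy: grown by the radius
    contracted = cv2.erode(alpha, kernel)      # second copy: shrunk by the radius
    return cv2.subtract(expanded, contracted)  # stroke = expanded minus contracted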
