I need to be able to turn a black and white image into a series of lines (start and end points) and circles (center point, radius). I have a "pen width" that's constant.
(I'm working with a screen that can only work with this kind of graphics).
The problem is that I don't want to overcomplicate things - I could represent any image with loads of small lines, but that would take a lot of time to draw, so I basically want to "approximate" the image using those lines and circles.
I've tried several approaches (guessing lines, working area by area, etc.), but none gave reasonable results without using a lot of lines and circles.
Any idea on how to approach this problem?
Thanks in advance!
You don't specify what language you are working in, but I'd suggest OpenCV if possible. If not, most decent computer vision libraries ought to support the features I'm about to describe.
You don't say whether the input is already composed of simple shapes (lines and polygons) or not. Assuming it's not - i.e. it's a photo or a frame from a video, for example - you'll need to do some edge extraction to find the lines you are going to model. Use a Canny or other edge detector to convert the image into an edge image.
I suggest you then extract circles, as they are the richest feature you can model directly. Consider using a Hough circle transform to locate circles in your edge image. Once you've located them, remove them from the edge image (to avoid duplicating them in the line processing step below).
Now, for each pixel in the edge image that's 'on', you want to find the longest line segment that it's a part of. There are a number of algorithms for doing this; the simplest would be the probabilistic Hough transform (also available in OpenCV) to extract line segments, which gives you control over the minimum length, allowed gaps, etc. You may also want to examine alternatives like LSWMS, which has OpenCV source code freely available.
Once you have extracted the lines and circles you can plot them into a new image or save the coordinates for your output device.
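A minimal Python/OpenCV sketch of that pipeline might look like the following; treat it as a starting point rather than a finished implementation. The Canny thresholds, the Hough parameters, the PEN_WIDTH used to erase detected circles from the edge image, and the input file name are all assumptions you would tune or replace for your own data.

```python
import cv2
import numpy as np

PEN_WIDTH = 3  # assumed constant pen width in pixels

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# 1. Edge extraction (thresholds are placeholders to tune).
edges = cv2.Canny(img, 50, 150)

# 2. Circle detection with the Hough circle transform.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=30, minRadius=5, maxRadius=100)

circle_list = []
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        circle_list.append(((x, y), r))
        # Erase the circle from the edge image so it is not re-detected as lines.
        cv2.circle(edges, (x, y), r, 0, thickness=PEN_WIDTH * 2)

# 3. Line segment extraction with the probabilistic Hough transform.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=20, maxLineGap=5)

line_list = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        line_list.append(((x1, y1), (x2, y2)))

# line_list and circle_list are the primitives to send to the output device.
print(len(line_list), "lines,", len(circle_list), "circles")
```

If the result still uses too many primitives, raising minLineLength and the Hough thresholds trades detail for fewer, longer strokes.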
Related
I am a beginner at graphics and I was wondering if anyone has any experience in programmatically splitting isometric tile sheets, in particular Reiner's tile sheets. Here is an example image:
[example tile sheet image]
I have been splitting it by hand using guides in GIMP, but there is some sort of pattern going on that I feel can be used to split it programmatically. Before I try to make my own, I wanted to see if there are any premade algorithms or software that could do it. It's not a simple grid that can be cut with the same width and height for each tile. Thanks for the help!
Some things to think about and read
First take a look at:
2D Diamond (isometric) map editor - Textures extended infinitely?
for some inspiration. In particular, take a look at the (3. tile editor) part. The operations described there are exactly what you are looking for (they add the missing steps you are doing manually right now).
However, your tile set is oriented differently, so the masks will be slightly different...
If you want to extract the tileset from an image, you will need something like this:
Grid image values to 2D array
And also take a look at this (for even more inspiration):
Improving performance of click detection on a staggered column isometric grid
The pixel-perfect O(1) mouse selection at the end is a good idea to implement.
Your tile map
So you have a tile map image but you do not have the tile boundaries. First identify the tileset resolution... There might be more than one tile size present, so you need to know all of them. Your image is 256x1024 pixels, and from a quick look you have 32x32 pixel tiles. Most of the tiles are 64x64, however they are constructed from four 32x32 tiles. White is the transparent color. So you just divide the image into 32x32 squares or regroup them into 64x64 ones, as in the sketch below.
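As an illustration, a minimal Python/OpenCV sketch of the slicing step might look like this; the file name is hypothetical and white is assumed to be exactly (255, 255, 255):

```python
import cv2
import numpy as np

TILE = 32  # base tile size in pixels

sheet = cv2.imread("tilesheet.png")  # hypothetical file name, 256x1024 pixels
h, w = sheet.shape[:2]

tiles = []
for y in range(0, h - TILE + 1, TILE):
    for x in range(0, w - TILE + 1, TILE):
        tile = sheet[y:y + TILE, x:x + TILE]
        # Skip tiles that are pure white (treated as transparent).
        if np.all(tile == 255):
            continue
        tiles.append(((x, y), tile))
        cv2.imwrite(f"tile_{x}_{y}.png", tile)

print(f"extracted {len(tiles)} non-empty {TILE}x{TILE} tiles")
```

Regrouping into 64x64 tiles is then just a matter of taking 2x2 blocks of neighbouring 32x32 cells instead.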
I am trying to extract text from an image, but within a certain area of the image and not the entire image.
I have already been able to detect where the objects of interest are and get their coordinates, but I do not know where to start when extracting text from a specific area.
I'm using the code from this example:
https://www.codingame.com/playgrounds/38470/how-to-detect-circles-in-images
It is able to detect the circles, but I want to take it one step further and extract the numbers from the circles and tag them to their corresponding coordinate.
I'm using this example to learn how to do something similar myself, but I'm really more interested in restricting the search to a set area.
Most image processing libraries support the concept of ROIs (region of interest) or AOIs (area of interest).
The idea is to restrict processing to a subset of pixels that are usually selected by defining geometric shapes like rectangles, polygons, circles within the image coordinate system.
You can fix this by first cropping the image using your coordinates and then extracting text from the cropped region.
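For example, assuming you already have the circle centers and radii from the detection step, a rough sketch might look like the one below. NumPy slicing does the cropping; pytesseract is only an assumption for the OCR step (any OCR library would do), and the file name and coordinates are placeholders.

```python
import cv2
import pytesseract  # assumed OCR backend; any OCR library would do

img = cv2.imread("table.png")  # hypothetical input image

# Circles as returned by the detection step: list of (x, y, r) tuples (placeholders).
circles = [(120, 85, 30), (240, 190, 28)]

results = []
for x, y, r in circles:
    # Crop a square region of interest around each circle, clamped to the image.
    x0, y0 = max(x - r, 0), max(y - r, 0)
    x1, y1 = min(x + r, img.shape[1]), min(y + r, img.shape[0])
    roi = img[y0:y1, x0:x1]

    # OCR restricted to the cropped region; --psm 7 treats it as a single text line.
    text = pytesseract.image_to_string(roi, config="--psm 7").strip()
    results.append(((x, y), text))

print(results)
```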
[image of the circular object with 9 black doublets]
My goal is to take the image above and "open" it along the center so that the 9 black doublets are in a straight line rather than in a circle. I have tried using the cv2.toPolar() function in OpenCV but the image is quite distorted, as can be seen below:
[distorted result of the polar transform]
I am attempting a different approach now. From the center, I would like to access each of the doublets individually, like a pizza slice, and place them side by side.
Initially I was thinking of slicing out each doublet using two lines from the center of the image to the midpoint between the doublets on either side.
My question is: how can I draw contours from the center of the image to the edge of the image, passing through the midpoint between any two doublets? If I can draw one, I know that the angle between any two such consecutive contours is 40 degrees.
Any help is greatly appreciated!
I noted a few problems here:
1. The toPolar() conversion might have been done around the center of the image file, but that is not the center of the object. This causes part of the distortion. If you share your code, I could try playing with it and improving it.
2. The object is somewhat elliptical, not circular. This means you will still have a wave after correcting the above problem.
If you don't mind a semi-automatic solution, you could use OpenCV mouse events to specify the first line and let the program use the 40-degree angle to calculate the rest.
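A hedged sketch of the first fix, recomputing the polar warp around the object's own center instead of the image center, might look like this. It assumes OpenCV 4.x (for the findContours return signature and cv2.warpPolar), and the file name and threshold value are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("doublets.png")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Separate the dark doublets/ring from the background (threshold is a placeholder).
_, mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)

# Estimate the object's center and radius from the minimum enclosing circle.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
points = np.vstack([c.reshape(-1, 2) for c in contours])
(cx, cy), radius = cv2.minEnclosingCircle(points)

# Unwrap around the object's center rather than the image center.
unwrapped = cv2.warpPolar(img, (int(radius), 360), (cx, cy), radius,
                          cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

cv2.imwrite("unwrapped.png", unwrapped)
```

This only addresses the off-center problem; the ellipticity in point 2 would still leave a residual wave in the unwrapped image.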
I am doing some studies on eye vascularization - my project involves a machine which can detect the different blood vessels in the retinal membrane at the back of the eye. What I am looking for is a way to segment the picture and analyze each segment on its own. The segmentation consists of six squares which I want to analyze separately for the density of white pixels.
I would be very thankful for any kind of input; I am pretty new to the programming world and I actually just have a bare concept of how it should work.
Thanks and cheerio,
Sam
[concept drawing] [OCTA picture]
You could probably accomplish this by using NumPy to load the image and split it into sections. You could then analyze the sections using scikit-image or OpenCV (though this could be difficult to get working). To view the image, you can either save it to a file or use matplotlib to open it in a new window.
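To make that concrete, here is a minimal sketch that splits the image into six sections with NumPy slicing and reports the white-pixel density of each. It assumes the six squares form a 2x3 grid, that white pixels have the value 255, and that the file name is a placeholder.

```python
import cv2
import numpy as np

img = cv2.imread("octa.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
h, w = img.shape

ROWS, COLS = 2, 3  # six squares arranged in a 2x3 grid (assumption)

for r in range(ROWS):
    for c in range(COLS):
        section = img[r * h // ROWS:(r + 1) * h // ROWS,
                      c * w // COLS:(c + 1) * w // COLS]
        # Density of white pixels (assuming white = 255) in this section.
        density = np.count_nonzero(section == 255) / section.size
        print(f"section ({r}, {c}): white density = {density:.3f}")
```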
First of all, please note that in image processing "segmentation" describes the process of grouping neighbouring pixels by context.
https://en.wikipedia.org/wiki/Image_segmentation
What you want to do can be done in various ways.
The most common way is by using ROIs or AOIs (region/area of interest). That's basically some geometric shape like a rectangle, circle, polygon or similar defined in image coordinates.
The image processing is then restricted to only process pixels within that region. So you don't slice your image into pieces but you restrict your evaluation to specific areas.
Another way, as you suggested, is to cut the image into pieces and process them one by one. Those sub-images are usually created using ROIs.
A third option, which is rather limited but sufficient for simple tasks like yours, is accessing pixels directly using coordinate offsets and several nested loops.
Just google "python image processing" in combination with "library" "roi" "cropping" "sliding window" "subimage" "tiles" "slicing" and you'll get tons of information...
I am currently working on a program to detect coordinates of pool balls in an image of a pool table taken from an arbitrary point.
I first calculated the table corners and warped the perspective of the image to obtain a bird's eye view. Unfortunately, this made the spherical balls appear to be slightly elliptical as shown below.
In an attempt to detect the ellipses, I extracted all but the green felt area and used a Hough transform algorithm (HoughCircles) on the resulting image shown below. Unfortunately, none of the ellipses were detected (I can only assume this is because they are not circles).
Is there any better method of detecting the balls in this image? I am technically using JavaCV, but OpenCV solutions should be suitable. Thank you so much for reading.
The extracted BW image is good, but it needs some morphological filtering to eliminate noise. Then you can extract the external contours of each object (with cvFindContours) and fit the best ellipse to each of them (with cvFitEllipse2).
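In Python/OpenCV terms the old C-API names map to cv2.morphologyEx, cv2.findContours and cv2.fitEllipse, so a hedged sketch might look like this (OpenCV 4.x assumed; the kernel size and area threshold are placeholders to tune):

```python
import cv2

bw = cv2.imread("balls_mask.png", cv2.IMREAD_GRAYSCALE)  # the extracted BW image

# Morphological opening then closing to remove small noise and fill holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)

# External contours of each remaining blob (OpenCV 4.x return signature).
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

ellipses = []
for cnt in contours:
    # fitEllipse needs at least 5 points; also skip tiny noise blobs.
    if len(cnt) >= 5 and cv2.contourArea(cnt) > 50:
        ellipse = cv2.fitEllipse(cnt)  # ((cx, cy), (axis1, axis2), angle)
        ellipses.append(ellipse)

print(f"found {len(ellipses)} candidate balls")
```

The equivalent functions are also available through JavaCV, so the same approach carries over if you prefer to stay in Java.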