Extract rotated rectangle from raster in Python - geometry

I'm using GDAL in Python to work with GeoTIFF rasters. I use the following code to extract small rectangular patches from the entire raster:
from osgeo import gdal

data_file = gdal.Open("path/to/raster.tiff")
data = data_file.ReadAsArray(xoffset, yoffset, xsize, ysize)
How could I change this code to extract rotated rectangular areas from the raster? For example, I would like to be able to extract data from the area shown in red below.
I'd like the red area to be resampled and rotated, so that I can access it as a simple numpy data array.

I created a solution to this by following this excellent post about how to implement affine transforms.
My solution works by:
Using ReadAsArray to read a section of the full raster that fully contains the red area;
Identifying points p0, p1, p2 representing the top-left, top-right and bottom-left corners of the red area respectively in pixel coordinates;
Implementing the algorithm as described in the link to compute the affine transform, leaving me with the red area on its own, rotated into a horizontal position.
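A minimal sketch of that approach, using OpenCV's getAffineTransform/warpAffine in place of a hand-rolled affine transform (the file path, the corner coordinates and the use of cv2 are illustrative assumptions, not part of the original solution):

import numpy as np
import cv2
from osgeo import gdal

# Open the raster; for large files you would instead read only a window
# that fully contains the rotated rectangle. Assumes a single-band raster,
# since warpAffine expects a 2D or (rows, cols, channels) array.
data_file = gdal.Open("path/to/raster.tiff")
data = data_file.ReadAsArray().astype(np.float32)

# p0, p1, p2: top-left, top-right and bottom-left corners of the red area,
# in (x, y) pixel coordinates of the array read above (example values).
p0 = np.float32([120.0, 200.0])
p1 = np.float32([320.0, 260.0])
p2 = np.float32([ 90.0, 300.0])

# Output size follows from the side lengths of the rotated rectangle.
out_w = int(round(np.linalg.norm(p1 - p0)))
out_h = int(round(np.linalg.norm(p2 - p0)))

# Affine transform mapping the three source corners onto an axis-aligned
# rectangle, i.e. rotating the red area into a horizontal position.
src = np.float32([p0, p1, p2])
dst = np.float32([[0, 0], [out_w, 0], [0, out_h]])
M = cv2.getAffineTransform(src, dst)

# Resample; 'patch' is now a plain numpy array covering only the red area.
patch = cv2.warpAffine(data, M, (out_w, out_h))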

Related

Python: Contouring around a rectangular object in an image to obtain the corner points of the rectangle

I have an image that contains a rectangular object, and I want to find the 4 corners of that rectangle so that I can calculate its angle of inclination and rotate the image based on that angle. I wanted to know if there are ways to identify the 4 corners of the rectangular object so that I can warp the image using the calculated angle.
I have tried some image-processing steps such as converting the image to grayscale and reducing the noise with a Gaussian filter, after which I detect edges with an edge-detection filter followed by thresholding and finding the contour.
The problem is that the contours that are found are not consistent, and the approach does not perform well on different images from my dataset. Also, the background of these images is not constant; it varies.
Try cv.findContours() on the binarized image, with the white object on a black background. Then run either cv.boundingRect() or cv.minAreaRect() on the contour.
See the tutorial here: https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html
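A short sketch of that suggestion, assuming an already-binarized image on disk (the file name and threshold are placeholders, and the two-value return of cv.findContours assumes OpenCV 4.x):

import cv2
import numpy as np

# Assumed input: a binarized image with the object in white on black.
binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(binary, 127, 255, cv2.THRESH_BINARY)

# Find external contours and keep the largest one.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# Axis-aligned bounding box ...
x, y, w, h = cv2.boundingRect(largest)

# ... or minimum-area (rotated) rectangle: centre, size and angle,
# from which the 4 corner points can be recovered.
(cx, cy), (rw, rh), angle = cv2.minAreaRect(largest)
corners = cv2.boxPoints(((cx, cy), (rw, rh), angle))
print("angle of inclination:", angle)
print("corner points:", corners)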

Is labelling images with polygon better than square?

I aim to build an object detection model, and I labelled the data with square boxes.
If I label the images with polygons instead, will it be better than squares?
(I am labelling images of people who are or are not wearing safety helmets.)
I did try labelling a few images with polygon shapes, but after exporting the txt file for YOLO,
it only has 4 values per object, the same as when labelling with a square shape.
How can those values accurately represent the area that I labelled?
1 0.573748 0.018953 0.045332 0.036101
1 0.944520 0.098375 0.108931 0.167870
You have labelled your object in a polygon format, but when you converted it to YOLO format, information in the labels was lost. The picture below shows what I suppose has happened,
...where you have done a polygon-shape annotation (black shape). The conversion has "searched" for the smallest and largest x- and y-values among the polygon's coordinate points, i.e. the axis-aligned bounding box of the polygon, and it is this box that the four values of your YOLO format describe (normalized centre x, centre y, width and height). The same logic explains the "width" and "height" parameters.
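A small sketch of the kind of reduction described above: the polygon collapses to its axis-aligned bounding box when written as a YOLO line of the form "class x_center y_center width height" (the helper name, the polygon points and the image size are made-up examples):

# Collapse a polygon annotation to a normalized YOLO bounding-box line.
def polygon_to_yolo(points, img_w, img_h, class_id=1):
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a 5-point polygon around a helmet in a 1920x1080 image.
print(polygon_to_yolo([(1080, 10), (1120, 15), (1135, 45), (1100, 60), (1075, 40)],
                      1920, 1080))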
A good description of the idea behind the labelling and the dataset is given in https://www.youtube.com/watch?v=h6s61a_pqfM.
In short: for your purpose (and for efficiency), I propose you make fast and convenient annotations using rectangles only, with no time-consuming polygon annotation.
The YOLO version you are using very likely only supports square (rectangular) annotations.
See this video showing square vs. polygon quality of results for detection, and the problem of the annotation time required to create custom datasets.
To use polygonal masks, may I suggest switching to YOLOv3-Polygon or YOLOv5-Polygon.

How can i create an image morpher inside a graphics shader?

Image morphing is mostly a graphic-design effect used to turn one picture into another, using points chosen by the artist, who has to match the eyes and other key zones of one portrait with the other; some kind of algorithm then adapts the entire picture to change from one into the other.
I would like to do something a bit similar with a shader, which could load any 2 images, automatically pick zones of the most similar colors in corresponding regions of the pictures, and morph the two pictures in real time. Perhaps a shader-based version would be a lot faster at the task? Except that I don't even understand how it works at all.
If you know, please don't worry about giving a complete reply about the process; it would be great if you could share even vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images are the same resolution, or trilinear interpolation if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution, something like this:
for (y=0;y<img1.height;y++)
 for (x=0;x<img1.width;x++)
  img.pixel[x][y]=(1.0-t)*img1.pixel[x][y] + t*img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to cast to avoid integer rounding problems, or use the scale t=<0,256> and correct the result by a right shift of 8 bits or by /256. For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first.
All this can be done very easily in a fragment shader. Just bind img1, img2 to texture units 0 and 1, fetch the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass the texture and vertex coordinates through, and on the main-program side you just draw a single quad covering the final image output...
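For illustration only, here is the same cross-dissolve done on the CPU with numpy; the fragment-shader version performs exactly this interpolation per texel (the file names are placeholders and both images are assumed to have the same resolution):

import numpy as np
import cv2

# Placeholder input files; both images must be the same resolution here,
# otherwise resize one of them first (the bilinear case described above).
img1 = cv2.imread("face_a.png").astype(np.float32)
img2 = cv2.imread("face_b.png").astype(np.float32)

t = 0.35                                    # morph parameter in <0,1>
img = (1.0 - t) * img1 + t * img2           # per-pixel linear interpolation
cv2.imwrite("morph.png", img.astype(np.uint8))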
morph by geometry
You have 2 polygons (or matching points) and interpolate their positions between the two. For example, something like this: Morph a cube to a coil. This is suited to vector graphics. You just need a point correspondence, and then the interpolation is similar to #1.
for (i=0;i<points;i++)
 {
 p(i).x=(1.0-t)*p1(i).x + t*p2(i).x;
 p(i).y=(1.0-t)*p1(i).y + t*p2(i).y;
 }
where p1(i), p2(i) are the i-th points from each input geometry set and p(i) is the corresponding point in the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like BEZIER curves) so the morph looks cooler. For example, see
Path generation for non-intersecting disc movement on a plane
To accomplish this you need to use a geometry shader (or maybe even a tessellation shader). You would need to pass both polygons as a single primitive; the geometry shader would then interpolate the actual polygon and pass it on.
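A CPU-side sketch of the geometry morph, with both plain linear interpolation and the Bezier-style curved trajectory mentioned above (the point sets and control-point offset are made-up values):

import numpy as np

# Made-up matched point sets: p1[i] corresponds to p2[i].
p1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # source polygon
p2 = np.array([[2.0, 1.0], [3.0, 2.0], [2.0, 3.0], [1.0, 2.0]])   # target polygon

def lerp_morph(p1, p2, t):
    # Straight-line trajectory: plain linear interpolation per point.
    return (1.0 - t) * p1 + t * p2

def bezier_morph(p1, p2, ctrl, t):
    # Quadratic Bezier trajectory through a per-point control point,
    # giving the curved paths mentioned above.
    return (1.0 - t) ** 2 * p1 + 2.0 * (1.0 - t) * t * ctrl + t ** 2 * p2

# Example: lift the control points above the straight path.
ctrl = lerp_morph(p1, p2, 0.5) + np.array([0.0, 1.5])
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, bezier_morph(p1, p2, ctrl, t))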
morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then handle each pixel as a particle and create its path from its position in img1 to its position in img2 with parameter t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position and you interpolate both, because there is only a very slim chance that you will get exact color matches with matching counts (the histograms would have to be the same), which is improbable.
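A very crude sketch of that idea, pairing the pixels of two same-sized images by sorting on luminance and then interpolating each "particle's" position and color (file names are placeholders; real color matching would need to be far more robust, as noted above):

import numpy as np
import cv2

img1 = cv2.imread("face_a.png").astype(np.float32)   # assumed same size
img2 = cv2.imread("face_b.png").astype(np.float32)
h, w, _ = img1.shape

def particles(img):
    # One particle per pixel: its position and its BGR color, ordered by luminance.
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    col = img.reshape(-1, 3)
    lum = col @ np.array([0.114, 0.587, 0.299], dtype=np.float32)
    order = np.argsort(lum)
    return pos[order], col[order]

pos1, col1 = particles(img1)
pos2, col2 = particles(img2)

t = 0.5
pos = (1 - t) * pos1 + t * pos2            # interpolate particle positions
col = (1 - t) * col1 + t * col2            # ...and colors
out = np.zeros_like(img1)
xi = np.clip(pos[:, 0].round().astype(int), 0, w - 1)
yi = np.clip(pos[:, 1].round().astype(int), 0, h - 1)
out[yi, xi] = col                          # scatter particles into the frame
cv2.imwrite("swarm_morph.png", out.astype(np.uint8))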
hybrid morphing
It is any combination of #1, #2, #3.
I am sure there are more methods for morphing; these are just the ones I know of. Also, the morphing can be done not only in the spatial domain...

Generating density map for tree growth rings

I was just wondering if someone knows of any papers or resources on generating synthetic images of growth rings in trees. I'm thinking of 2D scalar fields or some other data representation which can then be used to render growth-ring-like images :)
Thanks!
I have never done or heard about this...
If you need a simulation, then search biology/botany sites instead.
If you just need visually close results, then I would:
make a polygon covering the cut (a circle/oval-like shape)
start with a circle and, once it is all working, try to add some random distortion or use an ellipse
create a 1D texture with the density
it will be used to fill the polygon via a triangle fan. So first find an image of the tree type you want to generate, for example this:
Analyze the color and intensity as a function of diameter, so extract a pie-like piece (or a thin rectangle)
and plot a graph of the R,G,B values to see how the rings are shaped,
then create a function that approximates that (or use piecewise interpolation) and build your own texture as a function of tree age. In this way you can interpolate both the color and the density of the rings.
My example shows that for this tree the color stays the same and only its intensity changes, so in this case you do not need to approximate all 3 functions. The bumps are a bit noisy due to another texture layer (ignore this at first). You can use:
intensity=A*|cos(pi*t)| as a start
A is brightness
t is age in years/cycles (and also the x coordinate (scaled) in your 1D texture)
So take the base color R,G,B, multiply it by the intensity for each t, and fill the texture pixel with this color. You can add some randomness to the ring period (pi*t), and the scale can also be matched more closely. This assumes linear growth, so you can use an exponential instead, or interpolate, to match how the bumps per unit length change with age (distance from t=0)...
now just render the polygon
The midpoint gets the t=0 texture coordinate and each vertex of the polygon gets the t=full_age texture coordinate. So render the triangle fan with these texture coordinates. If you need a closer match (the rings are not the same thickness along the perimeter), then you can convert this to a 2D texture.
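A small numpy sketch of the idea, using a single base color and the A*|cos(pi*t)| intensity from above; instead of rendering a triangle fan it simply samples the 1D texture by distance from the centre (the sizes, color and jitter amount are made-up values):

import numpy as np

rng = np.random.default_rng(0)

# 1D ring texture: intensity = A*|cos(pi*t)|, t = age in ring cycles,
# with a little random jitter on the ring period.
age = 20                       # number of rings (example value)
samples_per_ring = 32
t = np.linspace(0.0, age, age * samples_per_ring)
jitter = rng.uniform(-0.15, 0.15, size=t.shape)
A = 1.0
intensity = A * np.abs(np.cos(np.pi * (t + jitter)))

base_color = np.array([170, 120, 70], dtype=np.float32)    # brownish wood tone
texture_1d = intensity[:, None] * base_color                # shape (N, 3)

# "Render" by mapping distance from the centre to the 1D texture,
# which approximates the triangle-fan fill for a circular cut.
size = 512
ys, xs = np.mgrid[0:size, 0:size]
r = np.hypot(xs - size / 2, ys - size / 2) / (size / 2)     # 0 at centre, 1 at edge
idx = np.clip((r * (len(t) - 1)).astype(int), 0, len(t) - 1)
img = texture_1d[idx].astype(np.uint8)                      # (size, size, 3) image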
[Notes]
You can also do this incrementally, one ring per iteration. The next ring polygon is the last one enlarged or scaled by scale>1, plus some randomness, but this needs to be rendered as a QUAD STRIP. You can use a static texture for a single ring and interpolate just the density and overall brightness:
radius(i)=radius(i-1)+ring_width=radius(i-1)*scale
so:
scale=(radius(i-1)+ring_width)/radius(i-1)
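As a tiny sanity check of that relationship (the starting radius and ring width below are made-up numbers), growing a ring by a fixed width is the same as multiplying the previous radius by the corresponding scale:

# Growing rings either by a fixed ring_width or by the equivalent scale factor.
radius = 10.0
ring_width = 2.0
for i in range(5):
    scale = (radius + ring_width) / radius   # scale = (radius(i-1)+ring_width)/radius(i-1)
    radius = radius * scale                  # same result as radius + ring_width
    print(i, round(radius, 3), round(scale, 4))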

How Can I Detect Ellipses in OpenCV/JavaCV?

I am currently working on a program to detect coordinates of pool balls in an image of a pool table taken from an arbitrary point.
I first calculated the table corners and warped the perspective of the image to obtain a bird's eye view. Unfortunately, this made the spherical balls appear to be slightly elliptical as shown below.
In an attempt to detect the ellipses, I extracted all but the green felt area and used a Hough transform algorithm (HoughCircles) on the resulting image shown below. Unfortunately, none of the ellipses were detected (I can only assume because they are not circles).
Is there any better method of detecting the balls in this image? I am technically using JavaCV, but OpenCV solutions should be suitable. Thank you so much for reading.
The extracted BW image is good, but it needs some morphological filters to eliminate noise; then you can extract the external contours of each object (with cvFindContours) and fit the best ellipse to each of them (with cvFitEllipse2).
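A rough Python/OpenCV sketch of that pipeline, using the cv2 equivalents findContours and fitEllipse (the input path, kernel size and area threshold are placeholder values):

import cv2
import numpy as np

# Placeholder path: the already-extracted black-and-white ball mask.
bw = cv2.imread("balls_mask.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(bw, 127, 255, cv2.THRESH_BINARY)

# Morphological open + close to remove speckle noise and fill small holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)

# External contours of each blob (OpenCV 4.x return signature).
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # fitEllipse needs at least 5 points; also skip tiny noise blobs.
    if len(c) < 5 or cv2.contourArea(c) < 50:
        continue
    (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
    print(f"ball centre ~ ({cx:.1f}, {cy:.1f}), axes ({major:.1f}, {minor:.1f})")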
