Please see the following image. The blue rectangle is the bounds of a custom shape, and the custom shape is a shoe. I want to find the area of the portion marked in the image, and I want that area in the form of a rectangle.
Is there any path iterator concept that can do this?
Note
The custom shape is derived from an image of the same size.
I would do it like this:
1. Create a table for all perimeter lines of the bounding box/rectangle.
Each value in it represents the length of empty space from the border line to the shape, something like this:
The values are found by simply scanning the image until the first non-background color is found.
2. Now brute-force the biggest rectangle area:
x, y = top-left corner
for xs = 1 to bounding-box width
scan the max valid height of a rectangle from x to x + xs (x grows to the right)
// it should be min(y0[x .. x+xs])
remember the biggest valid area/size combination
do this for all 4 combinations (start from the other corners); a rough code sketch of this search follows below.
I know brute force is slow, but you can divide the perimeter lines not by pixels but with some step instead.
I am also sure this can be optimized somehow, for example by taking the derivative of the perimeter, finding the extremes, and checking backwards from them: when the size starts shrinking, stop ...
Of course, keep in mind that this optimization will not work on complicated shapes ...
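Here is a minimal Python sketch of step 2, assuming the custom shape is available as a boolean mask of the same size as its bounding box (True on the shape); the function names and the toy mask are made up for illustration:

import numpy as np

def free_space_from_top(mask):
    """For each column, length of empty space from the top border
    down to the first shape pixel (mask[y, x] is True on the shape)."""
    h, w = mask.shape
    free = np.full(w, h, dtype=int)
    for x in range(w):
        hits = np.flatnonzero(mask[:, x])
        if hits.size:
            free[x] = hits[0]
    return free

def biggest_rect_from_top(mask):
    """Brute-force the biggest empty rectangle anchored at the top border.
    Returns (area, x, y, width, height); repeat on rotated masks for the
    other three borders and keep the overall best."""
    free = free_space_from_top(mask)
    w = mask.shape[1]
    best = (0, 0, 0, 0, 0)
    for x in range(w):                             # left edge of the candidate
        height = free[x]
        for xs in range(1, w - x + 1):             # candidate width
            height = min(height, free[x + xs - 1]) # min of free[x .. x+xs-1]
            if height == 0:
                break
            if xs * height > best[0]:
                best = (xs * height, x, 0, xs, height)
    return best

# toy example: a 6x8 bounding box with a crude "shape" in the lower middle
mask = np.zeros((6, 8), dtype=bool)
mask[3:, 2:6] = True
print(biggest_rect_from_top(mask))                 # (24, 0, 0, 8, 3)

Using a step larger than one pixel in the xs loop gives the coarser, faster variant mentioned above.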
from reportlab.pdfgen import canvas
from reportlab.lib.units import inch

c = canvas.Canvas('data.pdf', pagesize=[width*inch, height*inch])
c.drawImage('dataptah', x, y, width, height)
c.save()
I can't center the picture vertically, so I need to know the x and y units, or add something.
First of all, you can use units for the x, y, width, and height values in the drawImage call, just as you did for the pagesize. Thus, if you know the aspect ratio of your image, you can calculate these values for the exact centered position.
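For illustration, a sketch of that calculation (assuming, as in your snippet, that width and height are the page size in inches and 'dataptah' is your image path; ImageReader is only used here to read the image's pixel size, and the drawing size of half the page width is an arbitrary example):

from reportlab.lib.units import inch
from reportlab.lib.utils import ImageReader
from reportlab.pdfgen import canvas

width, height = 8, 10                         # page size in inches (example values)
page_w, page_h = width * inch, height * inch  # page size in points

img = ImageReader('dataptah')
img_w, img_h = img.getSize()                  # pixel size of the image

draw_w = page_w / 2                           # e.g. draw the image at half the page width
draw_h = draw_w * img_h / float(img_w)        # height follows from the aspect ratio

x = (page_w - draw_w) / 2                     # centered horizontally
y = (page_h - draw_h) / 2                     # centered vertically

c = canvas.Canvas('data.pdf', pagesize=[page_w, page_h])
c.drawImage(img, x, y, draw_w, draw_h)
c.save()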
The reference documentation mentions two other parameters that could be helpful:
preserveAspectRatio=True keeps the aspect ratio of the image even if the box specified with x, y, width, height has a different aspect ratio.
anchor='c' specifies the anchor position of the image, center in this case.
Thus, if you add these two parameters and center the box on the page, then your image should appear centered as well. Here is an example:
c.drawImage('dataptah',
            width/4*inch, height/4*inch,
            width/2*inch, height/2*inch,
            preserveAspectRatio=True, anchor='c')
I am trying various visualizations for an igraph in R (version 3.3.1).
Currently my visualization is as shown below: two sets of nodes (blue and green) in a circular layout.
Circular Layout
visNetwork(data$nodes,data$edges) %>% visIgraphLayout(layout="layout_in_circle")
Now I want a semicircle structure instead of the full circle in the picture: all blue nodes form one semicircle and the green nodes another, with the two semicircles separated by a small distance. How can I achieve this? I found that the grid package has an option for a semicircle, but I couldn't make it work with igraph. Please provide some pointers.
The layout argument accepts an arbitrary matrix with two columns and N rows if your graph has N vertices; all you need to do is to create a list of coordinates that correspond to a semicircle. You can make use of the fact that a vertex at angle alpha around a circle with radius r centered at (0, 0) is to be found at (r * cos(alpha), r * sin(alpha)). Since you are using R, alpha should be specified in radians, spaced evenly between 0 and pi (which corresponds to 180 degrees).
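A minimal sketch of that idea in R (a toy igraph is used here; with your data you would build the same two-column matrix from data$nodes, using whatever column distinguishes the blue and green groups; the radius and gap values are arbitrary):

library(igraph)

# toy graph: first five vertices "blue", last five "green"
g   <- make_ring(10)
grp <- rep(c("blue", "green"), each = 5)

gap <- 0.2                                   # angular gap so the two arcs don't touch
a_blue  <- seq(0 + gap,      pi - gap, length.out = sum(grp == "blue"))
a_green <- seq(pi + gap, 2 * pi - gap, length.out = sum(grp == "green"))

coords <- matrix(0, nrow = vcount(g), ncol = 2)
coords[grp == "blue", ]  <- cbind(cos(a_blue),  sin(a_blue))    # upper semicircle
coords[grp == "green", ] <- cbind(cos(a_green), sin(a_green))   # lower semicircle

plot(g, layout = coords, vertex.color = grp)

If you want to stay with visNetwork rather than plain igraph plotting, visIgraphLayout also appears to accept a precomputed matrix (layoutMatrix together with layout = "layout.norm"), but check the documentation of your version to be sure.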
I'm using a CIPerspectiveTransformWithExtent filter to apply homographies (perspective warp) to images on OS X. So far, so good, and I can get the desired warping applied to my images.
I'm still struggling however with the border behavior. I would like the output of the filter to be clipped outside the original image domain:
I managed to do it for the lower and left borders by shifting the origin of the inputExtent rectangle by the correct amount. For example, if the lower-left corner is projected to x = -10, then using extent.origin.x = 10 will correctly clip the left border.
On the other hand, the upper and right borders are always shown in the output image. For example, if the rightmost corner is projected to x = width + 10, setting the extent via extent.origin.x = 0, extent.size.width = width does not work and the rightmost corner remains visible.
Am I doing anything wrong here, or am I not going about this the right way?
I have a weird issue in my bar graph made with d3.js: the 1 px padding between each rectangle appears irregular. I gather the width, the x position, or both are the culprit, but I don't understand what I'm doing wrong: the width is a fraction of the SVG area and the x position is obtained via a D3 scale.
I've put a demo here: http://jsfiddle.net/pixeline/j679N/4/
The code (a scale) controlling the x position:
var xScale = d3.time.scale().domain([minDate, maxDate]).rangeRound([padding, w - padding]);
The code controlling the width:
var barWidth = Math.floor((w/dataset.length))-barPadding;
Thank you for your insight.
It's irregular because you are rounding your output range (rangeRound). In some cases the distance between two bars is 3 pixels, and sometimes it is only 2. This is because the actual x position is a fractional value and ends up being rounded one way in some cases and the other way in others.
You can mitigate the effect by changing rangeRound to range, but that won't eliminate it entirely, as you'll still get fractional pixel values for positions. The best thing to do is probably to simply increase the padding so that the differences aren't as obvious.
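A quick numeric illustration of the effect (plain Python with made-up numbers, not your actual scale): when the ideal bar positions are fractional, rounding each one independently makes the leftover gap vary by a pixel.

width, n, padding = 413, 10, 1                  # hypothetical plot width, bar count, padding
bar_width = width // n - padding                # floor(w / n) - barPadding, as in the question

xs = [round(i * width / n) for i in range(n)]   # what a rounded scale produces
gaps = [xs[i + 1] - (xs[i] + bar_width) for i in range(n - 1)]
print(gaps)                                     # [1, 2, 1, 1, 1, 2, 1, 1, 2]: the gap varies by a pixel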
I've been having issues with this for a little while now. I feel like I should know this but I can't for the life of me remember.
How can I map the screen pixels to their respective 'graphical' x,y positions? The co-ordinate systems have been configured to start at the bottom left (0,0) and increase to the top-right.
I want to be able to zoom, so I know that I need to factor the zoom distance into the answer.
Screen
|\ Some Quad
| \--------|\Qx
| \ Z | \
| \ \|Qy
\ |
Sx\ |Sy
\|
I want to know which pixels on my screen will have the quad on them. Obviously as Z decreases, the quad will occupy more of the screen, and as Z increases it will occupy less, but how exactly is this calculated?
For further clarification, I want to know how I can map these screen pixels onto the 'graphical' coordinates, factoring the zoom level into the equation.
Thanks for any help.
Use the zoom factor as a multiplier against the coordinates and/or screen size.
For example, if you have a 100x150 pixel rectangle, when zoomed in to 150%, the size of the rectangle should be 150x225.
An equation for this is:
h = height
w = width
z = percent zoom (100% = 1.00)
new width = W = wz
new height = H = hz
To map screen pixels, apply more basic mathematical principles. The relative coordinates depend entirely on the center of the zoom. This is very easy if everything zooms from the exact center. If zooming from elsewhere (e.g. stretching the object from a corner or a non-central coordinate), you must apply an offset to your equation.
Zooming a rectangle from its center point is easy. Divide the difference in rectangle width by 2, and then add it to the left and right coordinate value (you can add a negative number). Do the same for height.
Zooming the rectangle from a coordinate that is NOT its exact center, but is still within the bounds of the rectangle, requires an offset. Simply determine what percentage of the height and width change should be applied to each side of the rectangle. Sides in closer proximity to the zoom point receive a lower percentage of the change.
When the zoom point resides outside the rectangle, the distance from the zoom point must also be taken into account. This offset moves the entire rectangle, in addition to scaling the rectangle.
Get a large piece of paper and draw up some visualizations. That always helps. =)
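To make that concrete, here is one possible formulation as a Python sketch (my own formulation, not the exact scheme above): assume a zoom factor z, a world-space zoom center (cx, cy), a screen of screen_w x screen_h pixels with its origin at the top left, and a world y axis that points up.

def screen_to_world(px, py, z, cx, cy, screen_w, screen_h):
    """Map a screen pixel (origin top-left, y down) to world coordinates
    (origin bottom-left, y up), zoomed by factor z about world point (cx, cy)."""
    wx = cx + (px - screen_w / 2.0) / z      # shift so the screen center is the zoom center,
    wy = cy + (screen_h / 2.0 - py) / z      # flip y, and undo the zoom
    return wx, wy

def world_to_screen(wx, wy, z, cx, cy, screen_w, screen_h):
    """Inverse mapping: world coordinates back to screen pixels."""
    px = (wx - cx) * z + screen_w / 2.0
    py = screen_h / 2.0 - (wy - cy) * z
    return px, py

# usage: an 800x600 screen, zoomed 2x about world point (10, 10)
print(screen_to_world(400, 300, 2.0, 10, 10, 800, 600))   # -> (10.0, 10.0)
print(world_to_screen(10, 10, 2.0, 10, 10, 800, 600))     # -> (400.0, 300.0)

Zooming about an arbitrary point rather than the screen center just means using that point as (cx, cy), which is the offset described above.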
If (xk, yk) is the center before zooming and the size is (Sx, Sy), zoomed to a factor of Z in (0, 1], the new size will be (Qx, Qy) = (Sx*(1-Z), Sy*(1-Z)) centered on (xk, yk) which means the screen coordinates are:
rectangle: xk - Qx/2, yk - Qy/2, xk + Qx/2, yk + Qy/2
Hope that helps.