from reportlab.lib.units import inch
from reportlab.pdfgen import canvas

c = canvas.Canvas('data.pdf', pagesize=[width*inch, height*inch])
c.drawImage('dataptah', x, y, width, height)
c.save()
I can't center the picture vertically, so I need to know what units x and y are in, or what else I should pass.
First of all, you can use units for the x, y, width, and height values in the drawImage call, just as you did for the pagesize. Thus, if you know the aspect ratio of your image, you can calculate these values for the exact centered position.
The reference documentation mentions two other parameters that could be helpful:
preserveAspectRatio=True keeps the aspect ratio of the image even if the box specified with x, y, width, and height has a different aspect ratio.
anchor='c' specifies the anchor position of the image, center in this case.
Thus, if you add these two parameters and center the box on the page, then your image should appear centered as well. Here is an example:
c.drawImage('dataptah',
            width/4*inch, height/4*inch,
            width/2*inch, height/2*inch,
            preserveAspectRatio=True, anchor='c')
I have a Text in fabricJs. I set top and left.
This sets the aCoords properly to those values.
However, the oCoords don't match, and the Text is not displayed at the right position.
I suspect that I need to set the oCoords somehow, so that the Text is displayed at the right pixel coordinates (top & left) on the canvas.
aCoords and oCoords are two different things and should not be in sync.
In your comment you speak about a scaled canvas.
Top and left are two absolute values that represent the position of the object on the canvas. This position matches the canvas pixels when the canvas has an identity transform matrix.
If you apply a zoom, these coordinates diverge.
To get the position of pixel 300,100 of the scaled canvas on the unscaled canvas, you need to apply some basic math:
1) get the transform applied to the canvas
canvas.viewportTransform
2) invert it
var iM = fabric.util.invertTransform(canvas.viewportTransform)
3) multiply the wanted point by this matrix
var point = new fabric.Point(myX, myY);
var transformedPoint = fabric.util.transformPoint(point, iM)
4) set the object at that point.
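Putting the four steps together, a minimal sketch might look like this (assuming text is an existing fabric.Text object already added to the canvas, and 300, 100 is the desired screen pixel):

// steps 1-3: map the desired screen pixel back to unscaled canvas coordinates
var iM = fabric.util.invertTransform(canvas.viewportTransform);
var point = new fabric.Point(300, 100);
var transformedPoint = fabric.util.transformPoint(point, iM);

// step 4: place the text there and refresh its coordinates
text.set({ left: transformedPoint.x, top: transformedPoint.y });
text.setCoords();
canvas.renderAll();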
This relates to the "gm" extension for node, http://aheckmann.github.io/gm/docs.html
I need to add some text centered around a bounding box (horizontally is enough). The function drawText() requires x,y coordinates, but there is no way to draw centered text.
I would otherwise need a function which can return the width of a text string in the given font/size, so I can calculate my starting x position in JavaScript before calling drawText().
You can use the region and gravity functions this way:
gm(filePath)
.region(WIDTH, HEIGHT, X, Y)
.gravity('Center')
.fill(color)
.fontSize(textFontSize)
.font(font)
.drawText(0, 0, 'This text will be centered inside the region')
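To actually produce an output file, the chain would typically end with a write call; here is a minimal sketch assuming the same placeholder variables as above and a hypothetical output path:

var gm = require('gm');

gm(filePath)
  .region(WIDTH, HEIGHT, X, Y)
  .gravity('Center')
  .fill(color)
  .fontSize(textFontSize)
  .font(font)
  .drawText(0, 0, 'This text will be centered inside the region')
  .write('output.png', function (err) {
    // err is set if GraphicsMagick/ImageMagick fails or is not installed
    if (err) console.error(err);
  });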
I'm using a CIPerspectiveTransformWithExtent filter to apply homographies (perspective warp) to images on OS X. So far, so good, and I can get the desired warping applied to my images.
I'm still struggling, however, with the border behavior. I would like the output of the filter to be clipped outside the original image domain:
I managed to do it for the lower and left borders by shifting the origin of the inputExtent rectangle by the correct amount. For example, if the lower-left corner is projected to x = -10, then using extent.origin.x = 10 will correctly clip the left border;
on the other hand, the upper and right borders are always shown in the output image. For example, if the rightmost corner is projected to x = width + 10, setting the extent via extent.origin.x = 0, extent.size.width = width does not work and the rightmost corner remains visible.
Am I doing anything wrong here, or am I not going about this the right way?
Please see the following image: the blue rectangle is the bounds of the custom shape, and the custom shape is a shoe. I want to find the area of the portion marked in the image, and I want that area in the form of a rectangle.
Is there any path iterator concept for this?
Note
The custom shape is derived from an image of the same size.
I would do it like this:
1. Create a table for each perimeter line of the bounding box rectangle.
Each value in it represents the length of the empty space from that border line to the shape.
The values are found by simple image scanning until the first non-background color is found.
2. Now brute-force find the biggest rectangle area:
x,y = top left corner
for xs = 1 to bounding box width
    scan the max valid height of the rectangle from x to x + xs (x grows to the right)
    // it should be the min of the top-line table values y0[x .. x+xs]
    remember the biggest valid area/size combination
Do this for all 4 combinations (start from the other corners as well).
I know brute force is slow, but:
you can divide the perimeter lines not by pixels but with some coarser step instead;
also, I am sure this can be optimized somehow, for example by taking the derivative of the perimeter to find the extremes and checking backwards from them: when the size starts shrinking, stop ...
of course, keep in mind that on complicated shapes this optimization will not work ...
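As a rough illustration of the brute-force idea, here is a minimal sketch that works on a boolean inside-mask of the shape instead of the perimeter tables described above (mask, width and height are assumed inputs; this is a plain variant of the same search, not an optimized version):

// mask[y][x] === true means the pixel belongs to the shape
// returns the largest axis-aligned rectangle fully inside the shape
function biggestInsideRect(mask, width, height) {
  var best = { x: 0, y: 0, w: 0, h: 0 };
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var maxW = width - x;                 // widest rectangle still possible here
      for (var h = 1; y + h <= height; h++) {
        // shrink the allowed width to the run of inside pixels on the new bottom row
        var w = 0;
        while (w < maxW && mask[y + h - 1][x + w]) w++;
        maxW = w;
        if (maxW === 0) break;
        if (maxW * h > best.w * best.h) {
          best = { x: x, y: y, w: maxW, h: h };
        }
      }
    }
  }
  return best;
}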
I've been having issues with this for a little while now. I feel like I should know this but I can't for the life of me remember.
How can I map the screen pixels to their respective 'graphical' x,y positions? The co-ordinate systems have been configured to start at the bottom left (0,0) and increase to the top-right.
I want to be able to zoom, so I know that I need to factor the zoom distance into the answer.
Screen
|\ Some Quad
| \--------|\Qx
| \ Z | \
| \ \|Qy
\ |
Sx\ |Sy
\|
I want to know which pixels on my screen will have the quad on it. Obviously as Z decreases, the quad will occupy more of the screen, and as Z increases it will occupy less, but how exactly are these calculated?
For further clarification, I want to know how I can map these screen pixels onto the 'graphical' co-ordinates, taking the zoom factor into account.
Thanks for any help.
Use the zoom factor as a multiplier against the coordinates and/or screen size.
For example, if you have a 100x150 pixel rectangle, then when zoomed in to 150% its size should be 150x225.
An equation for this is:
h = height
w = width
z = zoom factor (100% = 1.00)
new width  W = w * z
new height H = h * z
To map screen pixels, apply the same basic mathematical principles. The relative coordinates depend entirely on the center of the zoom. This is very easy if everything zooms from the exact center; if zooming from elsewhere (e.g. stretching the object from a corner or some other non-central coordinate), you must apply an offset to your equation.
Zooming a rectangle from its center point is easy: divide the difference in rectangle width by 2, then add it to the left and right coordinate values (the number you add can be negative). Do the same for the height.
Zooming the rectangle from a coordinate that is NOT its exact center, but is still within the bounds of the rectangle, requires an offset. Simply determine what percentage of the height and width change should be applied to each side of the rectangle; sides closer to the zoom point receive a smaller share of the change.
When the zoom point lies outside the rectangle, the distance from the zoom point must also be taken into account. This offset moves the entire rectangle, in addition to scaling it.
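For instance, here is a minimal sketch of the center-point case (the function names and the zoom-point arguments are invented for illustration):

// scale a point (px, py) about a zoom point (cx, cy) by factor z
function zoomPoint(px, py, cx, cy, z) {
  return { x: cx + (px - cx) * z, y: cy + (py - cy) * z };
}

// scale an axis-aligned rectangle {x, y, w, h} about (cx, cy)
function zoomRect(r, cx, cy, z) {
  var p = zoomPoint(r.x, r.y, cx, cy, z);
  return { x: p.x, y: p.y, w: r.w * z, h: r.h * z };
}

// example: a 100x150 rectangle zoomed to 150% about its own center
var rect = { x: 50, y: 50, w: 100, h: 150 };
var scaled = zoomRect(rect, rect.x + rect.w / 2, rect.y + rect.h / 2, 1.5);
// scaled.w === 150, scaled.h === 225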
Get a large piece of paper and draw up some visualizations. That always helps. =)
If (xk, yk) is the center before zooming and the size is (Sx, Sy), zoomed to a factor of Z in (0, 1], the new size will be (Qx, Qy) = (Sx*(1-Z), Sy*(1-Z)) centered on (xk, yk) which means the screen coordinates are:
rectangle: xk - Qx/2, yk - Qy/2, xk + Qx/2, yk + Qy/2
Hope that helps.