I am trying to draw a rectangle on top of an image using the code below, with nothing filled inside the rectangle: a transparent interior and only the outline of the rectangle. But I am not able to do it. Is there a way to achieve that? Thanks.
var gm = require('gm').subClass({imageMagick: true});
var picGm = gm(inputFile)
picGm.drawRectangle(589, 424, 620, 463)
I tried the following, but it made the rectangle disappear entirely.
picGm.fill("none").drawRectangle(589, 424, 620, 463)
I found a solution:
picGm.stroke("#FFFFFF", 0).fill("rgba(255, 255, 255, 0)").drawRectangle(589, 424, 620, 463)
pic.stroke("Red", 0).fill("None")
The problem I have at hand is to draw boundaries around a white ball. But the ball appears under different illuminations. Using Canny edge detection and the Hough transform for circles, I am able to detect the ball in bright light or partially bright light, but not in low illumination. Can anyone help with this problem?
The code that I have tried is below.
import cv2
import numpy as np

img = cv2.imread('14_04_2018_10_38_51_.8242_P_B_142_17197493.png.png')
cimg = img.copy()
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.medianBlur(img, 5)
edges = cv2.Canny(edges, 200, 200)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=25, param2=10, minRadius=0, maxRadius=0)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        # draw the outer circle
        cv2.circle(cimg, (i[0], i[1]), i[2], (255, 255, 255), 2)
        # draw the center of the circle
        cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
    cv2.imwrite('segmented_out.png', cimg)
else:
    print("no circles")
cv2.imwrite('edges_out.png', edges)
In the image below, we need to segment the ball even when it is in the shadow region. The output should be something like the images below.
Well, I am not very experienced in OpenCV or Python, but I am learning as well. It is probably not a very Pythonic piece of code, but you could try this:
import cv2
import math

circ = 0
radiusx = 0
center = None
n = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220]
img = cv2.imread("ball1.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for i in n:
    ret, threshold = cv2.threshold(gray, i, 255, cv2.THRESH_BINARY)
    im, contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    for j in range(0, len(contours)):
        size = cv2.contourArea(contours[j])
        if 500 < size < 5000:
            if circ > 0:
                (x, y), radius = cv2.minEnclosingCircle(contours[j])
                radius = int(radius)
                area = cv2.contourArea(contours[j])
                circif = 4 * area / (math.pi * (radius * 2) ** 2)
                if circif > circ:
                    circ = float(circif)
                    radiusx = radius
                    center = (int(x), int(y))
            elif circ == 0:
                (x, y), radius = cv2.minEnclosingCircle(contours[j])
                radius = int(radius)
                area = cv2.contourArea(contours[j])
                circ = 4 * area / (math.pi * (radius * 2) ** 2)
                radiusx = radius
                center = (int(x), int(y))
            else:
                pass

cv2.circle(img, center, radiusx, (0, 255, 0), 2)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
What it actually does: you convert your picture to grayscale and apply different threshold settings to it. Then you eliminate noise by requiring a minimum size for each contour. For each qualifying contour, you compute its circularity (NOTE: it is not a scientific formula) and compare it to the best circularity found so far. A perfect circle should give a result of 1, so the contour with the highest value (of all the contours) will be your ball.
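The circularity metric used above can be checked in isolation. A minimal sketch (plain Python, no OpenCV, with made-up shapes): the ratio of a contour's area to the area of its minimum enclosing circle is 1 for a perfect circle and smaller for anything else.

```python
import math

def circularity(area, radius):
    """Ratio of the contour area to the area of its minimum
    enclosing circle (radius given); 1.0 for a perfect circle."""
    return 4 * area / (math.pi * (radius * 2) ** 2)

# A perfect circle of radius 10: area = pi * r^2
r = 10
print(circularity(math.pi * r ** 2, r))  # 1.0

# A square inscribed in the same enclosing circle (half-diagonal 10):
# its area is 2 * r^2, giving 2/pi, about 0.64
print(circularity(2 * r ** 2, r))
```

So when the loop keeps the contour with the highest value, it is keeping the most circle-like blob among all threshold levels.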
Result:
NOTE: I haven't tried increasing the size limit, so a higher limit might return a better result if you have a high-resolution picture.
Working with a grayscale image makes you subject to different light conditions.
To be free from this, I suggest working in the HSV color space and using the Hue component instead of the grayscale image.
Hue is independent of the light conditions, since it gives you information about the color regardless of its Saturation or Value (a value bound to the brightness of the image).
This might bring you some clarity about color spaces and which is best to use for image segmentation.
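As a quick sanity check of that claim, here is a small sketch using only Python's standard library (the RGB values are made up for illustration; note that `colorsys` uses hue in [0, 1] while OpenCV uses 0-179, but the invariance is the same): halving every channel halves the brightness but leaves the hue unchanged.

```python
import colorsys

# The same orange-ish color at full and at half brightness
bright = (200, 120, 40)
dim = (100, 60, 20)  # every channel halved

h1, s1, v1 = colorsys.rgb_to_hsv(*[c / 255 for c in bright])
h2, s2, v2 = colorsys.rgb_to_hsv(*[c / 255 for c in dim])

print(abs(h1 - h2) < 1e-9)  # True: hue is (numerically) unchanged
print(v1 > v2)              # True: only the brightness differs
```

This is why thresholding on Hue is more robust to shadows than thresholding a grayscale image.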
In your case here, we have a white ball. White is not a color by itself. The main factor is what kind of light actually falls on the white ball, as that has a direct influence on the kind of extraction you might plan to do using a color space like HSV, as mentioned above by @magicleon.
HSV is your best bet for segmentation here. Use:
whiteObject = cv2.inRange(hsvImage,lowerHSVLimit,upperHSVLimit)
where lowerHSVLimit and upperHSVLimit define the HSV color range.
Keep in mind these conditions:
1) The images were taken under similar conditions
2) You cover all the relevant ranges of HSV before extraction
Hope you get the idea.
Consider this example
Selecting a particular hue range from 45 to 60
Code
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = cv2.imread('allcolors.png')
hsvImg = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lowerHSVLimit = np.array([45, 0, 0])
upperHSVLimit = np.array([60, 255, 255])
colour = cv2.inRange(hsvImg, lowerHSVLimit, upperHSVLimit)
plt.subplot(111), plt.imshow(colour, cmap="gray")
plt.title('hue range from 45 to 60'), plt.xticks([]), plt.yticks([])
plt.show()
Here only the hues from 45 to 60 are selected.
I am trying to position a shape in a chart in Excel through VBA.
I set the position parameters, but the resulting shape ends up in a slightly different position.
I have searched the Internet but I have not found a satisfying answer as to why this happens.
I use this code
Set shpRect = Chart1.Shapes.AddShape(msoShapeRectangle, 50, 75, 250, 175)
It generates a rectangle not at position 50, 75 but at position 60, 80.
What about positioning the Shape to a certain cell, let's say "C3", after it's created:
Set shpRect = Chart1.Shapes.AddShape(msoShapeRectangle, 50, 75, 250, 175)
With shpRect '<-- modify the shape's position
    .Top = Range("C3").Top
    .Left = Range("C3").Left
End With
The following code has this result:
local mesh = nil
local img = love.graphics.newImage("test_blue.png")

function love.load()
    mesh = love.graphics.newMesh(4, img, "fan")
    mesh:setVertices({
        {125, 100, 0, 0, 255, 255, 255, 255}, -- Top Left
        {150, 100, 1, 0, 255, 255, 255, 255}, -- Top Right
        {200, 400, 1, 1, 255, 255, 255, 255}, -- Bottom Right
        {100, 400, 0, 1, 255, 255, 255, 255}  -- Bottom Left
    })
end

function love.draw()
    love.graphics.draw(mesh, 200, 0)
end
I'd like to know how I could get a result like this:
Without using a 3D library you cannot get a true depth effect, because that requires implementing perspective. The problem is that the polygon is made from 2D triangles, and only 2D effects like shearing or scaling (as a whole) can be applied. The parallel lines in the texture will always stay parallel, which is not the case in your bottom image, since there they converge toward a vanishing point.
For more reading, see the Perspective Correctness section of the Texture Mapping article.
Changing the coordinates of the texture map can visually minimize some of the artifacts, by clipping toward a vanishing point instead of scaling.
Lines in the texture do not have to stay parallel if they are part of separate triangles, so adding more triangles allows them to shear toward one another, at the cost of more clipping.
Both modifying texture coordinates and using more triangles can be problematic for different styles of texture, so you may need to tune it on a case-by-case basis.
local mesh = nil
local img = love.graphics.newImage("test_blue.png")

function love.load()
    mesh = love.graphics.newMesh(5, img, "strip")
    local top_left  = {125, 100, .125, 0, 255, 255, 255, 255}
    local top_right = {150, 100, .875, 0, 255, 255, 255, 255}
    local bot_right = {200, 400, 1, 1, 255, 255, 255, 255}
    local bot_left  = {100, 400, 0, 1, 255, 255, 255, 255}
    local bot_mid   = {150, 400, .5, 1, 255, 255, 255, 255}
    mesh:setVertices{
        bot_left, top_left, bot_mid, top_right, bot_right,
    }
end

function love.draw()
    love.graphics.draw(mesh, 200, 0)
end
The math needed to build a shader that fixes this issue is commonly explained in many threads you can find on Google, and there are multiple approaches (search for: perspective correct texture mapping).
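As a sketch of what those shaders do: instead of interpolating u and v linearly across the polygon, a perspective-correct mapper interpolates u/w, v/w and 1/w linearly, then divides at each sample. A minimal illustration along one edge (plain Python, with made-up endpoint depths):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def perspective_correct_u(u0, w0, u1, w1, t):
    """Interpolate texture coordinate u between two endpoints at
    depths w0 and w1: interpolate u/w and 1/w linearly, then divide."""
    num = lerp(u0 / w0, u1 / w1, t)  # linear in u/w
    den = lerp(1 / w0, 1 / w1, t)    # linear in 1/w
    return num / den

# Edge from u=0 at depth w=1 to u=1 at depth w=4 (hypothetical values).
# Naive linear interpolation would give 0.5 at the midpoint; the
# perspective-correct result is biased toward the nearer endpoint.
print(perspective_correct_u(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2
```

This division per sample is exactly what a plain 2D mesh cannot express, which is why the effect needs a shader (or more triangles as an approximation).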
If you want to build your own shader, or use a shader from a source other than Love2D, mind that Love2D currently uses GLSL v1.20 with some minor changes.
There is a forum thread where you can get a completed shader file, currently for Love2D v0.10.2. It is simple to use and the code is properly commented.
https://www.love2d.org/forums/viewtopic.php?f=5&t=12483&start=120
Post by drunken_munki » Wed Apr 26, 2017 11:03 am
I am using Fabric.js to draw a line. I can draw a line successfully, but the problem is that whenever I scale the line, only its length should increase, not its stroke width. Does anyone know a property related to that?
line = new fabric.Line([250, 125, 250, 175], {
    left: 170,
    top: 150,
    strokeWidth: 4,
    fill: 'red',
    stroke: 'red',
    flipX: false
});
line.strokeWidth is what defines your line's width; the scaling is incorrect because it scales the stroke along with the line. Do this:
var beforeWidth = line.strokeWidth;
// scaling process
line.strokeWidth = beforeWidth;
I'm using Cairo to draw figures. I found that Cairo uses "absolute coordinates" (user space) when drawing. It is a flexible and comfortable way, except for specifying line_width. Because the aspect ratio of the image below is not 1:1, when the "absolute coordinates" are converted to "real coordinates" (device pixels), the widths of the lines are not the same.
WIDTH = 960
HEIGHT = 640
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)
ctx.scale(WIDTH, HEIGHT)
ctx.rectangle(0, 0, 1, 1)
ctx.set_source_rgb(255, 255, 255)
ctx.fill()
ctx.set_source_rgb(0, 0, 0)
ctx.move_to(0.5, 0)
ctx.line_to(0.5, 1)
ctx.move_to(0, 0.5)
ctx.line_to(1, 0.5)
ctx.set_line_width(0.01)
ctx.stroke()
What is the correct way to make line_width shown as the same ratio in the output image?
Undo your call to ctx.scale() before calling stroke(), for example via:
ctx.save()
ctx.identity_matrix()
ctx.set_line_width(2)
ctx.stroke()
ctx.restore()
(The save()/restore() pair reapplies all your transformations afterwards; stroke() has to happen while the identity matrix is active, so the line width is measured in device pixels.)
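To make the mismatch in the question concrete, here is the arithmetic behind it (plain Python, using the numbers from the question): after ctx.scale(WIDTH, HEIGHT), the device width of a stroke depends on its direction, because a vertical line is widened by the x scale factor and a horizontal line by the y scale factor.

```python
WIDTH, HEIGHT = 960, 640
line_width = 0.01  # user-space line width from the question

vertical_px = line_width * WIDTH     # vertical line: 9.6 device pixels wide
horizontal_px = line_width * HEIGHT  # horizontal line: 6.4 device pixels wide
print(vertical_px, horizontal_px)
```

Stroking with the identity matrix (as in the answer above) sidesteps this entirely, since the line width is then specified directly in device pixels.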