Flip a DICOM Image over the x = y Line - vtk

I am working on displaying a DICOM image, but I need to flip the incoming image about the line x = y. In other words, I want to rotate the image 180 degrees about the x = y axis.
I found setFlipOverOrigin() in vtkImageFlip, but it does not seem to work. Could anyone suggest a method, or explain how to use setFlipOverOrigin() correctly if it helps?
Thanks in advance.

Try using the vtkTransform class and apply a 180-degree rotation around the axis (1, 1, 0), i.e. the direction of the x = y line (z = 0):
void vtkTransform::RotateWXYZ(double angle, double x, double y, double z);
Create a rotation matrix and concatenate it with the current transformation according to PreMultiply or PostMultiply semantics. The angle is in degrees, and (x, y, z) specifies the axis that the rotation will be performed around.
vtkSmartPointer<vtkTransform> rotation = vtkSmartPointer<vtkTransform>::New();
rotation->RotateWXYZ(180, 1.0, 1.0, 0.0);
// Apply the transform to your image pipeline, e.g. with vtkImageReslice:
// reslice->SetInputConnection(DicomReaderImage->GetOutputPort());
// reslice->SetResliceTransform(rotation);
rotation->Update();
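For a 2-D slice, a flip over the line x = y is simply a transpose of the pixel array, so if you pull the DICOM pixel data into NumPy (e.g. via pydicom's pixel_array) you can do it without a VTK transform at all. A minimal sketch with a stand-in array:

```python
import numpy as np

# Stand-in for a DICOM slice (e.g. pydicom's ds.pixel_array)
pixels = np.array([[1, 2, 3],
                   [4, 5, 6]])

# Flipping over the line x = y swaps the two in-plane axes,
# which is the same as a 180-degree rotation about the (1, 1, 0) axis
flipped = pixels.T

print(flipped.tolist())  # [[1, 4], [2, 5], [3, 6]]
```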

Related

create a 3d cylinder inside 3d volume

I have a 3D volume with shape (399 x 512 x 512) and voxel spacing of 0.484704 x 0.484704 x 0.4847.
Now I want to define a cylinder inside this volume with length 5 mm, diameter 1 mm, intensity 1 inside, and intensity 0 outside.
I saw an example on the internet that defines a cylinder like this:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

def data_for_cylinder_along_z(center_x, center_y, radius, height_z):
    z = np.linspace(0, height_z, 50)
    theta = np.linspace(0, 2*np.pi, 50)
    theta_grid, z_grid = np.meshgrid(theta, z)
    x_grid = radius*np.cos(theta_grid) + center_x
    y_grid = radius*np.sin(theta_grid) + center_y
    return x_grid, y_grid, z_grid

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
Xc, Yc, Zc = data_for_cylinder_along_z(0.2, 0.2, 0.05, 0.1)
ax.plot_surface(Xc, Yc, Zc, alpha=0.5)
plt.show()
However, I don't know how to define the cylinder inside the 3D volume while keeping all the conditions (length 5 mm, diameter 1 mm, intensity 1 inside, intensity 0 outside) true. I also want to set the center of the cylinder automatically, so that I can place the cylinder anywhere inside the volume while keeping the other conditions true. Can anyone show or provide an example?
Thanks a lot in advance.
One simple way of solving this would be to perform each of the checks individually and then just keep the voxels that satisfy all of your constraints.
If you build a grid with all of the centers of the voxels: P (399 x 512 x 512 x 3), each voxel at (i,j,k) will be associated with its real-world position (x,y,z).
That's a little tricky, but it should look something like this (multiply by the voxel spacing so the positions are in mm rather than indices):
spacing = np.array([0.484704, 0.484704, 0.4847])
P = spacing * np.stack(np.meshgrid(np.arange(0, shape[0]),
                                   np.arange(0, shape[1]),
                                   np.arange(0, shape[2]),
                                   indexing='ij'), axis=3)
If you subtract the cylinder's center (center_x, center_y, center_z), you're left with the relative position of each (i, j, k) voxel: P_rel (399 x 512 x 512 x 3).
When you have that, you can apply each of your tests one after the other. For a Z-oriented cylinder with a radius and height_z it would look something like:
# constrain the Z-axis
not_too_high = P_rel[:, :, :, 2] <= (0.5 * height_z)
not_too_low = P_rel[:, :, :, 2] >= (-0.5 * height_z)
# constrain the radial direction
not_too_far = np.linalg.norm(P_rel[:, :, :, :2], axis=3) <= radius
voxels_in_cyl = not_too_high & not_too_low & not_too_far
I haven't tested the code, but you get the idea.
If you wanted a cylinder with an arbitrary orientation, you would have to project P_rel into axial and radial components and then do an analogous check without "hard-coding" the indices as I did in this example.
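Putting those steps together, a minimal sketch of building the 0/1 cylinder volume; the small shape, spacing, and center below are made-up values for illustration, so substitute the real (399, 512, 512) shape and your actual spacing:

```python
import numpy as np

shape = (40, 40, 40)                   # small stand-in for (399, 512, 512)
spacing = np.array([0.5, 0.5, 0.5])    # mm per voxel (assumed)
center = np.array([10.0, 10.0, 10.0])  # cylinder center in mm (assumed)
radius, height_z = 0.5, 5.0            # 1 mm diameter, 5 mm length

# Real-world position of every voxel, shape (*shape, 3)
P = spacing * np.stack(np.meshgrid(np.arange(shape[0]),
                                   np.arange(shape[1]),
                                   np.arange(shape[2]),
                                   indexing='ij'), axis=3)
P_rel = P - center

# Z-oriented cylinder: bounded height and bounded radial distance
inside = ((np.abs(P_rel[..., 2]) <= 0.5 * height_z) &
          (np.linalg.norm(P_rel[..., :2], axis=3) <= radius))

volume = inside.astype(np.uint8)  # intensity 1 inside, 0 outside
print(volume.sum())               # number of voxels inside the cylinder
```

Picking `center` anywhere inside the physical extent of the volume places the cylinder automatically, since the mask is computed from the real-world positions.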

How to properly filter this image using OpenCV in Python

I have a computer vision project where I need to recognize digits on some totems that contain direction signage; an example image is here:
So I tried many methods, such as taking the Laplacian:
import cv2

img = cv2.imread(imgpath)
img = cv2.resize(img, (600, 600))
imaj = cv2.GaussianBlur(img, (11, 11), 0)
imaj = cv2.cvtColor(imaj, cv2.COLOR_BGR2GRAY)  # blur first, then grayscale
laplas = cv2.Laplacian(imaj, cv2.CV_16SC1, ksize=5, scale=5, delta=1)
After applying this, I get a pretty good distinction between the background and the digits, but since the output becomes 16SC1, I cannot take contours from the image. I tried thresholding it, but still cannot get anything clear out of it.
This is what I get after thresholding in the range (5000, 8000) and converting to uint8:
In the end, I try to take contours from it with this code:
def drawcntMap(filteredimg):
    """
    Draws bounding boxes on the contour map for each image
    """
    # findContours returns (contours, hierarchy) in OpenCV 4.x but
    # (image, contours, hierarchy) in 3.x; [-2] works for both
    contour = cv2.findContours(filteredimg, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)[-2]
    digitCnts = []
    # draw bounding boxes for contour areas, then filter them approximately on
    # digit size and append filtered ones to a list
    for c in contour:
        (x, y, w, h) = cv2.boundingRect(c)
        if w >= 7 and h >= 10 and w <= 50 and h <= 70:
            digitCnts.append(c)
    # create another contour map with filtered contour areas
    cnt2 = cv2.drawContours(img.copy(), digitCnts, -1, (0, 0, 255), 2)
    # draw bounding boxes again on the filtered contour map
    for c in digitCnts:
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(cnt2, (x, y), (x+w, y+h), (0, 255, 0), 2)
    return cnt2, digitCnts
The result would be:
How can I improve my solution for this task and get all the digits? Besides the Laplacian filter, I tried darkening the image by lowering the contrast and extracting the white regions (it did fairly well, but still couldn't get all the digits), and I tried Gaussian blur with Canny edges, but in places where the totem and the background have the same pixel value the contours merged.

How to get the x, y coordinates of center of a contour area and move the mouse there or store the coordinates

I use ImageGrab with a bbox to capture part of the screen (real-time video), build a mask for contours, and use moments to get the center of the contour area; up to here everything works fine.
Now I need to get the x, y of the contour's center when an event is triggered and save the coordinates, and when the center of the contour passes the same coordinates again, trigger another event.
for cnt in contours:
    (x, y, w, h) = cv2.boundingRect(cnt)
    area = cv2.contourArea(cnt)
    if area > 500:
        # I want to know the coordinates of intersection of the lines
        cv2.rectangle(...)  # rectangle on the contour area
        cv2.line(...)       # horizontal line
        cv2.line(...)       # vertical line
        # I want to follow the coordinates of the cv2.circle on screen
        M = cv2.moments(cnt)
        X = int(M["m10"] / M["m00"])
        Y = int(M["m01"] / M["m00"])
        cv2.circle(frame, (X, Y), 13, (255, 255, 255), -1)
cv2.moments gives two int values for x and y, but if I send a mouse move to those values, the mouse does not follow the cv2.circle on the screen; it is way off.
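One likely cause: cv2.moments gives the center in the captured image's coordinate system, while the mouse moves in full-screen coordinates. Assuming the capture was done with ImageGrab.grab(bbox=(left, top, right, bottom)), adding the region's top-left corner converts between the two (the numbers below are hypothetical):

```python
# Region captured with ImageGrab.grab(bbox=(left, top, right, bottom))
left, top = 100, 200       # hypothetical top-left of the captured region
X, Y = 50, 60              # contour center inside the captured frame

# Offset by the region's origin to get true screen coordinates
screen_x = left + X
screen_y = top + Y

print(screen_x, screen_y)  # 150 260
```

You would then pass (screen_x, screen_y) to the mouse-move call instead of the raw (X, Y).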

How to visualize feasible region for linear programming (with arbitrary inequalities) in Numpy/MatplotLib?

I need to implement a solver for linear programming problems. All of the restrictions are <= ones, such as
5x + 10y <= 10
There can be an arbitrary number of these restrictions. Also, x >= 0 and y >= 0 hold implicitly.
I need to find the optimal solution (max) and show the feasible region in matplotlib. I've found the optimal solution by implementing the simplex method, but I can't figure out how to draw the graph.
Some approaches I've found:
This link finds the minimum of the y points from each function and uses plt.fill_between() to draw the region, but it doesn't work when I change the order of the equations, and I'm not sure which y values to minimize(), so I can't use it for arbitrary restrictions.
Find the intersection point for every pair of restrictions and draw a polygon. Not efficient.
An easier approach might be to have matplotlib compute the feasible region on its own (with you only providing the constraints) and then simply overlay the "constraint" lines on top.
import numpy as np
import matplotlib.pyplot as plt

# plot the feasible region
d = np.linspace(-2, 16, 300)
x, y = np.meshgrid(d, d)
plt.imshow(((y >= 2) & (2*y <= 25-x) & (4*y >= 2*x-8) & (y <= 2*x-5)).astype(int),
           extent=(x.min(), x.max(), y.min(), y.max()),
           origin="lower", cmap="Greys", alpha=0.3)

# plot the lines defining the constraints
x = np.linspace(0, 16, 2000)
# y >= 2
y1 = (x*0) + 2
# 2y <= 25 - x
y2 = (25-x)/2.0
# 4y >= 2x - 8
y3 = (2*x-8)/4.0
# y <= 2x - 5
y4 = 2*x - 5

# Make plot
plt.plot(x, y1, label=r'$y\geq2$')
plt.plot(x, y2, label=r'$2y\leq25-x$')
plt.plot(x, y3, label=r'$4y\geq 2x - 8$')
plt.plot(x, y4, label=r'$y\leq 2x-5$')
plt.xlim(0, 16)
plt.ylim(0, 11)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
This is a vertex enumeration problem. You can use the function lineqs, which visualizes the system of inequalities A x >= b for any number of lines and also displays the vertices of the region it plots. The last two rows of A and b encode x, y >= 0:
from intvalpy import lineqs
import numpy as np

A = -np.array([[5, 10],
               [-1, 0],
               [0, -1]])
b = -np.array([10, 0, 0])

lineqs(A, b, title='Solution', color='gray', alpha=0.5, s=10,
       size=(15, 15), save=False, show=True)
Visual Solution Link
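If you would rather compute the vertices yourself, the vertex enumeration can be sketched in plain NumPy: intersect each pair of constraint boundary lines and keep only the intersections that satisfy every inequality (shown here for the example system 5x + 10y <= 10, x >= 0, y >= 0):

```python
import numpy as np
from itertools import combinations

# Constraints written as A @ [x, y] <= b:
# 5x + 10y <= 10, -x <= 0 (i.e. x >= 0), -y <= 0 (i.e. y >= 0)
A = np.array([[5.0, 10.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([10.0, 0.0, 0.0])

vertices = []
for i, j in combinations(range(len(A)), 2):
    try:
        # Intersection of the boundary lines A[i] @ p = b[i] and A[j] @ p = b[j]
        p = np.linalg.solve(A[[i, j]], b[[i, j]])
    except np.linalg.LinAlgError:
        continue  # parallel boundaries never intersect
    if np.all(A @ p <= b + 1e-9):
        vertices.append(p)  # keep only feasible intersections

print(sorted(map(tuple, vertices)))  # corners of the feasible polygon
```

The kept points are exactly the polygon's corners; sorting them by angle around their centroid and passing them to plt.fill() would shade the region. This is the pairwise O(m^2) approach dismissed above as inefficient, but it is perfectly fine for a handful of constraints.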

how to write rectangle (bounding box) by xmax xmin ymax ymin using opencv

I found that I can easily draw a bounding box using four values (x, y, w, h) in OpenCV, where x, y is the top-left corner, w = width, and h = height.
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),15)
But how is it possible to draw a bounding box in OpenCV having only xmax, xmin, ymax, ymin? I need to check that everything in my code is all right and that the bounding boxes defined by x, y, w, h are exactly equal to the bounding boxes I have as xmax, xmin, ymax, ymin.
I converted x, y, w, h to xmax, xmin, ymax, ymin with this code:
bbox_topleft_corner_x = int(prod_data[0])
bbox_topleft_corner_y = int(prod_data[1])
bbox_w = int(prod_data[2])
bbox_h = int(prod_data[3])
ymax = bbox_topleft_corner_y
ymin = bbox_topleft_corner_y - bbox_h
xmax = bbox_topleft_corner_x + bbox_w
xmin = ymin + bbox_w
But I'm not sure that I did it as intended. I wanted to convert x, y, w, h to the PASCAL VOC 2007 XML annotation format and its bounding-box convention.
Thanks for any advice.
Given x, y, width, and height, it should be trivial to get x_max and y_max.
x_max = x + width
y_max = y + height
It is important to remember the coordinate system for images starts with (0, 0) in the top left, and (image_width, image_height) in the bottom right. Therefore:
top_left = (x, y)
bottom_right = (x+w, y+h)
The last thing to remember is that there are some cases where the requested parameter is a point (x, y), as is the case in the cv2.rectangle function. However, pixels are accessed through the underlying ndarray structure as image[row, column].
Check out this question for more info about opencv coordinate systems.
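Putting the above into code, a minimal sketch of converting between (x, y, w, h) and (xmin, ymin, xmax, ymax) under the top-left-origin convention (which is also what VOC-style annotations expect):

```python
def xywh_to_corners(x, y, w, h):
    """(top-left x, top-left y, width, height) -> (xmin, ymin, xmax, ymax)."""
    return x, y, x + w, y + h

def corners_to_xywh(xmin, ymin, xmax, ymax):
    """(xmin, ymin, xmax, ymax) -> (top-left x, top-left y, width, height)."""
    return xmin, ymin, xmax - xmin, ymax - ymin

box = xywh_to_corners(10, 20, 30, 40)
print(box)                    # (10, 20, 40, 60)
print(corners_to_xywh(*box))  # (10, 20, 30, 40)
```

The two directions round-trip, so you can verify your annotations by converting back and comparing against the original (x, y, w, h) boxes.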
I guess your problem is the reference system.
In an image, the point (0, 0) is the top-left pixel. Judging from your ymin calculation, it seems you're treating y as "higher up means a larger value", but with the origin at the top-left point it is exactly the opposite.
