VTK: why the vtkImageBlend result is different from RadiAnt

Let's say I have two images, and the artery (the red arrow) is at about the same position in both:
Now I need to show the two images in one figure, and I use vtkImageBlend for this purpose. My code is:
import vtk
img1 = vtk.vtkDICOMImageReader()
img1.SetFileName('C:\\Users\\MLoong\\Desktop\\dicom_data\\Chang Cheng\\TOF\\IM_0198')
img1.Update()
print('img1: ', img1.GetOutput().GetSpacing())
print('img1: ', img1.GetOutput().GetExtent())
img2 = vtk.vtkDICOMImageReader()
img2.SetFileName('C:\\Users\\MLoong\\Desktop\\dicom_data\\Chang Cheng\\SNAP\\IM_0502')
img2.Update()
print('img2: ', img2.GetOutput().GetSpacing())
print('img2: ', img2.GetOutput().GetExtent())
image_blender = vtk.vtkImageBlend()
image_blender.AddInputConnection(img1.GetOutputPort())
image_blender.AddInputConnection(img2.GetOutputPort())
image_blender.SetOpacity(0, 0.1)
image_blender.SetOpacity(1, 0.9)
image_blender.Update()
imageActor = vtk.vtkImageActor()
windowLevel = vtk.vtkImageMapToWindowLevelColors()
imageActor.GetMapper().SetInputConnection(windowLevel.GetOutputPort())
ren = vtk.vtkRenderer()
ren.AddActor(imageActor)
ren.SetBackground(0.1, 0.2, 0.4)
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
renWin.SetSize(400, 400)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
windowLevel.SetInputData(image_blender.GetOutput())
windowLevel.Update()
renWin.Render()
iren.Start()
And the result is:
In the above figure, img2 is about half the size of img1.
However, the printed information is:
img1: (0.3571428656578064, 0.3571428656578064, 1.399999976158142)
img1: (0, 559, 0, 559, 0, 0)
img2: (0.5, 0.5, 1.0)
img2: (0, 319, 0, 319, 0, 0)
For img1, the extent is 560 × 560 and the spacing is 0.357 mm, so the FOV is 0.357 × 560 ≈ 200 mm; for img2 the FOV is 0.5 × 320 = 160 mm. Thus I think the blended figure may be wrong.
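In other words (just restating the arithmetic from the printed spacing and extent):
# FOV = pixel spacing * number of pixels along the axis
fov_img1 = 0.3571428656578064 * 560   # ~200 mm
fov_img2 = 0.5 * 320                  # = 160 mm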
Moreover, RadiAnt also provides a fusion figure:
In the RadiAnt fusion figure, the arteries of the two images overlay, which is what I want.
Is there anything wrong with my vtkImageBlend code?

From the Detailed Description of vtkImageBlend:
"blend images together using alpha or opacity
vtkImageBlend takes L, LA, RGB, or RGBA images as input and blends them according to the alpha values and/or the opacity setting for each input.
The spacing, origin, extent, and number of components of the output are the same as those for the first input."
So you need to make sure both inputs have the same spacing, extent, and origin.
The way to solve it is to create two new vtkImageData objects with the same properties and copy your existing data into the new images.
Here is how I handle the extent (C#):
private static void AdjustImage(ref vtkImageData oldImage, int[] newDimensions)
{
    vtkImageData newImage = new vtkImageData();
    vtkInformation info = new vtkInformation();
    int[] oldDimensions = oldImage.GetDimensions();
    double[] spacing = oldImage.GetSpacing();
    newImage.SetDimensions(newDimensions[0], newDimensions[1], newDimensions[2]);
    vtkImageData.SetScalarType(4 /*VTK_SHORT*/, info);
    // One scalar component per pixel: an intensity/index map.
    vtkImageData.SetNumberOfScalarComponents(1, info);
    // Allocating scalars is essential: it reserves the memory and creates the
    // image data; after allocation every pixel defaults to 0.
    newImage.AllocateScalars(info);
    newImage.SetSpacing(spacing[0], spacing[1], spacing[2]);
    vtkImageData data = oldImage;
    Parallel.For(0, newDimensions[0], i =>
    {
        if (i < oldDimensions[0])
        {
            Parallel.For(0, newDimensions[1], j =>
            {
                if (j < oldDimensions[1])
                {
                    Parallel.For(0, newDimensions[2], k =>
                    {
                        if (k < oldDimensions[2])
                        {
                            // Copy the old voxel into the new image.
                            newImage.SetScalarComponentFromDouble(i, j,
                                newDimensions[2] - 1 - k, 0,
                                data.GetScalarComponentAsDouble(i, j,
                                    oldDimensions[2] - 1 - k, 0));
                        }
                        else
                        {
                            SetImageToDefault(newImage, newDimensions, i, j, k);
                        }
                    });
                }
                else
                {
                    SetImageToDefault(newImage, newDimensions, i, j);
                }
            });
        }
        else
        {
            SetImageToDefault(newImage, newDimensions, i);
        }
    });
    oldImage.Dispose();
    oldImage = newImage;
}

private static void SetImageToDefault(vtkImageData img, int[] imageDimensions, int i, int j, int k)
{
    const double transparentHu = -1000;
    img.SetScalarComponentFromDouble(i, j, imageDimensions[2] - 1 - k, 0, transparentHu);
}

// The one- and two-index overloads implied by the calls above (not shown in the
// original answer) simply fill in the remaining dimensions:
private static void SetImageToDefault(vtkImageData img, int[] imageDimensions, int i, int j)
{
    for (int k = 0; k < imageDimensions[2]; k++)
        SetImageToDefault(img, imageDimensions, i, j, k);
}

private static void SetImageToDefault(vtkImageData img, int[] imageDimensions, int i)
{
    for (int j = 0; j < imageDimensions[1]; j++)
        SetImageToDefault(img, imageDimensions, i, j);
}
Afterwards you will need to translate the second image by the difference between the two images' origins.
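In Python, an alternative to copying the voxels by hand is to let VTK resample the second image onto the first image's grid with vtkImageReslice before blending. Here is a minimal sketch, reusing the img1/img2 readers from the question (note this only aligns the voxel grids; if the two series also differ by a patient-space transform, you would additionally need the DICOM position/orientation tags, which vtkDICOMImageReader largely ignores):
import vtk

# Resample img2 so its spacing, origin, and extent match img1's output.
reslice = vtk.vtkImageReslice()
reslice.SetInputConnection(img2.GetOutputPort())
reslice.SetInformationInput(img1.GetOutput())  # copy grid information from img1
reslice.SetInterpolationModeToLinear()
reslice.Update()

# Blend as before, but feed the resampled image as the second input.
image_blender = vtk.vtkImageBlend()
image_blender.AddInputConnection(img1.GetOutputPort())
image_blender.AddInputConnection(reslice.GetOutputPort())
image_blender.SetOpacity(0, 0.1)
image_blender.SetOpacity(1, 0.9)
image_blender.Update()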

Related

Processing: how to make box() appear solid (non-transparent) in 3d mode

I'm trying to create layers of 3d boxes in Processing. I want them to appear solid, so that you can't see the boxes "behind" other boxes, but the way they're displaying makes them seem transparent; you can see the stroke of boxes behind other boxes. How do I make them appear solid?
// number of boxes
int numBox = 300;
// width of each box
int boxWidth = 30;
// number of boxes per row
float numPerRow;

void setup() {
  size(800, 800, P3D);
  pixelDensity(1);
  colorMode(HSB, 360, 100, 100, 100);
  background(40, 6, 85);
  stroke(216, 0, 55);
  smooth(4);
  fill(0, 0, 90, 100);
  numPerRow = width / boxWidth;
}

void draw() {
  background(40, 6, 85);
  translate((boxWidth / 2), 100);
  rotateX(-PI/6);
  rotateY(PI/8);
  for (int i = 0; i < numBox; i++) {
    drawBox(i);
    if (i == numBox - 1) {
      noLoop();
    }
  }
}

void drawBox(int i) {
  if ((i % 2) == 0) {
    pushMatrix();
    translate(((boxWidth / 2) * i) % width, 20 * floor(i / (2 * numPerRow)));
    translate(0, -((i % 30) / 2));
    box(boxWidth, i % 30, boxWidth);
    popMatrix();
  }
}
Close-up of how the boxes are being displayed:
The issue is that the boxes are intersecting and the strokes of these intersecting boxes are what give the appearance of "see through".
I'm noticing you are using x and y translation, but not z.
If you don't plan to increase x, y spacing to avoid intersections, you can easily offset rows on the z axis so rows of boxes appear in front of each other.
Here's a slightly modified version of your code illustrating this idea:
// number of boxes
int numBox = 300;
// width of each box
int boxWidth = 30;
// number of boxes per row
float numPerRow;

void setup() {
  size(800, 800, P3D);
  pixelDensity(1);
  colorMode(HSB, 360, 100, 100, 100);
  background(40, 6, 85);
  stroke(216, 0, 55);
  smooth(4);
  fill(0, 0, 90, 100);
  numPerRow = width / boxWidth;
}

void draw() {
  background(40, 6, 85);
  translate((boxWidth / 2), 100);
  if (mousePressed) {
    rotateX(map(mouseY, 0, height, -PI, PI));
    rotateY(map(mouseX, 0, width, PI, -PI));
  } else {
    rotateX(-PI/6);
    rotateY(PI/8);
  }
  for (int i = 0; i < numBox; i++) {
    drawBox(i);
    //if (i == numBox - 1) {
    //  noLoop();
    //}
  }
}

void drawBox(int i) {
  if ((i % 2) == 0) {
    pushMatrix();
    float x = ((boxWidth / 2) * i) % width;
    float y = 20 * floor(i / (2 * numPerRow));
    float z = y * 1.5;
    translate(x, y, z);
    translate(0, -((i % 30) / 2));
    box(boxWidth, i % 30, boxWidth);
    popMatrix();
  }
}
(Click and drag to rotate and observe the z offset.
Feel free to make z as interesting as you need it.
Nice composition and colours!
The framing (window size) could use some iteration/tweaking, but I'm guessing this is a WIP.)

Mapping RGB data to values in legend

This is a follow-up to my previous question here
I've been trying to convert the color data in a heatmap to RGB values.
source image
In the image below, the left side is a subplot from panel D of the source image. It has 6 × 6 cells (6 rows and 6 columns). On the right we see the binarized image, with the cell that was clicked highlighted in white after running the code below. The input for running the code is the image below, and the output (mean = [ 27.72 26.83 144.17]) is the mean BGR color of the cell highlighted in white in the right image.
A really nice solution that was provided as an answer to my previous question is the following (ref)
import cv2
import numpy as np

# print pixel value on click
def mouse_callback(event, x, y, flags, params):
    if event == cv2.EVENT_LBUTTONDOWN:
        # get specified color
        row = y
        column = x
        color = image[row, column]
        print('color = ', color)

        # calculate range
        thr = 20  # ± color range
        up_thr = color + thr
        up_thr[up_thr < color] = 255
        down_thr = color - thr
        down_thr[down_thr > color] = 0

        # find points in range
        img_thr = cv2.inRange(image, down_thr, up_thr)  # accepted range
        height, width, _ = image.shape
        left_bound = x - (x % round(width/6))
        right_bound = left_bound + round(width/6)
        up_bound = y - (y % round(height/6))
        down_bound = up_bound + round(height/6)
        img_rect = np.zeros((height, width), np.uint8)  # bounded by rectangle
        cv2.rectangle(img_rect, (left_bound, up_bound), (right_bound, down_bound), (255, 255, 255), -1)
        img_thr = cv2.bitwise_and(img_thr, img_rect)

        # get points around specified point
        img_spec = np.zeros((height, width), np.uint8)  # specified mask
        last_img_spec = np.copy(img_spec)
        img_spec[row, column] = 255
        kernel = np.ones((3, 3), np.uint8)  # dilation structuring element
        while cv2.bitwise_xor(img_spec, last_img_spec).any():
            last_img_spec = np.copy(img_spec)
            img_spec = cv2.dilate(img_spec, kernel)
            img_spec = cv2.bitwise_and(img_spec, img_thr)
            cv2.imshow('mask', img_spec)
            cv2.waitKey(10)

        avg = cv2.mean(image, img_spec)[:3]
        mean.append(np.around(np.array(avg), 2))
        print('mean = ', np.around(np.array(avg), 2))
        # print(mean)  # appends data to variable mean

if __name__ == '__main__':
    mean = []  # np.zeros((6, 6))

    # create window and callback
    winname = 'img'
    cv2.namedWindow(winname)
    cv2.setMouseCallback(winname, mouse_callback)

    # read & display image
    image = cv2.imread('ip2.png', 1)
    # image = image[3:62, 2:118]  # crop the image to 6x6 cells

    # ---- resize image --------------------------------------------------
    # appended this to the original code
    print('Original Dimensions : ', image.shape)
    scale_percent = 220  # percent of original size
    width = int(image.shape[1] * scale_percent / 100)
    height = int(image.shape[0] * scale_percent / 100)
    dim = (width, height)
    # resize image
    image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
    # ---------------------------------------------------------------------

    cv2.imshow(winname, image)
    cv2.waitKey()  # press any key to exit
    cv2.destroyAllWindows()
What do I want to do next?
The mean of the RGB values thus obtained has to be mapped to the values in the following legend provided in the source image,
I would like to ask for suggestions on how to map the RGB data to the values in the legend.
Note: In my previous post it has been suggested that one could
fit the RGB values into an equation which gives continuous results.
Any suggestions in this direction will also be helpful.
EDIT:
Answering the comment below
I did the following to measure the RGB values of the legend.
Input image:
This image is 8 cells wide and 1 cell high.
Changed these lines of code:
left_bound = x - (x % round(width/8)) # 6 replaced with 8
right_bound = left_bound + round(width/8) # 6 replaced with 8
up_bound = y - (y % round(height/1)) # 6 replaced with 1
down_bound = up_bound + round(height/1) # 6 replaced with 1
Mean obtained for each cell / each color in the legend, from left to right:
mean = [ 82.15 174.95 33.66]
mean = [45.55 87.01 17.51]
mean = [8.88 8.61 5.97]
mean = [16.79 17.96 74.46]
mean = [ 35.59 30.53 167.14]
mean = [ 37.9 32.39 233.74]
mean = [120.29 118. 240.34]
mean = [238.33 239.56 248.04]
You can try a piecewise approach: make pairwise transitions between neighbouring colors:
c[i->i+1](t) = t*(R[i+1], G[i+1], B[i+1]) + (1-t)*(R[i], G[i], B[i])
Do the same for the values:
val[i->i+1](t) = t*val[i+1] + (1-t)*val[i]
where i is the index of a color in the legend scale and t is a parameter in the [0:1] range.
So you have a continuous mapping for both quantities, and you just need to find the color parameters i and t closest to your sample and read the value from the mapping.
Update:
To find the color parameters, you can treat every pair of neighbouring legend colors as a pair of 3D points and your queried color as an external 3D point. You just need to find the length of the perpendicular from the external point to the line through each pair; iterating over the legend color pairs, the shortest perpendicular gives you i.
Then find the intersection point of the perpendicular and the line. If this point lies at distance A from the line's start and the line's length is L, the parameter value is t = A/L.
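As an illustration of that geometric variant (my own sketch, not the answerer's code): projecting the query color onto each segment between neighbouring legend colors gives t directly, and the distance to the projection tells you which segment i fits best.
import numpy as np

def project_to_segment(q, c1, c2):
    # Project color q onto the segment c1 -> c2; return (t, distance).
    d = c2 - c1
    t = np.dot(q - c1, d) / np.dot(d, d)  # parameter along the infinite line
    t = min(max(t, 0.0), 1.0)             # clamp to the segment
    return t, np.linalg.norm(q - (c1 + t * d))

def map_color_to_value(q, legend, values):
    # Find the segment i and parameter t minimizing the distance,
    # then interpolate the corresponding value.
    q = np.asarray(q, dtype=float)
    best = None
    for i in range(len(legend) - 1):
        t, dist = project_to_segment(q,
                                     np.asarray(legend[i], dtype=float),
                                     np.asarray(legend[i + 1], dtype=float))
        if best is None or dist < best[0]:
            best = (dist, i, t)
    _, i, t = best
    return t * values[i + 1] + (1 - t) * values[i]

For example, map_color_to_value((5, 0, 200), legend, values) with the legend and values arrays defined in the code below.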
Update 2:
A simple brute-force solution to illustrate the piecewise approach:
#include "opencv2/opencv.hpp"
#include <string>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
Mat Image=cv::Mat::zeros(100,250,CV_32FC3);
std::vector<cv::Scalar> Legend;
Legend.push_back(cv::Scalar(82.15,174.95,33.66));
Legend.push_back(cv::Scalar(45.55, 87.01, 17.51));
Legend.push_back(cv::Scalar(8.88, 8.61, 5.97));
Legend.push_back(cv::Scalar(16.79, 17.96, 74.46));
Legend.push_back(cv::Scalar(35.59, 30.53, 167.14));
Legend.push_back(cv::Scalar(37.9, 32.39, 233.74));
Legend.push_back(cv::Scalar(120.29, 118., 240.34));
Legend.push_back(cv::Scalar(238.33, 239.56, 248.04));
std::vector<float> Values;
Values.push_back(-4);
Values.push_back(-2);
Values.push_back(0);
Values.push_back(2);
Values.push_back(4);
Values.push_back(8);
Values.push_back(16);
Values.push_back(32);
int w = 30;
int h = 10;
for (int i = 0; i < Legend.size(); ++i)
{
cv::rectangle(Image, Rect(i * w, 0, w, h), Legend[i]/255, -1);
}
std::vector<cv::Scalar> Smooth_Legend;
std::vector<float> Smooth_Values;
for (int i = 0; i < Legend.size()-1; ++i)
{
cv::Scalar c1 = Legend[i];
cv::Scalar c2 = Legend[i + 1];
float v1 = Values[i];
float v2 = Values[i+1];
for (int j = 0; j < w; ++j)
{
float t = (float)j / (float)w;
Scalar c = c2 * t + c1 * (1 - t);
float v = v2 * t + v1 * (1 - t);
float x = i * w + j;
line(Image, Point(x, h), Point(x, h + h), c/255, 1);
Smooth_Values.push_back(v);
Smooth_Legend.push_back(c);
}
}
Scalar qp = cv::Scalar(5, 0, 200);
float d_min = FLT_MAX;
int ind = -1;
for (int i = 0; i < Smooth_Legend.size(); ++i)
{
float d = cv::norm(qp- Smooth_Legend[i]);
if (d < d_min)
{
ind = i;
d_min = d;
}
}
std::cout << Smooth_Values[ind] << std::endl;
line(Image, Point(ind, 3 * h), Point(ind, 4 * h), Scalar::all(255), 2);
circle(Image, Point(ind, 4 * h), 3, qp/255,-1);
putText(Image, std::to_string(Smooth_Values[ind]), Point(ind, 70), FONT_HERSHEY_DUPLEX, 1, Scalar(0, 0.5, 0.5), 0.002);
cv::imshow("Legend", Image);
cv::imwrite("result.png", Image*255);
cv::waitKey();
}
The result:
Python:
import cv2
import numpy as np

height = 100
width = 250
Image = np.zeros((height, width, 3), float)

legend = np.array([(82.15, 174.95, 33.66),
                   (45.55, 87.01, 17.51),
                   (8.88, 8.61, 5.97),
                   (16.79, 17.96, 74.46),
                   (35.59, 30.53, 167.14),
                   (37.9, 32.39, 233.74),
                   (120.29, 118., 240.34),
                   (238.33, 239.56, 248.04)], float)
values = np.array([-4, -2, 0, 2, 4, 8, 16, 32], float)

# Width of a cell; also defines the number of subdivisions of one segment
# transition. Larger values give more accuracy but work slower.
w = 30
# For display purposes only: height of the bars in the result image.
h = 10

# Plot the legend cells (to check correctness only).
for i in range(len(legend)):
    col = legend[i]
    cv2.rectangle(Image, (i * w, 0, w, h), col / 255, -1)

# Build smoothed scales for the colors and the corresponding values.
Smooth_Legend = []
Smooth_Values = []
for i in range(len(legend) - 1):   # iterate over the known knots
    c1 = legend[i]                 # start color point
    c2 = legend[i + 1]             # end color point
    v1 = values[i]                 # start value
    v2 = values[i + 1]             # end value
    for j in range(w):             # slide inside the [start:end] interval
        t = float(j) / float(w)    # map it to the [0:1] interval
        c = c2 * t + c1 * (1 - t)  # transition between c1 and c2
        v = v2 * t + v1 * (1 - t)  # transition between v1 and v2
        x = i * w + j              # global scale coordinate (for drawing)
        cv2.line(Image, (x, h), (x, h + h), c / 255, 1)  # draw one tick of the smoothed scale
        Smooth_Values.append(v)    # collect smoothed values for the next step
        Smooth_Legend.append(c)    # collect smoothed colors for the next step

# Queried color.
qp = np.array([5, 0, 200])
# Initial value for the minimal distance, set to a large value.
d_min = 1e7
# Index for the color search.
ind = -1
# Search for the minimal distance from the queried color to a smoothed scale color.
for i in range(len(Smooth_Legend)):
    d = cv2.norm(qp - Smooth_Legend[i])  # distance
    if d < d_min:
        ind = i
        d_min = d

# ind now contains the index of the closest color in the smoothed scale,
# so we can extract the corresponding value from the smoothed values scale.
print(Smooth_Values[ind])  # value mapped to the queried color

# Plot a pointer (to check ourselves).
cv2.line(Image, (ind, 3 * h), (ind, 4 * h), (255, 255, 255), 2)
cv2.circle(Image, (ind, 4 * h), 3, qp / 255, -1)
cv2.putText(Image, str(Smooth_Values[ind]), (ind, 70), cv2.FONT_HERSHEY_DUPLEX, 1, (0, 0.5, 0.5), 1)

# Show the window.
cv2.imshow("Legend", Image)
# Save to file.
cv2.imwrite("result.png", Image * 255)
cv2.waitKey()

Apply filters to Hough Line Detection

In my application, I use Hough Line Detection to detect lines inside an image. What I'm trying to do is to retrieve only the lines that compose the border and the corners of each square of the chessboard. How can I apply filters to obtain a clear view of the lines?
My idea is to apply filters that check the angle between each pair of lines (90 degrees) or the distance, to keep only the lines that count. The final goal is to obtain the intersections between these lines to get the coordinates of each square; see the sketch after the code below.
Code:
import cv2
import math
import numpy as np

chessBoard = cv2.imread('img.png')
gray = cv2.cvtColor(chessBoard, cv2.COLOR_BGR2GRAY)
dst = cv2.Canny(gray, 50, 200)
lines = cv2.HoughLines(dst, 1, math.pi/180.0, 100, np.array([]), 0, 0)
a, b, c = lines.shape
for i in range(a):
    rho = lines[i][0][0]
    theta = lines[i][0][1]
    a = math.cos(theta)
    b = math.sin(theta)
    x0, y0 = a*rho, b*rho
    pt1 = (int(x0 + 1000*(-b)), int(y0 + 1000*(a)))
    pt2 = (int(x0 - 1000*(-b)), int(y0 - 1000*(a)))
    cv2.line(chessBoard, pt1, pt2, (0, 255, 0), 2, cv2.LINE_AA)
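There is no accepted answer in this thread, but as a starting point, here is a sketch of the angle/distance filtering idea from the question, under the assumption that the board is roughly axis-aligned in the image: keep only lines whose theta is near 0 or pi/2, merge near-duplicate rho values, and intersect the two groups to get the square corners.
import math
import numpy as np

def filter_and_intersect(lines, angle_tol=math.radians(5), rho_tol=10):
    # Split Hough lines into near-vertical (theta ~ 0 or ~ pi) and
    # near-horizontal (theta ~ pi/2) groups, discarding everything else.
    horizontal, vertical = [], []
    for rho, theta in lines[:, 0]:
        if theta < angle_tol or theta > math.pi - angle_tol:
            vertical.append((rho, theta))
        elif abs(theta - math.pi / 2) < angle_tol:
            horizontal.append((rho, theta))

    def merge(group):
        # Collapse lines whose |rho| values are within rho_tol pixels
        # (the same physical line often shows up several times).
        group.sort(key=lambda rt: abs(rt[0]))
        merged = []
        for rho, theta in group:
            if not merged or abs(abs(rho) - abs(merged[-1][0])) > rho_tol:
                merged.append((rho, theta))
        return merged

    # Intersect each horizontal line with each vertical line by solving
    # x*cos(theta) + y*sin(theta) = rho for both lines simultaneously.
    corners = []
    for rho_h, th_h in merge(horizontal):
        for rho_v, th_v in merge(vertical):
            A = np.array([[math.cos(th_h), math.sin(th_h)],
                          [math.cos(th_v), math.sin(th_v)]])
            x, y = np.linalg.solve(A, np.array([rho_h, rho_v]))
            corners.append((int(round(x)), int(round(y))))
    return corners

With the lines array from cv2.HoughLines above, corners = filter_and_intersect(lines) should give candidate coordinates for the square corners; angle_tol and rho_tol are hypothetical tuning parameters you would adjust for your image.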

Opencv fitellipse draws the wrong contour

I want to draw the ellipse contour around the figure appended below. I am not getting the correct result, since the figure consists of two lines.
I have tried the following:
Read the image
Convert the BGR to HSV
Define the range of the color blue
Create the inRange mask to capture values between the lower and upper blue
Find the contours & draw the fitted ellipse
Here is the source code:
import cv2
import numpy as np

image = cv2.imread('./source image.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower_blue = np.array([75, 0, 0])
upper_blue = np.array([105, 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)
res = cv2.bitwise_and(image, image, mask=mask)
# Note: the original code passed an undefined variable `close` here;
# the thresholded mask is what should be searched for contours.
_, contours, _ = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(max(contours, key=cv2.contourArea))
cv2.ellipse(image, ellipse, (0, 255, 0), 2)
cv2.imshow('mask', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The figures below show the expected & actual output, the source image, and the output contour array (attachment links):
Expected & Actual display image
Source Image
Contour file
I tried to run your code in C++ and added erosion, dilation, and convexHull on the resulting contours:
auto DetectEllipse = [](cv::Mat rgbImg, cv::Mat hsvImg, cv::Scalar fromColor, cv::Scalar toColor)
{
    cv::Mat threshImg;
    cv::inRange(hsvImg, fromColor, toColor, threshImg);
    cv::erode(threshImg, threshImg, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 2);
    cv::dilate(threshImg, threshImg, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 2);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(threshImg, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    int areaThreshold = (rgbImg.cols * rgbImg.rows) / 100;
    std::vector<cv::Point> allContours;
    allContours.reserve(10 * areaThreshold);
    for (size_t i = 0; i < contours.size(); i++)
    {
        if (contours[i].size() > 4)
        {
            auto area = cv::contourArea(contours[i]);
            if (area > areaThreshold)
            {
                allContours.insert(allContours.end(), contours[i].begin(), contours[i].end());
            }
        }
    }
    if (allContours.size() > 4)
    {
        std::vector<cv::Point> hull;
        cv::convexHull(allContours, hull, false);
        cv::ellipse(rgbImg, cv::fitEllipse(hull), cv::Scalar(255, 0, 255), 2);
    }
};
cv::Mat rgbImg = cv::imread("h8gx3.jpg", cv::IMREAD_COLOR);
cv::Mat hsvImg;
cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);
DetectEllipse(rgbImg, hsvImg, cv::Scalar(75, 0, 0), cv::Scalar(105, 255, 255));
DetectEllipse(rgbImg, hsvImg, cv::Scalar(10, 100, 20), cv::Scalar(25, 255, 255));
cv::imshow("rgbImg", rgbImg);
cv::waitKey(0);
Result looks correct:
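For reference, since the question used Python, here is a rough Python port of the same idea (my translation, assuming OpenCV 4.x, not the answerer's code):
import cv2
import numpy as np

def detect_ellipse(rgb_img, hsv_img, from_color, to_color):
    # Threshold in HSV, then clean up with erosion and dilation.
    thresh = cv2.inRange(hsv_img, from_color, to_color)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    thresh = cv2.erode(thresh, kernel, iterations=2)
    thresh = cv2.dilate(thresh, kernel, iterations=2)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    # Pool the points of all sufficiently large contours, then fit the
    # ellipse to their convex hull.
    area_threshold = rgb_img.shape[0] * rgb_img.shape[1] / 100
    big = [c for c in contours
           if len(c) > 4 and cv2.contourArea(c) > area_threshold]
    if big:
        all_points = np.vstack(big)
        if len(all_points) > 4:
            hull = cv2.convexHull(all_points)
            cv2.ellipse(rgb_img, cv2.fitEllipse(hull), (255, 0, 255), 2)

rgb_img = cv2.imread('h8gx3.jpg', cv2.IMREAD_COLOR)
hsv_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2HSV)
detect_ellipse(rgb_img, hsv_img, (75, 0, 0), (105, 255, 255))
detect_ellipse(rgb_img, hsv_img, (10, 100, 20), (25, 255, 255))
cv2.imshow('rgbImg', rgb_img)
cv2.waitKey(0)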

Draw border around image using pixi.js

In JS we can do this:
var ctx = canvas.getContext('2d'),
img = new Image;
img.onload = draw;
img.src = "http://i.stack.imgur.com/UFBxY.png";
function draw() {
var dArr = [-1,-1, 0,-1, 1,-1, -1,0, 1,0, -1,1, 0,1, 1,1], // offset array
s = 2, // thickness scale
i = 0, // iterator
x = 5, // final position
y = 5;
// draw images at offsets from the array scaled by s
for(; i < dArr.length; i += 2)
ctx.drawImage(img, x + dArr[i]*s, y + dArr[i+1]*s);
// fill with color
ctx.globalCompositeOperation = "source-in";
ctx.fillStyle = "red";
ctx.fillRect(0,0,canvas.width, canvas.height);
// draw original image in normal mode
ctx.globalCompositeOperation = "source-over";
ctx.drawImage(img, x, y);
}
And it will draw a border around your image (a PNG, so it's not a rectangular border).
How to do this in Pixi.js? Pixi.js doesn't seem to handle the transparent parts as well.
