Why does my modified Pol01, running in batch mode, work for all polarized photon states except left circularly polarized photons? - geant4

I am attempting to use the Pol01 example to examine the angular distribution of initially polarized photons Compton scattering off G4_Si.
I have had to include several lines of code in SteppingAction.cc to print out information such as the Compton scattering angle and the photon's final kinetic energy.
The simulation seems to work well for horizontally and vertically polarized photons, photons linearly polarized at +/-45 degrees, and right circularly polarized (RCP) photons. But when I try to run the simulation for left circularly polarized (LCP) photons, I get a segmentation fault part way through the printing of the information to my terminal window.
In the macro, this is what I found:
### RCP (works)
# /gun/polarization 0. 0. 1.
### LCP ( Segmentation fault )
/gun/polarization 0. 0. -1.
### vertical ( works )
# /gun/polarization -1. 0. 0.
### horizontal ( works )
# /gun/polarization 1. 0. 0.
### LP ( 45 degrees ) (works)
# /gun/polarization 0. 1. 0.
### LP ( -45 degrees ) ( works )
# /gun/polarization 0. -1. 0.
/gun/particle gamma
#
/gun/energy 511 keV
However, above this code there is the following instruction:
/polarization/volume/set theBox 0. 0. 0.08
If I change the 0.08 to 0., I do not get the segmentation fault. I would like help figuring out why only LCP photons cause the segmentation fault.
FYI, the only extra code I added to Pol01 (in SteppingAction.cc) is the following:
// Inside this method, access the track from the step object:
G4Track* aTrack = aStep->GetTrack();
// Then get the kinetic energy of the track if the process is
// Compton scattering and the volume remains the same in the next step.
// See the code lines below:
G4StepPoint* preStepPoint = aStep->GetPreStepPoint();
const G4VProcess* CurrentProcess = preStepPoint->GetProcessDefinedStep();
if (CurrentProcess != 0)
{
  // The next line needs #include "G4VProcess.hh"
  // (see SteppingAction.cc in TestEm6)
  const G4String& StepProcessName = CurrentProcess->GetProcessName();
  // Get the track ID for the current process
  G4int TrackID = aStep->GetTrack()->GetTrackID();
  if ((StepProcessName == "pol-compt") && (TrackID == 1))
  {
    // Process the hit when entering the volume
    G4double kineticEnergy = aStep->GetTrack()->GetKineticEnergy();
    auto EkinUnits = G4BestUnit(kineticEnergy, "Energy");
    // Get the energy in units of keV
    G4double En = kineticEnergy/keV;
    // Determine the scattering angle in radians (valid for 511 keV primaries)
    G4double thetaRad = std::acos(2. - 511./En);
    // Reject Compton scattering at 0 rad
    // (this avoids nan results when dividing through by 2*pi*sin(theta))
    if ((thetaRad > 0) && (thetaRad < pi))
    {
      // fill histogram id = 13 (angular spread (in radians) as a function of scattering angle)
      // analysisManager->FillH1(13, thetaRad);
      // Get the name of the particle that has undergone the Compton scattering
      G4String ParticleName = aStep->GetTrack()->GetParticleDefinition()->GetParticleName();
      // Find position (c.f. /examples/extended/medical/GammaTherapy, CheckVolumeSD.cc)
      // G4ThreeVector pos = aStep->GetPreStepPoint()->GetPosition();
      // G4double z = pos.z();
      // Check which volume the track is entering
      G4String fromVolume = aTrack->GetNextVolume()->GetName();
      // see https://geant4-forum.web.cern.ch/t/get-particle-info-in-steppingaction-if-particle-pass-through-specific-volume/2863/2
      // for a possible way of finding which volume the gamma is in
      // G4String SDname = aStep->GetPostStepPoint()->GetSensitiveDetector()->GetName();
      // FOR CHECKING
      std::cout << "fromVolume: " << fromVolume
                << " Particle: " << ParticleName
                << " Process: " << StepProcessName
                << " Energy: " << EkinUnits
                << " Angle: " << thetaRad/degree << " deg." << std::endl;
    }
  }
}
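For reference, the angle reconstruction in the snippet follows from Compton kinematics: cos(theta) = 1 - m_e c^2 (1/E' - 1/E0), and with an incident energy E0 = m_e c^2 = 511 keV this reduces to cos(theta) = 2 - 511 keV / E', which is the expression inside the std::acos call (so it is only valid for 511 keV primaries, as set in the macro above).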
Thank you for your time.
Peter

Related

Addressing a LUT with the IEEE-754 format

When a LUT is used, it is common to obtain its address via bitwise operations on the bits of a
fixed-point number. An example:
// The LUT has 8 addresses: from 0 to 7 with a step of 1.
// The input number, x, is in u4.8 format
// Input number is 1.748 --> Fixed representation is 447: 0001.10111111
address = bits[1:4] + bits[4] // := 2; Returned value: 2
// Input number is 4.69 --> Fixed representation is 1201: 0100.10110001
address = bits[1:4] + bits[4] // := 5; Returned value: 5
// Input number is 7.22 --> Fixed representation is 1848: 0111.00111000
address = bits[1:4] + bits[4] // := 7; Returned value: 7
OK, now suppose that the LUT has 16 stored values: from 0 to 7.5 with a step of 0.5. An example:
// The LUT has 16 addresses: from 0 to 7.5 with a step of 0.5.
// The input number, x, is in u4.8 format
// Input number is 1.748 --> Fixed representation is 447: 0001.10111111
address = bits[1:5] + bits[5] // := 3; Returned value: 1.5
// Input number is 4.69 --> Fixed representation is 1201: 0100.10110001
address = bits[1:5] + bits[5] // := 9; Returned value: 4.5
// Input number is 7.22 --> Fixed representation is 1848: 0111.00111000
address = bits[1:5] + bits[5] // := 14; Returned value: 7
The example is only to illustrate that the goal is to obtain the address corresponding to the stored value nearest the input, based on the step. I achieved this in fixed point with a matching probability greater than 99 % in all the tests; a sketch of the fixed-point version is shown below.
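For reference, here is a minimal C++ sketch of the fixed-point addressing described above (my own code, not the original implementation), for a u4.8 input and a LUT step of 0.5 as in the second example:

#include <cstdint>

// u4.8 input (8 fractional bits), LUT step of 0.5 (keep 1 fractional bit):
// drop the bits below the step and add the first discarded bit to round
// to the nearest stored value.
uint32_t fixed_point_address(uint32_t x_u4_8)
{
    uint32_t truncated = x_u4_8 >> 7;        // integer part + 1 fractional bit
    uint32_t round_bit = (x_u4_8 >> 6) & 1;  // first discarded bit
    return truncated + round_bit;            // e.g. 1201 (4.69) -> 9 -> 4.5
}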
But the question is: how to implement it in fp32 (IEEE-754)? Because an fp32 representation doesn't have separate integer and fractional parts, I have no idea how to implement it.
EDIT: FIRST APPROACH
As #njuffa remarked in the comments, the way to address a LUT with the IEEE-754 standard is to use the MSBs of the mantissa. These bits contain the address, and it is necessary to take a specific range of bits, always less than or equal to the address length. I have calculated the number of bits needed, taking the exponent bits into consideration. E.g. if the LUT has a step of 1/256, the way I have resolved the address is:
// To normalize the exponent
exponent = exponent - 127;
// msb indicates the number of bits to take from the mantissa
// (these bits are the MSBs). I have checked this for large
// LUTs (2^18 stored values) and it always works well :)
// The step is 1/256, i.e. step = 256: np.log2(step) = 8 --> the
// number of bits in the step!
msb = int(np.log2(step) - exp)
// And finally, get the bits from the mantissa
address = mantissa[31:msb]
Finally, an addition is needed if rounding to the nearest is required; but if table interpolation is used, rounding to the nearest is not necessary.
I have noticed that when the input value is close to zero, the address is sometimes incorrect; it is always off by one with respect to the reference test.
This way of resolving the address is only correct if the step of the LUT is a power of 2. For example, if the step of the LUT were pi/(4*512) with a range from 0 to pi/4, the total number of values stored in the LUT would be 512, but with the step scaled by pi, so as far as I know a division would be necessary.
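To make the bit manipulation concrete, here is a small C++ sketch of the same idea (my own code and helper name; it assumes a normalized, non-negative input and a power-of-two step):

#include <cstdint>
#include <cstring>

// Address = floor(x * 2^log2_step), taken directly from the IEEE-754 bits.
// log2_step = 8 corresponds to a LUT step of 1/256.
uint32_t ieee754_address(float x, int log2_step)
{
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);            // raw IEEE-754 bits
    int e = int((bits >> 23) & 0xFFu) - 127;        // unbiased exponent
    uint32_t mnt = (bits & 0x7FFFFFu) | 0x800000u;  // mantissa with hidden bit
    int shift = 23 - (log2_step + e);               // bits to discard
    if (shift >= 24) return 0;                      // x is smaller than one step
    if (shift < 0)  return mnt << (-shift);         // address needs more bits than the mantissa holds
    return mnt >> shift;
}

For example, ieee754_address(1.748f, 8) gives 447, matching floor(1.748 * 256); a rounding add of the first discarded mantissa bit could be bolted on in the same way as in the fixed-point case.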
This is the test that I have performed to verify that the address is correct.
import numpy as np
import struct

# if MODE == 0: No nearest rounding for the address
# if MODE == 1: Nearest rounding for the address
MODE = 0
step = 256       # Step size in the LUT (entries per unit)
max_value = 4    # Only for test purposes
LUT = np.arange(0, max_value, 1/step)

# Reference test
def ref_addressing(x):
    if MODE == 0:
        ref_address = (np.floor(x * step)).astype(np.int32)
    elif MODE == 1:
        ref_address = (np.round(x * step)).astype(np.int32)
    return ref_address

# Test
def test_addressing(x):
    j = 0
    test_address = np.zeros(len(x), dtype=np.int32)
    for x_ in x:
        test_address[j] = ieee754_address(x_)
        j = j + 1
    return test_address

# Convert a value to the IEEE-754 standard
def float_to_bin(num):
    bits, = struct.unpack('!I', struct.pack('!f', num))
    return "{:032b}".format(bits)

# Resolves the address from the MSBs of the mantissa
def ieee754_address(x):
    ieee754 = float_to_bin(x)          # Get the number in the IEEE-754 standard
    exp = 127 - int(ieee754[1:9], 2)   # Calculate the exponent
    mnt = ieee754[9::]                 # Get the mantissa
    # How many bits are needed for successful addressing?
    # np.log2(step): maximum number of bits in the step size.
    # Obviously, these values are fixed in the hardware
    # implementation. Only for testing.
    msb = int(np.log2(step) - exp)
    # Get the address. Don't forget the hidden bit!
    address = int('1' + mnt[0:msb], 2)
    # If rounding to the nearest, MODE == 1
    if MODE == 1:
        # int(mnt[msb], 2) --> Rounding bit
        address = address + int(mnt[msb], 2)
    # Negative addresses if exp > int(np.log2(step))
    if exp > int(np.log2(step)):
        address = 0
    return address

# Uniform random values to check the whole range
r = np.random.uniform(0.0, max_value, 65536)
# Perform the tests
ref_address = ref_addressing(r)
test_address = test_addressing(r)
# Statistics and report
diffs = ref_address - test_address
errors = len(np.where(diffs != 0)[0])
p = (1 - (errors / len(r))) * 100
print("-----------------------------------")
print("Total test's samples : %i " % (len(r)))
print("-----------------------------------")
print("Equal addrressing : %r " % (np.array_equal(ref_address, test_address)))
print("Errors : %i " % errors)
print("Probability matching : %0.3f %% " % p)
print("-----------------------------------")
# Uniform random values to check all the range
r = np.random.uniform(0.0, max_value, 65536)
# Perform the tests
ref_address = ref_addressing(r)
test_address = test_addressing(r)
# Statistics and report
diffs = ref_address - test_address
errors = len(np.where(diffs != 0)[0])
p = (1 - (errors / len(r))) * 100
print("-----------------------------------")
print("Total test's samples : %i " % (len(r)))
print("-----------------------------------")
print("Equal addrressing : %r " % (np.array_equal(ref_address, test_address)))
print("Errors : %i " % errors)
print("Probability matching : %0.3f %% " % p)
print("-----------------------------------")
And a sample of one test (it shows False when one or more addresses are not right):
-----------------------------------
Total test's samples : 65536
-----------------------------------
Equal addrressing : False
Errors : 2
Probability matching : 99.997 %
-----------------------------------

3D Object Oriented Bounding Box using PCA

I am trying to compute the object-oriented bounding box for a set of points. I'm using C++ and the Eigen linear algebra library.
I have been using two blog posts as guides, yet my bounding boxes are still incorrect (see images).
blog post 1
blog post 2
I hope my commented code makes my attempt clear, but the gist of the algorithm is to use PCA to find the basis vectors of the object-oriented coordinate frame, project all points into the new frame, find the min and max points that define the box, then project those points back into the original coordinate frame and render them.
I can successfully render a box, but it isn't a bounding box and appears to be aligned to the normal x, y, z axes. This is clear in the first image for each of the two objects shown.
Any help would be really appreciated. Thanks in advance.
// iglVertices is an X by 3 Eigen::MatrixXf
// Covariance matrix and eigen decomposition
Eigen::MatrixXf centered = iglVertices.rowwise() - iglVertices.colwise().mean();
Eigen::MatrixXf cov = centered.adjoint() * centered;
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> eig(cov);
//Set up a homogeneous transformation to act as the basis for the new coordinate frame
auto basis = Eigen::Matrix4f(eig.eigenvectors().colwise().homogeneous().rowwise().homogeneous());
basis.row(3) = Eigen::Vector4f::Zero();
basis.col(3) = Eigen::Vector4f::Zero();
basis(3,3) = 1.0f;
std::cout << "eig.eigenvectors() " << eig.eigenvectors() << std::endl;
std::cout << "Basis " << basis << std::endl;
//Invert the matrix and transform the points into the new coordinate frame
auto invBasis = basis.inverse();
auto newVertices = invBasis * iglVertices.rowwise().homogeneous().transpose();
//Find max and min for all of the new axis
auto maxP = newVertices.rowwise().maxCoeff();
auto minP = newVertices.rowwise().minCoeff();
std::cout << "max " << maxP << std::endl;
std::cout << "min " << minP << std::endl;
//Find center and half extent in new coordinate frame
auto center = Eigen::Vector4f((maxP + minP) / 2.0);
auto half_extent = Eigen::Vector4f((maxP - minP) / 2.0);
auto t = Eigen::Vector4f((basis * center));
std::cout << "t " << t << std::endl;
//Update basis function with the translation between two coordinate origins
//I don't actually understand why I need this and have tried without it but still my bounding
//box is wrong
basis.col(3) = Eigen::Vector4f(t[0], t[1], t[2], t[3]);
std::cout << "Basis complete " << basis << std::endl;
std::cout << "center " << center << std::endl;
std::cout << "half_extent " << half_extent << std::endl;
//This is the same as the previous minP/maxP, but I thought I should try this since the
//box is parameterised with center and half-extent
auto max = center + half_extent;
auto min = center - half_extent;
//Transform back into the original coordinates
auto minNormalBasis = (basis * min).hnormalized();
auto maxNormalBasis = (basis * max).hnormalized();
std::cout << "min new coord" << min << std::endl;
std::cout << "max new coord"<< max << std::endl;
std::cout << "min old coord" << minNormalBasis << std::endl;
std::cout << "max old coord"<< maxNormalBasis << std::endl;
//Extract min and max
auto min_x = minNormalBasis[0];
auto min_y = minNormalBasis[1];
auto min_z = minNormalBasis[2];
auto max_x = maxNormalBasis[0];
auto max_y = maxNormalBasis[1];
auto max_z = maxNormalBasis[2];
bBox.clear();
//Build box for rendering
//Ordering specific to the faces I have manually generated
bBox.push_back(trimesh::point(min_x, min_y, min_z));
bBox.push_back(trimesh::point(min_x, max_y, min_z));
bBox.push_back(trimesh::point(min_x, min_y, max_z));
bBox.push_back(trimesh::point(min_x, max_y, max_z));
bBox.push_back(trimesh::point(max_x, min_y, max_z));
bBox.push_back(trimesh::point(max_x, max_y, max_z));
bBox.push_back(trimesh::point(max_x, min_y, min_z));
bBox.push_back(trimesh::point(max_x, max_y, min_z));
The print output for the spray bottle example is
eig.eigenvectors() 0 -0.999992 -0.00411613
-0.707107 -0.00291054 0.707101
0.707107 -0.00291054 0.707101
Basis 0 -0.999992 -0.00411613 0
-0.707107 -0.00291054 0.707101 0
0.707107 -0.00291054 0.707101 0
0 0 0 1
max 2.98023e-08
0.216833
0.582629
1
min -2.98023e-08
-0.215
-0.832446
1
t -0.000402254
-0.0883253
-0.0883253
1
Basis complete 0 -0.999992 -0.00411613 -0.000402254
-0.707107 -0.00291054 0.707101 -0.0883253
0.707107 -0.00291054 0.707101 -0.0883253
0 0 0 1
center 0
0.000916399
-0.124908
1
half_extent 2.98023e-08
0.215916
0.707537
0
min new coord-2.98023e-08
-0.215
-0.832446
1
max new coord2.98023e-08
0.216833
0.582629
1
min old coord 0.218022
-0.676322
-0.676322
max old coord-0.219631
0.323021
0.323021
You have to compute the 8 corners of an axis aligned box within the PCA frame, and then apply the rotation to them:
bBox.push_back(eig.eigenvectors() * Vector3f(minP.x(), minP.y(), minP.z()));
bBox.push_back(eig.eigenvectors() * Vector3f(minP.x(), maxP.y(), minP.z()));
bBox.push_back(eig.eigenvectors() * Vector3f(minP.x(), minP.y(), maxP.z()));
bBox.push_back(eig.eigenvectors() * Vector3f(minP.x(), maxP.y(), maxP.z()));
...
and you can also directly compute newVertices as:
Matrix<float,3,Dynamic> newVertices = eig.eigenvectors().transpose() * iglVertices.transpose();
After these changes, your code will be reduced by half ;)
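For illustration, a condensed, self-contained version of the whole procedure might look like this (my own naming; it assumes iglVertices holds one point per row and reuses the trimesh::point/bBox types from the question, and the corner ordering here is generic rather than the specific ordering your manually generated faces expect):

// Covariance and eigenvectors (columns of R are the box axes)
Eigen::MatrixXf centered = iglVertices.rowwise() - iglVertices.colwise().mean();
Eigen::Matrix3f cov = centered.adjoint() * centered;
Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> eig(cov);
Eigen::Matrix3f R = eig.eigenvectors();

// Express every point in the PCA frame and take the per-axis extremes
Eigen::MatrixXf projected = iglVertices * R;               // N x 3
Eigen::RowVector3f minP = projected.colwise().minCoeff();
Eigen::RowVector3f maxP = projected.colwise().maxCoeff();

// Build the 8 corners in the PCA frame and rotate them back
bBox.clear();
for (int i = 0; i < 8; ++i) {
    Eigen::Vector3f corner((i & 1) ? maxP.x() : minP.x(),
                           (i & 2) ? maxP.y() : minP.y(),
                           (i & 4) ? maxP.z() : minP.z());
    Eigen::Vector3f p = R * corner;                        // back to the world frame
    bBox.push_back(trimesh::point(p.x(), p.y(), p.z()));
}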
And more importantly, please avoid the use of the auto keyword unless you know what you are doing. In your example, most of its usage is very bad practice, not to say wrong. Please read this page.

Taking mean of images for background subtraction - incorrect results

When I try to sum up the N previous frames stored in a list and then divide by the number of frames, the background model produced is not as expected. I can tell because I tried the algorithm earlier in Matlab on the same video.
class TemporalMeanFilter {
private:
    int learningCount;
    list<Mat> history;
    int height, width;

    Mat buildModel(){
        if(history.size() == 0)
            return Mat();

        Mat image_avg(height, width, CV_8U, Scalar(0));
        double alpha = (1.0/history.size());

        list<Mat>::iterator it = history.begin();
        cout << "History size: " << history.size() << " Weight per cell: " << alpha << endl;

        while(it != history.end()){
            image_avg += (*it * alpha);
            it++;
        }

        return image_avg;
    }

public:
    TemporalMeanFilter(int height, int width, int learningCount){
        this->learningCount = learningCount;
        this->height = height;
        this->width = width;
    }

    void applyFrameDifference(Mat& frame_diff, Mat& bg_model, Mat& input_frame){
        if(history.size() == learningCount)
            history.pop_front();
        history.push_back(input_frame);

        bg_model = buildModel();
        frame_diff = bg_model - input_frame;
    }
};
//The main looks like this
// ... reading video from file
TemporalMeanFilter meanFilter(height, width, 50); //background subtraction algorithm
meanFilter.applyFrameDifference(diff_frame, bg_model, curr_frame);
//... displaying on screen ... prog ends
Image:
http://imagegur.com/images/untitled.png
The left one is the bg_model, the middle is the curr_frame, and the right one is the output.
Maybe it's because of the rounding off done on CV_8U? I tried changing to CV_32FC1, but then the program just crashed because for some reason it couldn't add two CV_32FC1 matrices.
Any insight would be greatly appreciated. Thanks!
More Info:
Inside the class, I now keep the average in a CV_16UC1 Mat to prevent clipping; however, it results in an error after successive additions.
The add function / operator+ both change the type of the result from CV_16UC1 to CV_8UC1, and that is what causes the error below. Any suggestion on how to make it preserve the original datatype? (PS: I asked politely... didn't work)
background_model += *it;
OpenCV Error: Bad argument (When the input arrays in add/subtract/multiply/divid
e functions have different types, the output array type must be explicitly speci
fied) in unknown function, file C:\buildslave64\win64_amdocl\2_4_PackSlave-win32
-vc11-shared\opencv\modules\core\src\arithm.cpp, line 1313
You're right that it's almost certainly the rounding errors you get by accumulating scaled greyscale values. There's no reason why it should crash using floating point pixels though, so you should try something like this:
Mat buildModel()
{
    if (history.size() == 0)
        return Mat();

    Mat image_avg = Mat::zeros(height, width, CV_32FC1);
    double alpha = (1.0 / history.size());

    list<Mat>::iterator it = history.begin();
    while (it != history.end())
    {
        Mat image_temp;
        (*it).convertTo(image_temp, CV_32FC1);
        image_avg += image_temp;
        it++;
    }

    image_avg *= alpha;
    return image_avg;
}
Depending on what you want to do with the result, you may need to normalize it or rescale it or convert back to greyscale before display etc.
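For instance, if you keep the model in CV_32FC1 as above, one way to get something displayable again is to convert it back to 8-bit before showing it (a minimal sketch, assuming the averaged values are already in the 0-255 range):

Mat display;
bg_model.convertTo(display, CV_8U);   // round the float average back to 8-bit
imshow("background model", display);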

Precision issues with segment-segment intersection code

I am debugging an algorithm I wrote that converts a complex, self-intersecting polygon into a simple polygon by means of identifying all intersection points, and walking around the outside perimeter.
I wrote a series of random data generating stress tests, and in this one, I encountered an interesting situation that caused my algorithm to fail, after working correctly thousands upon thousands of times.
double fRand(double fMin, double fMax)
{
    double f = (double)rand() / RAND_MAX; // On my machine RAND_MAX = 2147483647
    return fMin + f * (fMax - fMin);
}

// ... testing code below:
srand(1);
for (int j = 3; j < 5000; j += 5) {
    std::vector<E_Point> geometry;
    for (int i = 0; i < j; i++) {
        double radius = fRand(0.6, 1.0);
        double angle = fRand(0, 2*3.1415926535);
        E_Point pt(cos(angle), sin(angle));
        pt *= radius;
        geometry.push_back(pt); // sending in a pile of shit
    }
    // run algorithm on this geometry
}
That was an overview of how this is relevant and how I got to where I am now. There are a lot more details I'm leaving out.
What I have been able to do is narrow the issue down to the segment-segment intersection code I am using:
bool intersect(const E_Point& a0, const E_Point& a1,
               const E_Point& b0, const E_Point& b1,
               E_Point& intersectionPoint) {
    if (a0 == b0 || a0 == b1 || a1 == b0 || a1 == b1) return false;

    double x1 = a0.x; double y1 = a0.y;
    double x2 = a1.x; double y2 = a1.y;
    double x3 = b0.x; double y3 = b0.y;
    double x4 = b1.x; double y4 = b1.y;

    //AABB early exit
    if (b2Max(x1,x2) < b2Min(x3,x4) || b2Max(x3,x4) < b2Min(x1,x2)) return false;
    if (b2Max(y1,y2) < b2Min(y3,y4) || b2Max(y3,y4) < b2Min(y1,y2)) return false;

    float ua = ((x4 - x3) * (y1 - y3) - (y4 - y3) * (x1 - x3));
    float ub = ((x2 - x1) * (y1 - y3) - (y2 - y1) * (x1 - x3));
    float denom = (y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1);

    // check against epsilon (lowest normalized double value)
    if (fabs(denom) < DBL_EPSILON) {
        //Lines are too close to parallel to call
        return false;
    }

    ua /= denom;
    ub /= denom;

    if ((0 < ua) && (ua < 1) && (0 < ub) && (ub < 1)) {
        intersectionPoint.x = (x1 + ua * (x2 - x1));
        intersectionPoint.y = (y1 + ua * (y2 - y1));
        return true;
    }
    return false;
}
What's happening is that I have two intersections that this function returns the exact same intersection point value for. Here is a very zoomed in view of the relevant geometry:
The vertical line is defined by the points
(0.3871953044519425, -0.91857980824611341), (0.36139704793723609, 0.91605957361605106)
the green nearly-horizontal line by the points (0.8208980020500205, 0.52853407296583088), (0.36178501611208552, 0.88880385168617226)
and the white line by (0.36178501611208552, 0.88880385168617226), (-0.43211245441046209, 0.68034202227710472)
As you can see the last two lines do share a common point.
My function gives me a solution of (0.36178033094571277, 0.88880245640159794)
for BOTH of these intersections (one of which you see in the picture as a red dot).
The reason why this is a big problem, is that my perimeter algorithm depends on sorting the intersection points on each edge. Since both of these intersection points were calculated to have the same exact value, the sort put them in the wrong orientation. The path of the perimeter comes from above, and followed the white line left, rather than the green line left, meaning that I was no longer following the outside perimeter of my polygon shape.
To fix this problem, there are probably a lot of things I can do but I do not want to search through all of the intersection lists to check against other points to see if the positions compare equal. A better solution would be to try to increase the accuracy of my intersection function.
So what I'm asking is, why is the solution point so inaccurate? Is it because one of the lines is almost vertical? Should I be performing some kind of transformation first? In both cases the almost-vertical line was passed as a0 and a1.
Update: Hey, look at this:
TEST(intersection_precision_test) {
    E_Point problem[] = {
        {0.3871953044519425, -0.91857980824611341},  // 1559
        {0.36139704793723609, 0.91605957361605106},  // 1560
        {-0.8208980020500205, 0.52853407296583088},  // 1798
        {0.36178501611208552, 0.88880385168617226},  // 1799
        {-0.43211245441046209, 0.6803420222771047}   // 1800
    };
    std::cout.precision(16);
    E_Point result;

    intersect(problem[0], problem[1], problem[2], problem[3], result);
    std::cout << "1: " << result << std::endl;
    intersect(problem[0], problem[1], problem[3], problem[2], result);
    std::cout << "2: " << result << std::endl;
    intersect(problem[1], problem[0], problem[2], problem[3], result);
    std::cout << "3: " << result << std::endl;
    intersect(problem[1], problem[0], problem[3], problem[2], result);
    std::cout << "4: " << result << std::endl;
    intersect(problem[2], problem[3], problem[0], problem[1], result);
    std::cout << "rev: " << result << std::endl;
    intersect(problem[3], problem[2], problem[0], problem[1], result);
    std::cout << "revf1: " << result << std::endl;
    intersect(problem[2], problem[3], problem[1], problem[0], result);
    std::cout << "revf2: " << result << std::endl;
    intersect(problem[3], problem[2], problem[1], problem[0], result);
    std::cout << "revfboth: " << result << std::endl;
}
Output:
Starting Test intersection_precision_test, at Polygon.cpp:1830
1: <0.3617803309457128,0.8888024564015979>
2: <0.3617803309457128,0.8888024564015979>
3: <0.3617803314022162,0.8888024239374175>
4: <0.3617803314022162,0.8888024239374175>
rev: <0.3617803635476076,0.8888024344185281>
revf1: <0.3617803313928456,0.8888024246235207>
revf2: <0.3617803635476076,0.8888024344185281>
revfboth: <0.3617803313928456,0.8888024246235207>
Am I actually running out of mantissa bits or can I do significantly better with a smarter algorithm?
The problem here is that there's no easy way for me to determine when a vertex is set really really close to another line. I wouldn't mind moving it or even completely nuking it because neither of those will screw up my sort!
If you change your float intermediates (ua, ub, denom) to doubles and print the ua values (after division), you'll get these:
0x1.f864ab6b36458p-1 in the first case
0x1.f864af01f2037p-1 in the second case
I've printed them in hex to make it easy to look at the bits. These two values agree in the first 22 bits (1.f864a plus the high bit of b and f). A float only has 23 bits of significand! It's no surprise that if you compute your intermediates in floats, they get rounded to the same answer.
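(If you want to reproduce this, the hexadecimal form shown above is what the %a conversion of printf produces, e.g. printf("%a\n", ua); once ua is held in a double.)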
In this case, you can perhaps work around the problem by computing your intermediates using doubles instead of floats. (I made my point struct use doubles for x and y. If you're using floats, I don't know if computing the intermediates in doubles will help.)
However, a case where the vertical segment passes even closer to the intersection of the two horizontal segments might still require even more precision than a double can provide.
What would you do if the vertical segment passed exactly through the shared endpoint of the horizontal segments? I assume you handle this case correctly.
If you look at the ub values (after division), computed with doubles, you get these:
0.9999960389052315
5.904388076838819e-06
This means that the intersection is very, very close to the shared endpoint of the horizontal segments.
So here's what I think you can do. Each time you compute an intersection, look at ub. If it's sufficiently close to 1, move the endpoint to be the intersection point. Then treat the intersection as if it had been an exact pass-through case in the first place. You don't really have to change the data, but you have to treat the endpoint as moved for both of the segments that share the endpoint, which means the current intersection test and the test on the next line segment.
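A minimal sketch of that post-processing step (the helper name and tolerance are mine, and it assumes intersect is changed to hand ub back to the caller):

// After intersect() succeeds: if the hit is essentially at an endpoint of
// segment b, reuse that endpoint so later sorting sees identical coordinates.
// The tolerance is a guess and should be tuned for your data.
bool snapToSharedEndpoint(double ub, const E_Point& b0, const E_Point& b1,
                          E_Point& intersectionPoint, double tol = 1e-9)
{
    if (ub > 1.0 - tol) { intersectionPoint = b1; return true; }  // passes through b1
    if (ub < tol)       { intersectionPoint = b0; return true; }  // passes through b0
    return false;  // keep the computed intersection point
}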

changing pixel values through direct access + opencv

Is there any problem with this code, where I am just trying to subtract the pixel values of two images through direct access of the pixels? I am assuming that the images are of the same height and width. Whenever I run the program I get a completely black picture.
IplImage* img3 = cvCreateImage(cvSize(img1->height, img1->width), IPL_DEPTH_32F, 3);
// img2 and img1 both are IplImage pointers
cvZero(img3);
long value;
for (int row = 0; row < img2->height * img2->width; row++) {
    value = &((uchar*)(img1->imageData))[row] - &((uchar*)(img2->imageData))[row];
    img3->imageData[row] = value;
}
1) img2->height * img2->width should be calculated as a constant before the loop.
2) I don't understand this line:
&((uchar*)(img1->imageData))[row] - &((uchar*)(img2->imageData))[row] - are you subtracting a pointer from another pointer? Why?
value = img1->imageData[row] - img2->imageData[row]; should do the trick.
3) You cannot subtract RGB values by subtracting pixel values like this (if that is your goal).
4) If img3->imageData is a char*, then you should index it with row * 4 (img3 is IPL_DEPTH_32F, so each element is 4 bytes).
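If the goal is simply a per-pixel difference image, the least error-prone route is to let OpenCV do the subtraction; a minimal sketch using the same old C API as the question (note cvSub saturates instead of wrapping):

// Same size and depth as the inputs (8-bit, 3 channels), so no manual
// byte-offset arithmetic is needed.
IplImage* img3 = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 3);
cvSub(img1, img2, img3);   // img3 = img1 - img2, saturated per channel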
