I have a rotation matrix. How can I get the rotation around a specified axis contained within this matrix?
Edit:
It's a 3D matrix (4x4), and I want to know how far the matrix rotates around a predetermined (not contained) axis. I can already decompose the matrix, but D3DX will only give the entire matrix as one rotation around one axis, whereas I need to split the matrix into the angle of rotation around an already-known axis, plus the rest.
Sample code and brief problem description:
D3DXMATRIX CameraRotationMatrix;
D3DXVECTOR3 CameraPosition;
//D3DXVECTOR3 CameraRotation;

inline D3DXMATRIX GetRotationMatrix() {
    return CameraRotationMatrix;
}

inline void TranslateCamera(float x, float y, float z) {
    D3DXVECTOR3 rvec, vec(x, y, z);
#pragma warning(disable : 4238)
    D3DXVec3TransformNormal(&rvec, &vec, &GetRotationMatrix());
#pragma warning(default : 4238)
    CameraPosition += rvec;
    RecomputeVPMatrix();
}
inline void RotateCamera(float x, float y, float z) {
    D3DXVECTOR3 RotationRequested(x, y, z);
    D3DXVECTOR3 XAxis, YAxis, ZAxis;
    D3DXMATRIX rotationx, rotationy, rotationz;
    XAxis = D3DXVECTOR3(1, 0, 0);
    YAxis = D3DXVECTOR3(0, 1, 0);
    ZAxis = D3DXVECTOR3(0, 0, 1);
#pragma warning(disable : 4238)
    D3DXVec3TransformNormal(&XAxis, &XAxis, &GetRotationMatrix());
    D3DXVec3TransformNormal(&YAxis, &YAxis, &GetRotationMatrix());
    D3DXVec3TransformNormal(&ZAxis, &ZAxis, &GetRotationMatrix());
#pragma warning(default : 4238)
    D3DXMatrixIdentity(&rotationx);
    D3DXMatrixIdentity(&rotationy);
    D3DXMatrixIdentity(&rotationz);
    D3DXMatrixRotationAxis(&rotationx, &XAxis, RotationRequested.x);
    D3DXMatrixRotationAxis(&rotationy, &YAxis, RotationRequested.y);
    D3DXMatrixRotationAxis(&rotationz, &ZAxis, RotationRequested.z);
    CameraRotationMatrix *= rotationz;
    CameraRotationMatrix *= rotationy;
    CameraRotationMatrix *= rotationx;
    RecomputeVPMatrix();
}
inline void RecomputeVPMatrix() {
    D3DXMATRIX ProjectionMatrix;
    D3DXMatrixPerspectiveFovLH(
        &ProjectionMatrix,
        FoV,
        (float)D3DDeviceParameters.BackBufferWidth / (float)D3DDeviceParameters.BackBufferHeight,
        FarPlane,
        NearPlane
    );
    D3DXVECTOR3 CamLookAt;
    D3DXVECTOR3 CamUpVec;
#pragma warning(disable : 4238)
    D3DXVec3TransformNormal(&CamLookAt, &D3DXVECTOR3(1, 0, 0), &GetRotationMatrix());
    D3DXVec3TransformNormal(&CamUpVec, &D3DXVECTOR3(0, 1, 0), &GetRotationMatrix());
#pragma warning(default : 4238)
    D3DXMATRIX ViewMatrix;
#pragma warning(disable : 4238)
    D3DXMatrixLookAtLH(&ViewMatrix, &CameraPosition, &(CamLookAt + CameraPosition), &CamUpVec);
#pragma warning(default : 4238)
    ViewProjectionMatrix = ViewMatrix * ProjectionMatrix;
    D3DVIEWPORT9 vp = {
        0,
        0,
        D3DDeviceParameters.BackBufferWidth,
        D3DDeviceParameters.BackBufferHeight,
        0,
        1
    };
    D3DDev->SetViewport(&vp);
}
Effectively, after a certain time, when RotateCamera is called, the camera begins to rotate around the relative X axis, even though a constant zero is passed in for that component when responding to mouse input; so I know that when moving the mouse, the camera should not roll at all. I tried spamming 0,0,0 requests (one per frame at 1500 frames per second) and saw no change, so I'm fairly sure that I'm not seeing floating-point error or matrix accumulation error. I tried writing a RotateCameraYZ function and stripping all X-axis handling from it. I've spent several days trying to discover why this happens, and eventually decided to just hack around it.
So I want to get the rotation around the relative X axis, transform the CameraRotationMatrix, then check it afterwards to verify that the rotation is the same; if it isn't, apply a correcting matrix.
Just for reference, I've seen some diagrams on Wikipedia, and I actually have a relatively strange axis layout, which is Y axis up, but X axis forwards and Z axis right, so Y axis yaw, Z axis pitch, X axis roll.
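For the check described above, one way to split out the rotation about a known axis is a swing-twist decomposition of the matrix's quaternion form. Below is a minimal sketch assuming D3DX; GetTwistAngle is a hypothetical helper name, and the axis is assumed to be normalized:
float GetTwistAngle(const D3DXMATRIX& m, const D3DXVECTOR3& axis)
{
    // Convert the rotation matrix to a quaternion.
    D3DXQUATERNION q;
    D3DXQuaternionRotationMatrix(&q, &m);
    // Project the quaternion's vector part onto the axis; together with w,
    // the component along the axis encodes the "twist" about that axis.
    float d = q.x * axis.x + q.y * axis.y + q.z * axis.z;
    // (d, w) is proportional to (sin(theta/2), cos(theta/2)),
    // so the signed twist angle falls out of atan2 directly.
    return 2.0f * atan2f(d, q.w);
}
The corresponding twist quaternion is the normalized (axis * d, q.w); multiplying the original rotation by its inverse leaves the "swing", i.e. "the rest" mentioned above.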
I'm working on an audio visualisation that's basically supposed to be a circular spectrogram. I have a graph that shows the frequency already and an arc, that evolves based on the time passed. Now I would like to fill the arc with white points based on the amplitude of each frequency, much like here: https://vimeo.com/27135957. Apparently, I need to make a PGraphics that is filled with points, which change from white to black based on the amplitude. Then I need to texture the arc with this graphic. Does anyone know how to do this?
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;

Minim minim;
AudioPlayer song;
FFT fft;
PGraphics pg;
PShape arc;

float deg = 90;
float rad = radians(deg);

void setup()
{
  size(1000, 1000);
  minim = new Minim(this);
  song = minim.loadFile("Anthology.mp3");
  song.play();
  fft = new FFT(song.bufferSize(), song.sampleRate());
  pg = createGraphics(width, height);
}
void draw()
{
  background(0);
  fft.forward(song.mix);
  for (int i = 0; i < fft.specSize(); i++)
  {
    pushMatrix();
    stroke(255);
    line(i, height, i, height - fft.getBand(i)*0.5);
    popMatrix();
    println(fft.getBand(i));
    //Map amplitude to 0 → 255, fill with points and color them
    float brightness = map(fft.getBand(i), -1, 1, 0, 255);
    pg.beginDraw();
    pg.endDraw();
    fill(255, 255, 255);
    noStroke();
    float evolution = radians(map(song.position(), 0, song.length(), 90, 450));
    //texture(pg);
    arc(height/2, height/2, height-100, height-100, rad, evolution, PIE);
  }
}
There are a few concepts that appear unclear based on your code:
If you plan to render the arc within the pg PGraphics instance, access pg with dot notation and call drawing functions between beginDraw()/endDraw() calls. At the moment nothing is rendered in pg, and pg isn't rendered anywhere using image(). For more details see the createGraphics() reference; run the sample code, tweak it, break it, fix it, understand it.
Similarly, PShape arc is created but never used.
There is a commented attempt to use pg as a texture, but the texture mapping is unclear.
If using both PGraphics and PShape is confusing, you can achieve a similar effect with PGraphics alone: simply render some larger gray dots instead of arcs. It won't be an identical effect, but it will have a very similar look with less effort.
Here's a variant based on your code:
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;

Minim minim;
AudioPlayer song;
FFT fft;
PGraphics pg;

void setup()
{
  size(600, 600, P2D);
  minim = new Minim(this);
  song = minim.loadFile("jingle.mp3", 1024);
  song.loop();
  fft = new FFT(song.bufferSize(), song.sampleRate());
  // optional: use logarithmic averages: closer to how we perceive sound
  fft.logAverages(30, 6);
  // set up the pg graphics layer: disable fill, make point strokes thick
  pg = createGraphics(width, height);
  pg.beginDraw();
  pg.strokeWeight(3);
  pg.noFill();
  pg.endDraw();
}

void draw()
{
  background(0);
  image(pg, 0, 0);
  // perform FFT on the stereo mix
  fft.forward(song.mix);
  // center coordinates
  float cx = width * 0.5;
  float cy = height * 0.5;
  // count FFT bins
  int fftSpecSize = fft.specSize();
  // calculate the visual size for representing an FFT bin
  float sizePerSpec = (height * 0.5) / fftSpecSize;
  stroke(255);
  noFill();
  // start editing the pg layer (once per frame)
  pg.beginDraw();
  // start the FFT graph shape
  beginShape();
  // for each FFT bin
  for (int i = 0; i < fftSpecSize; i++)
  {
    // get the bands in reverse order (low frequencies last)
    float fftBand = fft.getBand(fftSpecSize - i - 1);
    // scale FFT bin value to pixel/render size
    float xOffset = fftBand * 10;
    // map FFT bins to 0-255 brightness levels (note: the 35 maximum may differ for your track)
    float brightness = map(fftBand, 0, 35, 0, 255);
    // draw the line graph vertex
    vertex(cx + xOffset, cy + sizePerSpec * i);
    // map song position (millis played) to 360 degrees in radians (2 * PI)
    // add HALF_PI (90 degrees) because 0 degrees points to the right and drawing should start pointing down (not right)
    //float angle = map(song.position(), 0, song.length(), 0, TWO_PI) + HALF_PI;
    // as a test, map it to a lower value
    float angle = (frameCount * 0.0025) + HALF_PI;
    // map radius from the FFT index
    float radius = map(i, 0, fftSpecSize - 1, 0, width * 0.5);
    // use mapped brightness as the point stroke
    pg.stroke(brightness);
    // use polar coordinates mapped from the centre
    pg.pushMatrix();
    pg.translate(cx, cy);
    pg.rotate(angle);
    pg.point(radius, 0);
    pg.popMatrix();
    // alternatively use polar to cartesian coordinate conversion:
    // x = cos(angle) * radius
    // y = sin(angle) * radius
    // cx, cy are added to offset from the center
    //pg.point(cx + (cos(angle) * radius),
    //         cy + (sin(angle) * radius));
  }
  // finish the FFT graph line
  endShape();
  // finish the pg layer
  pg.endDraw();
}
Note
you may want to change jingle.mp3 to your audio file name
for the sake of a test with a short track, I used an arbitrary mapping for the angle (the same as evolution in your code); there is a commented version that takes the track duration into account
the grayscale point position is rendered using coordinate transformations. Be sure to go through the 2D Transformations tutorial, and bear in mind that the order of transformations is important. Alternatively, there is a commented version that does the same using the polar (angle/radius) to cartesian (x,y) coordinate conversion formula instead.
P.S. I also wondered how to get nice visuals based on FFT data, and with a few filtering tricks the results can be nice. I recommend also checking out wakjah's answer here.
I have a Three.js scene that contains a 100x100 plane centred at the origin (i.e. min coord: (-50,-50), max coord: (50,50)). I am trying to have the plane appear as a colour wheel by using the x and z coords in a custom GLSL shader. Using this guide (see HSB in polar coordinates, towards the bottom of the page) I have gotten my shader code working with the Three.js scene, but it is not quite right.
I have played around tweaking all the variables that make sense to me, but as you can see in the screenshot, the colours change twice as often as they should. My math intuition says to just divide the angle by 2, but when I tried that it was completely incorrect.
I know the solution is probably very simple, but I have tried for a couple of hours and haven't got it.
How do I turn the shader I currently have into one that makes exactly one full colour rotation in 2π radians?
EDIT: here is the relevant shader code in plain text
varying vec3 vColor;

const float PI = 3.1415926535897932384626433832795;

uniform float delta;
uniform float scale;
uniform float size;

vec3 hsb2rgb( in vec3 c ){
    vec3 rgb = clamp(abs(mod(c.x*6.0+vec3(0.0,4.0,2.0), 6.0)-3.0)-1.0,
                     0.0,
                     1.0 );
    rgb = rgb*rgb*(3.0-2.0*rgb);
    return c.z * mix( vec3(1.0), rgb, c.y);
}

void main()
{
    vec4 worldPosition = modelMatrix * vec4(position, 1.0);
    float r = 0.875;
    float g = 0.875;
    float b = 0.875;
    if (worldPosition.y > 0.06 || worldPosition.y < -0.06) {
        vec2 toCenter = vec2(0.5) - vec2((worldPosition.z+50.0)/100.0, (worldPosition.x+50.0)/100.0);
        float angle = atan(worldPosition.z/worldPosition.x);
        float radius = length(toCenter) * 2.0;
        vColor = hsb2rgb(vec3((angle/(PI))+0.5, radius, 1.0));
    } else {
        vColor = vec3(r, g, b);
    }
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_PointSize = size * (scale/length(mvPosition.xyz));
    gl_Position = projectionMatrix * mvPosition;
}
I have discovered that the guide I was following was incorrect. I wasn't thinking about my math properly, but now I know what the problem was.
atan has a range from -PI/2 to PI/2, which only accounts for half of a circle. When worldPosition.x is negative, atan will not return the correct angle, since the result falls outside the function's range. The angle needs to be adjusted based on which quadrant of the plane the point is in:
Q1: do nothing
Q2: add PI to the angle
Q3: add PI to the angle
Q4: add 2PI to the angle
After this, normalize the angle (divide by 2*PI), then pass it to the hsb2rgb function.
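A minimal sketch of that adjustment, written in C++ for illustration since the math carries over directly to GLSL (fullAngle is a hypothetical helper):
#include <cmath>

const float PI = 3.14159265358979f;

// Extend single-argument atan (range -PI/2..PI/2) to the full circle by
// quadrant, then normalize by 2*PI so the result can be used as a hue in [0, 1].
float fullAngle(float x, float z)
{
    float angle = std::atan(z / x); // ambiguous quadrant on its own
    if (x < 0.0f)
        angle += PI;                // quadrants 2 and 3
    else if (z < 0.0f)
        angle += 2.0f * PI;         // quadrant 4
    return angle / (2.0f * PI);
}
Note that GLSL also provides a two-argument atan(y, x), the equivalent of atan2, which performs this quadrant handling for you and returns angles in (-PI, PI].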
I have a 3D vascular free-hand ultrasound volume containing one vessel, and I am trying to reconstruct the surface of the vessel. The 3D volume is constructed from a stack of 2D images/B-scans, and the contour of the vessel in each B-scan has been segmented; that is, I have an ellipse representing the contour of the vessel in each B-scan in the volume. I have tried to reconstruct the contour of the vessel by following the VTK example 'GenerateModelsFromLabels.cxx' (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Medical/GenerateModelsFromLabels). However, the result is not the smooth frame-to-frame surface I had hoped for. It is discontinuous and irregular, and the surface doesn't connect the vessel contours between two adjacent frames in the volume if the displacement between the ellipses is large. In my approach, I basically used DiscreteMarchingCubes -> WindowedSincPolyDataFilter -> GeometryFilter.
I played around with the passBand, smoothingIterations, and featureAngle parameters, and the best result I was able to obtain is the following:
As you can see, it is not a smooth, continuous surface, and there are a lot of uninterpolated "holes" between adjacent frames, but it is all right. Can it be made better? I also tried using a 3D Delaunay triangulation, but it only gave me the convex hull, which is not the output I expected. I would like to know if there is a better approach to reconstructing a surface that closely follows the contour of the vessel from one B-scan to the next in the volume?
A minimal working example is shown below:
vtkSmartPointer<vtkImageData> vesselVolume =
    vtkSmartPointer<vtkImageData>::New();
int totalImages = 210;

for (int z = 0; z < totalImages; z++)
{
    std::string strFile = "E:/datasets/vasc/rendering/contour/" + std::to_string(z + 1) + ".png";
    cv::Mat im = cv::imread(strFile, CV_LOAD_IMAGE_GRAYSCALE);
    if (z == 0)
    {
        vesselVolume->SetExtent(0, im.cols, 0, im.rows, 0, totalImages - 1);
        vesselVolume->SetSpacing(1, 1, 1);
        vesselVolume->SetOrigin(0, 0, 0);
        vesselVolume->AllocateScalars(VTK_UNSIGNED_CHAR, 0);
    }
    std::vector<cv::Point2i> locations; // output: locations of non-zero pixels
    cv::findNonZero(im, locations);
    for (int nzi = 0; nzi < locations.size(); nzi++)
    {
        unsigned char* pixel = static_cast<unsigned char*>(vesselVolume->GetScalarPointer(locations[nzi].x, locations[nzi].y, z));
        pixel[0] = 255;
    }
}

vtkSmartPointer<vtkDiscreteMarchingCubes> discreteCubes =
    vtkSmartPointer<vtkDiscreteMarchingCubes>::New();
discreteCubes->SetInputData(vesselVolume);
discreteCubes->GenerateValues(1, 255, 255);
discreteCubes->ComputeNormalsOn();

vtkSmartPointer<vtkWindowedSincPolyDataFilter> smoother =
    vtkSmartPointer<vtkWindowedSincPolyDataFilter>::New();
unsigned int smoothingIterations = 10;
double passBand = 2;
double featureAngle = 360.0;
smoother->SetInputConnection(discreteCubes->GetOutputPort());
smoother->SetNumberOfIterations(smoothingIterations);
smoother->BoundarySmoothingOff();
//smoother->FeatureEdgeSmoothingOff();
smoother->FeatureEdgeSmoothingOn();
smoother->SetFeatureAngle(featureAngle);
smoother->SetPassBand(passBand);
smoother->NonManifoldSmoothingOn();
smoother->BoundarySmoothingOn();
smoother->NormalizeCoordinatesOn();
smoother->Update();

vtkSmartPointer<vtkThreshold> selector =
    vtkSmartPointer<vtkThreshold>::New();
selector->SetInputConnection(smoother->GetOutputPort());
selector->SetInputArrayToProcess(0, 0, 0,
                                 vtkDataObject::FIELD_ASSOCIATION_CELLS,
                                 vtkDataSetAttributes::SCALARS);

// Strip the scalars from the output
vtkSmartPointer<vtkMaskFields> scalarsOff =
    vtkSmartPointer<vtkMaskFields>::New();
scalarsOff->SetInputConnection(selector->GetOutputPort());
scalarsOff->CopyAttributeOff(vtkMaskFields::POINT_DATA,
                             vtkDataSetAttributes::SCALARS);
scalarsOff->CopyAttributeOff(vtkMaskFields::CELL_DATA,
                             vtkDataSetAttributes::SCALARS);

vtkSmartPointer<vtkGeometryFilter> geometry =
    vtkSmartPointer<vtkGeometryFilter>::New();
geometry->SetInputConnection(scalarsOff->GetOutputPort());
geometry->Update();

vtkSmartPointer<vtkPolyDataMapper> mapper =
    vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(geometry->GetOutputPort());
mapper->ScalarVisibilityOff();
mapper->Update();

vtkSmartPointer<vtkRenderWindow> renderWindow =
    vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
    vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);

vtkSmartPointer<vtkRenderer> renderer =
    vtkSmartPointer<vtkRenderer>::New();
renderWindow->AddRenderer(renderer);
renderer->SetBackground(.2, .3, .4);

vtkSmartPointer<vtkActor> actor =
    vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
renderer->AddActor(actor);
renderer->ResetCamera();

renderWindow->Render();
renderWindowInteractor->Start();
Assuming that your problem is hand shake between slices, one possible way to improve your result is to apply slice-to-slice registration. It should be easy to try using ImageJ. Use the transforms between slices to also transform your labeled images, then run the transformed label images through your current pipeline.
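If you would rather do the registration programmatically than in ImageJ, here is a hedged sketch using OpenCV (which the question already uses) and cv::findTransformECC to rigidly register each slice to its predecessor. registerSlices is a hypothetical helper; note that ECC generally works better on the original grayscale B-scans than on binary label images, with the recovered transforms then applied to the label images as suggested above:
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Mat> registerSlices(const std::vector<cv::Mat>& slices)
{
    std::vector<cv::Mat> aligned;
    aligned.push_back(slices[0].clone());
    for (size_t z = 1; z < slices.size(); ++z)
    {
        // 2x3 Euclidean (rigid) warp, initialized to the identity.
        cv::Mat warp = cv::Mat::eye(2, 3, CV_32F);
        cv::findTransformECC(aligned[z - 1], slices[z], warp,
                             cv::MOTION_EUCLIDEAN,
                             cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1e-5));
        // Resample the slice into the reference frame of its predecessor.
        cv::Mat out;
        cv::warpAffine(slices[z], out, warp, slices[z].size(),
                       cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
        aligned.push_back(out);
    }
    return aligned;
}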
I am working with OpenGL ES 2.0 on an Android device.
I am trying to get a sphere up and running and drawing. Currently I almost have a sphere, but it's clearly being done very, very wrong.
In my app, I hold a list of Vector3's, which I convert to a ByteBuffer along the way, and pass to OpenGL.
I know my code is okay, since I have a Cube and Tetrahedron drawing nicely.
The two parts I changed were:
determining the vertices
drawing the vertices.
Here are the code snippets in question. What am I doing wrong?
Determining the polar coordinates:
private void ConstructPositionVertices()
{
    for (float latitude = 0.0f; latitude < (float)(Math.PI * 2.0f); latitude += 0.1f)
    {
        for (float longitude = 0.0f; longitude < (float)(2.0f * Math.PI); longitude += 0.1f)
        {
            mPositionVertices.add(ConvertFromSphericalToCartesian(1.0f, latitude, longitude));
        }
    }
}
Converting from Polar to Cartesian:
public static Vector3 ConvertFromSphericalToCartesian(float inLength, float inPhi, float inTheta)
{
    float x = inLength * (float)(Math.sin(inPhi) * Math.cos(inTheta));
    float y = inLength * (float)(Math.sin(inPhi) * Math.sin(inTheta));
    float z = inLength * (float)Math.cos(inTheta);

    Vector3 convertedVector = new Vector3(x, y, z);
    return convertedVector;
}
Drawing the circle:
inGL.glDrawArrays(GL10.GL_TRIANGLES, 0, numVertices);
Obviously I omitted some code, but I am positive my mistake lies somewhere in these snippets.
I do nothing more with the points than pass them to OpenGL and then draw with GL_TRIANGLES, which should connect the points for me... right?
EDIT:
A picture might be nice!
Your z must be calculated using phi: float z = inLength * (float)Math.cos(inPhi);
Also, the points generated are not triangles, so it would be better to use GL_LINE_STRIP.
Using a triangle strip on a polar sphere is as easy as drawing the points in pairs, for example:
const float GL_PI = 3.141592f;
GLfloat x, y, z, alpha, beta; // storage for coordinates and angles
GLfloat radius = 60.0f;
const int gradation = 20;

for (alpha = 0.0; alpha < GL_PI; alpha += GL_PI/gradation)
{
    glBegin(GL_TRIANGLE_STRIP);
    for (beta = 0.0; beta < 2.01*GL_PI; beta += GL_PI/gradation)
    {
        x = radius*cos(beta)*sin(alpha);
        y = radius*sin(beta)*sin(alpha);
        z = radius*cos(alpha);
        glVertex3f(x, y, z);
        x = radius*cos(beta)*sin(alpha + GL_PI/gradation);
        y = radius*sin(beta)*sin(alpha + GL_PI/gradation);
        z = radius*cos(alpha + GL_PI/gradation);
        glVertex3f(x, y, z);
    }
    glEnd();
}
The first point follows the formula as-is, and the second one is shifted by a single step of the alpha angle (i.e., it lies on the next parallel).
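One caveat for the original question: glBegin/glEnd immediate mode is not available in OpenGL ES 2.0, so the same pairing scheme has to be written into a vertex array first. Here is a minimal C++ sketch of that idea (buildSphereBand is a hypothetical helper; each band would be uploaded to a buffer and drawn with glDrawArrays(GL_TRIANGLE_STRIP, ...)):
#include <cmath>
#include <vector>

const float PI = 3.141592f;

// Build one triangle-strip band of the sphere: points on the parallel at
// 'alpha' paired with points on the next parallel, exactly as above.
std::vector<float> buildSphereBand(float radius, int gradation, int band)
{
    std::vector<float> verts; // packed x, y, z triples
    float alpha = band * PI / gradation;
    float alphaNext = alpha + PI / gradation;
    for (float beta = 0.0f; beta < 2.01f * PI; beta += PI / gradation)
    {
        verts.push_back(radius * cosf(beta) * sinf(alpha));
        verts.push_back(radius * sinf(beta) * sinf(alpha));
        verts.push_back(radius * cosf(alpha));
        verts.push_back(radius * cosf(beta) * sinf(alphaNext));
        verts.push_back(radius * sinf(beta) * sinf(alphaNext));
        verts.push_back(radius * cosf(alphaNext));
    }
    return verts; // vertex count for glDrawArrays is verts.size() / 3
}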
Lately I implemented the FXAA algorithm in my OpenGL application. I haven't fully understood the algorithm yet, but I know that it uses the contrast data of the final image to selectively apply blurring. As a post-processing effect, that makes sense. But since I use deferred shading in my application, I already have a depth texture of the scene. Using that, it might be much easier and more precise to find the edges to apply blur to.
So is there a known antialiasing algorithm that uses the depth texture instead of the final image to find the edges? To be clear, I mean an antialiasing algorithm that works on a per-pixel basis rather than a per-vertex basis.
After some research I found out that my idea is already widely used in deferred renderers. I decided to post this answer because I came up with my own implementation, which I want to share with the community.
Blurring is applied to a pixel based on gradient changes in the depth and angle changes in the normals.
// GLSL fragment shader
#version 330

in vec2 coord;
out vec4 image;

uniform sampler2D image_tex;
uniform sampler2D position_tex;
uniform sampler2D normal_tex;
uniform vec2 frameBufSize;

void depth(out float value, in vec2 offset)
{
    value = texture2D(position_tex, coord + offset / frameBufSize).z / 1000.0f;
}

void normal(out vec3 value, in vec2 offset)
{
    value = texture2D(normal_tex, coord + offset / frameBufSize).xyz;
}

void main()
{
    // depth
    float dc, dn, ds, de, dw;
    depth(dc, vec2( 0,  0));
    depth(dn, vec2( 0, +1));
    depth(ds, vec2( 0, -1));
    depth(de, vec2(+1,  0));
    depth(dw, vec2(-1,  0));

    float dvertical   = abs(dc - ((dn + ds) / 2));
    float dhorizontal = abs(dc - ((de + dw) / 2));
    float damount     = 1000 * (dvertical + dhorizontal);

    // normals
    vec3 nc, nn, ns, ne, nw;
    normal(nc, vec2( 0,  0));
    normal(nn, vec2( 0, +1));
    normal(ns, vec2( 0, -1));
    normal(ne, vec2(+1,  0));
    normal(nw, vec2(-1,  0));

    float nvertical   = dot(vec3(1), abs(nc - ((nn + ns) / 2.0)));
    float nhorizontal = dot(vec3(1), abs(nc - ((ne + nw) / 2.0)));
    float namount     = 50 * (nvertical + nhorizontal);

    // blur
    const int radius = 1;
    vec3 blur = vec3(0);
    int n = 0;
    for (float u = -radius; u <= +radius; ++u)
        for (float v = -radius; v <= +radius; ++v)
        {
            blur += texture2D(image_tex, coord + vec2(u, v) / frameBufSize).rgb;
            n++;
        }
    blur /= n;

    // result
    float amount = mix(damount, namount, 0.5);
    vec3 color = texture2D(image_tex, coord).rgb;
    image = vec4(mix(color, blur, min(amount, 0.75)), 1.0);
}
For comparison, this is the scene without any anti-aliasing.
This is the result with anti-aliasing applied.
You may need to view the images at their full resolution to judge the effect. In my view, the result is adequate for such a simple implementation. The best thing is that there are nearly no jagged artifacts when the camera moves.