I am using transient graphics to draw a circle, but its diameter appears larger on screen than it should.
I tried to account for the ratio of the current screen size, but I couldn't get it to work.
// Fit the view to the extents (ed is the current Editor, db the current Database)
ViewTableRecord view = new ViewTableRecord();
view.CenterPoint = min2d + ((max2d - min2d) / 2.0);
view.Height = max2d.Y - min2d.Y;
view.Width = max2d.X - min2d.X;
ed.SetCurrentView(view);
double dViewRatio = view.Width / view.Height;

// Drawing extents
Point3d pn = db.Extmin;
Point3d px = db.Extmax;
Extents3d ents;
using (Line acLine = new Line(pn, px))
{
    ents = new Extents3d(acLine.Bounds.Value.MinPoint,
                         acLine.Bounds.Value.MaxPoint);
}
double dWidth = ents.MaxPoint.X - ents.MinPoint.X;
double dHeight = ents.MaxPoint.Y - ents.MinPoint.Y;
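If the goal is a circle that always appears the same size on screen, one approach (a sketch only, not tested against the snippet above; desiredPixelRadius is a made-up parameter) is to convert a pixel radius into drawing units using the current view height and the viewport size in pixels reported by the SCREENSIZE system variable:

// Convert a desired on-screen radius (in pixels) into drawing units.
// Assumes the usual AutoCAD .NET namespaces (ApplicationServices, Geometry,
// DatabaseServices, GraphicsInterface) are imported.
ViewTableRecord currentView = ed.GetCurrentView();
Point2d screenSize = (Point2d)Application.GetSystemVariable("SCREENSIZE"); // viewport size in pixels
double unitsPerPixel = currentView.Height / screenSize.Y;
double desiredPixelRadius = 20.0; // hypothetical value: how large the circle should look on screen
double radius = desiredPixelRadius * unitsPerPixel;

Circle circle = new Circle(Point3d.Origin, Vector3d.ZAxis, radius);
TransientManager.CurrentTransientManager.AddTransient(
    circle, TransientDrawingMode.DirectShortTerm, 128, new IntegerCollection());
// Keep the Circle object alive while it is displayed; erase it later with
// EraseTransient and dispose of it when finished.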
I am following this course to learn computer graphics and write my first ray tracer.
I already have some visible results, but they seem to be too large.
The overall algorithm the course outlines is this:
Image Raytrace(Camera cam, Scene scene, int width, int height)
{
    Image image = new Image(width, height);
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            Ray ray = RayThruPixel(cam, i, j);
            Intersection hit = Intersect(ray, scene);
            image[i][j] = FindColor(hit);
        }
    return image;
}
I perform all calculations in camera space (where the camera is at (0, 0, 0)). Thus RayThruPixel returns a ray in camera coordinates, Intersect returns an intersection point also in camera coordinates, and the image pixel array is a direct mapping from the intersection results.
The image below is the rendering of a sphere at world coordinates (0, 0, -40000) with radius 0.15, and a camera at world coordinates (0, 0, 2) looking towards world coordinates (0, 0, 0). I would normally expect the sphere to be a lot smaller, given its small radius and far-away Z coordinate.
The same thing happens with rendering triangles too. In the below image I have 2 triangles that form a square, but it's way too zoomed in. The triangles have coordinates between -1 and 1, and the camera is looking from world coordinates (0, 0, 4).
This is what the square is expected to look like:
Here is the code snippet I use to determine the collision with the sphere. I'm not sure if I should divide the radius by the z coordinate here - without it, the circle is even larger:
Sphere* sphere = dynamic_cast<Sphere*>(object);
float t;
vec3 p0 = ray->origin;
vec3 p1 = ray->direction;
float a = glm::dot(p1, p1);
vec3 center2 = vec3(modelview * object->transform * glm::vec4(sphere->center, 1.0f)); // camera coords
float b = 2 * glm::dot(p1, (p0 - center2));
float radius = sphere->radius / center2.z; // the division I'm unsure about
float c = glm::dot((p0 - center2), (p0 - center2)) - radius * radius;
float D = b * b - 4 * a * c;
if (D > 0) {
    // two roots
    float sqrtD = glm::sqrt(D);
    float root1 = (-b + sqrtD) / (2 * a);
    float root2 = (-b - sqrtD) / (2 * a);
    if (root1 > 0 && root2 > 0) {
        t = glm::min(root1, root2);
        found = true;
    }
    else if (root2 < 0 && root1 >= 0) {
        t = root1;
        found = true;
    }
    else {
        // should not happen, implies that both roots are negative
    }
}
else if (D == 0) {
    // one root
    float root = -b / (2 * a);
    t = root;
    found = true;
}
else if (D < 0) {
    // no roots
    // continue;
}
if (found) {
    hitVector = p0 + p1 * t;
    hitNormal = glm::normalize(hitVector - center2);
}
Here I generate the ray going through the relevant pixel:
Ray* RayThruPixel(Camera* camera, int x, int y) {
    const vec3 a = eye - center;
    const vec3 b = up;
    const vec3 w = glm::normalize(a);
    const vec3 u = glm::normalize(glm::cross(b, w));
    const vec3 v = glm::cross(w, u);
    const float aspect = ((float)width) / height;
    float fovyrad = glm::radians(camera->fovy);
    const float fovx = 2 * atan(tan(fovyrad * 0.5) * aspect);
    const float alpha = tan(fovx * 0.5) * (x - (width * 0.5)) / (width * 0.5);
    const float beta = tan(fovyrad * 0.5) * ((height * 0.5) - y) / (height * 0.5);
    return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
                   /* direction= */ glm::normalize(vec3(modelview *
                       glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
}
And intersection with a triangle:
Triangle* triangle = dynamic_cast<Triangle*>(object);
// vertices in camera coords
vec3 vertex1 = vec3(modelview * object->transform * vec4(*vertices[triangle->index1], 1.0f));
vec3 vertex2 = vec3(modelview * object->transform * vec4(*vertices[triangle->index2], 1.0f));
vec3 vertex3 = vec3(modelview * object->transform * vec4(*vertices[triangle->index3], 1.0f));
vec3 N = glm::normalize(glm::cross(vertex2 - vertex1, vertex3 - vertex1));
float D = -glm::dot(N, vertex1);
float m = glm::dot(N, ray->direction);
if (m == 0) {
    // no intersection because ray is parallel to the plane
}
else {
    float t = -(glm::dot(N, ray->origin) + D) / m;
    if (t < 0) {
        // no intersection because ray goes away from the triangle plane
    }
    vec3 Phit = ray->origin + t * ray->direction;
    vec3 edge1 = vertex2 - vertex1;
    vec3 edge2 = vertex3 - vertex2;
    vec3 edge3 = vertex1 - vertex3;
    vec3 c1 = Phit - vertex1;
    vec3 c2 = Phit - vertex2;
    vec3 c3 = Phit - vertex3;
    if (glm::dot(N, glm::cross(edge1, c1)) > 0
        && glm::dot(N, glm::cross(edge2, c2)) > 0
        && glm::dot(N, glm::cross(edge3, c3)) > 0) {
        found = true;
        hitVector = Phit;
        hitNormal = N;
    }
}
Given that the output image is a circle, and that the same problem happens with triangles as well, my guess is the problem isn't from the intersection logic itself, but rather something wrong with the coordinate spaces or transformations. Could calculating everything in camera space be causing this?
I eventually figured it out by myself. I first noticed the problem was here:
return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
               /* direction= */ glm::normalize(vec3(modelview *
                   glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
When I removed the direction vector transformation (leaving it at just glm::normalize(alpha * u + beta * v - w)), I noticed the problem disappeared: the square was rendered correctly. I was prepared to accept that as the answer, although I wasn't completely sure why.
Then I noticed that after doing transformations on the object, the camera wasn't positioned properly, which makes sense - we're not pointing the rays in the correct direction.
I realized that my entire approach of doing the calculations in camera space was wrong. If I still wanted to use this approach, the rays would have to be transformed, but in a different way that would involve some complex math I wasn't ready to deal with.
I instead changed my approach to do transformations and intersections in world space and only use camera space at the lighting stage. We have to use camera space at some point, since we want to actually look in the direction of the object we are rendering.
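For reference, here is a minimal sketch of that world-space approach. It is my own reconstruction rather than the exact code, and it assumes the same eye, center, up, fovy, width and height as above. It also hints at why the original transform misbehaved: packing the direction into a vec4 with w = 1.0f means the modelview's translation gets applied to what should be a pure direction.

#include <glm/glm.hpp>
#include <cmath>
using glm::vec3;

struct Ray { vec3 origin; vec3 direction; }; // stand-in for the Ray type used in the question

// Generate a world-space ray through pixel (x, y); no modelview transform is needed,
// because the object transforms are applied to the objects before intersection instead.
Ray RayThruPixelWorld(const vec3& eye, const vec3& center, const vec3& up,
                      float fovyDegrees, int width, int height, int x, int y) {
    const vec3 w = glm::normalize(eye - center);
    const vec3 u = glm::normalize(glm::cross(up, w));
    const vec3 v = glm::cross(w, u);

    const float aspect = (float)width / height;
    const float fovy   = glm::radians(fovyDegrees);
    // tan(fovx / 2) == tan(fovy / 2) * aspect, so fovx never needs to be computed explicitly.
    // (x + 0.5f) samples the pixel centre; the original used x directly.
    const float alpha = std::tan(fovy * 0.5f) * aspect * ((x + 0.5f) - width * 0.5f) / (width * 0.5f);
    const float beta  = std::tan(fovy * 0.5f) * ((height * 0.5f) - (y + 0.5f)) / (height * 0.5f);

    return Ray{ eye, glm::normalize(alpha * u + beta * v - w) };
}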
I am working on implementing Alchemy AO and reading through their paper. To sample each point, they say to consider a disk of radius r and center C that is parallel to the image plane, select a screen-space point Q uniformly at random on its projection, and then read a depth or position buffer to find the camera-space scene point P = (xp, yp, z(Q)) on that ray.
I am wondering how you would go about selecting a screen-space point in this manner. I have made an attempt below, but since my result appears quite incorrect, I think it's the wrong approach.
vec3 Position = depthToPosition(uvCoords);
int turns = 16;
float screen_radius = (sampleRadius * 100.0 / Position.z); // screen-space radius of the ball around the point
const float disk = (2.0 * PI) / turns;
ivec2 px = ivec2(gl_FragCoord.xy);
float phi = (30u * px.x ^ px.y + 10u * px.x * px.y); // per-pixel hash for a random rotation angle at each pixel
for (int i = 0; i < samples; ++i)
{
    float theta = disk * float(i + 1) + phi;
    vec2 samplepoint = vec2(cos(theta), sin(theta)); // point on the unit circle; not used any further yet
}
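Here is a sketch of how that screen-space point could then be consumed, based on my reading of the paper; it is not a drop-in implementation. positionTex (a camera-space position buffer), resolution (the viewport size in pixels), N (the camera-space normal at the shaded point) and the tuning constants sigma, bias and epsilon are all assumed names:

uniform sampler2D positionTex;      // camera-space positions (assumed name)
uniform vec2 resolution;            // viewport size in pixels (assumed name)
uniform float sigma, bias, epsilon; // Alchemy tuning constants (assumed names)

float occlusion = 0.0;
for (int i = 0; i < samples; ++i)
{
    float theta = disk * float(i + 1) + phi;
    // All samples here sit on the rim; scaling by float(i + 1) / samples would spread them over the disk
    vec2 offsetPx = screen_radius * vec2(cos(theta), sin(theta));
    // Screen-space point Q on the projected disk, converted back to texture coordinates
    vec2 sampleUV = (gl_FragCoord.xy + offsetPx) / resolution;
    // Camera-space scene point P on the ray through Q
    vec3 P = texture(positionTex, sampleUV).xyz;
    // Alchemy estimator: v = P - C, accumulated with the paper's bias/epsilon terms
    vec3 v = P - Position;
    occlusion += max(0.0, dot(v, N) + Position.z * bias) / (dot(v, v) + epsilon);
}
occlusion = max(0.0, 1.0 - (2.0 * sigma / float(samples)) * occlusion);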
I am struggling to bind text to specified coordinates so that when I resize the window, the text follows suit. Here is the relevant portion of my code:
for (int i = 0; i < petrolStations.size() / 2; i++) {
    int j = i + 1;
    Text text1 = new Text(petrolStations.get(i), petrolStations.get(j), "1");
    text1.setFont(Font.font("Courier", FontWeight.BOLD, FontPosture.ITALIC, 10));
    text1.xProperty().bind(pane.widthProperty().divide(2));
    text1.yProperty().bind(pane.heightProperty().divide(2));
    pane.getChildren().add(text1);
}
To explain: petrolStations is a list of coordinates used to place a letter "1" on the page.
Here is the current output; as you can see, all the 1's end up combined in the middle rather than at their specified coordinates.
EDIT:
I've changed the 1's to circles and managed to scale up their size, but I still have the same problem: since all the coordinates are under 100, the circles sit in the top-left corner. I need them to cover the whole window and spread apart as the window is resized larger.
for (int i = 0; i < petrolStations.size() / 2; i++) {
    int j = i + 1;
    Circle circle1 = new Circle();
    circle1.setCenterX(petrolStations.get(i));
    circle1.setCenterY(petrolStations.get(j));
    circle1.setRadius(1);
    circle1.setStroke(Color.BLACK);
    circle1.setFill(Color.WHITE);
    circle1.setScaleX(3);
    circle1.setScaleY(3);
    pane.getChildren().add(circle1);
}
http://i.imgur.com/JkV3LiW.png
Why not take the ratio of x and y with respect to the width/height? Take a look at this:
Pane pane = new Pane();
Text text = new Text(250,250,"1");
pane.getChildren().add(text);
double width = 150;
double height = 150;
Scene scene = new Scene(pane, width, height);
double x = 50;
double y = 50;
double xRatio = x / width; //GET THE RATIO
double yRatio = y / height; //GET THE RATIO
text.xProperty().bind(pane.widthProperty().multiply(xRatio));
text.yProperty().bind(pane.heightProperty().multiply(yRatio));
stage.setTitle("Hello World!");
stage.setScene(scene);
stage.show();
When I run this code, the initial state is:
Upon resizing:
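Applied to the loop from the question, the same idea could look roughly like this. It is only a sketch: it assumes petrolStations is a List<Double> holding x,y pairs authored against some reference size (REF_WIDTH x REF_HEIGHT, e.g. 100x100 - adjust to your data), and it steps through the list two values at a time, which I take to be the intent:

final double REF_WIDTH = 100;
final double REF_HEIGHT = 100;
for (int i = 0; i + 1 < petrolStations.size(); i += 2) {
    double xRatio = petrolStations.get(i) / REF_WIDTH;      // ratio of the stored x to the reference width
    double yRatio = petrolStations.get(i + 1) / REF_HEIGHT; // ratio of the stored y to the reference height
    Circle circle = new Circle(0, 0, 3, Color.WHITE);
    circle.setStroke(Color.BLACK);
    // Bind the centre to the pane size so the points spread out as the window grows
    circle.centerXProperty().bind(pane.widthProperty().multiply(xRatio));
    circle.centerYProperty().bind(pane.heightProperty().multiply(yRatio));
    pane.getChildren().add(circle);
}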
I am trying to combine the Vuforia SDK and jMonkeyEngine. So far, the cube is placed on the target (ImageTarget), but when I move the camera the cube moves a little bit too. I want the cube to remain at the center of the target (like the teapot in the VuforiaSamples ImageTarget). Do you have an idea how I can solve this problem?
I think this is the relevant code:
public void initForegroundCamera()
{
    foregroundCamera = new Camera(settings.getWidth(), settings.getHeight());
    foregroundCamera.setLocation(new Vector3f(0.0f, 0.0f, 0.0f));

    // Get perspective transformation
    CameraCalibration cameraCalibration = CameraDevice.getInstance().getCameraCalibration();
    VideoBackgroundConfig config = Renderer.getInstance().getVideoBackgroundConfig();

    float viewportWidth = config.getSize().getData()[0];
    float viewportHeight = config.getSize().getData()[1];
    float cameraWidth = cameraCalibration.getSize().getData()[0];
    float cameraHeight = cameraCalibration.getSize().getData()[1];
    float screenWidth = settings.getWidth();
    float screenHeight = settings.getHeight();

    Vec2F size = new Vec2F(cameraWidth, cameraHeight);
    Vec2F focalLength = cameraCalibration.getFocalLength();
    float fovRadians = 2 * (float) Math.atan(0.5f * (size.getData()[1] / focalLength.getData()[1]));
    float fovDegrees = fovRadians * 180.0f / (float) Math.PI;
    float aspectRatio = (size.getData()[0] / size.getData()[1]);

    // Adjust for screen / camera size distortion
    float viewportDistort = 1.0f;

    if (viewportWidth != screenWidth)
    {
        viewportDistort = viewportWidth / screenWidth;
        fovDegrees = fovDegrees * viewportDistort;
        aspectRatio = aspectRatio / viewportDistort;
        Log.v(TAG, "viewportDistort: " + viewportDistort + " fovDegrees: " + fovDegrees + " aspectRatio: " + aspectRatio);
    }

    if (viewportHeight != screenHeight)
    {
        viewportDistort = viewportHeight / screenHeight;
        fovDegrees = fovDegrees / viewportDistort;
        aspectRatio = aspectRatio * viewportDistort;
        Log.v(TAG, "viewportDistort: " + viewportDistort + " fovDegrees: " + fovDegrees + " aspectRatio: " + aspectRatio);
    }

    setCameraPerspectiveFromVuforia(fovDegrees, aspectRatio);
    setCameraViewportFromVuforia(viewportWidth, viewportHeight, cameraWidth, cameraHeight);

    ViewPort foregroundViewPort = renderManager.createMainView("ForegroundView", foregroundCamera);
    foregroundViewPort.attachScene(rootNode);
    foregroundViewPort.setClearFlags(false, true, false);
    foregroundViewPort.setBackgroundColor(ColorRGBA.Blue);

    sceneInitialized = true;
}
private void ProcessTrackable(TrackableResult result, int i)
{
    // Show the 3D object corresponding to the found trackable
    Spatial model = rootNode.getChild(0);
    model.setCullHint(CullHint.Dynamic);

    Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(result.getPose());
    Matrix44F inverseMatrix_Vuforia = MathHelpers.Matrix44FInverse(modelViewMatrix_Vuforia);
    Matrix44F inverseTransposedMatrix_Vuforia = MathHelpers.Matrix44FTranspose(inverseMatrix_Vuforia);
    float[] modelViewMatrix = inverseTransposedMatrix_Vuforia.getData();

    // Get camera position
    float cam_x = modelViewMatrix[12];
    float cam_y = modelViewMatrix[13];
    float cam_z = modelViewMatrix[14];

    // Get camera rotation
    float cam_right_x = modelViewMatrix[0];
    float cam_right_y = modelViewMatrix[1];
    float cam_right_z = modelViewMatrix[2];
    float cam_up_x = modelViewMatrix[4];
    float cam_up_y = modelViewMatrix[5];
    float cam_up_z = modelViewMatrix[6];
    float cam_dir_x = modelViewMatrix[8];
    float cam_dir_y = modelViewMatrix[9];
    float cam_dir_z = modelViewMatrix[10];

    setCameraPoseFromVuforia(cam_x, cam_y, cam_z);
    setCameraOrientationFromVuforia(cam_right_x, cam_right_y, cam_right_z, cam_up_x, cam_up_y, cam_up_z, cam_dir_x, cam_dir_y, cam_dir_z);
}
// We modify the left axis of the JME camera to match the coordinate system used by Vuforia
private void setCameraPerspectiveFromVuforia(float fovY, float aspectRatio)
{
    foregroundCamera.setFrustumPerspective(fovY, aspectRatio, 1.0f, 1000.0f);
    foregroundCamera.update();
}

private void setCameraPoseFromVuforia(float camX, float camY, float camZ)
{
    foregroundCamera.setLocation(new Vector3f(camX, camY, camZ));
    foregroundCamera.update();
}

private void setCameraOrientationFromVuforia(float camRightX, float camRightY, float camRightZ, float camUpX, float camUpY, float camUpZ, float camDirX, float camDirY, float camDirZ)
{
    foregroundCamera.setAxes(new Vector3f(-camRightX, -camRightY, -camRightZ), new Vector3f(-camUpX, -camUpY, -camUpZ), new Vector3f(camDirX, camDirY, camDirZ));
    foregroundCamera.update();
}
I have also implemented Vuforia with jMonkeyEngine. I must admit that the shaky movement of 3D models is noticeable even when I hold my phone still, whereas using the OpenGL rendering engine with Vuforia alone doesn't produce such results.
The reason might be that what is rendered on screen is a Quad whose texture is the camera output, and updating that texture every frame also takes a while. Moreover, while running my app the phone gets very hot, so I guess it is a heavy load on the processor as well.
I am currently working on a simple Silverlight app that will allow people to upload an image, crop, resize and rotate it and then load it via a webservice to a CMS.
Cropping and resizing are done; however, rotation is causing some problems: the image gets cropped and ends up off-centre after the rotation.
WriteableBitmap wb = new WriteableBitmap(destWidth, destHeight);
RotateTransform rt = new RotateTransform();
rt.Angle = 90;
rt.CenterX = width/2;
rt.CenterY = height/2;
//Draw to the Writeable Bitmap
Image tempImage2 = new Image();
tempImage2.Width = width;
tempImage2.Height = height;
tempImage2.Source = rawImage;
wb.Render(tempImage2,rt);
wb.Invalidate();
rawImage = wb;
message.Text = "h:" + rawImage.PixelHeight.ToString();
message.Text += ":w:" + rawImage.PixelWidth.ToString();
//Finally set the Image back
MyImage.Source = wb;
MyImage.Width = destWidth;
MyImage.Height = destHeight;
The code above only needs to rotate by 90° at this time, so I'm just setting destWidth and destHeight to the height and width of the original image.
It looks like your target image is the same size as your source image. If you want to rotate by 90 degrees, the width and height should be swapped:
WriteableBitmap wb = new WriteableBitmap(destHeight, destWidth);
Also, if you rotate about the centre of the original image, part of it will end up outside the boundaries. You could either include some translation transforms, or simply rotate the image about a different point:
rt.CenterX = rt.CenterY = Math.Min(width / 2, height / 2);
Try it with a piece of rectangular paper to see why that makes sense.
Many thanks to those above; they helped a lot. Here is a simple example that includes the additional transform needed to move the rotated image back to the top-left corner of the result.
int width = currentImage.PixelWidth;
int height = currentImage.PixelHeight;
int full = Math.Max(width, height);
Image tempImage2 = new Image();
tempImage2.Width = full;
tempImage2.Height = full;
tempImage2.Source = currentImage;
// New bitmap has swapped width/height
WriteableBitmap wb1 = new WriteableBitmap(height, width);
TransformGroup transformGroup = new TransformGroup();
// Rotate around centre
RotateTransform rotate = new RotateTransform();
rotate.Angle = 90;
rotate.CenterX = full/2;
rotate.CenterY = full/2;
transformGroup.Children.Add(rotate);
// and transform back to top left corner of new image
TranslateTransform translate = new TranslateTransform();
translate.X = -(full - height) / 2;
translate.Y = -(full - width) / 2;
transformGroup.Children.Add(translate);
wb1.Render(tempImage2, transformGroup);
wb1.Invalidate();
If the image isn't square, you will get cropping.
I know this won't give you exactly the right result (you'll need to crop it afterwards), but it will create a bitmap big enough in each direction to hold the rotated image.
//Draw to the Writeable Bitmap
Image tempImage2 = new Image();
tempImage2.Width = Math.Max(width, height);
tempImage2.Height = Math.Max(width, height);
tempImage2.Source = rawImage;
You need to calculate the scaling based on the rotation of the corners relative to the centre.
If the image is a square, only one corner needs to be checked; for a rectangle you need to check two corners to see whether a vertical or horizontal edge is overlapped. This check is a linear comparison of how much the rectangle's height and width are exceeded.
Click here for the working testbed app created for this answer (image below):
double CalculateConstraintScale(double rotation, int pixelWidth, int pixelHeight)
The pseudo-code is as follows (actual C# code at the end):
Convert rotation angle into Radians
Calculate the "radius" from the rectangle centre to a corner
Convert BR corner position to polar coordinates
Convert BL corner position to polar coordinates
Apply the rotation to both polar coordinates
Convert the new positions back to Cartesian coordinates (ABS value)
Find the largest of the 2 horizontal positions
Find the largest of the 2 vertical positions
Calculate the delta change for horizontal size
Calculate the delta change for vertical size
Return width/2 / x if horizontal change is greater
Return height/2 / y if vertical change is greater
The result is a multiplier that will scale the image down to fit the original rectangle regardless of rotation.
*Note: While it is possible to do much of the maths using matrix operations, there are not enough calculations to warrant that. I also thought it would make a better example from first principles.*
C# Code:
/// <summary>
/// Calculate the scaling required to fit a rectangle into a rotation of that same rectangle
/// </summary>
/// <param name="rotation">Rotation in degrees</param>
/// <param name="pixelWidth">Width in pixels</param>
/// <param name="pixelHeight">Height in pixels</param>
/// <returns>A scaling value between 1 and 0</returns>
/// <remarks>Released to the public domain 2011 - David Johnston (HiTech Magic Ltd)</remarks>
private double CalculateConstraintScale(double rotation, int pixelWidth, int pixelHeight)
{
    // Convert angle to radians for the math lib
    double rotationRadians = rotation * PiDiv180;

    // Centre is half the width and height
    double width = pixelWidth / 2.0;
    double height = pixelHeight / 2.0;
    double radius = Math.Sqrt(width * width + height * height);

    // Convert BR corner into polar coordinates
    double angle = Math.Atan(height / width);

    // Now create the matching BL corner in polar coordinates
    double angle2 = Math.Atan(height / -width);

    // Apply the rotation to the points
    angle += rotationRadians;
    angle2 += rotationRadians;

    // Convert back to rectangular coordinates
    double x = Math.Abs(radius * Math.Cos(angle));
    double y = Math.Abs(radius * Math.Sin(angle));
    double x2 = Math.Abs(radius * Math.Cos(angle2));
    double y2 = Math.Abs(radius * Math.Sin(angle2));

    // Find the largest extents in X & Y
    x = Math.Max(x, x2);
    y = Math.Max(y, y2);

    // Find the largest change (pixel, not ratio)
    double deltaX = x - width;
    double deltaY = y - height;

    // Return the ratio that will bring the largest change into the region
    return (deltaX > deltaY) ? width / x : height / y;
}
Example of use:
private WriteableBitmap GenerateConstrainedBitmap(BitmapImage sourceImage, int pixelWidth, int pixelHeight, double rotation)
{
    double scale = CalculateConstraintScale(rotation, pixelWidth, pixelHeight);

    // Create a transform to render the image rotated and scaled
    var transform = new TransformGroup();
    var rt = new RotateTransform()
    {
        Angle = rotation,
        CenterX = (pixelWidth / 2.0),
        CenterY = (pixelHeight / 2.0)
    };
    transform.Children.Add(rt);
    var st = new ScaleTransform()
    {
        ScaleX = scale,
        ScaleY = scale,
        CenterX = (pixelWidth / 2.0),
        CenterY = (pixelHeight / 2.0)
    };
    transform.Children.Add(st);

    // Resize to specified target size
    var tempImage = new Image()
    {
        Stretch = Stretch.Fill,
        Width = pixelWidth,
        Height = pixelHeight,
        Source = sourceImage,
    };
    tempImage.UpdateLayout();

    // Render to a writeable bitmap
    var writeableBitmap = new WriteableBitmap(pixelWidth, pixelHeight);
    writeableBitmap.Render(tempImage, transform);
    writeableBitmap.Invalidate();
    return writeableBitmap;
}
I released a Test-bed of the code on my website so you can try it for real - click to try it
P.S. Yes this is my answer from another question, duplicated exactly, but the question does require the same answer as that one to be complete.