Sage: Drawing circles/horocycles in hyperbolic plane - geometry

I'm trying to visualize some horocycles in the hyperbolic plane. I usually use Sage for math work, but when I read the documentation reference for hyperbolic geometry, I did not find a way to draw even a circle in the hyperbolic plane. I don't know whether there is a package somewhere that can do this for me, or whether I just need to write the code myself. Alternatively, if anyone knows of a program that supports horocycles and can apply the action of the modular group to them, that would also be great. Any help is appreciated.
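As far as I can tell, Sage's hyperbolic geometry module only exposes points, geodesics and isometries, so one workaround is to draw horocycles with Sage's ordinary 2-D plotting primitives: in the upper half-plane model a horocycle based at a real point p is just a Euclidean circle tangent to the real axis at p (one based at oo is a horizontal line), and the modular group acts on (base point, diameter) pairs by an explicit classical formula. A minimal Sage sketch; the helpers horocycle and act are my own names, not part of Sage:

    # Horocycles in the upper half-plane model, drawn with Sage's plain
    # 2-D graphics primitives (circle, line).  A horocycle based at oo
    # with "diameter" h is encoded here as the line Im(z) = 1/h.

    def horocycle(p, h, **kwds):
        """Horocycle based at p (in RR or oo) with Euclidean diameter h."""
        if p == oo:
            return line([(-3, 1/h), (3, 1/h)], **kwds)
        return circle((p, h/2), h/2, **kwds)

    def act(g, p, h):
        """Image of the horocycle (p, h) under g = [[a, b], [c, d]] in SL(2, ZZ).

        Uses the classical facts that Moebius maps send horocycles to
        horocycles and that the diameter transforms as h/(c*p + d)^2."""
        (a, b), (c, d) = g
        if p == oo:
            return (oo, h/a^2) if c == 0 else (a/c, h/c^2)
        if c*p + d == 0:
            return (oo, h*c^2)
        return ((a*p + b)/(c*p + d), h/(c*p + d)^2)

    # Example: the generators T: z -> z + 1 and S: z -> -1/z of the
    # modular group acting on the horocycle of diameter 1 based at 0.
    T, S = [[1, 1], [0, 1]], [[0, -1], [1, 0]]
    pic = horocycle(0, 1) + horocycle(*act(T, 0, 1), color='red') \
                          + horocycle(*act(S, 0, 1), color='green')
    pic.show(aspect_ratio=1, ymin=0, ymax=2)

Here S sends the circle of diameter 1 at 0 to the horizontal line Im(z) = 1, as expected.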

Related

Monte Carlo Simulation using TOPAS, how to make a hollow sphere

I am trying to use TOPAS to create a geometric system that is a hollow sphere scorer around a point radiation source in a water world.
I have gotten everything to work, except the sphere is solid.
I have looked at the TOPAS documentation/manual and I see there are examples for making a TsSphere (which is what I initially used) and for a G4HPolycone, which generates a hollow polycone whose inner and outer dimensions you can define.
I experimented a bit with terms like "G4HSphere" and "TsHSphere" on the off chance they just weren't mentioned in that specific document (with RInner and ROuter defined as 9.5 and 10.5 respectively), but neither term worked; both caused the run to terminate.
Any insight for how to make a hollow sphere is much appreciated :)
I figured it out: TsSphere still works, but adding the parameters Rmin and Rmax then allows you to make the sphere hollow :)
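For anyone who lands here later, a minimal parameter-file sketch of that fix. The component name, parent, material, and units below are my own assumptions, not from the thread; only the RMin/RMax idea comes from the answer above:

    # Hollow spherical shell via TsSphere (RMin = 0 would give a solid sphere).
    s:Ge/HollowShell/Type     = "TsSphere"
    s:Ge/HollowShell/Parent   = "World"
    s:Ge/HollowShell/Material = "G4_WATER"
    d:Ge/HollowShell/RMin     = 9.5 cm
    d:Ge/HollowShell/RMax     = 10.5 cm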

How to draw an interactive rectangle with Eyeshot

I am trying to write AutoCAD-like software with the Eyeshot library, but I am having difficulty implementing the rectangle-drawing function. Can anyone help me, please?
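I don't know the Eyeshot API well enough to give library-specific code, but the usual "rubber-band" rectangle pattern is library-agnostic. A minimal Python sketch of the state machine; the class and handler names are placeholders you would wire to your viewport's MouseDown/MouseMove/MouseUp events, not Eyeshot API:

    class RubberBandRectangle:
        """Track a drag and produce an axis-aligned rectangle."""

        def __init__(self):
            self.anchor = None      # corner fixed at mouse-down
            self.preview = None     # (x0, y0, x1, y1) while dragging

        def mouse_down(self, x, y):
            self.anchor = (x, y)

        def mouse_move(self, x, y):
            if self.anchor is not None:
                ax, ay = self.anchor
                # Normalise so the rectangle is valid whichever way you drag.
                self.preview = (min(ax, x), min(ay, y), max(ax, x), max(ay, y))
                # ...redraw the viewport with self.preview as a temporary overlay...

        def mouse_up(self, x, y):
            self.mouse_move(x, y)
            rect, self.anchor, self.preview = self.preview, None, None
            return rect             # commit: add this rectangle to the drawing

The key design point is that only mouse_up adds geometry to the document; mouse_move just repaints a transient preview.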

Point Cloud - Principal Axes - Use of Inertia

I have point clouds of different primitive objects (cone, plane, torus, cylinder, sphere, ellipsoid). They all vary in orientation, position, and scaling. Furthermore, each is initialized with a unique set of parameters (e.g. height, radius, etc.), so their shapes can be quite different (some cones are tall, others are short and fat).
Now to my question:
I am trying to find the objects' "principal components". Using PCA doesn't lead to good results, since rotated primitives can have their main variation in any direction (not necessarily along the length of the object).
The only option I see is to somehow use the symmetry of my primitives. Isn't there a method based on inertia? Maybe some way to find the main symmetry axis and two others perpendicular to it?
Can you give me some advice or point me to papers or implementations (maybe even in Python)?
Thanks a lot, Merlin.
PS: This is what I get if I only apply PCA. Especially for cones this doesn't really work: only cones that are almost identical in shape share the same orientation, but I need them all to point in one direction (e.g. up).
So you have cones and just need to rotate them all to the same direction?
If so, you can fit a triangle to them and point the peak (found e.g. with the perpendicular bisectors of the sides) along your main axis.
You have an interesting problem. Commonly used shape descriptors such as VFH, which are invariant to shape but not to pose (which is really what you want here), would not be invariant to stretching of the shape.
I think that to succeed at this you need to be clearer about the invariants you are trying to maintain when a shape changes. Is it a topological invariant? If so, here is a good starting point: https://www.google.com.tr/search?q=topologically+invariant+shape+descriptor
I decided to just stick with plain PCA, since it's the only method that is totally generic and doesn't depend on prior (expert) knowledge about the data.
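For what it's worth, the inertia idea from the question is easy to prototype with NumPy. A minimal sketch (the function name is mine), with one important caveat spelled out in the comments: for unit point masses the inertia axes coincide with the PCA axes, so inertia alone won't behave differently from PCA here:

    import numpy as np

    def principal_axes_inertia(points):
        """Principal axes of a point cloud via its inertia tensor.

        points: (N, 3) array; every point is treated as unit mass.
        Returns the axes as rows, sorted from smallest to largest moment
        of inertia (the first row is the 'long' axis of the cloud).
        """
        r = points - points.mean(axis=0)              # centre of mass at origin
        # Inertia tensor: I = sum over points of (|r|^2 * E - r r^T).
        norms = (r ** 2).sum(axis=1)
        inertia = np.eye(3) * norms.sum() - r.T @ r
        moments, axes = np.linalg.eigh(inertia)       # ascending eigenvalues
        return axes.T

    # Caveat: with unit masses, inertia = N * (trace(C) * E - C) where C is
    # the covariance matrix, so the eigenvectors are exactly the PCA axes
    # (with the eigenvalue order reversed).  To make all cones point "up"
    # you still need a sign convention on top, e.g. flip each axis so the
    # point distribution is skewed the same way along it.

    # Example: a synthetic "rod" along z.
    pts = np.random.randn(1000, 3) * np.array([0.1, 0.1, 5.0])
    print(principal_axes_inertia(pts)[0])             # ~ +/- (0, 0, 1)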

How to calculate a pixel's world-space position on an image plane formed by a virtual camera?

First, the question Calculating camera ray direction to 3d world pixel helped me a bit in understanding the virtual camera setup. I don't understand how the vectors work in this setup, and I thought normalized device coordinates had to be used, which led me to this page: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-6-rays-cameras-and-images/building-primary-rays-and-rendering-an-image/. What I am trying to do is build a ray tracer and, as the title states, find a pixel's position in order to shoot a ray through it. What I would really like is an actual example showing a virtual camera setup and a screen resolution, and how to calculate a pixel's position and then transform it to world-space coordinates. Experts, thank you for your help! :D
Multiply the coordinates by a matrix. Which matrix? There are many choices. For example, XNA uses a projection matrix, a view matrix, and a world matrix; applying all of them transforms pixel coordinates into world coordinates, or vice versa. Breaking it down this way helps you understand the different transformations going on, so you can more easily construct the matrices.
Isn't that webpage already providing you with four pages of explanation of how these rays are built? It seems like you haven't made the effort to read the content of the link you are referring to. I would suggest you read it first, try to understand it, maybe look at the source code they provide, and come back with a real question about what you don't understand.
It's all there, and I am not going to rewrite what these people have clearly put a lot of energy into explaining (nor should anybody else, really...).
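That said, the core computation is compact. A minimal NumPy sketch of generating a primary ray through a pixel, following the scratchapixel convention of a camera looking down -z with the image plane at z = -1; the function name, the vertical-fov choice, and the 4x4 camera-to-world matrix convention are my assumptions:

    import numpy as np

    def primary_ray(x, y, width, height, fov_deg, cam_to_world):
        """World-space origin and unit direction of the ray through pixel (x, y)."""
        aspect = width / height
        scale = np.tan(np.radians(fov_deg) / 2)
        # Pixel centre -> NDC in [0,1] -> screen space in [-1,1] -> camera space.
        px = (2 * (x + 0.5) / width - 1) * aspect * scale
        py = (1 - 2 * (y + 0.5) / height) * scale
        pixel_cam = np.array([px, py, -1.0, 1.0])      # point on the image plane
        pixel_world = cam_to_world @ pixel_cam          # transform as a point
        origin = cam_to_world @ np.array([0.0, 0.0, 0.0, 1.0])  # camera position
        direction = pixel_world[:3] - origin[:3]
        return origin[:3], direction / np.linalg.norm(direction)

    # Example: 640x480 image, 90-degree vertical fov, camera at the origin
    # looking down -z (identity camera-to-world matrix).
    o, d = primary_ray(320, 240, 640, 480, 90.0, np.eye(4))
    # o = (0, 0, 0); d is approximately (0, 0, -1) for the centre pixel.

pixel_world is exactly the pixel's world-space position on the image plane that the question asks for; the ray direction is then just pixel_world minus the camera position.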

Emulating a perspective rectangle in 2D

So, I'm currently developing a puzzle game of sorts, and I came upon something I'm not sure how to approach.
As you can see from the screenshot below, the text on the sides next to the main square is distorted along the diagonal of the quadrilateral. This is because it is not a screenshot of a 3D environment, but of a 2D environment in which the squares have been stretched so that they look 3D.
I have tried using a 3D perspective and changing depths, and while that solves the distorted sides, I was wondering whether it's possible to fix the issue without going 3D, mainly because the current mesh-transformation scheme took a while to get right, and converting it to something that works in 3D space is extra effort that might be avoidable.
I have a feeling this is unavoidable, but I'm curious whether anyone knows a solution. I'm currently using OpenGL ES 1.
Probably not the answer you wanted, but I'd go with the 3D transformation, because it will not only spare you this distortion but also simplify many other things down the road and give you opportunities for nice effects.
What you are lacking in this scene is "perspective-correct interpolation", which is slightly non-linear and is done automatically when you provide coordinates with depth information.
It may be possible to emulate it another way (though your options are limited, since you do not have shaders available), but any such emulation will likely be less efficient than using the dedicated functionality of your GPU. I recommend switching to 3D coordinates.
Actually, I just found the answer. It turns out there's a Q texture coordinate you can use to control trapezoidal texture distortion:
texture mapping a trapezoid with a square texture in OpenGL
http://www.xyzw.us/~cass/qcoord/
http://hacksoflife.blogspot.com.au/2008/08/perspective-correct-texturing-in-opengl.html
It looks like it won't be as correct as doing it in 3D, but I suppose it will be easier for my use right now.
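To make that concrete, here is a small sketch following my reading of the pages linked above (the helper is a toy of mine, not from those pages). For a perspective view of a rectangle, the on-screen width of an edge is proportional to 1/w_clip, so using each parallel edge's width as the vertex's q value reproduces perspective-correct interpolation when the hardware divides by the interpolated q:

    def trapezoid_texcoords(w_bottom, w_top):
        """4-component texture coordinates (s*q, t*q, 0, q) for the corners
        bottom-left, bottom-right, top-right, top-left of a square texture
        mapped onto a trapezoid with parallel edge widths w_bottom, w_top."""
        corners = [(0.0, 0.0, w_bottom), (1.0, 0.0, w_bottom),
                   (1.0, 1.0, w_top),    (0.0, 1.0, w_top)]
        return [(s * q, t * q, 0.0, q) for s, t, q in corners]

    # Example: bottom edge twice as wide as the top edge.
    for st in trapezoid_texcoords(2.0, 1.0):
        print(st)   # (0,0,0,2), (2,0,0,2), (1,1,0,1), (0,1,0,1)

In OpenGL ES 1.x you would submit these via a texture-coordinate array with four components per vertex (glTexCoordPointer with size 4) in place of the usual 2-component coordinates.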