Creating a spherical CRS in Pyproj with "R_A"

The Proj documentation states that the R_A parameter indicates "A sphere with the same surface area as the ellipsoid." (source)
So I tried this, believing I would get a spherical CRS with radius set to the WGS84 authalic radius:
import pyproj
crs = pyproj.CRS(proj='latlon', R_A='WGS84')
But I was disappointed to find that this returned a non-spherical ellipsoid with f not equal to 0. I also tried explicitly specifying sphere=True, but got the same result.
What am I misunderstanding about the docs with regard to R_A? And is there a straightforward way to create a spherical CRS that uses this specific radius?
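As a hedged workaround (my own sketch, not an official pyproj recipe): if I read the PROJ docs correctly, +R_A is a flag that takes no value, so R_A='WGS84' may simply be ignored. You can instead compute the WGS84 authalic radius yourself and pass it with the R parameter:

import math
import pyproj

# WGS84 defining constants
a = 6378137.0                      # semi-major axis (m)
f = 1 / 298.257223563              # flattening
b = a * (1 - f)                    # semi-minor axis
e = math.sqrt(1 - (b / a) ** 2)    # first eccentricity

# Authalic radius: radius of the sphere with the same surface area as the ellipsoid
R_A = math.sqrt(a**2 / 2 + (b**2 / (2 * e)) * math.atanh(e))  # ~6371007.181 m

crs = pyproj.CRS(proj='longlat', R=R_A)   # +R= defines a sphere in PROJ
print(crs.ellipsoid.semi_major_metre)     # should equal R_A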

Related

Is it possible to test whether a given mesh is a hollow cylinder (like a pipe) or not?

I don't know much about geometric algorithms, but I would like to know whether, given a mesh, there is an algorithm that outputs a boolean: true when the mesh is actually a hollow cylinder (like a straight pipe), and false otherwise. Any reference to such an algorithm, if it exists, would be helpful.
EDIT: I am relatively sure the body has a surface mesh; basically, only the surfaces of the object are meshed, and the meshing uses triangles.
A key problem is to find the axis of the cylinder. If it is known, you are done. If it is unknown but the cylinder is known to be without holes and delimited by two bases perpendicular to the axis, the main axis of inertia can be used. Otherwise, it can be constructed from the coordinates of five points ("A Cylinder of Revolution on Five Points", http://www.cim.mcgill.ca/~paul/12ICGG6Aw.pdf).
Once you know the axis, it suffices to check that the distance of every point to the axis is constant, as sketched below. If the mesh describes a solid (i.e. a surface with wall thickness), make sure to pick the points on the outer surface, for example using the orientation of the normals. There will be two distances, and the lateral faces must be ignored.
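Here is a minimal NumPy sketch of that distance check, assuming the axis is already known and the end faces have been filtered out; the function name and tolerance are placeholders:

import numpy as np

def is_hollow_cylinder(points, axis_point, axis_dir, rel_tol=1e-3):
    # points: (N, 3) array of mesh vertices (end faces already removed)
    d = np.asarray(axis_dir, dtype=float)
    d /= np.linalg.norm(d)                      # unit axis direction
    v = np.asarray(points, dtype=float) - np.asarray(axis_point, dtype=float)
    # distance to the axis = length of the component of v orthogonal to d
    radial = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
    # a hollow cylinder should show at most two constant radii (outer/inner wall)
    r_min, r_max = radial.min(), radial.max()
    if (r_max - r_min) / r_max < rel_tol:       # single wall surface
        return True
    mid = 0.5 * (r_min + r_max)
    outer, inner = radial[radial >= mid], radial[radial < mid]
    return (np.ptp(outer) / outer.mean() < rel_tol and
            np.ptp(inner) / inner.mean() < rel_tol)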

Live camera to shape distance calculation based on computer vision

The goal is to detect the walls live and export the distance to each wall. The setup: a closed room with four walls, and one unique, ideal shape on each wall (triangle, square, ...). A robot with a camera roams inside the walls and uses computer vision. The robot should detect the shape and export the distance between the camera and the wall (or that shape).
I have implemented this with OpenCV: shape detection via cv2.approxPolyDP, and distance calculation via perimeter measurement and edge counting, then conversion of pixel length to real distance.
It works perfectly at a 90-degree viewing angle, but is not effective at other angles.
Is there a better way of doing this?
Thanks
for cnt in contours[1:]:
    # start from contours[1]: in practice the whole frame is often detected as a contour
    area = cv2.contourArea(cnt)
    # area of the detected contour
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    # approximates the connected pixel contour with a polygon (the detected shape)
    x = approx.ravel()[0]
    y = approx.ravel()[1]
    # first vertex of the polygon, used to place the shape-type label text
    perimeter = cv2.arcLength(cnt, True)
    # perimeter of the contour
At other angles you see a perspective view of the shapes.
You must use geometric transformations to neutralize the perspective effect (using a known-shape object or the known angle of the camera).
Also note that using rectified images is highly recommended; see camera calibration.
Edit:
Let's assume you have a square on the wall. When the camera captures an image from a non-90-degree (not straight-on) view of the object, the square is not aligned and looks out of shape, and this causes measurement error.
But you can use cv2.getPerspectiveTransform(). The function calculates the 3x3 matrix M of a perspective transform.
After that, use warped = cv2.warpPerspective(img, M, (w, h)) to apply the perspective transformation to the image. Now the square (in the warped image) looks like a 90-degree straight-on view, and your current code works well on the output (warped) image. A hedged sketch follows.
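For instance (a minimal sketch; rectify_quad and the fixed output size are my own choices, and the corner ordering is assumed):

import cv2
import numpy as np

def rectify_quad(img, corners, side=200):
    # corners: 4 detected corner points, ordered top-left, top-right,
    # bottom-right, bottom-left (e.g. reshaped output of cv2.approxPolyDP)
    src = np.asarray(corners, dtype=np.float32).reshape(4, 2)
    dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    M = cv2.getPerspectiveTransform(src, dst)         # 3x3 perspective matrix
    return cv2.warpPerspective(img, M, (side, side))  # straight-on view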
Excuse my poor explanation; maybe these blog posts can help you:
4 Point OpenCV getPerspective Transform Example
Find distance from camera to object/marker using Python and OpenCV

What is the reference point for measuring angles in OpenCV?

I'm trying to infer an object's direction of movement using dense optical flow in OpenCV. I'm using calcOpticalFlowFarneback() to get flow coordinates and cartToPolar() to acquire vector angles which would indicate direction.
To interpret the results I need to know the reference point for measuring the angle. I found a blog post indicating that the range of angles is 360°, which tells me the angle measurement follows the unit circle, but I couldn't make out much more than that.
The documentation for cartToPolar() doesn't cover this and my attempts at testing it have failed.
It seems that the angle produced by cartToPolar() is referenced to the unit circle rotated clockwise by 90° and centered on the image coordinate origin in the top-left corner.
I came to this conclusion by using the dense optical flow example provided by OpenCV. I replaced the line hsv[...,0] = ang*180/np.pi/2 with hsv[...,0] = ang*180/np.pi to get correct angle conversion from radians. Then I tested a video with people moving from top right to bottom left and vice versa. I sampled the dominant color with GIMP and got RGB values which I converted to HSV values. Hue value corresponds to the angle in degrees.
People moving from top right to bottom left produced an angle of about 300° and people moving the other way round produced an angle of about 120°. This hinted at the way the unit circle is positioned.
Looking at the code, fastAtan32f is used to compute the angles, and it seems to be an atan2 implementation.
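One quick way to confirm this convention (my own probe, not from the linked code) is to pass known axis-aligned vectors to cartToPolar():

import cv2
import numpy as np

# Probe cartToPolar() with unit vectors along each image axis.
# In image coordinates, x points right and y points down.
x = np.float32([1, 0, -1, 0])
y = np.float32([0, 1, 0, -1])
mag, ang = cv2.cartToPolar(x, y, angleInDegrees=True)
print(ang.ravel())  # expect approximately [0, 90, 180, 270]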

Phaser Points from Polar Coordinates

I want to update a sprite's P2 body force to a constant magnitude in a given direction. With polar coordinates that's easy: I just set the magnitude and direction to what I want. With Phaser points, though, the only function for setting a point's coordinates directly (Phaser.Point#set) seems to support only Cartesian coordinates.
Is there an easy way to set a Phaser point to a set of polar coordinates, without having to convert from polar to cartesian coordinates myself?
Given the unfortunate lack of a constructor for this task, use a combination of setMagnitude() and rotate(). As an aside, I recommend writing a utility method to do this for you, so you only need a single simple call each time you want to instantiate a Point with polar coordinates.
function pointFromPolar(r, t, degrees) {
    return new Phaser.Point(1, 0).setMagnitude(r).rotate(0, 0, t, degrees);
}
r is the magnitude and t is the angle in radians; if degrees is true, t is interpreted as degrees instead.

Colors: CIE XYZ model - Chromaticity graph

I want to draw a section graph of the CIE XYZ color model: the xy chromaticity diagram (the familiar horseshoe shape).
Do you have any idea how to do it?
Very briefly...
You can plot the spectral line (the horseshoe) by plotting the xy chromaticity data for the standard observer (I have XYZ data rather than xy). Then:
1. Find the polygon you need to fill by applying a convex hull algorithm to the points.
2. Make a list of xy values you want to paint within the polygon.
3. Find the z value for a fixed luminance by z = 1 - x - y.
4. Convert to RGB. You will need a function called something like XYZtoRGB (there is a Python module, or use the transform on Wikipedia). You may want to increase the luminance first by multiplying all the numbers by a constant.
5. Set the pixels at the xy locations to the RGB values.
6. Plot along with the convex hull and/or the spectral line you calculated.
I have the data for the standard (2-degree, I think) observer, but I can't find a link; you will need to divide by X + Y + Z to convert from XYZ to xyz. Send me a message if you want me to send the data to you; there is too much to post here.
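As an illustration, a tiny sketch of that conversion (the function name is mine):

import numpy as np

def xyz_to_xy(XYZ):
    # XYZ: (..., 3) array of tristimulus values
    XYZ = np.asarray(XYZ, dtype=float)
    s = XYZ.sum(axis=-1, keepdims=True)   # X + Y + Z
    return XYZ[..., :2] / s               # x = X/s, y = Y/s (and z = 1 - x - y)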
The colour Python module has a plotting submodule where this kind of plot is one of the provided plots. See the documentation for plot_chromaticity_diagram_CIE1931 and plot_sds_in_chromaticity_diagram_CIE1931.
It uses Matplotlib under the hood.
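For example (a short sketch assuming the colour-science package is installed):

from colour.plotting import plot_chromaticity_diagram_CIE1931

# Renders the CIE 1931 xy chromaticity diagram (spectral locus plus
# coloured fill) using Matplotlib under the hood.
plot_chromaticity_diagram_CIE1931()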
