How would one define a type for dimensions?
Can you define a type in terms of another type (e.g. an inch is 72 PostScript points)?
Would it even make sense to make a new type for a dimension unit?
I've seen libraries for other kinds of units, but the dimensions I'd be interested in are:
scaled point (smallest, maybe Int?), point (65536 scaled points), pica (12 points), etc.
I think this is where phantom types can help. The dimensional package is a good place to start to understand them. The code is literate Haskell and very readable so I'd recommend reading through that.
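To make the idea concrete, here is a minimal phantom-type sketch (not the dimensional package's API; all names here are made up). The unit is a type-level tag with no runtime cost, so mixing units without an explicit conversion is a compile-time type error:

```haskell
-- Empty data declarations serve as type-level unit tags (Haskell 2010).
data ScaledPoint  -- sp, the smallest unit
data Point        -- pt = 65536 sp
data Pica         -- pc = 12 pt

-- The phantom parameter 'unit' never appears on the right-hand side.
newtype Length unit = Length Integer
  deriving (Eq, Show)

-- Explicit conversions are the only way to change the tag:
ptToSp :: Length Point -> Length ScaledPoint
ptToSp (Length n) = Length (n * 65536)

pcToPt :: Length Pica -> Length Point
pcToPt (Length n) = Length (n * 12)

-- Addition only type-checks when both operands carry the same unit:
add :: Length u -> Length u -> Length u
add (Length a) (Length b) = Length (a + b)
```

With this, `add (Length 1 :: Length Pica) (Length 1 :: Length Point)` is rejected by the compiler, which is exactly the safety you'd want from a dimension type.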
Suppose we're given some sort of graph where the feasible region of our optimization problem is given. For example: here is an image
How would I go about constructing these constraints in an integer optimization problem? Anyone got any tips? Thanks!
Mate, I agree with the others that you should be a little more specific than that paint-ish picture ;). In particular, you specify neither an objective (or optimization direction) nor any context for why this graph should involve integer variables, apart from the existence of disjoint feasible sets, which can be modeled with MIP techniques. It seems your real problem is formalizing what you have conceptualized. However, in case you are just interested in modelling disjunctive regions, you should look into disjunctive programming techniques such as "big-M" (note: big-M reformulations can be numerically problematic). Aim for a convex-hull reformulation if you can obtain one fairly easily.
Back to your picture, it is quite clear that you have a problem in two real dimensions (let's say in R^2), where the constraints bounding the feasible set are linear (the lines making up the feasible polygons).
So you know that you have two dimensions and need two real continuous variables, say x[1] and x[2], to formulate each of your linear constraints (a[i,1]*x[1]+a[i,2]*x[2]<=rhs[i] for each index i corresponding to a line in your graph). Additionally, your variables seem to be constrained to the first orthant, so x[1]>=0 and x[2]>=0 should hold. Now, to add disjunctions you want some constraints to hold only when a certain condition is true. For that, you can add two binary decision variables, say y[1] and y[2], plus the constraint y[1]+y[2]=1, stating that exactly one set of constraints is active at a time. You should be able to implement this with big-M by reformulating the constraints as follows:
If your line bounds things from above:
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[1]) if i corresponds to the one polygon,
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[2]) if i corresponds to the other polygon,
and if your line bounds things from below:
-M*(1-y[1])<=-a[i,1]*x[1]-a[i,2]*x[2]+rhs[i] if i corresponds to the one polygon,
-M*(1-y[2])<=-a[i,1]*x[1]-a[i,2]*x[2]+rhs[i] if i corresponds to the other polygon.
It is important that M is sufficiently large, but not so large as to cause numerical issues.
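The switching behaviour of a big-M constraint is easy to sanity-check numerically. The sketch below uses one made-up upper-bounding constraint, 2*x1 + 3*x2 <= 12, and M = 1e4; these numbers are illustrative only, not taken from the question:

```haskell
-- A made-up big-M value; large enough to deactivate the constraint,
-- small enough to avoid numerical trouble in a real solver.
bigM :: Double
bigM = 1e4

-- Reformulated constraint: a1*x1 + a2*x2 - rhs <= M*(1 - y).
-- With y = 1 it reduces to the original constraint; with y = 0 the
-- right-hand side is huge and the constraint is effectively switched off.
holds :: Double -> Double -> Int -> Bool
holds x1 x2 y = 2*x1 + 3*x2 - 12 <= bigM * (1 - fromIntegral y)
```

For example, the point (10, 10) violates the original constraint, so `holds 10 10 1` is False, yet `holds 10 10 0` is True: setting y = 0 makes the constraint vacuous, which is precisely what lets y[1]+y[2]=1 select one polygon at a time.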
That being said, I am by no means an expert on these disjunctive programming techniques, so feel free to chime in, add corrections or make things clearer.
Also, a more elaborate question typically yields more elaborate and satisfying answers ;) Had you gone to the effort of writing up a small, concrete example problem, you would likely have gotten a full formulation of your problem, or even an executable piece of code, in no time.
In PostGIS there are two very similar functions. One is st_isValid, the other is st_isSimple. I'd like to understand the difference between the two for polygons. For st_isValid we have:
Some of the rules of polygon validity feel obvious, and others feel arbitrary (and in fact, are arbitrary).
Polygon rings must close.
Rings that define holes should be inside rings that define exterior boundaries.
Rings may not self-intersect (they may neither touch nor cross one another).
Rings may not touch other rings, except at a point.
For the st_isSimple we've got:
Returns true if this Geometry has no anomalous geometric points, such as self intersection or self tangency. For more information on the OGC's definition of geometry simplicity and validity, refer to "Ensuring OpenGIS compliancy of geometries"
Does it mean that any valid polygon is automatically simple?
Both functions check compliance with similar OGC geometry definitions, but they are defined for different geometry types (by dimension):
By OGC definition
a [Multi]LineString can (should) be simple
a [Multi]Polygon can (should) be valid
This implies that
a simple [Multi]LineString is always considered valid
a valid [Multi]Polygon is always considered simple (as in, it must have at least one simple closed LineString ring)
thus the answer is yes.
Strictly speaking, using the inherent checks of the OGC defined functionality on the 'wrong' geometry type is useless.
PostGIS, however, liberally extends the functionality of ST_IsValid to use the correct checks for all geometry types.
I am trying to understand what a category is in Haskell, using vector spaces as an example.
I drew a picture; can anyone review it for me?
I'm not sure whether it is a good/correct picture of a category.
One thing that's perhaps not quite accurate in your picture: it's important that the objects are vector spaces (not vectors!), whereas the morphisms are matrices (I prefer to say linear mappings; these are single entities at any rate, not “spaces of matrices”). And while matrices are themselves elements of vector spaces, a matrix is not a vector space but just a single vector. So the “objects” bubbles should be changed to contain not matrices but sets of matrices. And, getting more into details: a linear mapping between spaces of 3×3 matrices would actually be a 9×9 matrix, not another 3×3 matrix (though it makes sense to see it as a (3×3)×(3×3) tensor).
Apart from that, great! I think the category of vector spaces is a very good entry point to category theory.
This does not, however, directly relate to “what a category is in Haskell”, except insofar as categories in Haskell also obey the category laws. If you just wanted to understand those laws by example, to grok categories in general and ultimately also use them in Haskell – fair.
But if you actually want to use particular categories such as Vect_k themselves in Haskell, that's a bit more of a tricky story, because what most people call “categories in Haskell” are actually far too weak: they require that all Haskell types can be objects. But most types can't sensibly be seen as vector spaces, so you need a more nuanced notion of category. This is provided by the subhask library or my own constrained-categories. Both have been used to implement the category of vector spaces:
http://hackage.haskell.org/package/subhask-0.1.1.0/docs/SubHask-Algebra-Vector.html
http://hackage.haskell.org/package/linearmap-category-0.3.4.0/docs/Math-LinearMap-Category.html
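For intuition, the category laws (identity and associativity) can be checked concretely, taking matrices as morphisms and matrix multiplication as composition. This toy sketch deliberately sidesteps the typed-objects issue discussed above, and is not the API of either library linked:

```haskell
import Data.List (transpose)

-- A morphism in the (untyped) toy category of finite-dimensional spaces.
type Matrix = [[Double]]

-- Composition of linear maps = matrix product (apply f first, then g).
compose :: Matrix -> Matrix -> Matrix
compose g f = [ [ sum (zipWith (*) row col) | col <- transpose f ] | row <- g ]

-- The identity morphism on an n-dimensional space.
identity :: Int -> Matrix
identity n = [ [ if i == j then 1 else 0 | j <- [1..n] ] | i <- [1..n] ]
```

The category laws then read: `compose (identity n) m == m`, `compose m (identity n) == m`, and `compose (compose a b) c == compose a (compose b c)`, which hold because matrix multiplication is associative with the identity matrix as unit.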
For background, I'm working on a CSG (Constructive Solid Geometry) library.
Given polygon meshes that enclose regions of space, this library will allow those meshes to be treated as sets of the points they enclose, and will allow the binary operations union, intersection and difference to be computed on pairs of meshes.
The library will also support set negation.
If required, I could also define an elem-like function with type Point -> Mesh -> Bool; it would not be possible to define an add function, as there is no meaningful way to add a single point to a mesh.
Does a typeclass exist for types that support these operations?
And if not, what would a good implementation of a suitable typeclass look like?
Many people have tried making unifying typeclasses for container types, and none of them has ever caught on, each for its own reasons. I recommend not bothering; the current standard idiom is to define the operations for your new type with some standardish names, not worrying about name clashes, giving each operation whatever type makes the most sense for your container. Expect users to import your module qualified and aliased to avoid name clashes (which also aids readers as a nice side effect).
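Following that advice, here is a sketch of what such a module's surface could look like. `Mesh` is a stand-in here, modelled as its characteristic function (the set of points it encloses) purely to make the point-set semantics concrete; a real CSG library would store actual mesh data, and all names are just "standardish" suggestions:

```haskell
-- Intended usage:  import qualified Mesh
type Point = (Double, Double, Double)

-- Stand-in representation: a mesh as the set of points it encloses.
newtype Mesh = Mesh (Point -> Bool)

-- A sample solid: the closed ball of radius r about the origin.
ball :: Double -> Mesh
ball r = Mesh (\(x, y, z) -> x*x + y*y + z*z <= r*r)

-- The binary operations from the question, with clash-prone but
-- conventional names (users import qualified).
union, intersection, difference :: Mesh -> Mesh -> Mesh
union        (Mesh f) (Mesh g) = Mesh (\p -> f p || g p)
intersection (Mesh f) (Mesh g) = Mesh (\p -> f p && g p)
difference   (Mesh f) (Mesh g) = Mesh (\p -> f p && not (g p))

-- Set negation.
complement :: Mesh -> Mesh
complement (Mesh f) = Mesh (not . f)

-- The elem-like membership test, Point -> Mesh -> Bool.
member :: Point -> Mesh -> Bool
member p (Mesh f) = f p
```

Note that, as the question anticipates, no `insert :: Point -> Mesh -> Mesh` appears: the representation makes the asymmetry obvious, since a single point has no volume to add.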
I am trying to make basic constructions like "get the intersection of a line and a circle", "connect two points" and "create a circle" with CGAL. However, the choice of kernel seems to be a problem. When I use the exact 2D circular kernel, I get Circular_arc_point_2 from intersections, and then I can't use this data type to create lines or circles; converting to Point_2 seems to introduce error and then stores the approximate value as an exact number. This problem seems to be independent of the choice of kernel.
What is a proper way to do these constructions? Exact or approximate numbers are both fine, as long as the data type stays consistent across the constructions.
In the worst case, if this is unresolvable, is there any other free library with this functionality?
The predefined Exact_circular_kernel_2 only uses rational numbers as its field type. To cover every constructible point, you should define a circular kernel that uses a FieldWithSqrt. With existing types and traits this is simple:
using L = CGAL::Exact_predicates_exact_constructions_kernel_with_sqrt;
using A = CGAL::Algebraic_kernel_for_circles_2_2<L::FT>;
using K = CGAL::Circular_kernel_2<L, A>;
Then you can convert a Circular_arc_point_2 p to Point_2 with the exact coordinates:
K::Point_2 q(p.x(), p.y());
Circular_arc_point_2 is a point whose coordinates are algebraic numbers of degree 2 (the only way to represent exactly the intersection of two circles). You can convert the point into a regular floating-point Point_2 using, for example, Point_2(to_double(cp.x()), to_double(cp.y())), but then you'll be losing the exactness.