What is a basic explanation of Finite Fields? - finite-field

I have zero background and I have never seen these symbols before. Can someone explain what is going on here?

In simple terms: a finite field is a set with finitely many elements, on which addition, subtraction, multiplication and division are defined and satisfy the field axioms. The basic example of a finite field is the integers modulo a prime p.
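For instance, here is a minimal sketch in Python of arithmetic in the field of integers modulo the prime 7 (assuming Python 3.8+ for the three-argument pow with a negative exponent):

p = 7  # a prime modulus: the integers mod 7 form the finite field GF(7)
a, b = 3, 5
print((a + b) % p)  # addition: 1
print((a - b) % p)  # subtraction: 5
print((a * b) % p)  # multiplication: 1
# Division is multiplication by the modular inverse, which exists
# for every nonzero element precisely because p is prime.
inv_b = pow(b, -1, p)   # b^(-1) mod p
print((a * inv_b) % p)  # a / b in GF(7): 2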
If you are looking for more information regarding programming with finite fields, see this source: https://jeremykun.com/2014/03/13/programming-with-finite-fields/
It might seem a bit wild at first, but if you read more about it, it will start to make sense.
I'd also recommend reading up on finite field arithmetic.
I hope my answer was useful.
- yosh

Related

Unclear why functions from Data.Ratio are not exposed and how to work around

I am implementing an algorithm using Data.Ratio (convergents of continued fractions).
However, I encounter two obstacles:
The algorithm starts with the fraction 1%0 - but this throws a zero denominator exception.
I would like to pattern-match on the constructor a :% b.
I was exploring on Hackage, and in particular the source seems to use exactly these features (e.g. defining infinity = 1 :% 0, or pattern matching for numerator).
As a beginner, I am also confused about where it is determined that (%), numerator and such are exposed to me, but not infinity and (:%).
I have already made a dirty workaround using a tuple of integers, but it seems silly to reinvent the wheel for something so trivial.
It would also be nice to learn how to read from the source which functions are exposed.
They aren't exported precisely to prevent people from doing stuff like this. See, the type
data Ratio a = a :% a
contains too many values. In particular, e.g. 2/6 and 3/9 are actually the same number in ℚ, both represented by 1:%3. Thus, 2:%6 is in fact an illegal value, and so, sure enough, is 1:%0. Or it might be legal, but all functions know how to treat such values so that 2:%6 is for all observable purposes equal to 1:%3 – I don't in fact know which of these options GHC chooses, but at any rate it's an implementation detail that could change in future releases without notice.
If the library authors themselves use such values for e.g. optimisation tricks that's one thing – they have after all full control over any algorithmic details and any undefined behaviour that could arise. But if users got to construct such values, it would result in brittle code.
So – if you find yourself starting an algorithm with 1/0, then you should indeed not use Ratio at all there but simply store numerator and denominator in a plain tuple, which has no such issues, and only make the final result a Ratio with %.
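To illustrate that workaround concretely, here is a sketch of computing continued-fraction convergents with plain integer pairs; it's written in Python since the idea is language-agnostic, and in Haskell the pair would be a tuple with the final conversion done by %. Fraction plays the role of Ratio here:

from fractions import Fraction

def convergents(coeffs):
    # Standard recurrence with seeds h/k = 1/0 and 0/1. Working with
    # plain integer pairs sidesteps the "1/0 is an illegal Ratio"
    # problem entirely; only the results become true rationals.
    h_prev, k_prev = 1, 0  # the "infinite" seed, harmless as a pair
    h_prev2, k_prev2 = 0, 1
    out = []
    for a in coeffs:
        h, k = a * h_prev + h_prev2, a * k_prev + k_prev2
        out.append(Fraction(h, k))
        h_prev2, k_prev2 = h_prev, k_prev
        h_prev, k_prev = h, k
    return out

print(convergents([3, 7, 15, 1]))  # for pi: 3, 22/7, 333/106, 355/113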

How would I construct an integer optimization model corresponding to a graph

Suppose we're given some sort of graph that depicts the feasible region of our optimization problem. For example: here is an image
How would I go about constructing these constraints in an integer optimization problem? Anyone got any tips? Thanks!
Mate, I agree with the others that you should be a little more specific than that paint-ish picture ;). In particular, you are neither specifying an objective (or objective direction) nor giving any context for what about this graph should relate to integer variables, except for the existence of disjunctive feasible sets, which may be modeled with MIP techniques. It seems like your problem is the formalization of what you have conceptualized. However, in case you are just being lazy and are simply interested in modelling disjunctive regions, you should look into disjunctive programming techniques such as "big-M" (note: big-M reformulations can be numerically problematic). You should aim for a convex-hull reformulation if you can attain one (fairly easily).
Back to your picture, it is quite clear that you have a problem in two real dimensions (let's say in R^2), where the constraints bounding the feasible set are linear (the lines making up the feasible polygons).
So you know that you have two dimensions and need two continuous real variables, say x[1] and x[2], to formulate each of your linear constraints (a[i,1]*x[1]+a[i,2]*x[2]<=rhs[i] for each index i corresponding to one of the lines in your graph). Additionally, your variables seem to be constrained to the first orthant, so x[1]>=0 and x[2]>=0 should hold. Now, to add disjunctions you want some constraints to hold only when a certain condition is true. Therefore, you can add two binary decision variables, say y[1] and y[2], plus the constraint y[1]+y[2]=1, to state that only one set of constraints can be active at a time. You should be able to implement this with the help of big-M by reformulating the constraints as follows:
If you bound things from above with your line:
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[1]) if i corresponds to the one polygon,
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[2]) if i corresponds to the other polygon,
and if your line bounds things from below:
-M*(1-y[1])<=-a[i,1]*x[1]-a[i,2]*x[2]+rhs[i] if i corresponds to the one polygon,
-M*(1-y[2])<=-a[i,1]*x[1]-a[i,2]*x[2]+rhs[i] if i corresponds to the other polygon.
It is important that M is sufficiently large, but not so large that it causes numerical issues.
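As a minimal sketch of this construction (the two box-shaped regions, the objective, and M = 10 are all made up for illustration, and PuLP is just one example of a modelling library):

from pulp import LpProblem, LpMaximize, LpVariable, value

M = 10  # big-M constant: large enough to deactivate constraints
prob = LpProblem("disjunctive_regions", LpMaximize)
x1 = LpVariable("x1", lowBound=0)
x2 = LpVariable("x2", lowBound=0)
y1 = LpVariable("y1", cat="Binary")
y2 = LpVariable("y2", cat="Binary")

prob += x1 + x2  # made-up objective: maximize x1 + x2

# Region A (the box x1 <= 2, x2 <= 2), active when y1 = 1:
prob += x1 - 2 <= M * (1 - y1)
prob += x2 - 2 <= M * (1 - y1)
# Region B (the box 4 <= x1 <= 6, 1 <= x2 <= 3), active when y2 = 1:
prob += x1 - 6 <= M * (1 - y2)
prob += 4 - x1 <= M * (1 - y2)
prob += x2 - 3 <= M * (1 - y2)
prob += 1 - x2 <= M * (1 - y2)

prob += y1 + y2 == 1  # exactly one region is active
prob.solve()
print(value(x1), value(x2))  # expected: 6.0 3.0 (region B wins)

When a binary variable is 0, the corresponding right-hand sides become M, which is large enough to make those constraints non-binding; that is exactly the deactivation mechanism described above.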
That being said, I am by no means an expert on these disjunctive programming techniques, so feel free to chime in, add corrections or make things clearer.
Also, a more elaborate question typically yields more elaborate and satisfying answers ;) If you had gone to the effort of making up a true small example problem you likely would have gotten a full formulation of your problem or even an executable piece of code in no time.

O(n^2) (or O(n^2 lg n)?) algorithm to calculate the longest common subsequence (LCS) of two 'ring' strings

This is a problem that appeared in today's Pacific NW Region Programming Contest, during which no one solved it. It is problem B, and the complete problem set is here: http://www.acmicpc-pacnw.org/icpc-statements-2011.zip. There is a well-known O(n^2) algorithm for the LCS of two strings using dynamic programming. But when these strings are extended to rings, I have no idea...
P.S. Note that it is subsequence rather than substring, so the elements do not need to be adjacent to each other.
P.S. It might not be O(n^2) but O(n^2 lg n) or something else that can give the result within 5 seconds on a common computer.
Searching the web, this appears to be covered by section 4.3 of the paper "Incremental String Comparison", by Landau, Myers, and Schmidt at cost O(ne) < O(n^2), where I think e is the edit distance. This paper also references a previous paper by Maes giving cost O(mn log m) with more general edit costs - "On a cyclic string to string correcting problem". Expecting a contestant to reproduce either of these papers seems pretty demanding to me - but as far as I can see the question does ask for the longest common subsequence on cyclic strings.
You can double the first and second string and then use the ordinary method, and later wrap the positions around.
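A naive baseline built on that idea, sketched in Python (assuming the usual contest definition, under which it suffices to rotate just one of the strings): every rotation of a is a length-n substring of a+a, giving O(n) rotations times O(n^2) per LCS, i.e. O(n^3) overall.

def lcs_len(a, b):
    # Textbook O(len(a) * len(b)) dynamic program.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def clcs_naive(a, b):
    # Every rotation of a is a length-n window of a + a.
    doubled = a + a
    n = len(a)
    return max(lcs_len(doubled[i:i + n], b) for i in range(n))

print(clcs_naive("abcd", "cdab"))  # 4: rotating "abcd" by 2 gives "cdab"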
It is a good idea to "double" the strings and apply the standard dynamic programming algorithm. The problem is that, to get the optimal cyclic LCS, one then has to start the algorithm from multiple initial conditions. Just one initial condition (e.g. setting all Lij variables to 0 at the boundaries) will not do in general. In practice it turns out that the number of initial states needed is O(N) (they span a diagonal), so one gets back to an O(N^3) algorithm.
However, the approach does have some virtue, as it can be used to design efficient O(N^2) heuristics (not exact but near-exact) for CLCS.
I do not know if a true O(N^2) exist, and would be very interested if someone knows one.
The CLCS problem has quite interesting "periodicity" properties: the length of the CLCS of p-times repeated strings is p times the CLCS of the strings. This can be proved by adopting a geometric view of the problem.
Also, the problem has some additional nice features: it can be shown that if Lc(N) denotes the averaged CLCS length of two random strings of length N, then |Lc(N)-CN| is O(sqrt(N)), where C is the Chvatal-Sankoff constant. For the averaged length L(N) of the standard LCS, the only rate result I know of says that |L(N)-CN| is O(sqrt(N log N)). There could be a nice way to compare Lc(N) with L(N), but I don't know it.
Another question: it is clear that the CLCS length is not superadditive, contrary to the LCS length. By this I mean that it is not true that CLCS(X1X2,Y1Y2) is always greater than CLCS(X1,Y1)+CLCS(X2,Y2) (it is very easy to find counterexamples with a computer).
But it seems possible that the averaged length Lc(N) is superadditive (Lc(N1+N2) greater than Lc(N1)+Lc(N2)), though if there is a proof I don't know it.
One modest interest of this question is that the values Lc(N)/N for the first few values of N would then provide good bounds on the Chvatal-Sankoff constant (much better than L(N)/N).
As a followup to mcdowella's answer, I'd like to point out that the O(n^2 lg n) solution presented in Maes' paper is the intended solution to the contest problem (check http://www.acmicpc-pacnw.org/ProblemSet/2011/solutions.zip). The O(ne) solution in Landau et al's paper does NOT apply to this problem, as that paper is targeted at edit distance, not LCS. In particular, the solution to cyclic edit distance only applies if the edit operations (add, delete, replace) all have unit (1, 1, 1) cost. LCS, on the other hand, is equivalent to edit distances with (add, delete, replace) costs (1, 1, 2). These are not equivalent to each other; for example, consider the input strings "ABC" and "CXY" (for the acyclic case; you can construct cyclic counterexamples similarly). The LCS of the two strings is "C", but the minimum unit-cost edit is to replace each character in turn.
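To make that counterexample concrete, here is a quick check in Python (Wagner-Fischer with a parametric substitution cost; insertions and deletions cost 1):

def edit_distance(a, b, sub_cost):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else sub_cost
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + sub)  # substitute
    return dp[m][n]

# Unit substitution cost: distance 3 (replace each character in turn).
print(edit_distance("ABC", "CXY", 1))  # 3
# Substitution cost 2 is equivalent to LCS:
# distance = len(a) + len(b) - 2 * LCS length = 6 - 2 * 1 = 4.
print(edit_distance("ABC", "CXY", 2))  # 4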
At 110 lines but no complex data structures, Maes' solution falls towards the upper end of what is reasonable to implement in a contest setting. Even if Landau et al's solution could be adapted to handle cyclic LCS, the complexity of the data structure makes it infeasible in a contest setting.
Last but not least, I'd like to point out that an O(n^2) solution DOES exist for CLCS, described here: http://arxiv.org/abs/1208.0396. At 60 lines, with no complex data structures and only 2 arrays, this solution is quite reasonable to implement in a contest setting. Arriving at the solution might be a different matter, though.

A reverse inference engine (find a random X for which foo(X) is true)

I am aware that languages like Prolog allow you to write things like the following:
mortal(X) :- man(X). % All men are mortal
man(socrates). % Socrates is a man
?- mortal(socrates). % Is Socrates mortal?
yes
What I want is something like this, but backwards. Suppose I have this:
mortal(X) :- man(X).
man(socrates).
man(plato).
man(aristotle).
I then ask it to give me a random X for which mortal(X) is true (thus it should give me one of 'socrates', 'plato', or 'aristotle' according to some random seed).
My questions are:
Does this sort of reverse inference have a name?
Are there any languages or libraries that support it?
EDIT
As somebody pointed out below, you can simply ask mortal(X) and it will return all X, from which you can then pick one at random. What if, however, that list were very large, perhaps in the billions? Obviously in that case it wouldn't do to generate every possible result before picking one.
To see how this would be a practical problem, imagine a simple grammar that generated a random sentence of the form "adjective1 noun1 adverb transitive_verb adjective2 noun2". If the lists of adjectives, nouns, verbs, etc. are very large, you can see how the combinatorial explosion is a problem. If each list had 1000 words, you'd have 1000^6 possible sentences.
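For that grammar example, uniform sampling needs no enumeration at all: picking each slot independently is already uniform over all 1000^6 sentences. A sketch with made-up toy word lists:

import random

# Toy lists; the real ones could have ~1000 entries each, giving
# 1000^6 sentences in total, far too many to enumerate.
adjectives = ["red", "happy", "tiny"]
nouns = ["dog", "tree", "river"]
adverbs = ["quickly", "sadly"]
verbs = ["chases", "finds"]

def random_sentence(rng):
    # Sampling each slot independently is uniform over all sentences,
    # with no need to materialize the full list.
    return " ".join([
        rng.choice(adjectives), rng.choice(nouns), rng.choice(adverbs),
        rng.choice(verbs), rng.choice(adjectives), rng.choice(nouns),
    ])

print(random_sentence(random.Random(0)))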
Instead of Prolog's depth-first search, a randomized depth-first search strategy could easily be implemented. All that is required is to randomize the program flow at choice points, so that every time a disjunction is reached, a random branch of the search tree (= Prolog program) is selected instead of the first.
Note, though, that this approach does not guarantee that all solutions will be equally probable. To guarantee that, you need to know in advance how many solutions each branch will generate, in order to weight the randomization accordingly.
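A toy rendering of that idea in Python (the clause database is just a list here, so each branch yields exactly one solution and the uniformity caveat does not bite):

import random

men = ["socrates", "plato", "aristotle"]  # man/1 facts

def solve_mortal(rng):
    # Randomized depth-first search: shuffle the alternatives at the
    # choice point instead of taking them in textual order.
    alternatives = men[:]
    rng.shuffle(alternatives)
    for x in alternatives:
        yield x  # mortal(X) :- man(X), so every man succeeds

rng = random.Random(42)  # fixed seed for reproducibility
print(next(solve_mortal(rng)))  # one random X with mortal(X) true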
I've never used Prolog or anything similar, but judging by what Wikipedia says on the subject, asking
?- mortal(X).
should list everything for which mortal is true. After that, just pick one of the results.
So to answer your questions,
I'd go with "a query with a variable in it"
From what I can tell, Prolog itself should support it quite fine.
I don't think you can calculate the nth solution directly, but you can calculate the first n solutions (with n randomly picked) and take the last one. Of course this would be problematic if n=10^(big_number)...
You could also do something like
mortal(X) :- man(X).
man(X) :- random(1,4,ID), man(ID,X).
man(1,socrates).
man(2,plato).
man(3,aristotle).
but the problem is that if not every man were mortal, for example if only 1 out of 1000000 were, you would have to search a lot. It would be like searching for solutions to an equation by trying random numbers till you find one.
You could develop some sort of heuristic to find a solution close to the number, but that may (negatively) affect the randomness.
I suspect there is no way to do it more efficiently: you either have to calculate the set of solutions and pick one, or keep picking members of the superset of all solutions till you find a solution. But don't take my word for it xd
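That "try random candidates till one qualifies" strategy is just rejection sampling; here is a sketch (the 1-in-1,000,000 predicate is made up to mirror the example):

import random

def find_solution(predicate, sample, rng, max_tries=10_000_000):
    # Cheap per draw, but the expected number of draws is 1/p,
    # where p is the fraction of candidates that qualify.
    for _ in range(max_tries):
        x = sample(rng)
        if predicate(x):
            return x
    return None  # gave up

rng = random.Random(1)
print(find_solution(lambda n: n % 1_000_000 == 0,
                    lambda r: r.randrange(10**9), rng))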

Why do most programming languages only give one answer to square root of 4?

Most programming languages give 2 as the answer to square root of 4. However, there are two answers: 2 and -2. Is there any particular reason, historical or otherwise, why only one answer is usually given?
Because:
In mathematics, √x commonly, unless otherwise specified, refers to the principal (i.e. positive) root of x [http://mathworld.wolfram.com/SquareRoot.html].
Some languages don't have the ability to return more than one value.
Since you can just apply negation, returning both would be redundant.
If the square root method returned two values, then one of those two would practically always be discarded. In addition to wasting memory and complexity on the extra return value, it would be little used. Everyone knows that you can multiply the answer returned by -1 and get the other root.
I expect that only mathematical languages would return multiple values here, perhaps as an array or matrix. But for most general-purpose programming languages, there is negligible gain and non-negligible cost to doing as you suggest.
Some thoughts:
Historically, functions were defined as procedures which returned a single value.
It would have been fiddly (using primitive programming constructs) to define a clean function which returned multiple values like this.
There are always exceptions to the rule:
0 for example only has a single root (0).
You cannot take the square root of a negative number (unless the language supports complex numbers). This could be treated as an exception (like "divide by 0") in languages which don't support imaginary numbers or the complex number system.
It is usually simple to deduce the 2 square roots (simply negate the value returned by the function). This was probably left as an exercise for the caller of the sqrt() function, if their domain required dealing with both the positive (+) and negative (-) roots.
It's easier to return one number than to return two. Most engineering decisions are made in this manner.
There are many functions which return only 1 answer out of 2 or more possibilities. Arc tangent, for example: the arc tangent of 1 is returned as 45 degrees, but it could also be 225 or even 405. As with many things in life and programming, there is a convention we know and can rely on, and that square root functions return the positive value is one of them. It is up to us, the programmers, to keep in mind that there are other solutions and to act on them if needed in code.
By the way this is a common issue in robotics when dealing with kinematics and inverse kinematics equations where there are multiple solutions of links positions corresponding to Cartesian positions.
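A quick illustration of the same convention in Python:

import math

# atan(1) has infinitely many preimages (45 + k*180 degrees);
# the library returns only the principal value.
print(math.degrees(math.atan(1)))  # 45.0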
In mathematics, by convention it's always assumed that you want the positive square root of something unless you explicitly say otherwise. The square root of four really is two. If you want the negative answer, put a negative sign in front. If you want both, put the plus-or-minus sign. Without this convention it would be impossible to write equations; you would never know what the person intended even if they did put a sign in front (because it could be the negative of the negative square root, for example). Also, how exactly would you write any kind of computer code involving mathematics if operators started returning two values? It would break everything.
The unfortunate exception to this convention is when solving for variables. In the following equation:
x^2 = 4
You have no choice but to consider both possible values for x. If you take the square root of both sides, you get x = 2, but now you must put in the plus-or-minus sign to make sure you aren't missing any possible solutions. Also, remember that in this case it's technically x that can be either plus or minus, not the square root of four.
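In code, the plus-or-minus has to be reintroduced by hand; in Python, for instance:

import math

# x^2 = 4 has two real solutions, but sqrt returns only the
# principal root.
c = 4
print([math.sqrt(c), -math.sqrt(c)])  # [2.0, -2.0]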
Because multiple return values are annoying to implement. If you really need the other result, isn't it easy enough to just multiply the result by -1?
Because most programmers only want one answer.
It's easy enough to generate the negative value from the positive value if the caller wants it. For most code the caller only uses the positive value.
However, nowadays it's easy to return two values in many languages. In JavaScript:
var sqrts = function (x) {
  var s = Math.sqrt(x);
  if (s > 0) {
    return [s, -s];
  } else if (s === 0) {
    return [0]; // zero has only one square root
  } else {
    return []; // Math.sqrt of a negative number is NaN: no real roots
  }
};
As long as the caller knows to iterate through the array that comes back, you're gold.
>sqrts(2)
[1.4142135623730951, -1.4142135623730951]
I think because the function is called "sqrt", and if you wanted multiple roots, you would have to call the function "sqrts", which doesn't exist, so you can't do it.
The more serious answer is that you're suggesting a specific instance of a larger issue. Many equations, and inverse functions in particular (including sqrt), have multiple possible solutions, and these are, in general, an issue. With arcsin, for example, should one return an infinite number of answers? See, for example, discussions about branch cuts.
Because it was historically defined{{citation needed}} as the function which gives the side length of a square of known area. And length is positive in that context.
You can always tell what the other root is, so maybe it's not necessary to return both of them.
It's likely because when people use a calculator to figure out a square root, they only want the positive value.
Go one step further and ask why your calculator won't let you take the square root of a negative number. It's possible, using imaginary numbers, but the average user has absolutely zero use for this.

Resources