How can I convert a PCFG to CNF for this grammar? - nlp

Given the following probabilistic context-free grammar -
1. NP -> ADJ N [0.6]
2. NP -> N [0.4]
3. N -> cat [0.2]
4. N -> dog [0.8]
What will be the CNF?

The PCFG converted to CNF is given below:
1. NP -> ADJ N [0.6]
2. NP -> cat [0.08]
3. NP -> dog [0.32]
This is because applying the converted (CNF) rules must assign the same probability to a derivation as applying the original rules.
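Concretely, eliminating the unit production NP -> N folds its probability into the N rules it expands to:
P(NP -> cat) = P(NP -> N) * P(N -> cat) = 0.4 * 0.2 = 0.08
P(NP -> dog) = P(NP -> N) * P(N -> dog) = 0.4 * 0.8 = 0.32
and the NP rules still sum to one: 0.6 + 0.08 + 0.32 = 1.0.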

Be careful! You need to keep the original rules 3 and 4 (N -> cat, N -> dog) with the same probabilities in order for rule 1 (NP -> ADJ N) to remain productive.

Related

How to generate distinct solutions in Prolog for '8 out of 10 cats does countdown' numbers game solver?

I wrote a Prolog program to find all solutions to any '8 out of 10 cats does countdown' numbers-game puzzle. I am happy with the result. However, the solutions are not unique. I tried distinct/1 and reduced/1 from the "solution sequences" library, but they did not produce unique solutions.
The problem is simple: you are given a list of six numbers [n1,n2,n3,n4,n5,n6] and a target number (R). Calculate R from any combination of n1 to n6 using only +, -, *, /. You do not have to use all of the numbers, but you can use each number only once. If two solutions are identical, only one should be generated and the other discarded.
Sometimes there are equivalent results with different arrangement. Such as:
(100+3)*6*75/50+25
(100+3)*75*6/50+25  
Does anyone have any suggestions to eliminate such redundancy?
Each solution is a nested expression of operators and integers, for example +(2,*(4,-(10,5))). Such a solution is an unbalanced binary tree with arithmetic operators at the root and internal nodes and numbers at the leaf nodes. In order to have unique solutions, no two trees should be equivalent.
The Code:
:- use_module(library(lists)).
:- use_module(library(solution_sequences)).

% Find up to 10 solutions and print them.
solve(L,R,OP) :-
    findnsols(10,OP,solve_(L,R,OP),S),
    print_solutions(S).

solve_(L,R,OP) :-
    distinct(find_op(L,OP)),
    R =:= OP.

% find_op(+Numbers, -Expr): build an arithmetic expression from the numbers.
find_op(L,OP) :-
    select(N1,L,Ln),
    select(N2,Ln,[]),
    N1 > N2,
    member(OP,[+(N1,N2), -(N1,N2), *(N1,N2), /(N1,N2), N1, N2]).
find_op(L,OP) :-
    select(N,L,Ln),
    find_op(Ln,OP_),
    OP_ > N,
    member(OP,[+(OP_,N), -(OP_,N), *(OP_,N), /(OP_,N), OP_]).

print_solutions([]).
print_solutions([A|B]) :-
    format('~w~n',A),
    print_solutions(B).
Test:
solve([25,50,75,100,6,3],952,X)
Result
(100+3)*6*75/50+25 <- s1
((100+6)*3*75-50)/25 <- s2
(100+3)*75*6/50+25 <- s1
((100+6)*75*3-50)/25 <- s2
(100+3)*75/50*6+25 <- s1
true.
This code uses select/3 from the "lists" library.
UPDATE: Generating solutions using a DCG
The following is an attempt to generate solutions using a DCG. I was able to generate a more exhaustive solution set than with the previously posted code. In a way, using a DCG resulted in more correct and elegant code. However, it is much more difficult to 'guess' what the code is doing.
The issue of redundant solutions still persists.
:- use_module(library(lists)).
:- use_module(library(solution_sequences)).

s(L) --> [L].
s(+(L,Ls)) --> [L], s(Ls).
s(*(L,Ls)) --> [L], s(Ls), {L =\= 1, Ls =\= 1, Ls =\= 0}.
s(-(L,Ls)) --> [L], s(Ls), {L =\= Ls, Ls =\= 0}.
s(/(L,Ls)) --> [L], s(Ls), {Ls =\= 1, Ls =\= 0}.
s(-(Ls,L)) --> [L], s(Ls), {L =\= Ls}.
s(/(Ls,L)) --> [L], s(Ls), {L =\= 1, Ls =\= 0}.

solution_list([N,H|[]],S) :-
    phrase(s(S),[N,H]).
solution_list([N,H|T],S) :-
    phrase(s(S),[N,H|T]);
    solution_list([H|T],S).

solve(L,R,S) :-
    permutation(L,X),
    solution_list(X,S),
    R =:= S.
Does anyone have any suggestions to eliminate such redundancy?
I suggest defining a sorting weight on each node (inner or leaf). The number obtained by evaluating the child node could be used, although ties will appear. These can be broken by additionally looking at the topmost operations, sorting * before + for example. Ideally one would like a sorting operation for which a "tie" means "exactly the same subtree of arithmetic operations".
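To illustrate the idea, here is a minimal sketch in Haskell (chosen only because the other answers in this digest use it; it is not a drop-in fix for the Prolog code, and flattening + and * chains into n-ary nodes is my own addition beyond the hint above). Sorting the operands of the commutative operators makes the first two equivalent arrangements from the question normalise to the same tree:

import Data.List (sortOn)

-- Expression trees; Add and Mul are n-ary so that re-associated and
-- re-ordered chains such as (100+3)*6*75 and (100+3)*75*6 collapse together.
data Expr = Num Rational
          | Add [Expr]
          | Mul [Expr]
          | Sub Expr Expr
          | Div Expr Expr
          deriving (Eq, Ord, Show)

-- The "sorting weight" of a node: the value it evaluates to.
eval :: Expr -> Rational
eval (Num n)   = n
eval (Add es)  = sum (map eval es)
eval (Mul es)  = product (map eval es)
eval (Sub a b) = eval a - eval b
eval (Div a b) = eval a / eval b

-- Normalise children recursively, then sort the operands of the commutative
-- nodes by their value, breaking ties on the tree structure itself.
canon :: Expr -> Expr
canon (Add es)  = Add (sortOn weight (map canon es))
canon (Mul es)  = Mul (sortOn weight (map canon es))
canon (Sub a b) = Sub (canon a) (canon b)
canon (Div a b) = Div (canon a) (canon b)
canon e         = e

weight :: Expr -> (Rational, Expr)
weight e = (eval e, e)

Deduplication is then just a matter of collecting the canonical form of each solution into a set (or sorting and dropping adjacent duplicates). Interactions between * and / (for example x*y/z versus x/z*y) need further normalisation, which is where the tie-breaking remark above comes in.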
Since the OP is only seeking hints to help solve the problem, here are a few:
Use a DCG as a generator. (SWI-Prolog) (Prolog DCG Primer)
a. For a more refined version of using DCGs as a generator, look for examples that use length/2. When you understand why, you might see a beam of light shining down on you for a few moments (the light beam is a video-gaming thing).
Use a constraint solver (SWI-Prolog) (CLP(FD) and CLP(ℤ): Prolog Integer Arithmetic) (Understanding CLP(FD) Prolog code of N-queens problem)
Since your solutions are constrained to the 6 numbers and the operators are always binary (+, -, *, /), it is possible to enumerate the unique binary trees. If you know about OEIS, you can find related links that can help you solve this problem, but you need to give OEIS a sequence. To get a sequence for use with OEIS, draw the trees for N from 2 to 5 and then enter that sequence into OEIS and see what you get, e.g.
N is the number of leaf (*) nodes.

N=2 ( 1 way to draw the tree )

  -
 / \
*   *

N=3 ( 2 ways to draw the tree )

    -              -
   / \            / \
  -   *          *   -
 / \                / \
*   *              *   *

So the sequence starts with 1, 2, ...
Hint - This page (link died) shows images of the trees so you can check whether you have drawn them correctly. In my description I use N to count the number of leaves (*), but on that page they use N to count the number of internal nodes (-). If we call my N N1 and the page's N N2, then the relation is N2 = N1 - 1.
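For what it's worth, those shape counts can also be computed directly; a tiny Haskell sketch (again just a sketch, in Haskell because the rest of this digest uses it) that counts the shapes of a full binary tree with n leaves, giving the sequence 1, 1, 2, 5, 14, ... to feed to OEIS:

-- Number of shapes of a full binary tree with n leaves (every internal node
-- has exactly two children): split the leaves between the left and the right
-- subtree in every possible way.
shapes :: Integer -> Integer
shapes 1 = 1
shapes n = sum [ shapes k * shapes (n - k) | k <- [1 .. n - 1] ]
-- shapes 2 == 1, shapes 3 == 2, shapes 4 == 5, shapes 5 == 14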
This might be a Hamiltonian cycle (Wolfram MathWorld) (Hamiltonianicity of the Tower of Hanoi Problem). Remember that there is a relation between binary trees and the Tower of Hanoi, but in your case there are added constraints. I don't know if the constraints eliminate a solution as a Hamiltonian cycle.
Also, don't think of building the final answer from a combination of any number and operator; instead, build subsets of operators and numbers, and then use those subsets to build the answer. You constrain at the start, not at the end.
Or, put another way, don't think of combinations at the start, but of permutations of combinations (not sure if that is the correct pattern, but it is in the ballpark), and then use those to build the tree.

Format float with stripped trailing zeros and decimal point, switching to scientific notation for large numbers, in Haskell

I am looking for a method in Haskell to format a floating point number in exactly the way described in this (Java) question, for optimal compactness. I think this may also be referred to as "normalization".
I will copy the example scenarios from that question here:
2.80000 -> Should be formatted as "2.8"
765.000000 -> "765" (notice how the decimal point is also dropped)
0.0073943162953 -> "0.00739432" (limit digits of precision—to 6 in this case)
0.0000073943162953 -> "7.39432E-6" (switch to scientific notation if the magnitude is small enough—less than 1E-5 in this case)
7394316295300000 -> "7.39432E+15" (switch to scientific notation if the magnitude is large enough—for example, when greater than 1E+10)
0.0000073900000000 -> "7.39E-6" (strip trailing zeros from significand in scientific notation)
0.000007299998344 -> "7.3E-6" (rounding from the 6-digit precision limit causes this number to have trailing zeros which are stripped)
Is there a built-in library in Haskell that will do this, or do I have to roll my own?
What you are looking for is formatRealFloat from the GHC.Float module. I am not sure if there is documentation available for that module on haskell.org, but here is a summary of the module: http://www.cis.upenn.edu/~bcpierce/courses/advprog/resources/base/GHC.Float.html
You will need to modify it of course to suit your needs, but here is an example:
import GHC.Float

formatFloat :: RealFloat a => a -> String
formatFloat v
  | v == 0 = "0"
  | abs v < 1e-5 || abs v > 1e10 = formatRealFloat FFExponent Nothing v
  | v - fromIntegral (floor v) == 0 = formatRealFloat FFFixed (Just 0) v
  | otherwise = formatRealFloat FFGeneric Nothing v
Naturally, it will only work with the GHC compiler.
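As a quick way to exercise it (my own driver appended to the snippet above, not part of the original answer; I have not listed expected outputs, since the exact digit strings depend on the Double values and the GHC version):

main :: IO ()
main = mapM_ (putStrLn . formatFloat) examples
  where
    examples :: [Double]
    examples = [2.8, 765.0, 0.0073943162953, 0.0000073943162953, 7394316295300000]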

Haskell: Define a numeric type with bounds

I'm learning Haskell and want to explore best practices for coding functions that work with probability distributions.
When coding with probabilistic functions, it is typical that a function should return a Float value within [0, 1].
Is it possible to define a "Probability" data type that can only take values within that range?
Thanks very much!
Tao
What you ask is not simple. Personally, I would rather use a function like this one:
checkBounds :: (Real a, Show a) => a -> a
checkBounds x | 0 <= x && x <= 1 = x
              | otherwise        = error $ show x ++ " is not in [0,1]"
But if you want to go further, have a look at this article, the probability package and this code snippet, maybe they could help.
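If you do want a dedicated type rather than a runtime check, a common pattern is a newtype with a smart constructor. A minimal sketch (the names Probability and mkProbability are illustrative, not taken from the probability package):

newtype Probability = Probability Double
  deriving (Eq, Ord, Show)

-- The Probability constructor is meant to be left out of the module's export
-- list, so the only way to build a value is through mkProbability.
mkProbability :: Double -> Maybe Probability
mkProbability x
  | 0 <= x && x <= 1 = Just (Probability x)
  | otherwise        = Nothing

fromProbability :: Probability -> Double
fromProbability (Probability x) = x

This pushes the range check to the few places where probabilities enter the program, at the cost of handling the Maybe there.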

Taxicab Numbers in Haskell

A taxicab number is defined as a positive integer that can be expressed as a sum of two cubes in at least two different ways.
1729 = 1^3 + 12^3 = 9^3 + 10^3
I wrote this code to produce taxicab numbers; calling taxicab n should give the nth smallest taxicab number:
taxicab :: Int -> Int
taxicab n = [ cube a + cube b
            | a <- [1..100],
              b <- [(a+1)..100],
              c <- [(a+1)..100],
              d <- [(c+1)..100],
              (cube a + cube b) == (cube c + cube d) ] !! (n-1)

cube x = x * x * x
But the output I get is not what I expected. For the numbers one to three the code produces the correct output, but taxicab 4 produces 39312 instead of 20683. Another strange thing is that 39312 is actually the 6th smallest taxicab number, not the fourth!
So why is this happening? Where is the flaw in my code?
I think you mistakenly believe that your list contains the taxicab numbers in increasing order. This is the actual content of your list:
[1729,4104,13832,39312,704977,46683,216027,32832,110656,314496,
216125,439101,110808,373464,593047,149389,262656,885248,40033,
195841,20683,513000,805688,65728,134379,886464,515375,64232,171288,
443889,320264,165464,920673,842751,525824,955016,994688,327763,
558441,513856,984067,402597,1016496,1009736,684019]
Recall that a list comprehension such as [(a,b) | a<-[1..100],b<-[1..100]] will generate its pairs as follows:
[(1,1),...,(1,100),(2,1),...,(2,100),...,...,(100,100)]
Note that when a gets to its next value, b is restarted from 1. In your code, suppose you have just found a taxicab number of the form a^3+b^3, and no larger b gives you a taxicab. In that case the next value of a is tried. We might find a taxicab of the form (a+1)^3+b'^3, but there is no guarantee that this number will be larger, since b' is any number in [a+2..100] and can be smaller than b. This can also happen with larger values of a: when a increases, there is no guarantee its related taxicabs are larger than what we found before.
Also note that, for the same reason, a hypothetical taxicab of the form 101^3+b^3 could be smaller than the taxicabs you have in your list, but it does not occur there.
Finally, note that your function is quite inefficient, since every time you call taxicab n you recompute all of the first n taxicab values.
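One way to repair the ordering while keeping the brute-force search (a sketch of my own, not the answerer's code, and still limited to cubes of numbers up to 100, so it can only be trusted for small n): collect the candidate sums first, deduplicate, and sort before indexing.

import Data.List (nub, sort)

cube :: Int -> Int
cube x = x * x * x

-- All sums a^3 + b^3 within the search bound that also have a second
-- representation c^3 + d^3; nub guards against numbers with more than two
-- representations appearing several times, and sort restores the order
-- that (!!) needs.
taxicabs :: [Int]
taxicabs = (sort . nub)
    [ cube a + cube b
    | a <- [1..100], b <- [(a+1)..100]
    , c <- [(a+1)..100], d <- [(c+1)..100]
    , cube a + cube b == cube c + cube d ]

taxicab :: Int -> Int
taxicab n = taxicabs !! (n - 1)

With this version taxicab 4 evaluates to 20683 as expected (within the 1..100 search bound).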

Can good type systems distinguish between matrices in different bases?

My program (Hartree-Fock/iterative SCF) has two matrices F and F' which are really the same matrix expressed in two different bases. I just lost three hours of debugging time because I accidentally used F' instead of F. In C++, the type-checker doesn't catch this kind of error because both variables are Eigen::Matrix<double, 2, 2> objects.
I was wondering, for the Haskell/ML/etc. people, whether if you were writing this program you would have constructed a type system where F and F' had different types? What would that look like? I'm basically trying to get an idea how I can outsource some logic errors onto the type checker.
Edit: The basis of a matrix is like the unit. You can say 1L or however many gallons, they both mean the same thing. Or, to give a vector example, you can say (0,1) in Cartesian coordinates or (1,pi/2) in polar. But even though the meaning is the same, the numerical values are different.
Edit: Maybe units was the wrong analogy. I'm not looking for some kind of record type where I can specify that the first field will be litres and the second gallons, but rather a way to say that this matrix as a whole, is defined in terms of some other matrix (the basis), where the basis could be any matrix of the same dimensions. E.g., the constructor would look something like mkMatrix [[1, 2], [3, 4]] [[5, 6], [7, 8]] and then adding that object to another matrix would type-check only if both objects had the same matrix as their second parameters. Does that make sense?
Edit: definition on Wikipedia, worked examples
This is entirely possible in Haskell.
Statically checked dimensions
Haskell has array libraries with statically checked dimensions, where the dimensions can be manipulated and checked at compile time, preventing indexing into the wrong dimension. Some examples:
This will only work on 2-D arrays:
multiplyMM :: Array DIM2 Double -> Array DIM2 Double -> Array DIM2 Double
An example from repa should give you a sense. Here, taking a diagonal requires a 2D array and returns a 1D array of the same element type.
diagonal :: Array DIM2 e -> Array DIM1 e
or, from Matt Sottile's repa tutorial, statically checked dimensions on a 3D matrix transform:
f :: Array DIM3 Double -> Array DIM2 Double
f u =
    let slabX = (Z:.All:.All:.(0::Int))
        slabY = (Z:.All:.All:.(1::Int))
        u'    = (slice u slabX) * (slice u slabX) +
                (slice u slabY) * (slice u slabY)
    in
        R.map sqrt u'
Statically checked units
Another example from outside of matrix programming: statically checked units of dimension, making it a type error to confuse e.g. feet and meters, without doing the conversion.
Prelude> 3 *~ foot + 1 *~ metre
1.9144 m
or for a whole suite of SI units and quantities.
E.g., you can't add things of different dimensions, such as volumes and lengths:
> 1 *~ centi litre + 2 *~ inch
Error:
Expected type: Unit DVolume a1
Actual type: Unit DLength a0
So, following the repa-style array dimension types, I'd suggest adding a Base phantom type parameter to your array type, and using that to distinguish between bases. In Haskell, the index Dim type argument gives the rank of the array (i.e. its shape), and you could do similarly.
Or, if by "base" you mean some dimension of the units, use dimensional types.
So, yep, this is almost a commodity technique in Haskell now, and there are some examples of designing with types like this to help you get started.
This is a very good question. I don't think you can encode the notion of a basis in most type systems, because essentially anything that the type checker does needs to be able to terminate, and making judgments about whether two real-valued vectors are equal is too difficult. You could have (2 v_1) + (2 v_2) or 2 (v_1 + v_2), for example. There are some languages which use dependent types [ wikipedia ], but these are relatively academic.
I think most of your debugging pain would be alleviated if you simply encoded the bases in which your matrix works along with the matrix. For example,
-- data rather than newtype: a newtype may have only one field.
data Matrix = Matrix { transform :: [[Double]],
                       srcbasis  :: [Double],
                       dstbasis  :: [Double] }

and then, when you multiply M (from basis a to b) with N, check that N is from b to c, and return a matrix with basis a to c.
NOTE -- it seems most people here have a programming rather than a math background, so I'll provide a short explanation here. Matrices are encodings of linear transformations between vector spaces. For example, if you're encoding a rotation by 45 degrees in R^2 (the 2-dimensional reals), then the standard way of encoding this in a matrix is to say that the standard basis vector e_1, written "[1, 0]", is sent to a combination of e_1 and e_2, namely [1/sqrt(2), 1/sqrt(2)]. The point is that you can encode the same rotation by saying where different vectors go; for example, you could say where you're sending [1,1] and [1,-1] instead of e_1=[1,0] and e_2=[0,1], and this would have a different matrix representation.
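For concreteness (my arithmetic, not part of the original answer): in the standard basis, the 45-degree rotation has columns [1/sqrt(2), 1/sqrt(2)] and [-1/sqrt(2), 1/sqrt(2)]. Relative to the basis b_1 = [1,1], b_2 = [1,-1], the same rotation sends b_1 to [0, sqrt(2)] = (1/sqrt(2)) b_1 - (1/sqrt(2)) b_2 and b_2 to [sqrt(2), 0] = (1/sqrt(2)) b_1 + (1/sqrt(2)) b_2, so its matrix in that basis has columns [1/sqrt(2), -1/sqrt(2)] and [1/sqrt(2), 1/sqrt(2)]: the same transformation, different numbers.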
Edit 1
If you have a finite set of bases you are working with, you can do it...
{-# LANGUAGE EmptyDataDecls #-}

data BasisA
data BasisB
data BasisC

newtype Matrix a b = Matrix { coefficients :: [[Double]] }

multiply :: Matrix a b -> Matrix b c -> Matrix a c
multiply (Matrix a_coeff) (Matrix b_coeff) = (Matrix multiplied) :: Matrix a c
  where multiplied = undefined -- your algorithm here
Then, in ghci (the interactive Haskell interpreter),
*Matrix> let m = Matrix [[1, 2], [3, 4]] :: Matrix BasisA BasisB
*Matrix> m `multiply` m
<interactive>:1:13:
Couldn't match expected type `BasisB'
against inferred type `BasisA'
*Matrix> let m2 = Matrix [[1, 2], [3, 4]] :: Matrix BasisB BasisC
*Matrix> m `multiply` m2
-- works after you finish defining show and the multiplication algorithm
While I realize this does not strictly address the (clarified) question – my apologies – it seems relevant at least in relation to Don Stewart's popular answer...
I am the author of the Haskell dimensional library that Don referenced and provided examples from. I have also been writing – somewhat under the radar – an experimental rudimentary linear algebra library based on dimensional. This linear algebra library statically tracks the sizes of vectors and matrices as well as the physical dimensions ("units") of their elements, on a per-element basis.
This last point – tracking physical dimensions on a per-element basis – is rather challenging and perhaps overkill for most uses, and one could even argue that it makes little mathematical sense to have quantities of different physical dimensions as elements of any given vector/matrix. However, some linear algebra applications of interest to me, such as Kalman filtering and weighted least squares estimation, typically use heterogeneous state vectors and covariance matrices.
Using a Kalman filter as an example, consider a state vector x = [d, v] which has physical dimensions [L, LT^-1]. The next (future) state vector is predicted by multiplication by the state transition matrix F, i.e.: x' = F x_. Clearly for this equation to make sense F cannot be arbitrary but must have size and physical dimensions [[1, T], [T^-1, 1]]. The predict_x' function below statically ensures that this relationship holds:
predict_x' :: (Num a, MatrixVector f x x) => Mat f a -> Vec x a -> Vec x a
predict_x' f x_ = f |*< x_
(The unsightly operator |*< denotes multiplication of a matrix on the left with a vector on the right.)
More generally, for an a priori state vector x_ of arbitrary size and with elements of arbitrary physical dimensions, passing a state transition matrix f with "incompatible" size and/or physical dimensions to predict_x' will cause a compile time error.
In F# (which originally evolved from OCaml), you can use units of measure. Andrew Kennedy, who designed the feature (and also created a very interesting theory behind it), has a great series of articles that demonstrate it.
This can quite likely be used in your scenario - although I don't fully understand the question. For example, you can declare two unit types like this:
[<Measure>] type litre
[<Measure>] type gallon
Adding litres and gallons gives you a compile time error:
1.0<litre> + 1.0<gallon> // Error!
F# doesn't automatically insert conversion between different units, but you can write a conversion function:
let toLitres gal = gal * 3.78541178<litre/gallon>
1.0<litre> + (toLitres 1.0<gallon>)
The beautiful thing about units of measure in F# is that they are automatically inferred and functions are generic. If you multiply 1.0<gallon> * 1.0<gallon>, the result is 1.0<gallon^2>.
People have used this feature for various things, ranging from converting virtual metres to screen pixels (in solar system simulations) to converting currencies (dollars in financial systems). Although I'm not an expert, it is quite likely that you could use it in some way for your problem domain too.
If it's expressed in a different basis, you can just add a template parameter to act as the basis. That will differentiate those types. A float is a float is a float; if you don't want two float values to be interchangeable just because they hold the same value, then you need to tell the type system about it.
