Solve for a specific variable in terms of another specified variable - sympy

I am trying to learn sympy by going through some textbook problems.
I have one that is asking to get a formula for Rankine in terms of Kelvin.
This is easy to solve without sympy (given the formulas):
(f = r - 459.4, c = 5f/9 - 160/9, k = c + 273)
With some algebra, k = 5r/9.
But I do not know how to solve explicitly for k in terms of r with sympy. I can have it solve the system of equations, but I'm not sure how to specify which variable to solve for in terms of which.
My attempt:
import sympy as sp
r, c, k, f = sp.symbols('r c k f')
eq1 = sp.Eq(f, r-459.4) # f=r-459.4
eq2 = sp.Eq(c, (5/9)*f-(160/9)) # c = (5/9)*f-(160/9)
eq3 = sp.Eq(k, c+273) # k = c+273
ans = sp.solve((eq1, eq2, eq3), (r, c, k, f)) #3 eqns, 4 unknowns (f, r, c, k)
ans
yielded
{c: 0.555555555555556*f - 17.7777777777778, k: 0.555555555555556*f + 255.222222222222, r: f + 459.4}

I think I figured it out using linsolve instead:
import sympy as sp
r, c, k, f = sp.symbols('r c k f')
eq1 = sp.Eq(f, r-459.4) # f=r-459.4
eq2 = sp.Eq(c, (5/9)*f-(160/9)) # c = (5/9)*f-(160/9)
eq3 = sp.Eq(k, c+273) # k = c+273
sp.linsolve([eq2, eq3], [k,c]) #{(0.555555555555556*f + 255.222222222222, 0.555555555555556*f - 17.7777777777778)}
sp.linsolve([eq1, eq2], [c,f]) #{(0.555555555555556*r - 273.0, r - 459.4)}
sp.linsolve([eq1, eq2, eq3], [k,c,f]) #{(0.555555555555556*r, 0.555555555555556*r - 273.0, r - 459.4)}
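For reference, plain sp.solve can also give k directly in terms of r: solve the three equations for (k, c, f) and leave r as the only free symbol. Using exact Rationals instead of the floats above lets sympy return a clean 5*r/9 (a sketch):

```python
import sympy as sp

r, c, k, f = sp.symbols('r c k f')

# Exact rationals instead of floats, so the answer comes out as 5*r/9.
eq1 = sp.Eq(f, r - sp.Rational('459.4'))                  # f = r - 459.4
eq2 = sp.Eq(c, sp.Rational(5, 9)*f - sp.Rational(160, 9)) # c = 5f/9 - 160/9
eq3 = sp.Eq(k, c + 273)                                   # k = c + 273

# Solving for (k, c, f) leaves r free, so k is expressed in terms of r.
sol = sp.solve((eq1, eq2, eq3), (k, c, f), dict=True)[0]
print(sol[k])  # 5*r/9
```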

Related

Convert DFA to RE

I constructed a finite automaton for the language L of all strings made of the symbols 0, 1 and 2 (Σ = {0, 1, 2}) where the last symbol is not smaller than the first symbol. E.g., the strings 0, 2012, 0121 and 102 are in the language, but 10, 2021 and 201 are not in the language.
Then from that I built a GNFA so I can convert it to an RE.
My RE looks like this:
(0(0+1+2)*)(1(0(1+2)+1+2)*)(2((0+1)2+2))*)
I have no idea if this is correct, as I think I understand RE but not entirely sure.
Could someone please tell me if it’s correct and if not why?
There is a general method to convert any DFA into a regular expression, and that is probably what you should be using to solve this homework problem.
For your attempt specifically, you can tell whether an RE is incorrect by finding a word that should be in the language, but that your RE doesn't accept, or a word that shouldn't be in the language that the RE does accept. In this case, the string 1002 should be in the language, but the RE doesn't match it.
There are two primary reasons why this string isn't matched. The first is that there should be a union rather than a concatenation between the three major parts of the language (words starting with 0, 1 and 2, respectively):
(0(0+1+2)*) (1(0(1+2)+1+2)*) (2((0+1)2+2))*) // wrong
(0(0+1+2)*) + (1(0(1+2)+1+2)*) + (2((0+1)2+2))*) // better
The second problem is that in the 1 and 2 cases, the digits smaller than the starting digit need to be repeatable:
(1(0 (1+2)+1+2)*) // wrong
(1(0*(1+2)+1+2)*) // better
If you do both of those things, the RE will be correct. I'll leave it as an exercise for you to follow that step for the 2 case.
The next thing you can try is find a way to make the RE more compact:
(1(0*(1+2)+1+2)*) // verbose
(1(0*(1+2))*) // equivalent, but more compact
This last step is just a matter of preference. You don't need the trailing +1+2 because 0* can be of zero length, so 0*(1+2) covers the +1+2 case.
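The redundancy claim is easy to brute-force check with Python's re module (a sketch; "+" in the RE notation above is union, written "|" in re syntax):

```python
import itertools
import re

# (1(0*(1+2)+1+2)*) vs. (1(0*(1+2))*) -- the claim is they are equivalent.
verbose = re.compile(r'1(0*(1|2)|1|2)*\Z')
compact = re.compile(r'1(0*(1|2))*\Z')

# Compare them on every string over {0,1,2} up to length 6.
for n in range(7):
    for t in itertools.product('012', repeat=n):
        s = ''.join(t)
        assert bool(verbose.match(s)) == bool(compact.match(s)), s
print("equivalent on all strings up to length 6")
```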
You can use an algorithm but this DFA might be easy enough to convert as a one-off.
First, note that if the first symbol seen in the initial state is 0, you transition to state A and remain there. A is accepting. This means any string beginning with 0 is accepted. Thus, our regular expression might as well have a term like 0(0+1+2)* in it.
Second, note that if the first symbol seen in the initial state is 1, you transition to state B and remain in states B and D from that point on. You only leave B if you see 0 and you stay out of B as long as you keep seeing 0. The only way to end on D is if the last symbol you saw was 0. Therefore, strings beginning with 1 are accepted if and only if the strings don't end in 0. We can have a term like 1(0+1+2)*(1+2) in our regular expression as well to cover these cases.
Third, note that if the first symbol seen in the initial state is 2, you transition to state C and remain in states C and E from that point on. You leave state C if you see anything but 2 and stay out of C until you see a 2 again. The only way to end up on C is if the last symbol you saw was 2. Therefore, strings beginning with 2 are accepted if and only if the strings end in 2. We can have a term like 2(0+1+2)*(2) in our regular expression as well to cover these cases.
Finally, note that the single-symbol strings 1 and 2 are also accepted (they end in states B and C, which are accepting), so we add them as extra terms. There are no other cases to consider; the union of these terms fully describes our language:
0(0+1+2)* + 1 + 1(0+1+2)*(1+2) + 2 + 2(0+1+2)*2
It was easy to just write out the answer here because this DFA is sort of like three simple DFAs put together with a start state. More complicated DFAs might be easier to convert to REs using algorithms that don't require you understand or follow what the DFA is doing.
Note that if the start state is accepting (mentioned in a comment on another answer) the RE changes as follows:
e + 0(0+1+2)* + 1 + 1(0+1+2)*(1+2) + 2 + 2(0+1+2)*2
Basically, we just tack the empty string onto it since it is not already generated by any of the other parts of the aggregate expression.
You have the equivalent of what is known as a right-linear system. It's right-linear because the variables occur in each term only to the first degree and only at the right-hand end. The system that you have may be written - with a change in labels from 0, 1, 2 to u, v, w - as
S ≥ u A + v B + w C
A ≥ 1 + (u + v + w) A
B ≥ 1 + u D + (v + w) B
C ≥ 1 + (u + v) E + w C
D ≥ u D + (v + w) B
E ≥ (u + v) E + w C
The underlying algebra is known as a Kleene algebra. It is defined by the following identities that serve as its fundamental properties
(xy)z = x(yz), x1 = x = 1x,
(x + y) + z = x + (y + z), x + 0 = x = 0 + x,
y0z = 0, w(x + y)z = wxz + wyz,
x + y = y + x, x + x = x,
with a partial ordering relation defined by
x ≤ y ⇔ y ≥ x ⇔ ∃z (x + z = y) ⇔ x + y = y
With respect to this ordering relation, all finite subsets have least upper bounds, including the following
0 = ⋁ ∅, x + y = ⋁ {x, y}
The sum operator "+" is the least upper bound operator.
The system you have is a right-linear fixed point system, since it expresses the variables on the left as a (right-linear) function, as given on the right, of the variables. The object being specified by the system is the least solution with respect to the ordering; i.e. the least fixed point solution; and the regular expression sought out is the value that the main variable has in the least fixed point solution.
The last axiom(s) for Kleene algebras can be stated in any of a number of equivalent ways, including the following:
0* = 1
the least fixed point solution to x ≥ a + b x + x c is x = b* a c*.
There are other ways to express it. A consequence is that one has identities such as the following:
1 + a a* = a* = 1 + a* a
(a + b)* = a* (b a*)*
(a b)* a = a (b a)*
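The fixed-point law can be sanity-checked computationally, truncating all languages to strings of bounded length: iterating X ← {a} ∪ b·X ∪ X·c to a fixed point should reproduce exactly the language b* a c*. A sketch:

```python
# Least solution of x >= a + b x + x c should be b* a c*.
# Iterate X <- {a} | b.X | X.c to a fixed point on strings of length <= MAX,
# then compare with the set {b^i a c^j} of the same bounded length.
MAX = 6
X = set()
while True:
    step = {'a'} | {'b' + s for s in X} | {s + 'c' for s in X}
    step = {s for s in X | step if len(s) <= MAX}
    if step == X:
        break
    X = step

closed = {'b' * i + 'a' + 'c' * j
          for i in range(MAX) for j in range(MAX)
          if i + 1 + j <= MAX}
assert X == closed
print("least fixed point matches b* a c* up to length", MAX)
```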
In general, right-linear systems, such as the one corresponding to your problem, may be written in vector-matrix form as q ≥ a + A q, with the least fixed point solution given in matrix form as q = A* a. The central theorem of Kleene algebras is that all finite right-linear systems have least fixed point solutions; so one can actually define matrix algebras over Kleene algebras, with product and sum given respectively as matrix product and matrix sum, and this algebra can be made into a Kleene algebra with a suitably-defined matrix star operation, through which the least fixed point solution is expressed. If the matrix A decomposes into block form as
B C
D E
then the star A* of the matrix has the block form
(B + C E* D)* (B + C E* D)* C E*
(E + D B* C)* D B* (E + D B* C)*
So, what this is actually saying is that for a vector-matrix system of the form
x ≥ a + B x + C y
y ≥ b + D x + E y
the least fixed point solution is given by
x = (B + C E* D)* (a + C E* b)
y = (E + D B* C)* (D B* a + b)
The star of a matrix, if expressed directly in terms of its components, will generally be huge and highly redundant. For an n×n matrix, it has size O(n³) - cubic in n - if you allow for redundant sub-expressions to be defined by macros. Otherwise, if you in-line insert all the redundancy, then I think it blows up to a highly-redundant mess that is exponential in n in size.
So, there's intelligence required and involved (literally meaning: AI) in finding or pruning optimal forms that avoid the blow-up as much as possible. That's a non-trivial job for any purported matrix solver and regular expression synthesis compiler.
A heuristic, for your system, is to solve for the variables that don't have a "1" on the right-hand side and in-line substitute the solutions - working bottom-up through the dependency chain of the variables. That would mean starting with D and E first:
D ≥ u* (v + w) B
E ≥ (u + v)* w C
In-line substitute into the other inequations
S ≥ u A + v B + w C
A ≥ 1 + (u + v + w) A
B ≥ 1 + u u* (v + w) B + (v + w) B
C ≥ 1 + (u + v) (u + v)* w C + w C
Apply Kleene algebra identities (e.g. x x* y + y = x* y)
S ≥ u A + v B + w C
A ≥ 1 + (u + v + w) A
B ≥ 1 + u* (v + w) B
C ≥ 1 + (u + v)* w C
Solve for the next layer of dependency up: A, B and C:
A ≥ (u + v + w)*
B ≥ (u* (v + w))*
C ≥ ((u + v)* w)*
Apply some more Kleene algebra (e.g. (x* y)* = 1 + (x + y)* y) to get
B ≥ 1 + N (v + w)
C ≥ 1 + N w
where, for convenience we set N = (u + v + w)*. In-line substitute at the top-level:
S ≥ u N + v (1 + N (v + w)) + w (1 + N w).
The least fixed point solution, in the main variable S, is thus:
S = u N + v + v N (v + w) + w + w N w.
where
N = (u + v + w)*.
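With u, v, w read back as the symbols 0, 1, 2, the closed form S = u N + v + v N (v + w) + w + w N w can be sanity-checked against the original language (last symbol not smaller than the first) by brute force. A sketch using Python's re module, where "+" becomes "|":

```python
import itertools
import re

# S = 0 N + 1 + 1 N (1|2) + 2 + 2 N 2, with N = (0|1|2)*.
S = re.compile(r'(0[012]*|1|1[012]*[12]|2|2[012]*2)\Z')

def in_language(s):
    # Nonempty strings whose last symbol is not smaller than the first.
    return len(s) > 0 and s[-1] >= s[0]

for n in range(1, 7):
    for t in itertools.product('012', repeat=n):
        s = ''.join(t)
        assert bool(S.match(s)) == in_language(s), s
print("closed form agrees with the language up to length 6")
```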
As you can already see, even with this simple example, there's a lot of chess-playing to navigate through the system to find an optimally-pruned solution. So, it's certainly not a trivial problem. What you're essentially doing is synthesizing a control-flow structure for a program in a structured programming language from a set of goto's ... essentially the core process of reverse-compiling from assembly language to a high level language.
One measure of optimization is that of minimizing the loop-depth - which here means minimizing the depth of the stars or the star height. For example, the expression x* (y x*)* has star-height 2 but reduces to (x + y)*, which has star height 1. Methods for reducing star-height come out of the research by Hashiguchi and his resolution of the minimal star-height problem. His proof and solution (dating, I believe, from the 1980's or 1990's) is complex and to this day the process still goes on of making something more practical of it and rendering it in more accessible form.
Hashiguchi's formulation was cast in the older 1950's and 1960's formulation, predating the axiomatization of Kleene algebras (which was in the 1990's), so to date, nobody has rewritten his solution in entirely algebraic form within the framework of Kleene algebras anywhere in the literature ... as far as I'm aware. Whoever accomplishes this will have, as a result, a core element of an intelligent regular expression synthesis compiler, but also of a reverse-compiler and programming language synthesis de-compiler. Essentially, with something like that on hand, you'd be able to read code straight from binary and the lid will be blown off the world of proprietary systems. [Bite tongue, bite tongue, mustn't reveal secret yet, must keep the ring hidden.]

Pytorch: how to repeat a parameter matrix into a bigger one along both dimensions?

What is the simplest syntax to transform 2D parameter tensor
A B
C D
into
A A B B
A A B B
C C D D
C C D D
Note they are parameter tensors, so I need autograd to back propagate gradient from latter into former.
Thanks!
I found a numpy.repeat()-like function in the latest pytorch (1.1), but it needs to be called twice:
z = x.repeat_interleave(2,dim=0).repeat_interleave(2,dim=1)
using einops (same code works with numpy and pytorch):
z = einops.repeat(x, 'i j -> (i 2) (j 2)')
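To convince yourself that the two repeat_interleave calls produce the block layout and that autograd still flows through them, here is a small sketch (assuming PyTorch is available):

```python
import torch

# A 2x2 parameter-like tensor; requires_grad so we can check backprop.
x = torch.arange(4.0).reshape(2, 2).requires_grad_()

# Repeat along both dimensions: each entry of x becomes a 2x2 block of z.
z = x.repeat_interleave(2, dim=0).repeat_interleave(2, dim=1)
print(z)

# Each entry of x feeds 4 cells of z, so the gradient of z.sum() is all 4s.
z.sum().backward()
print(x.grad)
```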

solve for elimination constant K

Can someone show me how to isolate K in the following equation? (I want to use Excel to find K, and I will know a, b, c, d, and f, so I need K isolated):
a = (b/c * exp(-K*d) + a) * exp(-K*f)
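Algebraically, K sits inside two different exponentials, exp(-K*d) and exp(-K*f), so in general it cannot be isolated in closed form; a numeric root-finder is the practical route. A sketch with sympy's nsolve (the sample values are made up purely for illustration, and the initial guess 0.5 is arbitrary):

```python
import sympy as sp

# K appears in two exponentials with different rates, so solve numerically.
a, b, c, d, f, K = sp.symbols('a b c d f K', positive=True)
eq = sp.Eq(a, (b/c*sp.exp(-K*d) + a)*sp.exp(-K*f))

# Hypothetical known values for a, b, c, d, f.
vals = {a: 2.0, b: 3.0, c: 1.5, d: 0.5, f: 1.0}
K_num = sp.nsolve(eq.subs(vals), K, 0.5)
print(K_num)
```

In Excel, the analogous move is to use Goal Seek or Solver on the residual a - (b/c*exp(-K*d) + a)*exp(-K*f), varying the cell that holds K.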

Prove that the following language is not context-free

L = {a^i b^j c^k; i≠j and i≠k and j≠k}.
First approach: I tried two different strings to prove it by the pumping lemma, but none of them worked.
First, w = a^m b^(m+1) c^(m+2), where m is the pumping length. For example, one case of
w = uvxyz is that vxy lies entirely within the a part. Then the pumped string w_i = a^(m-k) a^(ik) b^(m+1) c^(m+2) has to be in L for every i >= 0, but I can't force the number of a's to become equal to the number of b's.
Second approach: I converted L into a union of 6 different languages {a^i b^j c^k U a^i b^k c^j U a^j b^i c^k U a^j b^k c^i U a^k b^i c^j U a^k b^j c^i ; i
I found the answer.
If we pick w = a^(m!) b^((m+1)!) c^((m+2)!), then we can prove it.
I am solving the case where vxy lies within the a's. Then
w_i = a^(m!-k) a^(ik) b^((m+1)!) c^((m+2)!). We know that x = m!*m/k is an integer for 1 <= k <= m (since k <= m divides m!), so we pick i = 1 + x. Then m! + k*x = m! + m!*m = m!(m+1) = (m+1)!, meaning the number of a's equals the number of b's, so the pumped string is not in L. This is a contradiction.
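The divisibility step in the argument (k divides m!*m for 1 <= k <= m, and m! + x*k = (m+1)! with x = m!*m/k) can be spot-checked numerically as a quick sanity check:

```python
import math

# For 1 <= k <= m: k divides m!*m, and pumping with i = 1 + m!*m/k
# turns the m! leading a's into exactly (m+1)! a's.
for m in range(2, 9):
    for k in range(1, m + 1):
        assert (math.factorial(m) * m) % k == 0
        x = math.factorial(m) * m // k
        assert math.factorial(m) + x * k == math.factorial(m + 1)
print("checked m = 2..8")
```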

Levenshtein distance cost

I am new to haskell and I encountered a performance issue that is so grave that it must be my code and not the haskell platform.
I have a python implementation of the Levenshtein distance (my own code) and I ported (or tried to port) it to haskell. The result is the following:
bool2int :: Bool -> Int
bool2int True = 1
bool2int False = 0
levenshtein :: Eq a => [a] -> [a] -> Int -> Int -> Int
levenshtein u v 0 0 = 0
levenshtein u v i 0 = i
levenshtein u v 0 j = j
levenshtein u v i j = minimum [1 + levenshtein u v i (j - 1),
1 + levenshtein u v (i - 1) j,
bool2int (u !! (i - 1) /= v !! (j - 1) ) + levenshtein u v (i - 1) (j - 1) ]
distance :: Eq a => [a] -> [a] -> Int
distance u v = levenshtein u v (length u) (length v)
Now, the difference in execution time for strings of length 10 or more is several powers of 10 between python and haskell. Also, from some rough time measuring (wall clock, as I haven't found a clock() command in haskell so far), it seems that my haskell implementation does not have cost O(mn), but some other exorbitantly fast-growing cost.
Nota bene: I do not want my haskell implementation to compete speed wise with the python script. I just want it to run in a "sensible" time and not in multiples of the time the whole universe exists.
Questions:
What am I doing wrong, that my implementation is so darn slow?
How to fix it?
Talking about "lazy evaluation": I gather that if levenshtein "cat" "kit" 2 2 is called thrice, it is only calculated once. Is this right?
There must be something built-in for my bool2int, right?
Any other input is highly appreciated if it shoves me ahead on the rough path to mastering haskell.
EDIT: Here goes the python code for comparison:
#! /usr/bin/python3.2
# -*- coding: utf-8 -*-
class Levenshtein:
    def __init__(self, u, v):
        self.__u = ' ' + u
        self.__v = ' ' + v
        self.__D = [[None for x in self.__u] for x in self.__v]
        for x, _ in enumerate(self.__u): self.__D[0][x] = x
        for x, _ in enumerate(self.__v): self.__D[x][0] = x

    @property
    def distance(self):
        return self.__getD(len(self.__v) - 1, len(self.__u) - 1)

    def __getD(self, i, j):
        if self.__D[i][j] is not None: return self.__D[i][j]
        self.__D[i][j] = min([self.__getD(i - 1, j - 1) + (0 if self.__v[i] == self.__u[j] else 1),
                              self.__getD(i, j - 1) + 1,
                              self.__getD(i - 1, j) + 1])
        return self.__D[i][j]

print(Levenshtein('first string', 'second string').distance)
What am I doing wrong, that my implementation is so darn slow?
Your algorithm has exponential complexity. You seem to be assuming that the calls are being memoized for you, but that's not the case.
How to fix it?
You'll need to add explicit memoization, possibly using an array or some other method.
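For comparison, here is the same memoization idea sketched in Python using functools.lru_cache: caching the recursive calls brings the cost down to O(m*n), since each (i, j) subproblem is computed at most once.

```python
from functools import lru_cache

def distance(u, v):
    # Cache every (i, j) subproblem so each is computed at most once.
    @lru_cache(maxsize=None)
    def lev(i, j):
        if i == 0: return j
        if j == 0: return i
        return min(1 + lev(i, j - 1),
                   1 + lev(i - 1, j),
                   (u[i - 1] != v[j - 1]) + lev(i - 1, j - 1))
    return lev(len(u), len(v))

print(distance("abbacabaab", "abaddafaca"))  # 6
```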
Talking about "lazy evaluation": I gather that if levenshtein "cat" "kit" 2 2 is called thrice, it is only calculated once. Is this right?
No, Haskell does not do automatic memoization. Laziness means that if you do let y = f x in y + y, then the f x will only be evaluated (once) if the result of the sum is demanded. It does not mean that f x + f x will evaluate in only one call to f x. You have to be explicit when you want to share results from subexpressions.
There must be something built-in for my bool2int, right?
Yes, there is an instance Enum Bool, so you can use fromEnum.
*Main> fromEnum True
1
*Main> fromEnum False
0
Any other input is highly appreciated if it shoves me ahead on the rough path to mastering haskell.
While writing stuff from scratch may be fun and educational, it is important to learn to take advantage of the many great libraries on Hackage when doing common things like this.
For example there is an implementation of the Levenshtein distance in the edit-distance package.
I translated your Haskell code back to Python for comparison:
def levenshtein(u, v, i, j):
    if i == 0: return j
    if j == 0: return i
    return min(1 + levenshtein(u, v, i, j - 1),
               1 + levenshtein(u, v, i - 1, j),
               (u[i - 1] != v[j - 1]) + levenshtein(u, v, i - 1, j - 1))

def distance(u, v):
    return levenshtein(u, v, len(u), len(v))

if __name__ == "__main__":
    print distance("abbacabaab", "abaddafaca")
Even without fixing the O(n) indexing issue that chrisdb pointed out in his answer, this performs slower than the Haskell version when compiled:
$ time python levenshtein.py
6
real 0m4.793s
user 0m4.690s
sys 0m0.020s
$ time ./Levenshtein
6
real 0m0.524s
user 0m0.520s
sys 0m0.000s
Of course, they both lose to the properly memoized version in the edit-distance package:
$ time ./LevenshteinEditDistance
6
real 0m0.015s
user 0m0.010s
sys 0m0.000s
Here's a simple memoized implementation using Data.Array:
import Data.Array

distance u v = table ! (m, n)
  where table = listArray ((0, 0), (m, n)) [levenshtein i j | i <- [0..m], j <- [0..n]]
        levenshtein 0 j = j
        levenshtein i 0 = i
        levenshtein i j = minimum [ 1 + table ! (i, j-1)
                                  , 1 + table ! (i-1, j)
                                  , fromEnum (u' ! (i-1) /= v' ! (j-1)) + table ! (i-1, j-1) ]
        u' = listArray (0, m-1) u
        v' = listArray (0, n-1) v
        m = length u
        n = length v

main = print $ distance "abbacabaab" "abaddafaca"
It performs similarly to your original Python code:
$ time python levenshtein-original.py
6
real 0m0.037s
user 0m0.030s
sys 0m0.000s
$ time ./LevenshteinArray
6
real 0m0.017s
user 0m0.010s
sys 0m0.000s
It looks to me like the likely cause is the use of !! for random access. Haskell lists are linked lists, which are poorly suited to algorithms that require random rather than sequential access.
You might want to try replacing the lists with something better suited to random access. If you are interested in strings, you could use Data.Text or ByteString, which are backed by arrays and should be fast. Or you could use something like Data.Vector, perhaps.
EDIT: Actually, it looks like Data.Text will have the same issue, since the documentation says indexing is O(n). Probably converting to a vector would be best.
