How to simplify a polynomial inside sqrt() to the absolute value of its factor in Maxima?

sqrt(a^2+2*a+1) can easily be rewritten as |a+1|. I would like to do this in Maxima, but I cannot make it work. Although sqrt(a^2) is automatically simplified to |a|, sqrt(a^2+2*a+1) is not. And radcan(sqrt(a^2+2*a+1)) gives a+1, which is incorrect. Is there any way to get the right simplification in Maxima?

Yep. Basically, you just have to tell Maxima to try a bit harder to factorise the inside of the square root. For example:
(%i1) x: sqrt(a^2 + 2*a + 1);
(%o1) sqrt(a^2 + 2*a + 1)
(%i2) factor(a^2 + 2*a + 1);
(%o2) (a + 1)^2
(%i3) map(factor, x);
(%o3) abs(a + 1)
(%i4)
The map here means that the function factor should be applied to each of the arguments of sqrt. What happens is that you get sqrt((a+1)^2) appearing along the way, and this is automatically simplified to abs(a+1).
Note that the answer from radcan is correct for some values of a, but not all: at a = -3, for example, sqrt(a^2+2*a+1) = sqrt(4) = 2, while a+1 = -2. As I understand it, this is all that radcan guarantees: it's useful for answering "Yikes! Is there a simpler way to think about this crazy expression?", but not particularly helpful for "Hmm, I'm not sure what the variables in this are. Is there a simpler form?"
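Not part of the original answer, but the same trick (factor first, then let the automatic sqrt(x^2) -> |x| rule fire) carries over to other CASes; a quick cross-check in SymPy, which comes up again further down this page:

from sympy import symbols, sqrt, factor

a = symbols('a', real=True)          # realness is what licenses |a + 1|
expr = sqrt(factor(a**2 + 2*a + 1))  # factor yields (a + 1)**2 inside the sqrt
print(expr)                          # Abs(a + 1)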

Related

erase common constant of equation using leanprover

I want to prove below goal.
n_n: ℕ
n_ih: n_n * (n_n + 1) / 2 = arith_sum n_n
⊢ (n_n + 1) * (n_n + 1 + 1) / 2 = n_n + 1 + n_n * (n_n + 1) / 2
ring, simp, linarith is not working.
Also I tried calc, but it too long.
Is there any automatic command to erase common constant in equation?
I would say that you're asking the wrong question. Your hypothesis and goal contain /, but this is not mathematical division: it is a pathological function which computer scientists use, which takes two natural numbers as input and is forced to return a natural number, so it often can't return the right answer. For example, 5 / 2 = 2 with the division you're using. Computer scientists call it "division with remainder"; I call it "broken and should never be used". When I'm doing this sort of exercise with my class I always coerce everything to the rationals before doing the division, so that the division is mathematical division rather than this pathological function, which does not satisfy things like (a / b) * b = a. The fact that this division doesn't obey the rules of normal division is why you can't get the tactics to work. If you coerce everything to the rationals before dividing then you won't get into this mess, and ring will work fine.
If you do want to persevere down the natural-division road then one approach would be to start by proving that n(n+1) is always even, so that you can deduce (n(n+1)/2)*2 = n(n+1). Alternatively you could avoid this by observing that to show A/2 = B/2 it suffices to prove that A = B. But either way you'll have to do a few lines of manual fiddling, because you're not using mathematical functions, you're using computer science approximations, so mathematical tactics don't work with them.
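That second observation really is a one-line reduction; a minimal Lean 3 sketch of just that step (mine, separate from the full proof below):

-- rewriting with h turns the goal into B / 2 = B / 2, which rw closes by rfl
example (A B : ℕ) (h : A = B) : A / 2 = B / 2 :=
by rw h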
Here's what my approach looks like:
import algebra.big_operators
open_locale big_operators
open finset

def arith_sum (n : ℕ) := ∑ i in range n, (i : ℚ) -- answer is rational

example (n : ℕ) : arith_sum n = n*(n-1)/2 :=
begin
  unfold arith_sum,
  induction n with d hd,
  { simp },
  { rw [finset.sum_range_succ, hd, nat.succ_eq_add_one],
    push_cast,
    ring, -- works now
  }
end

Why is SymPy's linsolve function not able to solve for the second to the last variable in this problem?

(TLDR: SymPy's linsolve function is unable to solve a system of linear equations, generated by applying the finite difference method to an ODE BVP, when the equations are passed as a plain Python list, but it can solve them when the list is wrapped in SymPy's Matrix function. This could be a bug that needs to be fixed, especially considering that the example in the SymPy documentation passes a list as the argument to linsolve.)
I have a boundary-value-problem ordinary differential equation that I intend to solve using the finite difference method. My ODE, in SymPy representation, is x*y(x).diff(x,2) + y(x).diff(x) + 500 = 0, with y(1)=600 and y(3.5)=25. Entire code is as follows:
import time
import sympy as sp
from numpy import *
from matplotlib.pyplot import *

y = sp.Function('y')
ti = time.time()
steps = 10
ys = [ y(i) for i in range(steps+1) ]
ys[0], ys[-1] = 600, 25
xs = linspace(1, 3.5, steps+1)
dx = xs[1] - xs[0]
eqns = [ xs[i]*(ys[i-1] - 2*ys[i] + ys[i+1])/dx**2 +
         ( ys[i+1] - ys[i-1] )/2/dx + 500
         for i in range(1, steps) ]
ys[1:-1] = sp.linsolve(eqns, ys[1:-1]).args[0]
scatter(xs, ys)
tf = time.time()
print(f'Time elapsed: {tf-ti}')
For ten steps, this works just fine. However, if I go even slightly higher, like 11 steps, SymPy is no longer able to solve the system of linear equations. Trying to plot the results throws a TypeError: can't convert expression to float. Examining the list of y values ys reveals that one of the variables, specifically the second-to-last one, wasn't solved for by linsolve; instead, the other variables are solved in terms of this unsolved variable. For example, with 50 steps the second-to-last variable is y(49), and it features in the solutions of the other unknowns rather than being solved for, e.g. 2.02061855670103*y(49) - 26.1340206185567. In contrast, another BVP ODE that I solved, y(x).diff(x,2) + y(x).diff(x) + y(x) - 1 with y(0)=1.5 and y(3)=2.5, has no such issue whether I use 10, 50, or 200 steps: it solves all the variables just fine. That seems to be a peculiar exception, though, as I encountered the aforementioned issue with many other ODEs.
SymPy's inconsistency here was quite frustrating. The only consolation is that before I ran into this problem, I had actually solved the system a few times already with various numbers of steps. I had enclosed the eqns variable inside SymPy's Matrix function, as in ys[1:-1] = sp.linsolve(sp.Matrix(eqns), ys[1:-1]).args[0], simply because it displayed better that way in the terminal. But for solving in a script file, I thought that wrapping it inside sp.Matrix was unnecessary, so I naturally removed it to simplify things.
It is polite when formatting a question for SO (or anywhere else) to provide a complete code example without missing imports etc. Also you should distil the problem down to the minimum case and remove all of the unnecessary details. With that in mind a better way to demonstrate the issue is by actually figuring out what the arguments to linsolve are and presenting them directly e.g.:
from sympy import *
y = Function('y')
eqns = [
    -47.52*y(1) + 25.96*y(2) + 13436.0,
    25.96*y(1) - 56.32*y(2) + 30.36*y(3) + 500,
    30.36*y(2) - 65.12*y(3) + 34.76*y(4) + 500,
    34.76*y(3) - 73.92*y(4) + 39.16*y(5) + 500,
    39.16*y(4) - 82.72*y(5) + 43.56*y(6) + 500,
    43.56*y(5) - 91.52*y(6) + 47.96*y(7) + 500,
    47.96*y(6) - 100.32*y(7) + 52.36*y(8) + 500,
    52.36*y(7) - 109.12*y(8) + 56.76*y(9) + 500,
    56.76*y(8) - 117.92*y(9) + 61.16*y(10) + 500,
    61.16*y(9) - 126.72*y(10) + 2139.0
]
syms = [y(1), y(2), y(3), y(4), y(5), y(6), y(7), y(8), y(9), y(10)]
print(linsolve(eqns, syms))
Here you hoped to get a simple numerical solution for each of the unknowns but instead the result returned (from SymPy 1.8) is:
FiniteSet((5.88050359812056*y(10) - 5.77315239260531, 10.7643116711359*y(10) - 528.13328974178, 14.9403214726998*y(10) - 991.258097567359, 9.85496358613721e+15*y(10) - 1.00932650309452e+18, 7.35110502818395*y(10) - 312.312287998229, 5.84605452313345*y(10) - 217.293922525318, 4.47908204606922*y(10) - 141.418192750506, 3.22698120573309*y(10) - 81.4678489766327, 2.07194244604317*y(10) - 34.9738391105298, 1.0*y(10)))
The linsolve function will return a solution involving one or more unknowns if the system does not have a unique solution. Note also that there are some large numbers like 9.85496358613721e+15 which suggests that there might be numerical problems here.
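To see that behaviour in isolation, a minimal underdetermined example (my own, not from the question):

from sympy import linsolve, symbols

x1, x2 = symbols('x1 x2')
# One equation in two unknowns has no unique solution, so linsolve
# returns x1 parametrised by the free unknown x2: {(2 - x2, x2)}
print(linsolve([x1 + x2 - 2], [x1, x2]))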
Actually this is a bug in SymPy and it has already been fixed on the master branch:
https://github.com/sympy/sympy/pull/21527
If you install SymPy from git then you can find the following as output instead:
FiniteSet((596.496767861074, 574.326903264955, 538.901024315178, 493.575084012669, 440.573815245681, 381.447789421181, 317.320815173574, 249.033388036155, 177.23053946471, 102.418085492911))
Also note that it is generally better to avoid using floats in SymPy as it is a library that is designed for exact symbolic computation. Solving a system of floating point equations like this can be done much more efficiently using NumPy or some other fixed precision floating point library that can use BLAS/LAPACK routines. To use rational arithmetic with sympy here you just need to change your linspace line to
xs = [sp.Rational(x) for x in linspace(1, 3.5, steps+1)]
which will then work fine with SymPy 1.8 and is in fact faster (at least if you have gmpy2 installed).
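For completeness, here is roughly what the NumPy route mentioned above could look like: a sketch (the assembly code and names are mine) that builds the same tridiagonal finite-difference system explicitly and solves it in floating point via numpy.linalg.solve:

import numpy as np

steps = 10
xs = np.linspace(1, 3.5, steps + 1)
dx = xs[1] - xs[0]
y0, yN = 600.0, 25.0

# Interior equations: xs[i]*(y[i-1] - 2*y[i] + y[i+1])/dx**2
#                     + (y[i+1] - y[i-1])/(2*dx) + 500 = 0
n = steps - 1                         # number of interior unknowns
A = np.zeros((n, n))
b = np.full(n, -500.0)
for k in range(n):
    i = k + 1                         # index into the full grid
    lower = xs[i]/dx**2 - 1/(2*dx)    # coefficient of y[i-1]
    diag = -2*xs[i]/dx**2             # coefficient of y[i]
    upper = xs[i]/dx**2 + 1/(2*dx)    # coefficient of y[i+1]
    A[k, k] = diag
    if k > 0:
        A[k, k-1] = lower
    else:
        b[k] -= lower * y0            # fold the boundary value into the RHS
    if k < n - 1:
        A[k, k+1] = upper
    else:
        b[k] -= upper * yN
ys = np.concatenate(([y0], np.linalg.solve(A, b), [yN]))
print(ys)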

Are the Fibonacci series a Dynamic-programming problem?

I'm talking about the problem of calculating the n-th fibonacci number.
Some users here say that it is in fact a DP problem (see the first answer, and the comments on that answer, to What is dynamic programming?), but others say that it isn't, because it doesn't optimize anything, among other reasons. So is it or not?
From the Wikipedia page on dynamic programming:
var m := map(0 → 0, 1 → 1)
function fib(n)
    if key n is not in map m
        m[n] := fib(n − 1) + fib(n − 2)
    return m[n]
This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
So, it's one of the techniques used to get the Nth number in the sequence.
EDIT - For the added question, memoization is a form of DP.
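To make the quoted pseudocode concrete, here is a direct Python translation (a sketch; it uses a dict as the memo table, seeded like the Wikipedia map, via the mutable-default-argument idiom):

def fib(n, m={0: 0, 1: 1}):
    # m caches every value already computed, so each fib(k) is evaluated
    # once and the exponential recursion collapses to O(n) calls.
    if n not in m:
        m[n] = fib(n - 1) + fib(n - 2)
    return m[n]

print(fib(50))  # 12586269025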

Induction proof of correctness of fibonacci function

Haskell implementation of the familiar Fibonacci function
fibSlow n
  | n == 0    = 1 -- fib.1
  | n == 1    = 1 -- fib.2
  | otherwise = fibSlow (n-1) + fibSlow (n-2) -- fib.3
What is the induction proof of correctness for fibSlow?
To prove correctness of a function on the natural numbers by induction, you would show that it's correct for certain base cases, and then that it's correct for higher values of the parameter given the assumption that it's correct for lower ones. So you'd verify first that fibSlow 0 = 1, and then that fibSlow 1 = 1, and then that for n > 1, fibSlow n is equal to the (n-1)th fibonacci number plus the (n-2)th fibonacci number. Here you get to assume that those numbers are fibSlow (n-1) and fibSlow (n-2), since fibSlow is correct for all inputs less than n by the inductive hypothesis.
This might seem all rather trivial... because it is! The whole point of such an example in Haskell is that you can write code that's obviously correct. When you go to prove it correct, the proof just writes itself and amounts to looking at the code and noting that it clearly says exactly what you're trying to prove. This is one of the nice properties of a declarative language like Haskell.
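Written out (a sketch, writing fib for the Fibonacci sequence in the convention the code adopts, fib(0) = fib(1) = 1): the base cases fibSlow 0 = 1 = fib(0) and fibSlow 1 = 1 = fib(1) hold by fib.1 and fib.2, and for n > 1,

\begin{align*}
\texttt{fibSlow}\ n &= \texttt{fibSlow}\,(n-1) + \texttt{fibSlow}\,(n-2) && \text{(fib.3)}\\
&= \mathrm{fib}(n-1) + \mathrm{fib}(n-2) && \text{(inductive hypothesis, for all } k < n\text{)}\\
&= \mathrm{fib}(n) && \text{(Fibonacci recurrence)}
\end{align*}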
Apologies, I haven't formally seen this kind of material for a while, so you're probably best off looking at other sources if this is homework.
I think you want to show the existence of a monotone function which describes the "progress" of the recursion. This case should be pretty simple: the argument itself is monotonically decreasing. For a nonnegative n, the recursive call will be made with a lesser n', and that n' will never be less than zero.
You can also use strong induction to argue the function is defined on all n. You have declared it defined on 0 and 1, and it suffices to say that if it's defined on n and n+1, then it's defined on n+2. This is immediate from the definition of the recursive call.
I think you might be able to read up on some formalities in Jech's Set Theory book, in the Ordinals chapter.

Explain this DSP notation

I'm trying to implement this extension of the Karplus-Strong plucked string algorithm, but I don't understand the notation used there. Maybe it will take years of study, but maybe it won't; maybe you can tell me.
I think the equations below are in the frequency domain or something. Just starting with the first equation, H_p(z), the pick-direction lowpass filter: for one direction you use p = 0, for the other, perhaps 0.9. This boils down to 1 in the first case, or 0.1 / (1 - 0.9*z^-1) in the second.
(Equations image: http://www.dsprelated.com/josimages/pasp/img902.png)
Now, I feel like this might mean, in coding terms, something like:
float H_p(float* input, int time) {
    if (downpick) {
        return input[time];
    } else {
        return some_function_of(input[time], input[time-1]);
    }
}
Can someone give me a hint? Or is this futile and I really need all the DSP background to implement this? I was a mathematician once...but this ain't my domain.
So the z^-1 just means a one-unit delay.
Let's take H_p = (1 - p)/(1 - p*z^-1).
If we follow the convention of "x" for input and "y" for output, the transfer function H = y/x (= output/input),
so we get y/x = (1 - p)/(1 - p*z^-1)
or (1 - p)*x = (1 - p*z^-1)*y
(1 - p)*x[n] = y[n] - p*y[n-1]
or: y[n] = p*y[n-1] + (1 - p)*x[n]
In C code this can be implemented as
y += (1-p)*(x-y);
without any additional state beyond using the output "y" as a state variable itself. Or you can go for the more literal approach:
y_delayed_1 = y;
y = p*y_delayed_1 + (1-p)*x;
As far as the other equations go, they're all typical equations except for that second equation, which looks like maybe it's a way of selecting either H_β = 1 - z^-1 or 1 - z^-2. (What's N?)
The filters are kind of vague and they'll be tougher for you to deal with unless you can find some prepackaged filters. In general they're of the form
H = H0*(1 + a*z^-1 + b*z^-2 + c*z^-3 + ...)/(1 + r*z^-1 + s*z^-2 + t*z^-3 + ...)
and all you do is write down H = y/x, cross-multiply to get
H0 * (1 + a*z^-1 + b*z^-2 + c*z^-3 + ...) * x = (1 + r*z^-1 + s*z^-2 + t*z^-3 + ...) * y
and then isolate "y" by itself, making the output "y" a linear function of various delays of itself and of the input.
But designing filters (picking the a,b,c,etc.) is tougher than implementing them, for the most part.
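For what it's worth, here is a small Python sketch (mine, not the answerer's C) of that cross-multiply-and-isolate recipe as a general difference equation:

def direct_form(b, a, x):
    # Applies H(z) = (b[0] + b[1]*z^-1 + ...)/(1 + a[1]*z^-1 + ...):
    # y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k].  a[0] must be 1.
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# The pick-direction filter derived above, y[n] = p*y[n-1] + (1-p)*x[n]:
p = 0.9
print(direct_form([1 - p], [1, -p], [1.0, 0.0, 0.0, 0.0]))
# impulse response ~ [0.1, 0.09, 0.081, 0.0729]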
