When trying to get the deflection equation using the SymPy beam module (sympy.physics.continuum_mechanics), I get an error if I use a float in the location argument.
from sympy.physics.continuum_mechanics.beam import Beam
from sympy import symbols, Piecewise
E, I = symbols('E, I')
b = Beam(30, E, I)
b.apply_support(0, 'roller')
b.apply_support(10, 'roller')
b.apply_support(30, 'roller')
b.apply_load(-10, 5., -1)   # if 5. is changed to 5, the deflection equation works?
b.apply_load(-10, 15., -1)  # if 15. is changed to 15, the deflection equation works?
R_0, R_10, R_30 = symbols('R_0, R_10, R_30')
b.solve_for_reaction_loads(R_0, R_10, R_30)
b.load
b.shear_force()
b.plot_shear_force()
b.deflection()
Does anyone know whether the commented lines above are valid, or do I have to convert the argument values to integers?
Yes, you may have to use integers; otherwise (as in this case) an inconsistent set of equations may result, which has no solution. In your case the equations to be solved are
[C4, 10*C3 + C4 + 156.25, 30*C3 + C4 + 468.749999999996]
Because of rounding errors, the value of C3 calculated from the second equation is not the same as that from the third, so EmptySet is returned as the solution.
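If you want to avoid the problem without hand-converting everything, one workaround (a sketch, not part of the original answer) is to pass exact Rationals instead of floats, so no rounding enters the boundary-condition equations:

from sympy import Rational
b.apply_load(-10, Rational('5.0'), -1)   # exact 5 instead of the float 5.
b.apply_load(-10, Rational('15.0'), -1)  # exact 15 instead of the float 15.

With exact arithmetic the equations for C3 and C4 stay consistent and deflection() works as with integer inputs.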
I use SymPy for symbolic calculations in Python and get e.g. an expression like
p**(-1.02) = -0.986873 + 3.62321E15*y**-.5
Is there a function in SymPy (e.g. in sympy.simplify?) to get something like
p= c + a*y
where c and a are constants
I tried rewriting it and got the result below:
-1/p**1.02 + 3.62321e+15/y**0.5 - 0.986873
Your second equation appears to be the first one rewritten to show negative exponents as positive by moving the powers into the denominator. It is not linear in y, so you cannot express the equation in the form c + a*y except as an approximation around a chosen point.
So let's solve for p since that is what you are interested in:
from sympy import symbols, root, nsimplify, series
p, y = symbols('p y', positive=True)
l, r = p**(-1.02), -0.986873 + 3.62321E15*y**-.5
il, ir = 1/l, 1/r        # invert both sides: p**1.02 = 1/r
eq_p = root(ir, il.exp)  # p = (1/r)**(1/1.02)
Getting a series approximation for eq_p can be done if you use Rationals instead of floats. You must also choose a value at which you want the approximation. Let's get a linear approximation near y = 0.3; this corresponds to requesting n=2:
>>> rational_eq_p = nsimplify(eq_p, rational=True)
>>> p_3_10 = series(rational_eq_p, y, 0.3, n=2).removeO(); str(p_3_10)
5.04570930197125e-16*y + 1.57426130221503e-16
You can verify that this is correct by checking to see that the value and slope at y=0.3 are consistent:
>>> p_3_10.subs(y,.3), eq_p.subs(y,0.3)
(3.08797409280641e-16, 3.08797409280641e-16)
>>> p_3_10.diff(y), eq_p.diff(y).subs(y,0.3)
(5.04570930197125e-16, 5.04570930197125e-16)
So now you have a linear approximation for your equation at y = 0.3.
from sympy import *
s = Symbol("s")
y = Symbol("y")
raw_function = 1/(150.0-0.5*y)
result = integrate(raw_function, (y, 0, s))
The above snippet gives a wrong result: -2.0*log(0.5*s - 150.0) + 10.0212705881925 + 2.0*I*pi,
but the right result should be -2.0*log(-0.5*s + 150.0) + 10.0212705881925. So what's wrong?
Are you sure about the correct result? WolframAlpha says it is the same as SymPy's here.
Edit:
This function diverges (and the integral too) around y=300; see its plot here (it diverges the same way 1/x does, but offset to y=300).
This means you are constrained to s < 300 for the integral to be well defined (and finite). In that range, the value of the integral is equal to what SymPy is providing you.
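In fact, for s < 300 the two expressions are equal: log(0.5*s - 150.0) is the logarithm of a negative number, which SymPy evaluates as log(150.0 - 0.5*s) + I*pi, so the extra 2.0*I*pi term exactly cancels the imaginary part. A quick numeric check (a sketch; any value of s below 300 will do):

from sympy import log, I, pi, N
s_val = 100  # any value below 300
a = -2.0*log(0.5*s_val - 150.0) + 10.0212705881925 + 2.0*I*pi
b = -2.0*log(-0.5*s_val + 150.0) + 10.0212705881925
print(N(a - b))  # ~0: the two antiderivatives agree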
f = @(x)(abs(x))
fplot(f, [-1, 1])
This is a freshly installed Octave, with no configuration edited. It results in the following image, where the function looks as if it is constant for a while around 0, more like a \_/ than a \/:
Why does it look so different from a usual plot of the absolute value near 0? How can this be fixed?
Since fplot is written in Octave it is relatively easy to read. Its location can be found using the which command. On my system this gives:
octave:1> which fplot
'fplot' is a function from the file /usr/share/octave/5.2.0/m/plot/draw/fplot.m
Examining fplot.m reveals that the function to be plotted, f(x), is evaluated at n equally spaced points between the given limits. The algorithm for determining n starts at line 192 and can be summarised as follows:
n is initially chosen to be 8 (unless specified differently by the user)
Construct a vector of arguments using a coarser grid of n/2 + 1 points:
x0 = linspace (limits(1), limits(2), n/2 + 1)'
(The linspace function will accept a non-integer value for the number of points, which it rounds down)
Calculate the corresponding values:
y0 = f(x0)
Construct a vector of arguments using a grid of n points:
x = linspace (limits(1), limits(2), n)'
Calculate the corresponding values:
y = f(x)
Construct a vector of values corresponding to the members of x but calculated from x0 and y0 by linear interpolation using the function interp1():
yi = interp1 (x0, y0, x, "linear")
Calculate an error metric using the following formula:
err = 0.5 * max (abs ((yi - y) ./ (yi + y + eps))(:))
That is, err measures the maximum relative difference between the calculated and linearly interpolated values.
If err is greater than tol (2e-3 unless specified by the user) then put n = 2*(n-1) and repeat. Otherwise plot(x,y).
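To make the refinement loop concrete, here is a minimal Python sketch of the scheme summarised above (an illustration only, not fplot.m itself; NumPy's linspace and interp stand in for Octave's linspace and interp1):

import numpy as np

def fplot_points(f, lo, hi, n=8, tol=2e-3):
    # Adaptive sampling loop as described in the steps above.
    while True:
        x0 = np.linspace(lo, hi, int(n/2 + 1))  # coarse grid (rounds down)
        y0 = f(x0)
        x = np.linspace(lo, hi, n)              # fine grid
        y = f(x)
        yi = np.interp(x, x0, y0)               # linear interpolation of coarse values
        err = 0.5 * np.max(np.abs((yi - y) / (yi + y + np.finfo(float).eps)))
        if err <= tol:
            return x, y                         # accept the fine grid
        n = 2 * (n - 1)                         # refine and repeat

Running fplot_points(np.abs, -5, 5) terminates on the first iteration with err == 0: the coarse 5-point grid contains zero and abs is exactly linear between the coarse nodes, so the returned 8-point grid, which does not contain zero, is accepted. This is precisely the pathology described next.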
Because abs(x) is essentially a pair of straight lines, if x0 contains zero then the linearly interpolated values will always exactly match their corresponding calculated values and err will be exactly zero, so the above algorithm will terminate at the end of the first iteration. If x doesn't contain zero then plot(x,y) will be called on a set of points that doesn't include the 'cusp' of the function and the strange behaviour will occur.
This will happen if the limits are equally spaced either side of zero and floor(n/2 + 1) is odd, which is the case for the default values (limits = [-5, 5], n = 8).
The behaviour can be avoided by choosing a combination of n and limits so that either of the following is the case:
a) the set of m = floor(n/2 + 1) equally spaced points doesn't include zero or
b) the set of n equally spaced points does include zero.
For example, limits equally spaced either side of zero and odd n will plot correctly. This will not work for n=5, though, because, strangely, if the user inputs n=5, fplot.m substitutes 8 for it (I'm not sure why it does this; I think it may be a mistake). So fplot(@abs, [-1, 1], 3) and fplot(@abs, [-1, 1], 7) will plot correctly but fplot(@abs, [-1, 1], 5) won't.
For even n, (n/2 + 1) is odd (and therefore x0 contains zero for symmetrical limits) only for every second even n. This is why it plots correctly with n=6: for that value n/2 + 1 = 4, so x0 doesn't contain zero. The same holds for n=10, 14, 18 and so on.
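A quick check of that parity claim (a Python sketch, with NumPy's linspace standing in for Octave's):

import numpy as np
for n in [6, 8, 10, 14]:
    x0 = np.linspace(-1, 1, int(n/2 + 1))  # coarse grid
    x = np.linspace(-1, 1, n)              # fine grid
    print(n, (x0 == 0).any(), (x == 0).any())
# 6  False False  -> err > 0, grid is refined, plots correctly
# 8  True  False  -> err == 0, terminates early, cusp missed
# 10 False False  -> refined, plots correctly
# 14 False False  -> refined, plots correctly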
Choosing slightly asymmetrical limits will also do the trick; try: fplot(@abs, [-1.1, 1.2])
The documentation says: "fplot works best with continuous functions. Functions with discontinuities are unlikely to plot well. This restriction may be removed in the future." so it is probably a bug/feature of the function itself that can't be fixed except by the developers. The ordinary plot() function works fine:
x = [-1 0 1];
y = abs(x);
plot(x, y);
The weird shape comes from the sampling rate, i.e. how many points the function is evaluated at. This is controlled by the parameter N of fplot. The default call seems to accidentally skip x=0, and with fplot(@abs, [-1, 1], 5) I get the same funny shape as you:
However, trying out different values of N can yield the correct shape; try e.g. fplot(@abs, [-1, 1], 6):
In general, though, I would suggest using a much higher number, like N=100.
I am trying to do a chi-square test using this statistics package function. I have the following contingency table:
        A   B
True:  12   8
False: 16   9
I used the following code:
import Data.Vector
import Statistics.Test.ChiSquared
sample = fromList [(12, 8), (16, 9)]
main = print(chi2test(sample))
However, it gives the following error:
[1 of 1] Compiling Main ( rnchisq.hs, rnchisq.o )
rnchisq.hs:9:23: error:
• Couldn't match expected type ‘Int’
with actual type ‘Vector (Integer, Integer)’
• In the first argument of ‘chi2test’, namely ‘(sample)’
In the first argument of ‘print’, namely ‘(chi2test (sample))’
In the expression: print (chi2test (sample))
Where is the problem and how can it be solved? Thanks for your help.
Edit: As suggested in the answer by @JosephSible, I also tried:
main = print(chi2test(1, sample))
(1 being the degrees of freedom)
But here I get the error:
rnchisq.hs:7:22: error:
• Couldn't match expected type ‘Int’
with actual type ‘(Integer, Vector (Integer, Integer))’
• In the first argument of ‘chi2test’, namely ‘(1, sample)’
In the first argument of ‘print’, namely ‘(chi2test (1, sample))’
In the expression: print (chi2test (1, sample))
The following compiled and ran:
main = print $ chi2test 1 sample
However, the output is
Nothing
I expected some value. It remains Nothing even if I drastically change the numbers in sample. Why am I getting Nothing?
The chi2test function performs a general chi-square goodness-of-fit test, not a chi-square test on a 2x2 contingency table. It expects a set of pairs representing the "observed" actual counts and the "expected" theoretical mean counts under the null hypothesis, rather than just the counts from the table.
In other words, you need to work through a fair bit of statistical theory to use this function to analyse a 2x2 table, but here's a function that appears to work:
import Data.Vector as V
import Statistics.Test.ChiSquared

sample = ((12, 8), (16, 9))

main = print $ chi2table sample

chi2table ((a, b), (c, d)) =
    chi2test 2 $ V.fromList $ Prelude.zip [a, b, c, d] [ea, eb, ec, ed]
  where
    n = a + b + c + d
    ea = expected (a+b) (a+c)
    eb = expected (a+b) (b+d)
    ec = expected (c+d) (a+c)
    ed = expected (c+d) (b+d)
    -- expected cell count under independence: row total * column total / grand total
    expected rowtot coltot = (rowtot * coltot) `fdiv` n
    fdiv x y = fromIntegral x / fromIntegral y
This gives output:
> main
Just (Test {testSignificance = mkPValue 0.7833089019485086,
testStatistics = 7.56302521008404e-2, testDistribution = chiSquared 2})
Update: With respect to the degrees of freedom, the test itself is calculated using a chi-square with 1 degree of freedom (basically (R-1)*(C-1) for R and C the number of rows and columns of the table). The reason we have to specify 2 here is that the 2 represents the number of degrees of freedom "lost" or "constrained" in addition to the total count. We start with 4 degrees of freedom total, we lose one for the total count across all cells, and we are constrained to lose two more to get down to the 1 degree of freedom for the test.
Anyway, this will match the output of statistical software only if you turn off continuity correction. For example, in R:
> chisq.test(rbind(c(12,8),c(16,9)), correct=FALSE)
Pearson's Chi-squared test
data: rbind(c(12, 8), c(16, 9))
X-squared = 0.07563, df = 1, p-value = 0.7833
>
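The same check in Python with SciPy reproduces these numbers (a quick cross-check, assuming SciPy is available; it is not part of the Haskell solution):

from scipy.stats import chi2_contingency
chi2, p, df, expected = chi2_contingency([[12, 8], [16, 9]], correction=False)
print(chi2, df, p)  # ~0.07563, 1, ~0.7833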
chi2test takes two arguments, and you're only passing it one. Instead of calling chi2test sample, call chi2test df sample, where df is the number of additional degrees of freedom.
Hi, I am working with both the SLSQP solver in Python (via scipy.optimize) and diffev2, which is part of the mystic package.
For multiple parameters the format shown below works:
bnds = ((0, 1e3), (0, 1e-4))
optimize.minimize(Error, [1e-8, 1e-7], args=(E, Bt, L, dt, TotT, nz, Co, Exmatrix), method = 'SLSQP', bounds = bnds)
I want to optimize only one parameter, and that is when I run into this error: SLSQP Error: the length of bounds is not compatible with that of x0.
I use the syntax shown below:
bnds = ((1e-9, 1e-2))
optimize.minimize(Error, [1e-8], args=(U, E, Bt, L, dt, TotT, nz, Co, Exmatrix), method = 'SLSQP', bounds = bnds)
I am not sure what is wrong, as I have only one tuple pair in the bnds variable and one initial guess.
Straight from Python's docs:
The trailing comma is required only to create a single tuple (a.k.a. a singleton); it is optional in all other cases. A single expression without a trailing comma doesn’t create a tuple, but rather yields the value of that expression. (To create an empty tuple, use an empty pair of parentheses: ().)
Working alternatives:
bnds = ((1e-9, 1e-2),)
bnds = [(1e-9, 1e-2)]
Internally, this happens:
import numpy as np

bnds = ((1e-9, 1e-2))
np.array(bnds, float).shape
# (2,)

bnds = ((1e-9, 1e-2),)
np.array(bnds, float).shape
# (1, 2)

# and then N is compared to the size of the first dimension (2 vs. 1)
(And make sure you have a reason not to use minimize_scalar.)
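For completeness, a minimal sketch of the scalar alternative (error below is a hypothetical stand-in for your Error function and its extra arguments):

from scipy import optimize

def error(x):
    # stand-in for Error(x, U, E, Bt, L, dt, TotT, nz, Co, Exmatrix)
    return (x - 1e-5)**2

res = optimize.minimize_scalar(error, bounds=(1e-9, 1e-2), method='bounded')
print(res.x)  # minimizer within the bounds; no x0/bounds shape to get wrong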