Can anyone help me write these equations in Python, please? I've tried my best, but it's too hard.
P.S. Sorry for my bad grammar; I'm from Vietnam.
Assuming you simply want to evaluate y for either of those equations, you can do the following:
import math

def y1(x):
    return 4 * (x * x + 10 * x * math.sqrt(x) + 3 * x + 1)

def y2(x):
    return (math.sin(math.pi * x * x) + math.sqrt(x * x + 1)) / (math.exp(2 * x) + math.cos(math.pi / 4 * x))
If you want to evaluate y1 or y2 given a certain x, just use y1(x), for example:
print(y1(10))
print(y2(10))
If you want to plot those equations in Python, a plotting library such as matplotlib is the usual choice; the standard-library turtle module can also draw them point by point.
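Here is a minimal plotting sketch, assuming matplotlib is installed (the sample range 0–10 and the output filename are arbitrary choices; note y1 needs x ≥ 0 because of the square root):

```python
import math

import matplotlib
matplotlib.use("Agg")          # render without needing a display
import matplotlib.pyplot as plt

def y1(x):
    return 4 * (x * x + 10 * x * math.sqrt(x) + 3 * x + 1)

# sample the function on 0.00 .. 10.00 in steps of 0.01
xs = [i / 100 for i in range(0, 1001)]
ys = [y1(x) for x in xs]

plt.plot(xs, ys)
plt.xlabel("x")
plt.ylabel("y1(x)")
plt.savefig("y1.png")
```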
I've written a large program, with dependencies on libraries written in my lab. I'm getting wrong (and somewhat random) results, which are caused by floating-point errors.
I would like to do some python magic and change all floats to decimals, or some other more precise type.
I can't post the full code here, but the following is the general flow:
def run(n):
    ...
    x = 0.5  # initialized as a float
    for _ in range(n):
        x = calc(x)
    ...
    return x
What I'm trying to avoid is going over every initialization in the code and adding a manual cast to Decimal.
Is there a trick to make Python initialize floats in lines such as x = 0.5 as Decimals? Or perhaps a custom interpreter with more precise floats?
Thanks,
I can't post the full code, hope my edit makes it clearer.
I think you can use the standard-library decimal module for this:

from decimal import Decimal
Decimal(variable)
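One caveat worth knowing (a general property of decimal, not specific to your code): constructing a Decimal directly from a float preserves the float's binary representation error exactly, so for literals it is better to construct from a string:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50      # working precision in significant digits

a = Decimal(0.1)            # inherits the binary error of the float 0.1
b = Decimal("0.1")          # exactly one tenth

print(a)   # 0.1000000000000000055511151231257827021181583404541015625
print(b)   # 0.1
```

As far as I know there is no supported switch that makes Python parse float literals as Decimals, so some explicit conversion at the initialization points is hard to avoid.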
Apologies in advance for how vague this question is (unfortunately I don't know enough about how jax tracing works to phrase it more precisely), but: Is there a way to completely insulate a function or code block from jax tracing?
For context, I have a function of the form:
def f(x, y):
    z = h(y)
    return g(x, z)
Essentially, I want to call g(x, z), and treat z as a constant when doing any jax transformations. However, setting up the argument z is very awkward, so the helper function h is used to transform an easier-to-specify input y into the format required by g. What I'd like is for jax to treat h as a non-traceable black box, so that doing jit(lambda x: f(x, y0)) for a particular y0 is the same as first computing z0 = h(y0) with numpy, then doing jit(lambda x: g(x, z0)) (and similar with grad or whatever other function transformations).
In my code, I've already written h to only use standard numpy (which I thought might lead to black-box behaviour), but the compile time of jit(lambda x: f(x, y0)) is noticeably longer than the compile time of jit(lambda x: g(x, z0)) for z0 = h(y0). I have a feeling the compile time may have something to do with jax tracing the many loops in h, though I'm not sure.
Some additional notes:
Writing h in a jax-friendly way would be awkward (input formatting is ragged, tons of looping/conditionals, output shape dependent on input value, etc) and ultimately more trouble than it's worth as the function is extremely cheap to execute, and I don't ever need to differentiate it (the input data is integer-based).
Thoughts?
Edit addition for clarity: I know there are maybe ways around this if, e.g. f is a top-level function. In this case it isn't such a big deal to get the user to call h first to "pre-compile" the jax-friendly inputs to g, then freely perform whatever jax transformations they want to lambda x: g(x, z0). However, I'm imagining cases in which we have many functions that we want to chain together, that have the same structure as f, where there are some jax-unfriendly inputs/computations, but these inputs will always be treated as constant to the jax part of the computation. In principle one could always pull out these pre-computations to set up the jax stuff, but this seems difficult if we have a non-trivial collection of functions of this type that will be calling each other.
Is there some way to control how f gets traced, so that while tracing it knows to just evaluate z=h(y) (instead of tracing h) then continue with tracing g(x, z)?
The static_argnums parameter could probably help here:

f_jitted = jax.jit(f, static_argnums=1)

From the JAX Common Gotchas notebook (https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html):

"You can use transformation parameters such as static_argnums for jit to avoid tracing particular arguments of transformed functions, though at the cost of more recompiles."
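A minimal sketch of how that looks for the f/g/h structure above (the bodies of g and h are made up for illustration; note that a static argument must be hashable, and every new value of y triggers a recompile):

```python
from functools import partial

import jax

def h(y):
    # plain-Python setup step; with y static, this runs once at trace time
    return float(y) ** 2

def g(x, z):
    return x * z

@partial(jax.jit, static_argnums=1)
def f(x, y):
    z = h(y)    # not traced: y is a concrete Python value here, not a tracer
    return g(x, z)

result = f(3.0, 2)   # h(2) == 4.0 is folded in as a constant
```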
I tried searching the forum and couldn't find out why the answers in Python differ from those in MATLAB. I am using MATLAB's sind() function, which takes its input in degrees. The MATLAB snippet is:
angle = 27;
b = sind(angle)
This gives b as 0.4540.
The equivalent code in python
angle = 27;
b = math.degrees(math.sin(angle))
I get b as 54.79.
I haven't been able to fix the problem, and any input would be highly appreciated.
Best Regards
Pradeep
This is a unit issue. In python, math.sin() assumes radians, not degrees. The MATLAB function sind specifies degrees. So you need to convert your angle into radians, then take the sine.
Here's the python you need:
math.sin(math.radians(27))
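Putting it together with the numbers from the MATLAB snippet, sind can be written as a one-line helper (the helper name mirrors MATLAB's; the rounding is just to match MATLAB's 4-digit display):

```python
import math

def sind(deg):
    # same convention as MATLAB's sind: argument is in degrees
    return math.sin(math.radians(deg))

b = sind(27)
print(round(b, 4))   # 0.454
```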
I'm reimplementing in Python 3.7 an orbital analysis program that I originally wrote in MATLAB. The initial velocity and position are queried as user input. My current method feels clunky (I am a Python beginner), and I'm wondering if someone can suggest a more elegant way to read this input vector as numpy float64? I suspect this problem is trivial, but I haven't found a clear answer yet...
The current input is a vector with the syntax "i,k,j": no spaces, comma-delimited. Each component is converted to a float in a list via list(map(float, input)), and I then have to convert it back to numpy float64 in order to use r as a vector later on.
v = np.float64(list(map(np.float64,input('Query Text').split(','))))
I'd say that's pretty elegant already. I'd do it like this if you like it better:
np.float64(
    [np.float64(i) for i in input("Query text").split(",")]
)
I wouldn't say this is much more elegant, but at least it does the same thing.
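One alternative sketch: np.array can convert strings to floats directly, so the Python-level map can be skipped entirely (the parse_vector helper name is mine, and I'm assuming input like "1.5,2,3"):

```python
import numpy as np

def parse_vector(text):
    # "1.5,2,3" -> array([1.5, 2., 3.]) with dtype float64
    return np.array(text.split(","), dtype=np.float64)

v = parse_vector("1.5,2,3")
print(v.dtype)   # float64
```

In interactive use you would call it as parse_vector(input('Query Text')).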
I am trying to maximize 10 very long linear expressions; all are identical except for one parameter (say Z).
I was thinking of putting a single expression inside a function and passing Z as a parameter.
Can a solver optimize over Python functions like this?
I have looked through the pyomo, pulp, and cvxpy documentation and haven't found any code samples, which makes me think this isn't possible.
# This is what I currently have
Maximize
    (X1*fun(1,Z)) + (X2*fun(1,Z)) + ...
    (X1*fun(1,Z1)) + (X2*fun(1,Z1)) + ...
    ...
Solve for
    X1 and X2

# This is an example of what I am trying to do
def optimise(Z):
    return (X1*fun(1,Z)) + (X2*fun(1,Z)) + ...

Maximize
    optimise(13)
    optimise(24)
    optimise(34)
    optimise(14)
    optimise(12)
    optimise(11)  # is optimizing with functions possible?
Solve for
    X1 and X2
It depends on what your function returns. Pyomo is an algebraic modeling language and requires access to the full algebraic equations. If your Python function returns an expression involving Pyomo Var components then it should work. If the function simply returns a value depending on the current value of a Pyomo Var then it will not work. You need to provide more details on the function and the model you're trying to solve for us to say for sure if it's supported or not.