Some problem when I tried to use "odeint" inside another function - python-3.x

I am trying to use "odeint" inside another function to estimate some parameters, but when I run the code the shell just shows "RESTART: Shell". I cannot understand what is happening here.
def fun(x, ia, ib):
    #### function that returns dy/dt
    def model(y, t, a, b):
        dydt = np.exp(a - b/R/T)
        return dydt
    #### initial condition
    y0 = 0.0
    #### temperature points
    t = x/d
    #### solve ODEs
    a = ia
    b = ib
    y1 = odeint(modeldydt, y0, t, args=(a, b))
    return y1

x = array([279, 300, 310, 320, 330, 340, 350])
y = array([0.0, 0.1, 0.2, 0.3, 0.6, 0.7, 0.8])
calc_model = Model(fun)
pars = TGA_model.make_params(ia=10, ib=1e4)
result = calc_model.fit(y, x=x)
And what I got is
=============================== RESTART: Shell ===============================
I would like to know if it is possible to nest functions together like this in Python. If it is, you would be very kind to tell me the mistake I have made; if not, you are still welcome to send me your suggestions. If nothing works in Python, I should try some MATLAB code instead.
Thanks in advance!
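For reference, "RESTART: Shell" is just IDLE's banner each time a script runs; the real error (most likely a NameError, since modeldydt, TGA_model, d, R and T are never defined in the posted code) would appear after it. Nesting odeint inside the model function does work. Below is a minimal runnable sketch of the same structure, with assumed placeholder values for the undefined constants and lmfit's Model:
import numpy as np
from scipy.integrate import odeint
from lmfit import Model

R = 8.314    # assumed: gas constant (not defined in the question)
T = 500.0    # assumed: placeholder temperature
d = 1.0      # assumed: placeholder scaling factor

def fun(x, ia, ib):
    # inner ODE right-hand side; dy/dt does not depend on y here
    def rate(y, t, a, b):
        return np.exp(a - b/R/T)
    t = x/d
    y0 = 0.0
    # pass the inner function object itself to odeint
    return odeint(rate, y0, t, args=(ia, ib)).ravel()

x = np.array([279, 300, 310, 320, 330, 340, 350], dtype=float)
y = np.array([0.0, 0.1, 0.2, 0.3, 0.6, 0.7, 0.8])

model = Model(fun)
params = model.make_params(ia=10, ib=1e4)
result = model.fit(y, params, x=x)   # the params object must be passed to fit()
print(result.fit_report())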

Related

scipy.minimize: how to include static data in the function

I am trying to generalize an optimization function using scipy.optimize.
Currently I write the function this way:
def value_to_optimize(data):
    data_set = np.genfromtxt('myfilepathinstaticmode', delimiter=',', skip_header=1)
    doe = data_set[:,:-1]
    new_data_set = np.vstack((np.array(doe), np.array(data)))
    return result_of_another_function(new_data_set)

def new_data():
    rst = minimize(value_to_optimize, [0,0])
    return rst.x
The function I am trying to optimize is the first one, and to do that I use the second function, which calls minimize with an x0 to start the optimization.
As you can see, my problem comes from 'myfilepathinstaticmode'. I would like to generalize my function, like value_to_optimize(filename, data), but at the moment I cannot apply minimize() to it, because the optimizer only works on the numeric arguments.
Any idea on how to write it in a generalized manner?
I personally would read the data outside of the function being minimized and then hand only that data over to it:
def value_to_optimize(data, doe):
    new_data_set = np.vstack((np.array(doe), np.array(data)))
    return result_of_another_function(new_data_set)

def new_data():
    data_set = np.genfromtxt('myfilepathinstaticmode', delimiter=',', skip_header=1)
    doe = data_set[:,:-1]
    # extra arguments reach value_to_optimize via args=, not by bundling them into x0
    rst = minimize(value_to_optimize, [0,0], args=(doe,))
    return rst.x
EDIT: I'm not sure I understood your question correctly. So, alternatively, a more flexible approach would be to use functools.partial to generate a function with the filename bound as a parameter, which you can then hand over to your optimizer.
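A sketch of that functools.partial idea (result_of_another_function is the question's own placeholder; the rest is assumed):
from functools import partial
import numpy as np
from scipy.optimize import minimize

def value_to_optimize(data, filename):
    data_set = np.genfromtxt(filename, delimiter=',', skip_header=1)
    doe = data_set[:,:-1]
    new_data_set = np.vstack((np.array(doe), np.array(data)))
    return result_of_another_function(new_data_set)

def new_data(filename):
    # bind the filename once; the optimizer only ever sees f(data)
    objective = partial(value_to_optimize, filename=filename)
    rst = minimize(objective, [0,0])
    return rst.x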
Something is working this way, although I'm not convinced about the robustness of the code. The solution: I defined the function value_to_optimize() inside the function new_data(). This way, the parameter 'filename' stays outside value_to_optimize() but acts as a kind of "global" inside new_data().
def new_data(filename):
    def value_to_optimize(data):
        data_set = np.genfromtxt(filename, delimiter=',', skip_header=1)
        doe = data_set[:,:-1]
        new_data_set = np.vstack((np.array(doe), np.array(data)))
        return result_of_another_function(new_data_set)
    rst = minimize(value_to_optimize, [0,0])
    return rst.x
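One caveat with that closure (an observation, not from the original thread): minimize calls value_to_optimize many times, so the file is re-read on every evaluation. Reading it once in new_data and closing over the result keeps the same shape:
def new_data(filename):
    data_set = np.genfromtxt(filename, delimiter=',', skip_header=1)
    doe = data_set[:,:-1]

    def value_to_optimize(data):
        # doe is captured from the enclosing scope; no file I/O here
        new_data_set = np.vstack((np.array(doe), np.array(data)))
        return result_of_another_function(new_data_set)

    rst = minimize(value_to_optimize, [0,0])
    return rst.x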

Importing a function into another script: IndexError

I am importing one of my sub-functions into my main script and I keep getting the following:
IndexError: list index out of range
I am making exactly the same function calls in the script that the function asks for. Does anyone know a way to check whether the function is properly imported?
Below is a sample of the code I am working with:
from file_with_function import function
[output1, output2, output3] = function(roll=15, pitch=30, yaw=45)

print('Argument List: ' + str(sys.argv))
rolla = int(sys.argv[1])
pitcha = int(sys.argv[2])
yawa = int(sys.argv[3])

def function(roll=15, pitch=30, yaw=45):
    # script is here

if __name__ == '__main__':
    function(roll=rolla, pitch=pitcha, yaw=yawa)
There is either an error in the function you're trying to import, or something wrong with the data you are passing into it. I don't think it's an import error, but without seeing the code I can't really help you.
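One plausible cause worth checking (an assumption, since the full code isn't shown): if the sys.argv lines in file_with_function.py sit at module level, they run at import time, and importing from a script that wasn't started with three command-line arguments raises exactly this IndexError. Moving the argv parsing under the __main__ guard avoids that:
# file_with_function.py (sketch)
import sys

def function(roll=15, pitch=30, yaw=45):
    # script is here
    return roll, pitch, yaw

if __name__ == '__main__':
    # only touch sys.argv when run as a script, never on import
    rolla = int(sys.argv[1])
    pitcha = int(sys.argv[2])
    yawa = int(sys.argv[3])
    function(roll=rolla, pitch=pitcha, yaw=yawa)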

%s function Python 3

I want to build a function for the following code:
PlayDirection_encoder = LabelEncoder()
train["PlayDirection"] = direction_encoder.fit_transform(train["PlayDirection"])
train.PlayDirection.unique()
That's my current function:
def object(category):
    %s = Label_Encoder() % (category + "_encoder")
    train[category] = %s.fit_transform(train[category]) % (category + "_encoder")
    len(train.category.unique())

object("PlayDirection")
UsageError: Line magic function `%s` not found.
I am running my code on Kaggle's server.
Do you know how to solve this problem?
Calling a function object is a really bad idea. It's the name of a built-in type in Python, and shadowing it might break other things in your code.
Also, what do you want to achieve? Your problem is not clear.
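What the %s string formatting seems to be reaching for is a variable whose name is built at runtime; Python doesn't work that way. A dictionary keyed by the generated name is the idiomatic substitute (a sketch, assuming train is the pandas DataFrame from the question):
from sklearn.preprocessing import LabelEncoder

encoders = {}

def encode_column(train, category):
    encoder = LabelEncoder()
    encoders[category + "_encoder"] = encoder   # keyed lookup instead of a dynamic variable name
    train[category] = encoder.fit_transform(train[category])
    return len(train[category].unique())

encode_column(train, "PlayDirection")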

getting "none" as result and don't uderstand why

I am learning how to program in Python 3 and today I was practicing until I started to struggle with this.
I was trying to make a function to find out the total square meters of wood I'll use in one project, but I keep getting None as the result and I don't know why, even after reading almost every post about it that I found here.
Anyway, here's the code:
from math import pi

def acirc(r):
    pi*r**2

def madeiratotal(r1, r2, r3, r4, r5):
    a = acirc(r1)
    b = acirc(r2)
    c = acirc(r3)
    d = acirc(r4)
    e = acirc(r5)
    print(a+b+c+d+e)

madeiratotal(0.15, 0.09, 0.175, 0.1, 0.115)
I already tried defining the acirc function inside the madeiratotal function, and tried printing all the numbers separately and then summing them... I just don't know what else to do, please help.
You need to return a value from the acirc function; otherwise its return value is None:
def acirc(r):
    return pi*r**2
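With that one change the rest of the original code works unchanged; the call below should print the total area (roughly 0.265):
from math import pi

def acirc(r):
    return pi*r**2

def madeiratotal(r1, r2, r3, r4, r5):
    # sum the five circle areas and show the total
    print(acirc(r1) + acirc(r2) + acirc(r3) + acirc(r4) + acirc(r5))

madeiratotal(0.15, 0.09, 0.175, 0.1, 0.115)   # ~0.2653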

How to use non-top-level functions in parallelization?

I'd like to use multiprocessing for a resource-heavy computation in code I am writing, as shown in this watered-down example:
import numpy as np
import multiprocessing as multiproc

def function(r, phi, z, params):
    """returns an array of the timepoints and the corresponding values
    (demanding computation in actual code, with iFFT and stuff)"""
    times = np.array([1., 2., 3.])
    tdependent_vals = r + z * times + phi
    return np.array([times, tdependent_vals])

def calculate_func(rmax, zmax, phi, param):
    rvals = np.linspace(0, rmax, 5)
    zvals = np.linspace(0, zmax, 5)
    for r in rvals:
        func_at_r = lambda z: function(r, phi, z, param)[1]
        with multiproc.Pool(2) as pool:
            fieldvals = np.array([*pool.map(func_at_r, zvals)])
        print(fieldvals)  # for test; it's actually saved in a numpy array

calculate_func(3., 4., 5., 6.)
If I run this, it fails with
AttributeError: Can't pickle local object 'calculate_func.<locals>.<lambda>'
I think the reason is that, according to the documentation, only functions defined at the top level of a module can be pickled, and my lambda defined inside a function can't be. But I don't see any way I could make it a standalone function, at least without polluting the module with a bunch of top-level variables: the parameters are unknown before calculate_func is called, and they change at each iteration over rvals. This whole multiprocessing thing is very new to me, and I couldn't come up with an alternative. What would be the simplest working way to parallelize the loop over rvals and zvals?
Note: I used this answer as a starting point.
This probably isn't the best answer for this, but it's an answer, so please no hate :)
You can write a top-level wrapper function that can be serialized and have it execute other functions. It's a bit like function inception, but I solved a similar problem in my code this way.
Here is a brief example:
import base64
import marshal
import types
import multiprocessing as mp

def wrapper(arg_list):
    func_str, args = arg_list
    # rebuild the function from its serialized code object
    # (note: this only works for functions without closures)
    code = marshal.loads(base64.b64decode(func_str))
    func = types.FunctionType(code, globals(), "wrapped_func")
    return func(*args)

def run_func(func, arg_lists):
    # serialize the function's code object once, pair it with each argument tuple
    func_str = base64.b64encode(marshal.dumps(func.__code__, 0))
    payload = [(func_str, args) for args in arg_lists]
    with mp.Pool(2) as pool:
        results = pool.map(wrapper, payload)
    return results
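For the concrete question above, a simpler route (my suggestion, not from the answer) is to avoid shipping code objects at all: keep the worker at module level and pass the changing values as an argument tuple, which is exactly what the lambda was capturing:
import numpy as np
import multiprocessing as multiproc

def function(r, phi, z, params):
    times = np.array([1., 2., 3.])
    return np.array([times, r + z * times + phi])

def func_at(args):
    # unpack everything the lambda used to capture from the enclosing scope
    r, phi, z, param = args
    return function(r, phi, z, param)[1]

def calculate_func(rmax, zmax, phi, param):
    rvals = np.linspace(0, rmax, 5)
    zvals = np.linspace(0, zmax, 5)
    with multiproc.Pool(2) as pool:
        for r in rvals:
            fieldvals = np.array(pool.map(func_at, [(r, phi, z, param) for z in zvals]))
            print(fieldvals)

if __name__ == '__main__':
    calculate_func(3., 4., 5., 6.)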
