How to specify value of theano.tensor.ivector? - theano

I would like to create a theano.tensor.ivector variable and specify its values. In most code examples on the internet, I find v = T.ivector(). This creates the tensor variable but doesn't specify its value.
I tried this:
import theano.tensor as T
val = [1,5]
v = T.ivector(value=val, name='v')
but I get the following error:
File "<stdin>", line 1, in <module>
TypeError: __call__() got an unexpected keyword argument 'value'

I think you may be a little confused about the use of tensors, as it isn't a traditional variable that you assign a value to on declaration. A tensor is really a placeholder variable with a specified format that you will use in a function later. Extending on your example:
import theano.tensor as T
from theano import function
val = [1, 5]
v = T.ivector('v')
f = function([v], [v]) # Create a function that just returns the input
# Evaluate the function
f(val)
In the code above we just create a function that takes the tensor v and returns it. No value is bound to v until we call the function with f(val).
You may find the baby steps page of the documentation helpful.

Related

How to know the column of the start of the function declaration in the source code inspected by getsourcelines?

getsourcelines returns the line where the function is defined.
However, if there is more than one function on the line, as in the example below, it returns both, since it always returns the whole line.
import re
from inspect import getsourcelines

def f(f1, f2_is_not_used_now):
    lines = getsourcelines(f1)[0]
    sets = re.findall('(?={)(.+?)(?<=})', lines[0])
    print(sets)

f(lambda x: {x ** 2}, lambda y: {y ** 3})
# output: ['{x ** 2}', '{y ** 3}']
How can I get just the code (i.e., only the source code of the set of f1 in this contrived example) related to the inspected function object?
There is no fool-proof way to find the definition of a function object in the source at runtime: by then a function exists only as a code object, and pinpointing the exact position in the source where it was defined through string parsing always comes with caveats and breaks under some circumstances.
One of the more robust approaches is to convert the function object's bytecode back to Python code using a decompiler such as the uncompyle6 package:
from uncompyle6.main import decompile
from io import StringIO

def f(f1, f2_is_not_used_now):
    out = StringIO()
    decompile(bytecode_version=None, co=f1.__code__, out=out)
    print(out.getvalue())

f(lambda x: {x ** 2}, lambda y: {y ** 3})
This outputs (sans the comments):
return {
x ** 2}
which isn't exactly the original source code that defines the function with a lambda expression, but gives you an equivalent function reconstructed from the bytecode.
Demo: https://replit.com/@blhsing/PoisedGleamingField
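If the goal is only to recover the text of each lambda rather than an equivalent decompiled body, a hedged alternative (my sketch, not part of the answer above) is to take the line returned by getsourcelines and parse it with the stdlib ast module instead of a regex; ast.get_source_segment (Python 3.8+) then slices out each Lambda node precisely:

```python
import ast

def lambda_sources(line):
    """Return the source text of every lambda expression in a line of code.

    Note: like the regex approach, this cannot tell which of several
    lambdas on the same line corresponds to a given function object;
    it returns all of them in order of appearance.
    """
    tree = ast.parse(line)
    return [ast.get_source_segment(line, node)
            for node in ast.walk(tree)
            if isinstance(node, ast.Lambda)]

# The first element of getsourcelines(f1)[0] from the question:
line = "f(lambda x: {x ** 2}, lambda y: {y ** 3})"
print(lambda_sources(line))  # ['lambda x: {x ** 2}', 'lambda y: {y ** 3}']
```

Unlike decompilation, this keeps the author's original spelling of the expression, but it only works when the source file is available, just like getsourcelines itself.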

Strange ctypes behaviour on python callable wrapping c callable with c_char_p argtype

I'm observing a strange ctypes related behaviour in the following test program:
import ctypes as ct

def _pyfunc(a_c_string):
    print(type(a_c_string))
    a_c_string.value = b"87654321"
    return -123

my_str_buf = ct.create_string_buffer(b"test1234")
print(type(my_str_buf))
my_str_buf[3] = b'*'
print(my_str_buf.value)
my_str_buf.value = b"4321test"
print(my_str_buf.value)

signature = ct.CFUNCTYPE(ct.c_int, ct.c_char_p)
pyfunc = signature(_pyfunc)
pyfunc(my_str_buf)
print(my_str_buf.value)
The example wraps a Python function as a C callable via the ctypes API.
The goal is to pass the function a pointer to a C string, let it modify its contents (providing a fake value), and then return to the caller.
I started by creating a mutable string buffer via the ctypes function create_string_buffer.
As can be seen from the example, the string buffer is indeed mutable.
After that I create a C function prototype using ctypes.CFUNCTYPE(ct.c_int, ct.c_char_p) and then instantiate that prototype with my Python function, which should be called using the same signature. Finally I call the Python function with my mutable string buffer.
What irritates me is that the argument passed to that function shape-shifts from type <class 'ctypes.c_char_Array_9'> to <class 'bytes'> when the function is called. Unfortunately, the original mutable datatype turns into a completely useless immutable bytes object.
Is this a ctypes bug? Python Version is 3.6.6.
Here is the output:
<class 'ctypes.c_char_Array_9'>
b'tes*1234'
b'4321test'
<class 'bytes'>
Traceback (most recent call last):
  File "_ctypes/callbacks.c", line 234, in 'calling callback function'
  File "C:/Users/andree/source/Python_Tests/ctypes_cchar_prototype.py", line 5, in _pyfunc
    a_c_string.value = b"87654321"
AttributeError: 'bytes' object has no attribute 'value'
b'4321test'
Expected output:
<class 'ctypes.c_char_Array_9'>
b'tes*1234'
b'4321test'
<class 'ctypes.c_char_Array_9'>
b'87654321'
ctypes.c_char_p is automatically converted to Python bytes. If you don't want the behavior, use either:
ctypes.POINTER(ctypes.c_char)
class PCHAR(ctypes.c_char_p): pass (derivations suppress the behavior)
Note that an LP_c_char doesn't have a .value attribute, so I had to dereference the pointer directly to effect a change in the value.
Also, be careful not to exceed the length of the mutable buffer passed in. I added length as an additional parameter.
Example:
import ctypes as ct

@ct.CFUNCTYPE(ct.c_int, ct.POINTER(ct.c_char), ct.c_size_t)
def pyfunc(a_c_string, length):
    new_data = b'87654321\x00'   # ensure new null termination is present
    if len(new_data) > length:   # ensure new data doesn't exceed buffer length
        return 0                 # fail
    for i, c in enumerate(new_data):
        a_c_string[i] = c
    return 1                     # pass

my_str_buf = ct.create_string_buffer(10)
result = pyfunc(my_str_buf, len(my_str_buf))
print(result, my_str_buf.value)

my_str_buf = ct.create_string_buffer(8)
result = pyfunc(my_str_buf, len(my_str_buf))
print(result, my_str_buf.value)
1 b'87654321'
0 b''
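The second option, deriving from ctypes.c_char_p, can be sketched like this (my illustration of the documented rule that subclasses of fundamental ctypes types suppress the automatic conversion; the class name PCHAR is just the one suggested above):

```python
import ctypes as ct

class PCHAR(ct.c_char_p):
    """Subclass of c_char_p: suppresses the automatic bytes conversion."""
    pass

received = []

@ct.CFUNCTYPE(ct.c_int, PCHAR)
def pyfunc(a_c_string):
    # The callback now sees a PCHAR instance instead of a bytes object.
    received.append((type(a_c_string).__name__, a_c_string.value))
    return 0

buf = ct.create_string_buffer(b"test1234")
pyfunc(ct.cast(buf, PCHAR))
print(received)
```

If writing through the pointer is needed, cast the PCHAR back to ct.POINTER(ct.c_char), or use that pointer type as the argtype in the first place, as in the example above.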

Function always missing 1 required positional argument when stored in a list

I want to store functions in a list and then, later in the program, call those functions from that list with values also stored in that list.
Example:
import random
import time

ranges = 23, 24
my_functions_and_values = [[random.randint, ranges], [time.sleep, 2]]
for i in my_functions_and_values:
    i[0](i[1])
But it gives me the following error:
TypeError: randint() missing 1 required positional argument: 'b'
You can store the parameters to functions as tuples (so 2 will become (2, )). Then when you call the function do parameter unpacking with *:
import random
import time

ranges = 23, 24
my_functions_and_values = [[random.randint, ranges], [time.sleep, (2, )]]
for i in my_functions_and_values:
    i[0](*i[1])
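An alternative sketch (mine, not from the answer) that avoids per-call unpacking altogether: bind the arguments up front with functools.partial, so every stored item is a zero-argument callable:

```python
import functools
import random
import time

# Each partial freezes its arguments at creation time.
tasks = [
    functools.partial(random.randint, 23, 24),
    functools.partial(time.sleep, 0.01),
]

results = [task() for task in tasks]  # no argument bookkeeping at call time
print(results[0])  # 23 or 24
```

This also handles keyword arguments for free (functools.partial(f, x, key=value)), which the tuple-of-positional-args scheme does not.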

What's the correct way to call and use this class? Also getting TypeError: missing 1 required positional argument: 'self'

I'm still learning the various uses for class methods. I have some code that performs linear regression. So I decided to make a general class called LinRegression, with more specific methods that use the class based on the type of linear regression (i.e. use one trailing day, or 5 trailing days, etc. for the regression).
Anyways, here it goes. I feel like I am doing something wrong here with regards to how I defined the class and am calling the class.
This is from the main.py file:
lin_reg = LinRegression(daily_vol_result)
lin_reg.one_day_trailing()
And this is from the linear_regression file (just showing the one day trailing case):
class LinRegression:
    import matplotlib.pyplot as plt
    import numpy as np
    from sklearn.linear_model import LinearRegression as lr
    from sklearn.metrics import mean_squared_error as mse
    from SEplot import se_plot as SE

    def __init__(self, daily_vol_result):
        """
        :param daily_vol_result: result from def daily_vol_calc
        """
        import numpy as np
        data = np.asarray(daily_vol_result['Volatility_Daily'])
        self.data = data

    @classmethod
    def one_day_trailing(cls, self):
        """
        Compute one day trailing volatility
        :return: Mean Squared error, slope: b, and y-int: c
        """
        x = self.data[:-1]
        y = self.data[1:]
        x = x.reshape(len(x), 1)
        cls.lr.fit(x, y)
        b = cls.lr.coef_[0]
        c = cls.lr.intercept_
        y_fit1 = b * x + c
        MSE1 = cls.mse(y, y_fit1)
        print("MSE1 is " + str(MSE1))
        print("intercept is " + str(c))
        print("slope is " + str(b))
        cls.SE(y, y_fit1)
        return MSE1, b, c
What I "think" I am doing is this: since daily_vol_result is already passed when I construct lin_reg, calling lin_reg.one_day_trailing() should just execute one_day_trailing using the self set up in __init__.
However, I get TypeError: one_day_trailing() missing 1 required positional argument: 'self'. Some other info: the variable daily_vol_result is a DataFrame, which I convert to an np array to do the linear regression with sklearn.
Also, when I tried messing around with the code to get it to work, I had an additional issue where the line lr.fit(x, y) gave me a TypeError about a missing positional argument y. I checked the existence and length of y to see if it matched x, and it checks out. I am pretty confused as to how I was only passing one arg.
Your ideas and advice are welcome, thanks!
The problem is the position of self in one_day_trailing(cls, self): you have put self second in the method definition.
If you pass nothing and simply execute the method as you did in the second line of your code:
lin_reg.one_day_trailing()
the instance is passed as the first argument, so it lands in cls, and the self parameter of one_day_trailing() remains unfilled.
Swapping the arguments in the def, like this:
def one_day_trailing(self, cls):
is better, but then you need to pass the cls object, whatever it is.
See the following questions to know more:
missing 1 required positional argument:'self'
TypeError: attack() missing 1 required positional argument: 'self'
I found out that the linear regression package was acting like a class, so lr.fit(self, x, y) was what it expected as input. I first instantiated the class as A = lr(), then called A.fit(x, y).
I had this line in my main file:
ASDF = LinRegression.one_day_trailing(daily_vol_result)
I also figured out a more general way to produce these functions. I did not end up needing to use @classmethod or @staticmethod.
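Putting those findings together, a minimal sketch of the corrected design — plain instance methods with self first, and the regressor used through an instance — might look like this (np.polyfit stands in for sklearn's LinearRegression here only to keep the sketch self-contained; the structure, not the fitting backend, is the point):

```python
import numpy as np

class LinRegression:
    def __init__(self, daily_vol):
        # In the original code this would be daily_vol_result['Volatility_Daily']
        self.data = np.asarray(daily_vol, dtype=float)

    def one_day_trailing(self):
        """Regress each day's volatility on the previous day's."""
        x, y = self.data[:-1], self.data[1:]
        b, c = np.polyfit(x, y, 1)                 # slope and intercept
        mse = float(np.mean((y - (b * x + c)) ** 2))
        return mse, b, c

reg = LinRegression([1.0, 2.0, 3.0, 4.0])          # y = x + 1 exactly
mse, b, c = reg.one_day_trailing()
```

Called this way, reg.one_day_trailing() needs no extra arguments, because Python supplies the instance as self automatically.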

Does numpy.bincount support numpy.float128 type weights?

Here is some sample code using numpy.bincount:
import numpy as np

a = np.array([1.0, 2.0, 3.0], dtype=np.float128)
b = np.array([1, 2, 0], dtype=int)
c = np.bincount(b, weights=a)
If I run it, I get the following error report:
----> 1 c = np.bincount(b, weights=a)
TypeError: Cannot cast array data from dtype('float128') to dtype('float64') according to the rule 'safe'
Is it a bug of np.bincount? Does there exist any similar function which I can use to work with numpy.float128 type weights?
I wouldn't necessarily call it a bug, but it's not supported. In the C implementation of the bincount() function, the weights parameter is cast directly to a double array:
if (!(wts = PyArray_ContiguousFromAny(weight, PyArray_DOUBLE, 1, 1))) {
    goto fail;
}
Therefore, it's not possible to pass an np.float128 array to bincount.
Of course, you can always cast it to np.float64, as suggested in the comments, if the extra precision isn't required.
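If the extra precision is the point, one workaround (my sketch, not from the answer) is to reproduce the weighted bincount with np.add.at, which scatter-adds into an accumulator of whatever dtype you choose (np.longdouble is the portable spelling of np.float128 where the platform supports it):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0], dtype=np.longdouble)
b = np.array([1, 2, 0])

# Unbuffered equivalent of: for i in range(len(b)): out[b[i]] += a[i]
out = np.zeros(b.max() + 1, dtype=np.longdouble)
np.add.at(out, b, a)
print(out)  # per-bin totals, still in extended precision
```

np.add.at is slower than bincount's C loop, but it never downcasts the accumulator, so the extended precision survives the summation.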
