How to detect a non-iterable parameter and convert it to an iterable one - python-3.x

I have a Python function which accepts a vector-like parameter, but I would like the function, when called with a non-iterable argument, to accept it and treat it like a one-element vector.
For example, a function which returns the size of a vector:
def longitud(v):
    return len(v)

y = [1, 2]
print(longitud(y))  # it will return 2, OK
x = 1
print(longitud(x))  # ERROR
It will produce an error because x is not iterable. I would like the longitud function to accept both kinds of argument without problems and, in the second case, treat x like a one-element vector. Is there any elegant way to do this?

Do you want this?
def longitud(*v):
    return len(v)

y = [1, 2]
print(longitud(*y))  # it will return 2, OK
x = 1
print(longitud(x))  # No ERROR
Alternative (check whether the parameter is iterable; if it is not, return 1):
from collections.abc import Iterable

def longitud(v):
    if isinstance(v, Iterable):
        return len(v)
    return 1

y = [1, 2]
print(longitud(y))  # it will return 2, OK
x = 1
print(longitud(x))  # NO ERROR
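If you also want the non-iterable argument to actually behave as a one-element vector inside the function (rather than only reporting a length of 1), you can normalize it up front. A minimal sketch of that idea; note that strings are iterable, so "abc" would count as three elements unless you special-case it:

```python
from collections.abc import Iterable

def as_vector(v):
    """Return v unchanged if it is iterable, else wrap it in a one-element list."""
    if isinstance(v, Iterable):
        return v
    return [v]

def longitud(v):
    return len(as_vector(v))

print(longitud([1, 2]))  # 2
print(longitud(1))       # 1
```

The advantage over the isinstance-and-return-1 approach is that the wrapped value can then be indexed and iterated over like any other vector.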

Related

How to use scipy's fminbound function when function returns a dictionary

I have a simple function for which I want to calculate the minimum value:
from scipy import optimize

def f(x):
    return x**2

optimize.fminbound(f, -1, 2)
This works fine. Now I modify the above function so that it returns a dictionary:
def f1(x):
    y = x ** 2
    return_list = {}
    return_list['x'] = x
    return_list['y'] = y
    return return_list
While it returns multiple objects, x and y, I want to apply fminbound only to y; the other object, x, is just informational, for other uses of this function.
How can I use fminbound for this setup?
You need a simple wrapper function that extracts y:
def f1(x):
    y = x ** 2
    return_list = {}
    return_list['x'] = x
    return_list['y'] = y
    return return_list

optimize.fminbound(lambda x: f1(x)['y'], -1, 2)
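The same extract-one-value pattern works with any routine that expects a scalar-valued callable. As a dependency-free sketch (using a hypothetical brute-force grid search, grid_minimize, as a stand-in for fminbound), the lambda simply pulls 'y' out of the returned dictionary:

```python
def f1(x):
    return {'x': x, 'y': x ** 2}

def grid_minimize(func, lo, hi, steps=1000):
    """Stand-in for optimize.fminbound: scan a uniform grid and
    return the x that gives the smallest func(x)."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(xs, key=func)

best_x = grid_minimize(lambda x: f1(x)['y'], -1, 2)
print(best_x)  # close to 0, the minimiser of x**2 on [-1, 2]
```

The wrapper never changes f1 itself, so the full dictionary (including 'x') stays available for other callers.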

Tuple of Tensors

I recently asked one part of this question. I am building a chatbot, and there is a function that causes problems. The function is given below:
def variable_from_sentence(sentence):
    vec, length = indexes_from_sentence(sentence)
    inputs = [vec]
    lengths_inputs = [length]
    if hp.cuda:
        batch_inputs = Variable(torch.stack(torch.Tensor(inputs), 1).cuda())
    else:
        batch_inputs = Variable(torch.stack(torch.Tensor(inputs), 1))
    return batch_inputs, lengths_inputs
But when I try to run the chatbot code, it gives me this error:
stack(): argument 'tensors' (position 1) must be tuple of Tensors, not tensor
For that reason, I fixed the function like this:
def variable_from_sentence(sentence):
    vec, length = indexes_from_sentence(sentence)
    inputs = [vec]
    lengths_inputs = [length]
    if hp.cuda:
        batch_inputs = torch.stack(inputs, 1).cuda()
    else:
        batch_inputs = torch.stack(inputs, 1)
    return batch_inputs, lengths_inputs
But it still gives an error, and the error is like this:
TypeError: expected Tensor as element 0 in argument 0, but got list
What should I do now in this situation?
Since vec and length are both integers, you can use torch.tensor directly:
def variable_from_sentence(sentence):
    vec, length = indexes_from_sentence(sentence)
    inputs = [vec]
    lengths_inputs = [length]
    if hp.cuda:
        batch_inputs = torch.tensor(inputs, device='cuda')
    else:
        batch_inputs = torch.tensor(inputs)
    return batch_inputs, lengths_inputs

How to use ray parallelism within a class in python?

I want to use the ray task method rather than the ray actor method to parallelise a method within a class. The reason is that the latter seems to require changing how a class is instantiated (as shown here). A toy code example is below, as well as the error:
import numpy as np
import ray

class MyClass(object):
    def __init__(self):
        ray.init(num_cpus=4)

    @ray.remote
    def func(self, x, y):
        return x * y

    def my_func(self):
        a = [1, 2, 3]
        b = np.random.normal(0, 1, 10000)
        result = []
        # we wish to parallelise over the array `a`
        for sub_array in np.array_split(a, 3):
            result.append(self.func.remote(sub_array, b))
        return result

mc = MyClass()
mc.my_func()
>>> TypeError: missing a required argument: 'y'
The error arises because ray does not seem to be "aware" of the class, and so it expects an argument self.
The code works fine if we do not use classes:
@ray.remote
def func(x, y):
    return x * y

def my_func():
    a = [1, 2, 3, 4]
    b = np.random.normal(0, 1, 10000)
    result = []
    # we wish to parallelise over the list `a`
    # split `a` and send each chunk to a different processor
    for sub_array in np.array_split(a, 4):
        result.append(func.remote(sub_array, b))
    return result

res = my_func()
ray.get(res)
>>> [array([-0.41929678, -0.83227786, -2.69814232, ..., -0.67379119,
-0.79057845, -0.06862196]),
array([-0.83859356, -1.66455572, -5.39628463, ..., -1.34758239,
-1.5811569 , -0.13724391]),
array([-1.25789034, -2.49683358, -8.09442695, ..., -2.02137358,
-2.37173535, -0.20586587]),
array([ -1.67718712, -3.32911144, -10.79256927, ..., -2.69516478,
-3.1623138 ,  -0.27448782])]
We see the output is a list of 4 arrays, as expected. How can I get MyClass to work with parallelism using ray?
A few tips:
It's generally recommended that you only use the ray.remote decorator on functions or classes in python (not bound methods).
You should be very careful about calling ray.init inside a class's constructor, since ray.init is not idempotent (which means your program will fail if you instantiate multiple instances of MyClass). Instead, you should make sure ray.init is run only once in your program.
I think there are two ways of achieving the result you're going for with Ray here.
You could move func out of the class, so it becomes a function instead of a bound method. Note that in this approach MyClass will be serialized, which means that changes that func makes to MyClass will not be reflected anywhere outside the function. In your simplified example, this doesn't appear to be a problem. This approach makes it easiest to achieve the most parallelism.
@ray.remote
def func(obj, x, y):
    return x * y

class MyClass(object):
    def my_func(self):
        ...
        # we wish to parallelise over the array `a`
        for sub_array in np.array_split(a, 3):
            result.append(func.remote(self, sub_array, b))
        return result
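The serialization caveat above can be illustrated without Ray at all: arguments to a remote function are serialized, so mutations inside the function land on a copy. A hypothetical stand-in, fake_remote, models that boundary with copy.deepcopy:

```python
import copy

class MyClass:
    def __init__(self):
        self.calls = 0

def func(obj, x, y):
    obj.calls += 1  # this mutation happens on the copied object
    return x * y

def fake_remote(f, obj, *args):
    # Ray serializes arguments before shipping them to a worker;
    # deepcopy models that serialize/deserialize boundary.
    return f(copy.deepcopy(obj), *args)

mc = MyClass()
print(fake_remote(func, mc, 2, 3))  # 6
print(mc.calls)  # still 0: the original object was never mutated
```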
The other approach you could consider is to use async actors. In this approach, the ray actor will handle concurrency via asyncio, but this comes with the limitations of asyncio.
@ray.remote(num_cpus=4)
class MyClass(object):
    async def func(self, x, y):
        return x * y

    def my_func(self):
        a = [1, 2, 3]
        b = np.random.normal(0, 1, 10000)
        result = []
        # we wish to parallelise over the array `a`
        for sub_array in np.array_split(a, 3):
            result.append(self.func.remote(sub_array, b))
        return result
Please see this code:
import time

import ray

@ray.remote
class Prime:
    # Constructor
    def __init__(self, number):
        self.num = number

    def SumPrime(self, num):
        tot = 0
        for i in range(3, num):
            c = 0
            for j in range(2, int(i**0.5) + 1):
                if i % j == 0:
                    c = c + 1
            if c == 0:
                tot = tot + i
        return tot

num = 1000000
start = time.time()
# make an object of the Prime class
prime = Prime.remote(num)
print("duration =", time.time() - start, "\nsum_prime =", ray.get(prime.SumPrime.remote(num)))

Trying to use a function that returns three values as the input for a function that takes 3 values

I have a function that does a few things, but ultimately returns three lists.
I have another function, a set method in a class, which takes three inputs.
When I try to use the first function as the argument for the set function, it complains that there aren't enough inputs, despite the fact that it returns the right amount. Is there a way around this? Should I just declare some temporary local variables to do this?
A simplified version of my code:
class hello(object):
    a, b, c = 0, 0, 0

    def __init__(self, name):
        self.name = name

    def setThings(self, one, two, three):
        self.a = one
        self.b = two
        self.c = three

def someStuff(x, y, z):
    newX = x * 1337
    newY = y * 420
    newZ = z * 69
    return newX, newY, newZ

first = int(input("first"))
second = int(input("second"))
third = int(input("third"))

kenzi = hello(input("name pls"))
kenzi.setThings(someStuff(first, second, third))
Add an asterisk before the function call to unpack its return value when passing it as an argument.
kenzi.setThings(*someStuff(first, second, third))
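The asterisk unpacks the returned tuple into separate positional arguments, so the three returned values line up with the three parameters. A minimal sketch without the class and the input() calls:

```python
def someStuff(x, y, z):
    return x * 1337, y * 420, z * 69

def setThings(one, two, three):
    return (one, two, three)

# Without the asterisk, the whole tuple would be passed as the first
# argument; with it, the tuple is spread across the three parameters.
result = setThings(*someStuff(1, 2, 3))
print(result)  # (1337, 840, 207)
```

Assigning to temporary variables (a, b, c = someStuff(...)) works too, but the asterisk avoids the intermediate names.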

Theano scan: how do I have a tuple output as input into next step?

I want to do this sort of loop in Theano:
def add_multiply(a, b, k):
    return a + b + k, a * b * k, k

x = 1
y = 2
k = 1
tuples = []
for i in range(5):
    x, y, k = add_multiply(x, y, k)
    tuples.append((x, y, k))
However, when I do
x0 = T.dvector('x0')
i = T.iscalar('i')
results, updates = th.scan(fn=add_multiply, outputs_info=[{'initial': x0, 'taps': [-1]}], n_steps=i)
I get TypeError: add_multiply() takes exactly 3 arguments (1 given). If I change it so that the function takes a single tuple instead, I get ValueError: length not known
In particular, I eventually want to differentiate the entire result with respect to k.
The first error occurs because your add_multiply function takes 3 arguments but, by having only one element in the outputs_info list, you're only providing a single argument. It's not clear whether you intended the x0 vector to be the initial value for just a or expected it to be spread over a, b, and k. The latter isn't supported and, in general, tuples are not supported by Theano: everything needs to be a tensor (e.g. scalars are just special types of tensors with zero dimensions).
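To see why the step function must take one argument per outputs_info entry, here is a pure-Python sketch of (roughly) how scan threads the previous outputs, followed by any non-sequences, back into the step function at each iteration. The name simple_scan is made up for illustration:

```python
def simple_scan(fn, outputs_info, non_sequences, n_steps):
    """Toy model of theano.scan: each entry of outputs_info becomes one
    positional argument to fn, followed by the non-sequences."""
    state = list(outputs_info)
    history = []
    for _ in range(n_steps):
        state = list(fn(*state, *non_sequences))
        history.append(tuple(state))
    return history

def add_multiply(a, b, k):
    return a + b + k, a * b * k

print(simple_scan(add_multiply, [1, 2], [1], 5))
# [(4, 2), (7, 8), (16, 56), (73, 896), (970, 65408)]
```

With a single outputs_info entry, fn would be called with a single positional argument, which is exactly the TypeError you saw.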
You can achieve a replica of the Python implementation in Theano as follows.
import theano
import theano.tensor as tt

def add_multiply(a, b, k):
    return a + b + k, a * b * k

def python_main():
    x = 1
    y = 2
    k = 1
    tuples = []
    for i in range(5):
        x, y = add_multiply(x, y, k)
        tuples.append((x, y, k))
    return tuples

def theano_main():
    x = tt.constant(1, dtype='uint32')
    y = tt.constant(2, dtype='uint32')
    k = tt.scalar(dtype='uint32')
    outputs, _ = theano.scan(add_multiply, outputs_info=[x, y], non_sequences=[k], n_steps=5)
    g = theano.grad(tt.sum(outputs), k)
    f = theano.function(inputs=[k], outputs=outputs + [g])
    tuples = []
    xvs, yvs, _ = f(1)
    for xv, yv in zip(xvs, yvs):
        tuples.append((xv, yv, 1))
    return tuples

print('Python:', python_main())
print('Theano:', theano_main())
Note that in the Theano version, all the tuple handling happens outside Theano; Python has to convert from the three tensors returned by the Theano function into a list of tuples.
Update:
It's unclear what "the entire result" should refer to but the code has been updated to show how you might differentiate with respect to k. Note that in Theano the symbolic differentiation only works with scalar expressions, but can differentiate with respect to multi-dimensional tensors.
In this update the add_multiply method no longer returns k since that is constant. For similar reasons, the Theano version now accepts k as a non_sequence.
