Given a numpy array a, is there any alternative to
a[-1]
to get the last element?
The idea is to have some aggregating NumPy method, like
np.last(a)
that could be passed to a function to operate on a numpy array:
import numpy as np

def operate_on_array(a: np.ndarray, np_method_name: str):
    method = getattr(np, np_method_name)
    return method(a)
This works for methods such as np.mean and np.sum, but I have not been able to find a NumPy method name that returns the last (or first) element of the array.
What about a lambda?
lambda x: x[-1]
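Since the dispatcher in the question only needs a callable, a small variation that accepts the callable itself (rather than a method name looked up on np) lets you pass the lambda directly. A sketch:

```python
import numpy as np

def operate_on_array(a, fn):
    # fn is any callable that takes the array, not just a numpy method
    return fn(a)

a = np.array([10, 20, 30])
print(operate_on_array(a, np.mean))          # 20.0
print(operate_on_array(a, lambda x: x[-1]))  # 30
print(operate_on_array(a, lambda x: x[0]))   # 10
```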
suppose we have two dictionaries
d1={"a":1,"b":2}
d2={"x":4,"y":2}
Now I want to compare only the values of both dictionaries (no matter what keys they have).
How can I do this? Please suggest.
You can convert the values to NumPy arrays and then apply the == operator elementwise:
import numpy as np
d1={"a":1,"b":2}
d2={"x":4,"y":2}
np.array(list(d1.values())) == np.array(list(d2.values()))
output:
array([False, True])
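If the insertion order of the values should not matter either, comparing sorted value lists is one option. A small sketch, assuming both dicts have the same number of entries:

```python
import numpy as np

d1 = {"a": 1, "b": 2}
d2 = {"x": 4, "y": 2}

# elementwise comparison in insertion order
mask = np.array(list(d1.values())) == np.array(list(d2.values()))
print(mask)  # [False  True]

# order-independent comparison of the value lists
print(sorted(d1.values()) == sorted(d2.values()))  # False
```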
I am looking for a single vector with values [(0:400) (-400:-1)].
Can anyone help me write this in Python?
Using NumPy's .arange to generate each range and .concatenate to join them into a single vector:
import numpy as np
arr = np.concatenate([np.arange(401), np.arange(-400, 0)])
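A quick sanity check of the combined vector, using np.concatenate to join the two ranges end to end:

```python
import numpy as np

vec = np.concatenate([np.arange(401), np.arange(-400, 0)])
print(vec.shape)  # (801,)
print(vec[:3])    # [0 1 2]
print(vec[-3:])   # [-3 -2 -1]
```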
I'm using a camera to store raw data in a NumPy array, but I don't know: what does a colon before a number mean in a NumPy array index?
import numpy as np
import picamera
camera = picamera.PiCamera()
camera.resolution = (128, 112)
data = np.empty((128, 112, 3), dtype=np.uint8)
camera.capture(data, 'rgb')
data = data[:128, :112]
NumPy array indexing is explained in the docs.
This example shows what is selected:
import numpy as np
data = np.arange(64).reshape(8, 8)
print(data)
data = data[:3, :5]
print(data)
The result will be the first 5 elements of the first 3 rows of the array.
As in standard Python, lst[:3] means everything up to (but not including) the element with index 3. In NumPy you can do the same for every dimension, using the syntax shown in your question.
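To make the analogy with plain Python lists concrete, a small sketch:

```python
import numpy as np

lst = list(range(8))
print(lst[:3])  # [0, 1, 2], i.e. everything with index < 3

data = np.arange(64).reshape(8, 8)
sub = data[:3, :5]  # first 3 rows, first 5 columns
print(sub.shape)    # (3, 5)
print(sub[0])       # [0 1 2 3 4]
```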
As part of my project, I am trying to implement a parallelized normalization operation on a batch of matrices, using a map function that takes the matrix to be processed and vectors holding the min and max value of each dimension as input variables. The code is listed below:
import numpy as np
from functools import partial
def cf(A, MinValues, MaxValues):
    A = (A - MinValues) / (MaxValues - MinValues)
    print("Result is ##################", A)
    return A
if __name__ == '__main__':
    AMatrix = np.matrix([[1, 5, 9], [4, 8, 3], [7, 2, 6]])
    MinMatrix = np.matrix([1, 2, 3])
    MaxMatrix = np.matrix([7, 8, 9])
    ........
    sc.parallelize(AMatrix).map(partial(cf, MinValues=MinMatrix, MaxValues=MaxMatrix)).collect()
After I run the code above, it displays correct answers on the terminal via the print calls during processing, but collect() always returns [[None], [None], [None]] at the end, which means (I guess) that after the map() operation Spark can only collect a list of [None] elements.
Can a guru here tell me what happened, and what is the right way to implement the function?
Thanks in advance.
I ran this code (Python 2.7):
import numpy as np
from functools import partial
def cf(A, MinValues, MaxValues):
    # print "Result is " + str((A - MinValues) / (MaxValues - MinValues))
    A = (A - MinValues) / (MaxValues - MinValues)
    return A
AMatrix=np.matrix([[1,5,9],[4,8,3],[7,2,6]])
MinMatrix=np.matrix([1,2,3])
MaxMatrix=np.matrix([7,8,9])
print sc.parallelize(AMatrix).map(partial(cf,MinValues=MinMatrix,MaxValues=MaxMatrix)).collect()
And this is the result:
[matrix([[0, 0, 1]]), matrix([[0, 1, 0]]), matrix([[1, 0, 0]])]
I can't see the problem.
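Note that with integer matrices the division truncates under Python 2, which is why the collected result above contains only 0s and 1s; casting to float gives the fractional normalization. A NumPy-only sketch of the same computation (no Spark needed):

```python
import numpy as np

A = np.array([[1, 5, 9], [4, 8, 3], [7, 2, 6]], dtype=float)
mins = np.array([1, 2, 3], dtype=float)
maxs = np.array([7, 8, 9], dtype=float)

# min-max normalization, broadcasting the min/max vectors over the rows
normalized = (A - mins) / (maxs - mins)
print(normalized)
# rows normalize to [0, 0.5, 1], [0.5, 1, 0], [1, 0, 0.5]
```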