I want to get the sorted indexes of a np.ndarray. For an array such as
[[.5,.7, .9], [.6, .0, .8]]
the result would look like this
[[1,1],[0,0],[1,0],[0,1],[1,2],[0,2]]
Applying those indexes gives the correct sorting order, and at the same time they can be applied to other structures that have the same shape as the data.
I tried np.argsort, but that doesn't return (row, column) index pairs for a 2-D ndarray.
You can use np.argsort on the flattened array and then use np.divmod to map the flat positions back to indexes in your original shape.
Edit: for higher dimensions, np.unravel_index is the alternative to divmod, see https://numpy.org/doc/stable/reference/generated/numpy.unravel_index.html
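A minimal sketch of that approach (the array is the one from the question):

import numpy as np

a = np.array([[.5, .7, .9], [.6, .0, .8]])

# Sort positions in the flattened array, then map them back to 2-D indices.
flat_order = np.argsort(a, axis=None)             # indices into a.ravel()
rows, cols = np.unravel_index(flat_order, a.shape)
idx = np.stack([rows, cols], axis=1)              # one (row, col) pair per element

print(idx)            # [[1 1] [0 0] [1 0] [0 1] [1 2] [0 2]]
print(a[rows, cols])  # values in ascending order: [0.  0.5 0.6 0.7 0.8 0.9]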
I have an Excel sheet with some column data that I would like to use for matrix multiplications with the MMULT function. For that purpose I need to reshape the column data first. I would like to do the reshaping with a dynamic array function, since that could then feed directly into MMULT without actually displaying the reshaped matrix in the sheet (i.e. keeping only the column with the input data visible to the user). I am aware of ideas such as the one outlined here http://www.cpearson.com/excel/VectorToMatrix.aspx, however as far as I can see that requires the reshaped data to be displayed in the sheet, which I do not want. An alternative could be to enter the arrays directly in the formula using curly brackets, but as far as I can see that notation does not allow cell references, i.e. something like MMULT({A1,A2,A3;A4,A5,A6},{A7,A8;A9,A10;A11,A12}) is not allowed. Any ideas for solving this issue?
An example is shown below: I have the column data in my sheet and do not want to repeat it as reshaped data, but I would still like to be able to display the square of the reshaped matrix.
Reshaped data and matrix multiplication:
For reshaping a 9x1 array into a 3x3 array:
INDEX(B3:B11,SEQUENCE(ROWS(B3:B11)/3,3))
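Assuming the 9x1 data really is in B3:B11 as in the formula above, the same expression can be nested straight into MMULT so the reshaped matrix never has to be displayed; for the square of the reshaped matrix, for example:

MMULT(INDEX(B3:B11,SEQUENCE(3,3)),INDEX(B3:B11,SEQUENCE(3,3)))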
I have two columns in pandas: df.lat and df.lon.
Both have a length of 3897 and 556 NaN values.
My goal is to combine both columns and make a dict out of them.
I use the code:
dict(zip(df.lat,df.lon))
This creates a dict, but with one element less than my original columns.
I used len() to confirm this. I cannot figure out why the dict has one element less than my columns when both columns have the same length.
Another problem is that the dict contains only the raw values, without the keys "lat" and "lon".
Maybe someone here has an idea?
You may get a different length if there are repeated values in df.lat: you can't have duplicate keys in a dictionary, so all but one of the entries sharing a lat value are dropped.
A more flexible approach may be to use the df.to_dict() native method in pandas. In this example the orientation you want is probably 'records'. Full code:
df[['lat', 'lon']].to_dict('records')
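A small sketch with made-up data showing both the collapsed key and the 'records' output:

import pandas as pd

df = pd.DataFrame({'lat': [52.5, 52.5, 48.1],
                   'lon': [13.4, 13.5, 11.6]})   # hypothetical values with one duplicate lat

# zip-based dict: the duplicate lat collapses into a single key
print(dict(zip(df.lat, df.lon)))
# {52.5: 13.5, 48.1: 11.6}  -> one entry fewer than the rows

# 'records' orientation keeps every row and the column names
print(df[['lat', 'lon']].to_dict('records'))
# [{'lat': 52.5, 'lon': 13.4}, {'lat': 52.5, 'lon': 13.5}, {'lat': 48.1, 'lon': 11.6}]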
I have an INDArray with shape {7,2,3}. I would like to increase one or more of the dimensions, to {8,3,4} or {7,3,3}, etc., and insert the existing values into the resized array. I understand that the same array cannot be resized in place to increase its length, so I intend to create a bigger array of the same rank and insert the values into it. However, the various Nd4j.put methods expect only a scalar for insertion into the new array, and for Nd4j.copy to work the shapes of the two arrays need to be the same. How can I insert a smaller array into a bigger array so that every existing value keeps the same indices in both, and the bigger array simply adds new indices?
The easiest way is probably to get a subarray with the shape of the small array out of your big array, and call subarray_of_big_array.assign(small_array)
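As an illustration of the same subarray-and-assign idea in numpy terms (just a sketch of the indexing, not Nd4j code):

import numpy as np

small = np.arange(7 * 2 * 3).reshape(7, 2, 3)   # the existing {7,2,3} data
big = np.zeros((8, 3, 4))                       # new, larger array of the same rank

# The leading sub-block of `big` has the same shape as `small`, so every
# value keeps its original indices after the assignment.
big[:7, :2, :3] = small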
I would like to save several numpy arrays with np.savetxt('xx.csv'). The arrays are (m, n), and I would like to add a column of index labels before the first column and a header row above each block of data. Both the index and the headers are lists of strings.
When I call savetxt multiple times, it erases the previously written values.
I considered merging all the numpy arrays into one huge matrix, but I doubt that is optimal.
Thanks for your help
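For what it's worth, a sketch of one way to write several labelled blocks to the same file is to open it once and pass the handle to np.savetxt (the header and index strings here are made up):

import numpy as np

arrays = [np.random.rand(3, 2), np.random.rand(3, 2)]   # hypothetical (m, n) arrays
headers = ['col_a', 'col_b']                             # hypothetical header strings
row_labels = np.array(['r1', 'r2', 'r3'])                # hypothetical index strings

# Passing an open file handle to np.savetxt keeps each call from
# overwriting what the previous one wrote.
with open('xx.csv', 'w') as f:
    for arr in arrays:
        f.write(',' + ','.join(headers) + '\n')                  # header row
        block = np.column_stack([row_labels, arr.astype(str)])   # index column + data
        np.savetxt(f, block, fmt='%s', delimiter=',')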
I need to check if two dicts are equal. If the values rounded off to 6 decimal places are equal, then the program must say that they are equal. For example, the following two dicts are equal
{'A': 0.00025037208557341116}
and
{'A': 0.000250372085573415}
Can anyone suggest how to do this? My dictionaries are big (more than 8000 entries) and I need to access these values multiple times to do other calculations.
Test each key as you produce the second dict iteratively. Looking up a key/value pair in the dict you are comparing against is cheap (constant time per lookup, so the whole pass is linear), and you can round the values as you find them.
You are essentially performing a set difference to test the keys for equality, which requires at least a full loop over the smaller of the two key sets. If you already need a loop to generate one of the dicts, you are at an advantage, since that gives you the shortest route to detecting inequality as early as possible.
To test for two floats being the same within a set tolerance, see What is the best way to compare floats for almost-equality in Python?.
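A small sketch of that as a standalone comparison (dicts_almost_equal is just a hypothetical helper name; the linked question also discusses math.isclose as an alternative to rounding):

def dicts_almost_equal(d1, d2, places=6):
    # Keys must match exactly; values count as equal when they round
    # to the same number at `places` decimal places.
    if d1.keys() != d2.keys():
        return False
    return all(round(d1[k], places) == round(d2[k], places) for k in d1)

a = {'A': 0.00025037208557341116}
b = {'A': 0.000250372085573415}
print(dicts_almost_equal(a, b))   # True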