how to get a kind of "maximum" in a matrix, efficiently - python-3.x

I have the following problem: I have a matrix opened with the pandas module, where each cell has a number between -1 and 1. What I want to find is the maximum "possible" value in a row that is not also the maximum value of another row.
If, for example, two rows have their maximum value in the same column, I compare both values and keep the bigger one; for the row whose maximum is smaller than the other's, I take its second maximum instead (and repeat the same analysis again and again).
To explain myself better, consider my code:
import numpy as np
import pandas as pd

matrix = pd.read_csv("matrix.csv")
# this matrix has an id (or name) for each column
# ... and the first column has the id of each row
results = pd.DataFrame(np.empty((len(matrix), 3), dtype=object),
                       columns=['id1', 'id2', 'max_pos'])
l = len(matrix.columns)  # number of columns
next = 1
while next == 1:
    next = 0
    for i in range(0, len(matrix)):
        max_column = str(0)
        for j in range(1, l):  # start at 1 because the first column is an id
            if matrix[max_column][i] < matrix[str(j)][i]:
                max_column = str(j)
        results['id1'][i] = str(i)  # I could also put matrix['0'][i] here
        results['id2'][i] = max_column
        results['max_pos'][i] = matrix[max_column][i]
    # now check whether two or more rows have the same max column
    for i in range(0, len(results)):
        for ii in range(0, len(results)):
            # if two rows have their max in the same column, keep the one with the
            # ... bigger max value and change the other to -1 to iterate again
            if (results['id2'][i] == results['id2'][ii]) and (results['max_pos'][i] < results['max_pos'][ii]):
                matrix[results['id2'][i]][i] = -1
                next = 1
To give an example:
#consider
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[4, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 4 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
#at the first iteration I will have the following result
0 b 4 # this means that row 0 has its maximum at column 'b' and its value is 4
1 b 5
2 a 5
3 c 2
#the problem is that column b is the maximum of both rows 0 and 1; since the maximum of row 1 is bigger than that of row 0, I take the second maximum of row 0:
0 c 3
1 b 5
2 a 5
3 c 2
#now rows 0 and 1 are resolved, but column c is the maximum of both rows 0 and 3, so I compare them and take the second maximum of row 3:
0 c 3
1 b 5
2 a 5
3 d 1
#now I'm done. If two rows have the same column as their maximum and also the same value, nothing happens and I keep those values.
#what if the matrix were
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[5, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 5 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
#then, at the first iteration the result will be:
0 b 5
1 b 5
2 a 5
3 c 2
#then, given that the max value of rows 0 and 1 is in the same column, I should compare the maximum values
# ... but in this case the values are the same (both are 5), so this would be the end of the iteration,
# ... because I can't choose between rows 0 and 1, and the other rows have their maxima in different columns...
This code works perfectly for me on a 100x100 matrix, for example. But if the matrix size grows to 50,000x50,000, the code takes far too much time to finish. I know my code may be about the most inefficient way to do this, but I don't know how to deal with it.
I have been reading about threads in Python, but spawning 50,000 threads doesn't help because my computer can't use more CPU. I also tried functions such as .max(), but I wasn't able to get the column of the max and compare it with the other maxima...
If anyone could help me of give me a piece of advice to make this more efficient I would be very grateful.
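As a side note on that .max() attempt: pandas can produce both the per-row maximum and the column where it occurs in one vectorized call each. A minimal sketch on the first example matrix above (this assumes the id column has been set aside so only value columns remain; it doesn't resolve the cross-row conflicts, but it replaces the two inner loops):
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 5, 0], 'b': [4, 5, 1, 0], 'c': [3, 3, 4, 2], 'd': [1, 0, 0, 1]})
row_max = df.max(axis=1)     # the maximum value of each row: 4, 5, 5, 2
max_col = df.idxmax(axis=1)  # the column where it occurs: b, b, a, c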

Going to need more information on this. What are you trying to accomplish here?
This will help you get some of the way, but in order to fully achieve what you're doing I need more context.
We'll import numpy and Counter from collections:
import numpy as np
from collections import Counter
We'll create a random 50k x 50k matrix of numbers between -10M and +10M
mat = np.random.randint(-10000000,10000000,(50000,50000))
Now to get the maximums for each row we can just do the following list comprehension:
maximums = [max(mat[x,:]) for x in range(len(mat))]
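If that comprehension turns out to be the bottleneck, numpy's own reduction computes the same per-row maximums fully vectorized; a one-line sketch that keeps the rest of the code unchanged:
maximums = mat.max(axis=1).tolist()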
Now we want to find out which maximums do not appear as the maximum of any other row. We can use Counter on our maximums list to find out how many times each one occurs. Counter returns a counter object, which is like a dictionary with the maximum as the key and the number of times it appears as the value.
We then use a dict comprehension keeping only the entries whose count equals 1. That gives us the maximums that show up exactly once. We use the .keys() method to grab the numbers themselves, and then turn the result into a list.
c = Counter(maximums)
{9999117: 15,
9998584: 2,
9998352: 2,
9999226: 22,
9999697: 59,
9999534: 32,
9998775: 8,
9999288: 18,
9998956: 9,
9998119: 1,
...}
k = list( {x: c[x] for x in c if c[x] == 1}.keys() )
[9998253,
9998139,
9998091,
9997788,
9998166,
9998552,
9997711,
9998230,
9998000,
...]
Lastly, we can use the following list comprehension to iterate through the original maximums list and get the indices of those rows.
indices = [i for i, x in enumerate(maximums) if x in k]
Depending on what else you're looking to do we can go from here.
It's not the speediest program, but finding the maximums, the counter, and the indices takes 182 seconds on a 50,000 by 50,000 matrix that is already loaded.
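One small optimization worth noting: x in k scans a list, so that last comprehension is roughly quadratic. Converting k to a set first makes each membership test effectively constant time; a sketch:
k_set = set(k)
indices = [i for i, x in enumerate(maximums) if x in k_set]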

Related

Create a new column by extracting the smallest tuple from a data frame column

I have a dataframe with a column that contains tuples. I would like to create a new column that extracts the smallest value from each tuple in that column.
What I have tried so far
mydataframe['min_values'] = mydataframe['tuple_column'].apply(lambda x: min(x))
This approach seems to work when I have at least 2 values in a tuple, but it fails when a cell holds only one value, e.g. 5 in the example below. Could you please suggest a method that would accomplish this task in a better manner?
Example and desired result
Tuple Column    New Column
(1,2,3,5)       1
(10,11)         10
(5)             5
Thanks
(5) is not a tuple, it is just 5. Use numpy.min, which also handles scalar values as input:
import numpy as np
df['New Column'] = df['Tuple Column'].apply(np.min)
Output:
Tuple Column New Column
0 (1, 2, 3, 5) 1
1 (10, 11) 10
2 5 5
Here is a way using map():
df['Tuple Column'].map(lambda x: min(x) if isinstance(x, tuple) else x)
Output:
0     1
1    10
2     5
If the cells are stored as strings, eval can parse them first (here df1 is the frame holding the tuple column as text):
df1.applymap(lambda x: pd.Series(eval(x)).min())
Output:
0     1
1    10
2     5

Counting the number of ones in a list

I already know the number of columns, which is 3. The program asks the user to enter the number of rows.
After that, for each row the program checks whether the number of ones entered for that row is greater than or equal to 2, and it counts how many rows satisfy that condition.
For example, if we enter 3 as the number of rows, we then enter a value of 0 or 1 for each element (each row has three elements):
1 0 1
1 1 1
0 1 0
The program should print 2 (because the first and second rows each contain two or more ones).
Here is the code that I wrote, but I'm unable to count the number of ones in each row:
tab = []
ligne = int(input('Enter rows : '))
column = 1
for i in range(ligne):
    a = []
    for k in range(column):
        new = input()
        so = new.split()
        a.append(new)
    print()
    tab.append(a)
for i in range(ligne):
    for k in range(column):
        for c in range(len(so)):
            so[i][k] = int(so[i][k])
# I hardcoded the matrix for the demo.
matrix = [
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
]
count = 0
for row in matrix:
    if row.count(1) > 1:
        count += 1
Output:
>>> print(count)
2
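The input-reading part of the original code can be repaired in the same spirit; a minimal sketch (ligne and the prompt follow the question, and the >= 2 criterion comes from its example) that reads each row as one space-separated line:
ligne = int(input('Enter rows : '))
matrix = []
for _ in range(ligne):
    # e.g. the user types: 1 0 1
    row = [int(tok) for tok in input().split()]
    matrix.append(row)
count = sum(1 for row in matrix if row.count(1) >= 2)
print(count)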

python3.7 & pandas - use column value in row as lookup value to return different column value

I've got a tricky situation - tricky for me since I'm really new to Python. I've got a dataframe in pandas, and I need to logic my way through building a new column that will later be used in a data match against a different source. Basically, the picture shows what I can't figure out.
For any of the LOW labels I need to retrieve their MID_LEVEL label and copy it to a new column. The DESIRED OUTPUT column is what I need to create.
You can see that LABEL_PATH is formatted in a way that lets me use the first 9 characters as a "lookup" to find the corresponding LABEL, but I can't figure out how to achieve that. As an example, for any row whose LABEL_PATH starts with "0.02.0004", the desired output needs to be "MID_LEVEL1".
This dataset has around 25k rows, so I wanted to avoid row iteration as well.
Any help would be greatly appreciated!
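The picture isn't available here, so the exact column layout is a guess; still, a minimal sketch of the prefix-lookup idea described above, assuming columns named LABEL_PATH and LABEL and that each mid-level row's path is exactly the 9-character prefix:
import pandas as pd

# hypothetical data shaped after the question's description
df = pd.DataFrame({
    'LABEL_PATH': ['0.02.0004', '0.02.0004.001', '0.02.0004.002'],
    'LABEL': ['MID_LEVEL1', 'low_a', 'low_b'],
})
mid = df[df['LABEL_PATH'].str.len() == 9]            # rows that are themselves mid level
lookup = dict(zip(mid['LABEL_PATH'], mid['LABEL']))  # prefix -> mid-level label
df['DESIRED_OUTPUT'] = df['LABEL_PATH'].str[:9].map(lookup)
Because .str[:9] and .map are vectorized, this avoids explicit row iteration even at 25k rows.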
Choosing a similar example as you did:
df = pd.DataFrame({"a":["1","1.1","1.1.1","1.1.2","2"],"b":range(5)})
df["c"] = np.nan
mask = df.a.apply(lambda x: len(x.split(".")) < 3)
df.loc[mask,"c"] = df.b[mask]
df.c.fillna(method="ffill", inplace=True)
Most of the magic takes place in the line where mask is defined, but it's not that difficult: if the value in a splits into fewer than 3 parts (i.e., it has at most one dot), mark it as True, otherwise False.
Use that mask to copy over the values, and then fill unspecified values with valid values from above.
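For reference, after those lines df comes out as (column c carries each short path's b value down to the longer paths beneath it):
       a  b    c
0      1  0  0.0
1    1.1  1  1.0
2  1.1.1  2  1.0
3  1.1.2  3  1.0
4      2  4  4.0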
I am using this data for comparison:
test_dict = {"label_path": [1, 2, 3, 4, 5, 6], "label": ["low1", "low2", "mid1", "mid2", "high1", "high2"], "desired_output": ["mid1", "mid2", "mid1", "mid2", "high1", "high2"]}
df = pd.DataFrame(test_dict)
Which gives :
label_path label desired_output
0 1 low1 mid1
1 2 low2 mid2
2 3 mid1 mid1
3 4 mid2 mid2
4 5 high1 high1
5 6 high2 high2
With a bit of logic and a merge:
desired_label_df = df.drop_duplicates("desired_output", keep="last")
desired_label_df = desired_label_df[["label_path", "desired_output"]]
desired_label_df.columns = ["desired_label_path", "desired_output"]
df = df.merge(desired_label_df, on="desired_output", how="left")
Gives us :
label_path label desired_output desired_label_path
0 1 low1 mid1 3
1 2 low2 mid2 4
2 3 mid1 mid1 3
3 4 mid2 mid2 4
4 5 high1 high1 5
5 6 high2 high2 6
Edit: if you want to create the desired_output column, just do the following:
df["desired_output"] = df["label"].apply(lambda x: x.replace("low", "mid"))

how to get the value of column 2 when column 1 is greater than 3, and check which bin this value belongs to

I have one dataframe with two columns, A and B. First I need to make empty bins with step 1 from 1 to 11: (1,2), (2,3), ..., (10,11). Then, wherever the column B value in the original dataframe is greater than 3, I need to get the value of column A from 2 rows before that point, and finally count which bin each retrieved value falls into.
Here is example dataframe :
df=pd.DataFrame({'A':[1,8.5,5.2,7,8,9,0,4,5,6],'B':[1,2,2,2,3.1,3.2,3,2,1,2]})
Required output 1:
df_out1=pd.DataFrame({'Value_A':[8.5,5.2]})
Required output 2:
df_output2:
Bins     count
(1,2)    0
(2,3)    0
(3,4)    0
(4,5)    0
(5,6)    1
(6,7)    0
(7,8)    0
(8,9)    1
(9,10)   0
(10,11)  0
You can index a shifted copy of the series with the condition on 'B' to pull the earlier values of 'A' (here shift(3) is the offset that reproduces the required output of 8.5 and 5.2):
out1 = df['A'].shift(3)[df['B'] > 3]
The thing you want to do with the bins is known as a histogram. You can easily do this with numpy like
count, bin_edges = np.histogram(out1, bins=list(range(1, 12)))
out2 = pd.DataFrame({'bin_lo': bin_edges[:-1], 'bin_hi': bin_edges[1:], 'count': count})
Here 'bin_lo' and 'bin_hi' are the lower and upper bounds of the bins.
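For the example dataframe above, out1 holds 8.5 and 5.2, so out2 comes out matching the required counts:
   bin_lo  bin_hi  count
0       1       2      0
1       2       3      0
2       3       4      0
3       4       5      0
4       5       6      1
5       6       7      0
6       7       8      0
7       8       9      1
8       9      10      0
9      10      11      0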

Pandas - Least frequent value in column

I have a Pandas series of integers, win. I want the variables most_common and least_common to hold the most and least frequent values in the column. For example, with the following numbers, I would want most_common to be 2 and least_common to be 1. If there is a tie (either way), it can be broken arbitrarily.
0 1 2 2 2 0 0 2 2 0
I can find most_common using the following code:
win.mode()[0]
How can I find the least common? I tried the following code, but it did not work, and in any case I was not sure this was the best way to go about it:
lowest = valid_loss.value_counts().tail(1)[0]
I think you need the last index value for the lowest count, and the first index value for the top count:
valid_loss = pd.Series([0, 1, 2, 2, 2, 0, 0, 2, 2, 0])
s = valid_loss.value_counts()
print(s)
2    5
0    4
1    1
dtype: int64
highest = s.index[0]
print(highest)
2
lowest = s.index[-1]
print(lowest)
1
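Equivalently, since value_counts already sorts by frequency, idxmax and idxmin on the counts spell the intent out directly; a small sketch:
s = valid_loss.value_counts()
highest = s.idxmax()  # the value with the largest count
lowest = s.idxmin()   # the value with the smallest count
If several values share the top or bottom count, whichever comes first is returned, which matches the "broken arbitrarily" requirement.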
