How to change values in a DataFrame at indices contained in multiple lists, one list per column - python-3.x

I have lists holding the indices of the values to be replaced. I have to change them in 8 different columns, with 8 different lists (one list per column). The replacement can be a simple string.
How can I do it?
I have more than 20 different columns in this df.
Eg:
list1 = [0, 1, 2]
list2 = [2, 4]
list8 = ...
sustitution = 'no data'
| Column A | Column B |
| -------- | -------- |
| marcos | peter |
| Julila | mike |
| Fran | Ramon |
| Pedri | Gavi |
| Olmo | Torres |
OUTPUT:
| Column A | Column B |
| -------- | -------- |
| no data | peter |
| no data | mike |
| no data | no data |
| Pedri | Gavi |
| Olmo | no data |
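For reference, here is a minimal sketch that builds the example DataFrame above (values transcribed from the question):
import pandas as pd

df = pd.DataFrame({'Column A': ['marcos', 'Julila', 'Fran', 'Pedri', 'Olmo'],
                   'Column B': ['peter', 'mike', 'Ramon', 'Gavi', 'Torres']})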

Use DataFrame.loc with zipped lists and column names:
list1 = [0, 1, 2]
list2 = [2, 4]
L = [list1, list2]
cols = ['Column A', 'Column B']
sustitution = 'no data'

for c, i in zip(cols, L):
    df.loc[i, c] = sustitution

print(df)
Column A Column B
0 no data peter
1 no data mike
2 no data no data
3 Pedri Gavi
4 Olmo no data
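With 8 lists and 20+ columns, a dict mapping each column name to its index list keeps the pairing explicit; a sketch (the extra column names are assumed):
mapping = {'Column A': list1,
           'Column B': list2,
           # ..., 'Column H': list8
           }
for c, i in mapping.items():
    df.loc[i, c] = sustitution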

You can use the underlying numpy array:
import numpy as np

list1 = [0, 1, 2]
list2 = [2, 4]
lists = [list1, list2]

# one column index per target cell, repeated to match each list's length
col = np.repeat(np.arange(len(lists)), list(map(len, lists)))
# array([0, 0, 0, 1, 1])
row = np.concatenate(lists)
# array([0, 1, 2, 2, 4])

df.values[row, col] = 'no data'
Output:
Column A Column B
0 no data peter
1 no data mike
2 no data no data
3 Pedri Gavi
4 Olmo no data
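One caveat: writing through df.values only sticks when the DataFrame is backed by a single array (for example, all object dtype); with mixed dtypes .values is a copy and the assignment is lost. A minimal sketch that sidesteps this by building a boolean mask for DataFrame.mask:
import numpy as np

mask = np.zeros(df.shape, dtype=bool)
mask[row, col] = True          # same row/col arrays as above
df = df.mask(mask, 'no data')  # replaces only the flagged cells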

Related

Pandas find max column, subtract from another column and replace the value

I have a df like this:
| A | B | C | D |
| --- | --- | --- | --- |
| 14 | 5 | 10 | 5 |
| 4 | 7 | 15 | 6 |
| 100 | 220 | 6 | 7 |
For each row, I want to find the max value across columns A, B, C, subtract column D from it, and replace it in place.
Expected result:
| A | B | C | D |
| --- | --- | --- | --- |
| 9 | 5 | 10 | 5 |
| 4 | 7 | 9 | 6 |
| 100 | 213 | 6 | 7 |
So for the first row, it would select 14 (the max out of 14, 5, 10), subtract column D from it (14 - 5 = 9), and replace it with the result (replace the initial value 14 with 9).
I know how to find the max value of A, B, C and subtract D from it, but I am stuck on the replacing part.
I thought of putting the result in another column called E, then finding the max of A, B, C again and replacing it with column E, but that would make no sense, since I would be attempting to assign a value to a function call. Is there any other option to do this?
# Example df
list_columns = ['A', 'B', 'C', 'D']
list_data = [[14, 5, 10, 5], [4, 7, 15, 6], [100, 220, 6, 7]]
df = pd.DataFrame(columns=list_columns, data=list_data)

# Calculate the max and subtract
df['e'] = df[['A', 'B', 'C']].max(axis=1) - df['D']

# To replace, maybe something like this. But this line makes no sense, since it's backwards
df[['A', 'B', 'C']].max(axis=1) = df['D']
Use DataFrame.mask to replace only the maximal values, comparing every value in the filtered columns against the row-wise maxima:
cols = ['A', 'B', 'C']
s = df[cols].max(axis=1)
df[cols] = df[cols].mask(df[cols].eq(s, axis=0), s - df['D'], axis=0)
print(df)
A B C D
0 9 5 10 5
1 4 7 9 6
2 100 213 6 7
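A numpy-based variant of the same idea, sketched with argmax; note that, unlike the mask approach, it updates only the first maximum in a row if there are ties:
import numpy as np

cols = ['A', 'B', 'C']
arr = df[cols].to_numpy()
rows = np.arange(len(df))
jmax = arr.argmax(axis=1)              # column position of each row's max
arr[rows, jmax] -= df['D'].to_numpy()  # subtract D from that max in place
df[cols] = arr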

Pandas: With array of col names in a desired column order, select those that exist, NULL those that don't

I have an array of column names that I want in my output table, in that order, e.g. ["A", "B", "C"].
I have an input table that USUALLY contains all of the values in the array but NOT ALWAYS (the raw data is a JSON API response).
I want to select all available columns from the input table, and if a column does not exist, I want it filled with NULLs or NA or whatever, it doesn't really matter.
Let's say my input DataFrame (call it input_table) looks like this:
+-----+--------------+
| A | C |
+-----+--------------+
| 123 | test |
| 456 | another_test |
+-----+--------------+
I want an output dataframe that has columns A, B, C in that order to produce
+-----+------+--------------+
| A | B | C |
+-----+------+--------------+
| 123 | NULL | test |
| 456 | NULL | another_test |
+-----+------+--------------+
I get a KeyError when I do input_table[["A","B","C"]].
I get None back when I do input_table.get(["A","B","C"]).
I was able to achieve what I want via:
for i in desired_columns_array:
    if i not in input_dataframe:
        output_dataframe[i] = ""
    else:
        output_dataframe[i] = input_dataframe[i]
But I'm wondering if there's something less verbose?
How do I get a desired output schema to match an input array when one or more columns in the input dataframe may not be present?
Transpose and reindex:
df = pd.DataFrame([[123, 'test'], [456, 'another test']], columns=list('AC'))
l = list('ACB')
df1 = df.T.reindex(l).T[sorted(l)]
A B C
0 123 NaN test
1 456 NaN another test
DataFrame.reindex over the column axis:
cols = ['A', 'B', 'C']
df.reindex(cols, axis='columns')
A B C
0 123 NaN test
1 456 NaN another_test
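If you prefer a placeholder other than NaN, reindex also accepts a fill_value; the 'NULL' string here is just an assumed stand-in:
df.reindex(cols, axis='columns', fill_value='NULL')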

How to extract max length row with pandas

I would like to extract the row that contains the longest string in a DataFrame.
In the following case, I would like to get the id 2 row, because its B column holds the longest value, bbbbbb (length 6).
| id | A | B |
| --- | --- | --- |
| 1 | abc | aaa |
| 2 | abb | bbbbbb |
| 3 | aadd | cccc |
| 4 | aadc | ddddd |
Expected output:
| id | A | B |
| --- | --- | --- |
| 2 | abb | bbbbbb |
Please give me some advice. Thanks.
Let's first create the DataFrame with your example:
import pandas as pd

data = {
    "id": {0: 1, 1: 2, 2: 3, 3: 4},
    "A": {0: "abc", 1: "abb", 2: "aadd", 3: "aadc"},
    "B": {0: "aaa", 1: "bbbbbb", 2: "cccc", 3: "ddddd"},
}
df = pd.DataFrame(data)
Then you can find the row where B is longest and retrieve it with:
# Index where B is longest
idx = df["B"].apply(len).idxmax()
# Get that row
df.iloc[idx, :]
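One caveat: idxmax returns an index label, not a position, so df.loc is the safer lookup when the index is not the default RangeIndex; a sketch:
idx = df["B"].apply(len).idxmax()  # label of the row with the longest B
df.loc[[idx]]                      # double brackets keep a one-row DataFrame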
Get all object-dtype columns (typically strings) with DataFrame.select_dtypes, take the maximum string length per row, and finally filter with boolean indexing to keep every row that matches the overall maximal length:
s = df.select_dtypes(object).apply(lambda x: x.str.len()).max(axis=1)
# if no missing values:
# s = df.select_dtypes(object).applymap(len).max(axis=1)
df1 = df[s.eq(s.max())]
print(df1)
   id    A       B
1   2  abb  bbbbbb
Another idea, for the first match only, with Series.idxmax and DataFrame.loc; the added [] returns a one-row DataFrame:
df1 = df.loc[[df.select_dtypes(object).apply(lambda x: x.str.len()).max(axis=1).idxmax()]]
# if no missing values:
# df1 = df.loc[[df.select_dtypes(object).applymap(len).max(axis=1).idxmax()]]
print(df1)
   id    A       B
1   2  abb  bbbbbb
First find the maximal length within each row, then the index of the row with the overall maximum:
df.loc[df[['A', 'B']].apply(lambda x: x.str.len().max(), axis=1).idxmax()]

How to label encode a DataFrame column which contains both numbers and strings?

I have this DataFrame column
df:
Index  Ticket
0      254326
1      CA345
3      SA12
4      267891
...
700    CA356
It contains two kinds of values. Some are pure numbers and others are strings having letters and numbers.
Many rows have the same letters (CA345, CA675 etc). I would like to group and label the rows with same letters with the same numbers.
Eg. All rows having "CA" labelled as 0, all rows having "SA" labelled as 1.
Remaining rows all have six digit numbers (no letters in them). I would like to label all such rows with the same number (say 2 for example)
1st Approach
Define a custom function that checks whether the value is a string with isinstance(val, str) and whether it contains "CA" or "SA":
def label_ticket(row):
    if isinstance(row['Ticket'], str) and 'CA' in row['Ticket']:
        return 0
    if isinstance(row['Ticket'], str) and 'SA' in row['Ticket']:
        return 1
    return 2
Apply the custom function to a new column df['Label']:
df['Label'] = df.apply(label_ticket, axis=1)
print(df)
Ticket Label
0 254326 2
1 CA345 0
2 SA12 1
3 267891 2
700 CA356 0
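A vectorized sketch of the same labelling with numpy.select, assuming (as above) that everything without "CA" or "SA" is a six-digit ticket:
import numpy as np

s = df['Ticket'].astype(str)
df['Label'] = np.select([s.str.contains('CA'), s.str.contains('SA')],
                        [0, 1], default=2)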
2nd Approach
If you have no idea in advance which patterns will come up in df['Ticket'], you can use re.split() to extract the letter part of each value and classify it into a category accordingly.
import pandas as pd
import re

df = pd.DataFrame(columns=['Ticket'],
                  data=[[254326],
                        ['CA345'],
                        ['SA12'],
                        [267891],
                        ['CA356']])

# keep only the letters of each ticket as its pattern
df['Pattern'] = df['Ticket'].apply(lambda x: ''.join(re.split("[^a-zA-Z]*", str(x))))

# map each unique pattern to an integer label and merge it back
df_label = (pd.DataFrame(df['Pattern'].unique(), columns=['Pattern'])
            .reset_index(level=0)
            .rename(columns={'index': 'Label'}))
df = df.merge(df_label, how='left')
print(df)
Ticket Pattern Label
0 254326 0
1 CA345 CA 1
2 SA12 SA 2
3 267891 0
4 CA356 CA 1
I don't have deep knowledge of Python, but you may try pandas.Series.str.extract with a regular expression, like:
import pandas as pd

ptrn = r'(?P<CA>(CA[\d]+))|(?P<SA>(SA[\d]+))|(?P<DIGIT>[\d]{6})'

ls = {'tk': ['254326', 'CA345', 'SA12', '267891', 'CA356']}
df = pd.DataFrame(ls)

s = df['tk'].str.extract(ptrn, expand=False)
newDf = {0: [x for x in s['CA'] if not pd.isnull(x)],
         1: [x for x in s['SA'] if not pd.isnull(x)],
         2: [x for x in s['DIGIT'] if not pd.isnull(x)]}
print(newDf)
Output:
{0: ['CA345', 'CA356'], 1: ['SA12'], 2: ['254326', '267891']}

Populating a pandas dataframe from an odd dictionary

I have a dictionary as follows:
{'header_1': ['body_1', 'body_3', 'body_2'],
'header_2': ['body_6', 'body_4', 'body_5'],
'header_4': ['body_7', 'body_8'],
'header_3': ['body_9'],
'header_9': ['body_10'],
'header_10': []}
I would like to come up with a dataframe like this:
+----+----------+--------+
| ID | header | body |
+----+----------+--------+
| 1 | header_1 | body_1 |
+----+----------+--------+
| 2 | header_1 | body_3 |
+----+----------+--------+
| 3 | header_1 | body_2 |
+----+----------+--------+
| 4 | header_2 | body_6 |
+----+----------+--------+
| 5 | header_2 | body_4 |
+----+----------+--------+
| 6 | header_2 | body_5 |
+----+----------+--------+
| 7 | header_4 | body_7 |
+----+----------+--------+
Where blank items (such as for the key header_10 in the dict above) would receive a value of None. I have tried a number of varieties for df.loc such as:
for header_name, body_list in all_unique.items():
    for body_name in body_list:
        metadata.loc[metadata.index[-1]] = [header_name, body_name]
To no avail. Surely there must be a quick way in Pandas to append rows and autoincrement the index? Something similar to the SQL INSERT INTO statement only using pythonic code?
Use a dict comprehension to substitute [None] for empty lists, then flatten to a list of tuples:
d = {'header_1': ['body_1', 'body_3', 'body_2'],
     'header_2': ['body_6', 'body_4', 'body_5'],
     'header_4': ['body_7', 'body_8'],
     'header_3': ['body_9'],
     'header_9': ['body_10'],
     'header_10': []}

d = {k: v if bool(v) else [None] for k, v in d.items()}
data = [(k, y) for k, v in d.items() for y in v]

df = pd.DataFrame(data, columns=['a', 'b'])
print(df)
a b
0 header_1 body_1
1 header_1 body_3
2 header_1 body_2
3 header_2 body_6
4 header_2 body_4
5 header_2 body_5
6 header_4 body_7
7 header_4 body_8
8 header_3 body_9
9 header_9 body_10
10 header_10 None
Another solution:
data = []
for k, v in d.items():
    if bool(v):
        for y in v:
            data.append((k, y))
    else:
        data.append((k, None))

df = pd.DataFrame(data, columns=['a', 'b'])
print(df)
a b
0 header_1 body_1
1 header_1 body_3
2 header_1 body_2
3 header_2 body_6
4 header_2 body_4
5 header_2 body_5
6 header_4 body_7
7 header_4 body_8
8 header_3 body_9
9 header_9 body_10
10 header_10 None
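If your pandas is 0.25 or newer, Series.explode collapses this to two lines; note that empty lists become NaN rather than None. A minimal sketch:
s = pd.Series(d).explode()  # empty lists become NaN automatically
df = s.rename_axis('a').reset_index(name='b')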
If the dataset is too big, this solution would be slow, but it should still work.
for key in data.keys():
    vals = data[key]
    # Create a temp df with the data from a single key
    t_df = pd.DataFrame({'header': [key] * len(vals), 'body': vals})
    # Append it to your full dataframe
    df = df.append(t_df)
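Worth noting: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so collecting the pieces and concatenating once is the modern (and faster) pattern. A sketch that also keeps empty keys such as header_10 as a None row:
import pandas as pd

pieces = [pd.DataFrame({'header': [k] * max(len(v), 1), 'body': v or [None]})
          for k, v in d.items()]
df = pd.concat(pieces, ignore_index=True)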
This is another unnesting problem. Borrow Jez's setup for your d:
d = {k: v if bool(v) else [None] for k, v in d.items()}
First convert your dict into a DataFrame:
df = pd.Series(d).reset_index()
df.columns
Out[204]: Index(['index', 0], dtype='object')
Then use the unnesting function defined below:
yourdf = unnesting(df, [0])
yourdf
Out[208]:
0 index
0 body_1 header_1
0 body_3 header_1
0 body_2 header_1
1 body_6 header_2
1 body_4 header_2
1 body_5 header_2
2 body_7 header_4
2 body_8 header_4
3 body_9 header_3
4 body_10 header_9
5 None header_10
import numpy as np

def unnesting(df, explode):
    idx = df.index.repeat(df[explode[0]].str.len())
    df1 = pd.concat([pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
    df1.index = idx
    return df1.join(df.drop(explode, 1), how='left')
