I convert my dataframe values to str, but when I concatenate them together, values that were ints come out with a trailing '.0'.
df["newcol"] = df['columna'].map(str) + '_' + df['columnb'].map(str) + '_' + df['columnc'].map(str)
This is giving me output like
500.0. How can I get rid of this trailing decimal? Sometimes my data in columna will have non-alphanumeric characters.
+---------+---------+---------+------------------+----------------------+
| columna | columnb | columnc | expected | currently getting |
+---------+---------+---------+------------------+----------------------+
| | -1 | 27 | _-1_27 | _-1.0_27.0 |
| | -1 | 42 | _-1_42 | _-1.0_42.0 |
| | -1 | 67 | _-1_67 | _-1.0_67.0 |
| | -1 | 95 | _-1_95 | _-1.0_95.0 |
| 91_CCMS | 14638 | 91 | 91_CCMS_14638_91 | 91_CCMS_14638.0_91.0 |
| DIP96 | 1502 | 96 | DIP96_1502_96 | DIP96_1502.0_96.0 |
| 106 | 11694 | 106 | 106_11694_106 | 00106_11694.0_106.0 |
+---------+---------+---------+------------------+----------------------+
Error:
invalid literal for int() with base 10: ''
Edit:
If your df has more than 3 columns and you want to join only 3 of them, you can select those columns by slicing. Assume your df has 5 columns named AA, BB, CC, DD, EE, and you want to join only CC, DD, EE. Just select those 3 columns before the fillna and assign the result to newcol:
df["newcol"] = df[['CC', 'DD', 'EE']].fillna('') \
.applymap(lambda x: x if isinstance(x, str) else str(int(x))).agg('_'.join, axis=1)
Note: I just broke the command into 2 lines using '\' for easier reading.
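For illustration, here is a minimal runnable sketch of that slicing on a made-up 5-column frame (the values below are assumptions, not your data):
import numpy as np
import pandas as pd

df = pd.DataFrame({'AA': [1, 2], 'BB': [3, 4],
                   'CC': ['91_CCMS', np.nan],
                   'DD': [14638.0, -1.0],
                   'EE': [91.0, 27.0]})
df["newcol"] = df[['CC', 'DD', 'EE']].fillna('') \
    .applymap(lambda x: x if isinstance(x, str) else str(int(x))).agg('_'.join, axis=1)
print(df['newcol'].tolist())  # ['91_CCMS_14638_91', '_-1_27']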
Original:
I guess your real data in columna, columnb, and columnc contains str, float, int, empty strings, blanks, and maybe even NaN.
A float whose decimal part is .0, stored in a column of dtype object, can display without the decimal, which hides the problem until str() renders the trailing '.0'.
Assume your df has only the 3 columns you mentioned: columna, columnb, columnc. The command below handles str, float, int, and NaN, and joins the 3 columns into one as you want:
df.fillna('').applymap(lambda x: x if isinstance(x, str) else str(int(x))).agg('_'.join, axis=1)
I created a sample similar to yours:
  columna columnb columnc
0              -1      27
1     NaN      -1      42
2              -1      67
3              -1      95
4 91_CCMS   14638      91
5   DIP96              96
6     106   11694     106
Using your command returns the concatenated strings with '.0', as you described:
df['columna'].map(str) + '_' + df['columnb'].map(str) + '_' + df['columnc'].map(str)
Out[1926]:
0 _-1.0_27.0
1 nan_-1.0_42.0
2 _-1.0_67.0
3 _-1.0_95.0
4 91_CCMS_14638_91
5 DIP96__96
6 106_11694_106
dtype: object
Using my command:
df.fillna('').applymap(lambda x: x if isinstance(x, str) else str(int(x))).agg('_'.join, axis=1)
Out[1927]:
0 _-1_27
1 _-1_42
2 _-1_67
3 _-1_95
4 91_CCMS_14638_91
5 DIP96__96
6 106_11694_106
dtype: object
I couldn't reproduce your error, but the message invalid literal for int() with base 10: '' suggests int() is being called on an empty string. You could guard against blanks and NaN explicitly, for example:
to_str = lambda x: '' if pd.isna(x) or x == '' else (x if isinstance(x, str) else str(int(x)))
df["newcol"] = df['columna'].map(to_str) + '_' + df['columnb'].map(to_str) + '_' + df['columnc'].map(to_str)
Related
I have a dataframe in dynamic format for each ID
df:
ID |Start Date|End date |claim_no|claim_type|Admission_date|Discharge_date|Claim_amt|Approved_amt
10 |01-Apr-20 |31-Mar-21| 1123 |CSHLESS | 23-Aug-2020 | 25-Aug-2020 | 25406 | 19351
10 |01-Apr-20 |31-Mar-21| 1212 |POSTHOSP | 30-Aug-2020 | 01-Sep-2020 | 4209 | 3964
10 |01-Apr-20 |31-Mar-21| 1680 |CSHLESS | 18-Mar-2021 | 23-Mar-2021 | 18002 | 0
11 |12-Dec-20 |11-Dec-21| 1503 |CSHLESS | 12-Jan-2021 | 15-Jan-2021 | 76137 | 50286
11 |12-Dec-20 |11-Dec-21| 1505 |CSHLESS | 05-Jan-2021 | 07-Jan-2021 | 30000 | 0
Based on the ID column, I am trying to convert all the dynamic variables into a static format so that I have a single row for each ID.
Columns such as ID, Start Date, End date are static in nature, and the rest of the columns are dynamic for each ID.
In order to achieve the output below:
ID |Start Date|End date |claim_no_1|claim_type_1|Admission_date_1|Discharge_date_1|Claim_amt_1|Approved_amt_1|claim_no_2|claim_type_2|Admission_date_2|Discharge_date_2|Claim_amt_2|Approved_amt_2|claim_no_3|claim_type_3|Admission_date_3|Discharge_date_3|Claim_amt_3|Approved_amt_3
10 |01-Apr-20 |31-Mar-21| 1123 |CSHLESS | 23-Aug-2020 | 25-Aug-2020 | 25406 | 19351 | 1212 |POSTHOSP | 30-Aug-2020 | 01-Sep-2020 | 4209 | 3964 | 1680 |CSHLESS | 18-Mar-2021 | 23-Mar-2021 | 18002 | 0
I am using the below code:
# Index columns
idx = ['ID', 'Start Date', 'End date']
# Sequential counter to identify unique rows per index columns
cols = df.groupby(idx).cumcount() + 1
# Reshape using stack and unstack
df_out = df.set_index([*idx, cols]).stack().unstack([-2, -1])
# Flatten the multiindex columns
df_out.columns = df_out.columns.map('{0[1]}_{0[0]}'.format)
but it throws a ValueError: Unstacked DataFrame is too big, causing int32 overflow
Try this:
# Index columns (very similar to your code)
idx = ['ID', 'Start Date', 'End date']
# Sequential counter to identify unique rows per index columns
df['nrow'] = df.groupby(idx)['claim_no'].transform('rank')
df['nrow'] = df['nrow'].astype(int).astype(str)
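Note: transform('rank') assumes claim_no is unique within each group; with ties it produces averaged ranks like 1.5, in which case the sequential counter from your own code, df.groupby(idx).cumcount() + 1, is the safer choice.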
Use melt and pivot instead of stack & unstack; these functions give you better control over the columns:
df1 = pd.melt(df, id_vars=['nrow', *idx],
              value_vars=['claim_no', 'claim_type', 'Admission_date',
                          'Discharge_date', 'Claim_amt', 'Approved_amt'],
              value_name='var')
df2 = df1.pivot(index=[*idx], columns=['variable', 'nrow'], values='var')
df2.columns = ['_'.join(col).rstrip('_') for col in df2.columns.values]
print(df2)
claim_no_1 claim_no_2 claim_no_3 claim_type_1 claim_type_2 claim_type_3 Admission_date_1 Admission_date_2 Admission_date_3 Discharge_date_1 Discharge_date_2 Discharge_date_3 Claim_amt_1 Claim_amt_2 Claim_amt_3 Approved_amt_1 Approved_amt_2 Approved_amt_3
ID Start Date End date
10 01-Apr-20 31-Mar-21 1123 1212 1680 CSHLESS POSTHOSP CSHLESS 23-Aug-2020 30-Aug-2020 18-Mar-2021 25-Aug-2020 01-Sep-2020 23-Mar-2021 25406 4209 18002 19351 3964 0
11 12-Dec-20 11-Dec-21 1503 1505 NaN CSHLESS CSHLESS NaN 12-Jan-2021 05-Jan-2021 NaN 15-Jan-2021 07-Jan-2021 NaN 76137 30000 NaN 50286 0 NaN
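If you want ID, Start Date, and End date back as regular columns (as in your expected output) rather than as the index, reset the index at the end:
df2 = df2.reset_index()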
I need to create new columns, Billing and Non-Billing, based on the Billable column. If Billable is 'Yes' the amount should go into the Billing column; if it is 'No', into the Non-Billing column. The calculation should be along the row axis.
Calculation for Billing in a row:
Billing = (sum of the row's skill hours) / 168 * 100
Calculation for Non-Billing in a row:
Non-Billing = (sum of the row's skill hours) / 168 * 100
Data
| Employee Name | Java | Python | .Net | React | Billable |
|---------------|------|--------|------|-------|----------|
| Priya         | 10   |        | 5    |       | Yes      |
| Krithi        |      | 10     | 20   |       | No       |
| Surthi        |      | 5      |      |       | yes      |
| Meena         |      | 20     |      | 10    | No       |
| Manju         | 20   | 10     | 10   |       | Yes      |
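For example, Priya's skill hours sum to 10 + 5 = 15, so her Billing value should be 15 / 168 * 100 ≈ 8.93.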
I have tried using an insert statement, but I cannot keep on inserting column by column. I tried append too, but it is not working.
Bill_amt = []
Non_Bill_amt = []
for i in df['Billable']:
    if i == "Yes" or i == None:
        Bill_amt = (df[Bill_amt].sum(axis=1) / 168 * 100).round(2)
        df.insert(len(df.columns), column='Billable Amount', value=Bill_amt)  # inserting the column and its name
        # CANNOT INSERT ROW AFTER IT AND CANNOT APPEND IT TOO
    else:
        Non_Bill_amt = (df[Non_Bill_amt].sum(axis=1) / 168 * 100).round(2)
        df.insert(len(df.columns), column='Non Billable Amount', value=Non_Bill_amt)  # inserting the column and its name
        # CANNOT INSERT ROW AFTER IT.
Use .sum(axis=1) and then np.where() to put the values in respective columns. For example:
import numpy as np

x = df.loc[:, "Java":"React"].sum(axis=1) / 168 * 100
df["Bill"] = np.where(df["Billable"].str.lower() == "yes", x, "")
df["Non_Bill"] = np.where(df["Billable"].str.lower() == "no", x, "")
print(df)
Prints:
  Employee_Name  Java  Python  .Net  React Billable                Bill            Non_Bill
0         Priya  10.0     NaN   5.0    NaN      Yes   8.928571428571429
1        Krithi   NaN    10.0  20.0    NaN       No                       17.857142857142858
2        Surthi   NaN     5.0   NaN    NaN      yes   2.976190476190476
3         Meena   NaN    20.0   NaN   10.0       No                       17.857142857142858
4         Manju  20.0    10.0  10.0    NaN      Yes  23.809523809523807
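One design note: filling the other column with "" makes Bill and Non_Bill object (string) columns. If you need them to stay numeric, a sketch of the same idea with NaN for the empty cells:
df["Bill"] = np.where(df["Billable"].str.lower() == "yes", x, np.nan)
df["Non_Bill"] = np.where(df["Billable"].str.lower() == "no", x, np.nan)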
I have a df like this:
A | B | C | D
14 | 5 | 10 | 5
4 | 7 | 15 | 6
100 | 220 | 6 | 7
For each row, I want to find the max value across columns A, B, C, subtract column D from it, and replace the max with the result.
Expected result:
A | B | C | D
9 | 5 | 10 | 5
4 | 7 | 9 | 6
100 | 213 | 6 | 7
So for the first row, it would select 14 (the max of 14, 5, 10), subtract column D from it (14 - 5 = 9), and replace the initial value 14 with the result 9.
I know how to find the max value of A, B, C and subtract D from it, but I am stuck on the replacing part.
I thought of putting the result in another column E, then finding the max of A, B, C again and replacing it with column E, but that makes no sense since I would be attempting to assign a value to a function call. Is there any other option to do this?
# Example df
import pandas as pd

list_columns = ['A', 'B', 'C', 'D']
list_data = [[14, 5, 10, 5], [4, 7, 15, 6], [100, 220, 6, 7]]
df = pd.DataFrame(columns=list_columns, data=list_data)

# Calculate the max and subtract
df['e'] = df[['A', 'B', 'C']].max(axis=1) - df['D']

# To replace, maybe something like this. But this line makes no sense since it's backwards
df[['A', 'B', 'C']].max(axis=1) = df['D']
Use DataFrame.mask to replace only the maximal values: compare every value in the filtered columns against the row-wise maxima and substitute where they match:
cols = ['A', 'B', 'C']
s = df[cols].max(axis=1)
df[cols] = df[cols].mask(df[cols].eq(s, axis=0), s - df['D'], axis=0)
print (df)
A B C D
0 9 5 10 5
1 4 7 9 6
2 100 213 6 7
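To see which cells get replaced, print the boolean mask (computed before the replacement) that marks each row's maximum; the positions that are True receive s - df['D']:
print(df[cols].eq(s, axis=0))
#        A      B      C
# 0   True  False  False
# 1  False  False   True
# 2  False   True  False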
I have this DataFrame column
df:

Index  Ticket
0      254326
1      CA345
3      SA12
4      267891
...
700    CA356
It contains two kinds of values. Some are pure numbers and others are strings having letters and numbers.
Many rows have the same letters (CA345, CA675, etc.). I would like to group the rows that share the same letters and label them with the same number.
E.g. all rows containing "CA" are labelled 0, and all rows containing "SA" are labelled 1.
The remaining rows all have six-digit numbers (no letters in them). I would like to label all such rows with one common number as well (say 2, for example).
1st Approach
Define a custom function that checks whether the value is a str instance (isinstance(val, str)) and whether it contains "CA" or "SA":
def label_ticket(row):
    if isinstance(row['Ticket'], str) and 'CA' in row['Ticket']:
        return 0
    if isinstance(row['Ticket'], str) and 'SA' in row['Ticket']:
        return 1
    return 2
Apply the custom function to create the new column df['Label']:
df['Label'] = df.apply(label_ticket, axis=1)
print(df)
Ticket Label
0 254326 2
1 CA345 0
2 SA12 1
3 267891 2
700 CA356 0
2nd Approach
Further, it seems you don't know in advance which patterns will come up in df['Ticket']. In this case you can use re.split() to extract each string's letter pattern and classify it into a category accordingly.
import pandas as pd
import re
df = pd.DataFrame(columns=['Ticket'],
                  data=[[254326],
                        ['CA345'],
                        ['SA12'],
                        [267891],
                        ['CA356']])
df['Pattern'] = df['Ticket'].apply(lambda x: ''.join(re.split("[^a-zA-Z]*", str(x))))
df_label = pd.DataFrame(df['Pattern'].unique(), columns=['Pattern']).reset_index(level=0).rename(columns={'index': 'Label'})
df = df.merge(df_label, how='left')
print(df)
   Ticket Pattern  Label
0  254326              0
1   CA345      CA      1
2    SA12      SA      2
3  267891              0
4   CA356      CA      1
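For reference, the same labels can be produced without apply, using pd.factorize on the extracted letter pattern (a vectorized sketch of the approach above, assuming the same df):
# Strip everything but letters, then factorize the patterns into integer labels
df['Label'] = pd.factorize(df['Ticket'].astype(str).str.replace(r'[^a-zA-Z]', '', regex=True))[0]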
I don't have enough knowledge of Python, but you may try pandas.Series.str.extract with a regular expression, like:
import pandas as pd

ptrn = r'(?P<CA>(CA[\d]+))|(?P<SA>(SA[\d]+))|(?P<DIGIT>[\d]{6})'
ls = {'tk': ['254326', 'CA345', 'SA12', '267891', 'CA356']}
df = pd.DataFrame(ls)
s = df['tk'].str.extract(ptrn, expand=False)
newDf = {0: [x for x in s['CA'] if pd.notnull(x)],
         1: [x for x in s['SA'] if pd.notnull(x)],
         2: [x for x in s['DIGIT'] if pd.notnull(x)]}
print(newDf)
Output:
{0: ['CA345', 'CA356'], 1: ['SA12'], 2: ['254326', '267891']}
I have a dictionary which has names as keys and numbers as values. I want to find the pair of dictionary values that are closest to each other. All values represent a cell in an imaginary 5x5 grid, so I want to check which 2 values are closest to each other in the grid.
Ex.
my_dict = {'Mark': 2, 'Luke': 6, 'Ferdinand': 10, 'Martin': 20, 'Marvin': 22}
I would want to get Martin and Marvin, because their values are closest to each other.
This will work for a dictionary of any size and get you the pair with the smallest value difference. It uses itertools to go through all combinations.
from itertools import combinations

my_dict = {'Mark': 2, 'Luke': 6, 'Ferdinand': 7, 'Martin': 20, 'Marvin': 22}

for value in combinations(my_dict.items(), 2):
    current_diff = abs(value[0][1] - value[1][1])
    pair_of_interest = (value[0][0], value[1][0])
    try:
        if current_diff < difference:
            difference = current_diff
            pair = pair_of_interest
    except NameError:  # first iteration: 'difference' is not defined yet
        difference = current_diff
        pair = pair_of_interest
print("{0} and {1} have the smallest distance of {2}".format(pair[0], pair[1], difference))
I assume the values in the dictionary map onto a 5x5 grid like this:
+----+----+----+----+----+---> x
| 1 | 2 | 3 | 4 | 5 |
+----+----+----+----+----+
| 6 | 7 | 8 | 9 | 10 |
+----+----+----+----+----+
| 11 | 12 | 13 | 14 | 15 |
+----+----+----+----+----+
| 16 | 17 | 18 | 19 | 20 |
+----+----+----+----+----+
| 21 | 22 | 23 | 24 | 25 |
+----+----+----+----+----+
|
v
y
i.e.:
1 => (y,x)=(0,0)
2 => (y,x)=(0,1)
...
24 => (y,x)=(4,3)
25 => (y,x)=(4,4)
source code:
import itertools

my_dict = {'Mark': 2, 'Luke': 6, 'Ferdinand': 10, 'Martin': 20, 'Marvin': 22}

# Map a cell value 1..25 to (y, x) coordinates on the 5x5 grid, with 1 at (0, 0).
val2vec = lambda v: ((v - 1) // 5, (v - 1) % 5)
name2vec = lambda name: val2vec(my_dict[name])
# Squared Euclidean distance. Use math.sqrt if you want the real distance.
vec2dis2 = lambda vec1, vec2: (vec2[0] - vec1[0])**2 + (vec2[1] - vec1[1])**2

for dis2, grp in sorted((vec2dis2(name2vec(name1), name2vec(name2)), (name1, name2))
                        for name1, name2 in itertools.combinations(my_dict, 2)):
    print(str(grp).ljust(30), "distance^2 =", dis2)
output:
('Mark', 'Luke')               distance^2 = 2
('Ferdinand', 'Martin')        distance^2 = 4
('Luke', 'Marvin')             distance^2 = 10
('Mark', 'Ferdinand')          distance^2 = 10
('Martin', 'Marvin')           distance^2 = 10
('Luke', 'Ferdinand')          distance^2 = 16
('Mark', 'Marvin')             distance^2 = 16
('Ferdinand', 'Marvin')        distance^2 = 18
('Mark', 'Martin')             distance^2 = 18
('Luke', 'Martin')             distance^2 = 20