How do I convert alphanumeric to digits while keeping actual digits in the string intact - python-3.x

I want to convert a column that has alphanumeric values to digits:
0    newyork2510          2
1    boston76w2           1
2    chicago785dw         1
3    san891dwn39210114    1
4    f2391rpg             1
so that
0 newyork2510 2
should look like
0 14523251518112510 2
and similarly for the rest of the second column.
So far I can only do the following, which only converts letters to digits:
output = []
for character in text:  # text is the alphanumeric string
    number = ord(character) - 96  # 'a' -> 1, ..., 'z' -> 26
    output.append(number)
Can you help me?

Try replace with regex=True:
# map the char to string integers
char_map = {chr(i): str(j) for j,i in enumerate(range(ord('a'), ord('z')+1), 1)}
# apply the mapping to the column
df['col1'] = df['col1'].replace(char_map, regex=True)
Output:
   col0                   col1  col2
0     0      14523251518112510     2
1     1       2151920151476232     1
2     2         38931715785423     1
3     3  191148914231439210114     1
4     4             6239118167     1
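An alternative sketch, assuming the same df and col1 names as above: str.maketrans with Series.str.translate avoids the regex machinery and is usually faster for single-character mappings.
table = str.maketrans({chr(i): str(j)
                       for j, i in enumerate(range(ord('a'), ord('z') + 1), 1)})
df['col1'] = df['col1'].str.translate(table)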

Related

Replacing str with int for all columns of a dataframe without making a dictionary for each column

Suppose I have the following dataframe:
import pandas as pd

d = {'col1': ['a','b','c','a','c','c','c','c','c','c'],
     'col2': ['a1','b1','c1','a1','c1','c1','c1','c1','c1','c1'],
     'col3': [1,2,3,2,3,3,3,3,3,3]}
data = pd.DataFrame(d)
I want to go through the categorical columns and replace the strings with integers. The usual way of doing this is:
col1 = {'a': 1, 'b': 2, 'c': 3}
data.col1 = [col1[item] for item in data.col1]
that is, make a dictionary for each categorical column and do the replacement. But if you have many columns, making a dictionary for each one is time-consuming, so I wonder if there is a better way. Also, how can I do this without a dictionary? In this example col1 has only 3 distinct values, but with many more we would have to write everything out by hand (say {'a': 1, 'b': 2, 'c': 3, ..., 'z': 26}). What is the most efficient way to go through all the categorical columns and replace the strings with numbers without building dictionaries column by column?
First get only the object columns with DataFrame.select_dtypes, then use factorize on each column via DataFrame.apply:
cols = data.select_dtypes(object).columns
data[cols] = data[cols].apply(lambda x: pd.factorize(x)[0]) + 1
print(data)
col1 col2 col3
0 1 1 1
1 2 2 2
2 3 3 3
3 1 1 2
4 3 3 3
5 3 3 3
6 3 3 3
7 3 3 3
8 3 3 3
9 3 3 3
If possible, you could avoid the apply by using a dictionary comprehension in an assign expression (I feel the dictionary is going to be more efficient; I may be wrong):
values = {col: data[col].factorize()[0] + 1
          for col in data.select_dtypes(object)}
data.assign(**values)
col1 col2 col3
0 1 1 1
1 2 2 2
2 3 3 3
3 1 1 2
4 3 3 3
5 3 3 3
6 3 3 3
7 3 3 3
8 3 3 3
9 3 3 3
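Note that factorize numbers values by order of first appearance, not alphabetically (here 'a', 'b', 'c' happen to appear in order, so the result matches the hand-made mapping). If you need codes tied to sorted labels instead, a sketch using categorical codes:
# a sketch, assuming codes should follow sorted label order ('a' -> 1, 'b' -> 2, ...)
cols = data.select_dtypes(object).columns
data[cols] = data[cols].apply(lambda x: x.astype('category').cat.codes + 1)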

groupby and trim some rows based on condition

I have a data frame something like this:
import pandas as pd

df = pd.DataFrame({"ID": [1,1,2,2,2,3,3,3,3,3],
                   "IF_car": [1,0,0,1,0,0,0,1,0,1],
                   "IF_car_history": [0,0,0,1,0,0,0,1,0,1],
                   "observation": [0,0,0,1,0,0,0,2,0,3]})
I want to trim rows within each ID group based on the condition "IF_car_history" == 1. I tried:
tried_df = df.groupby(['ID']).apply(lambda x: x.loc[:(x['IF_car_history'] == 1).idxmax(), :]).reset_index(drop=True)
I want to drop the rows in each group that come after the last IF_car_history == 1.
expected output:
Thanks
First create the mask m by comparing values with Series.eq. Because the rows after the last 1 in each group are the ones to remove, reverse the mask with [::-1] slicing so the cumulative sum runs from the bottom up, use GroupBy.cumsum, keep positions where the running sum is not 0, and reverse back before filtering with boolean indexing:
m = df['IF_car_history'].eq(1).iloc[::-1]
df1 = df[m.groupby(df['ID']).cumsum().ne(0).iloc[::-1]]
print (df1)
ID IF_car IF_car_history observation
2 2 0 0 0
3 2 1 1 1
5 3 0 0 0
6 3 0 0 0
7 3 1 1 2
8 3 0 0 0
9 3 1 1 3
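To see what the reversed cumulative sum is doing, a small sketch that prints the intermediates (same df as above):
m = df['IF_car_history'].eq(1).iloc[::-1]   # flag positions, walked bottom-up
csum = m.groupby(df['ID']).cumsum()         # 1s seen so far within each ID, from the bottom
keep = csum.ne(0).iloc[::-1]                # False only for rows after a group's last 1
print(df.assign(keep=keep))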

How to replace the values of 1s and 0s of various columns with a single column of a data frame?

The 0s and 1s need to be mapped to their appropriate headers in Python.
How can I achieve this and get the column final_list?
If there is always only one 1 per row, use DataFrame.dot; it works because 1 * 'a' is 'a' and 0 * 'a' is an empty string, so the row-wise sum concatenates exactly the names of the 1 columns:
df = pd.DataFrame({'a': [0,1,0],
                   'b': [1,0,0],
                   'c': [0,0,1]})
df['Final'] = df.dot(df.columns)
print (df)
a b c Final
0 0 1 0 b
1 1 0 0 a
2 0 0 1 c
If multiple 1s per row are possible, append a separator to the column names and then strip the trailing one from the output Series with Series.str.rstrip:
df = pd.DataFrame({'a': [0,1,0],
                   'b': [1,1,0],
                   'c': [1,1,1]})
df['Final'] = df.dot(df.columns + ',').str.rstrip(',')
print (df)
a b c Final
0 0 1 1 b,c
1 1 1 1 a,b,c
2 0 0 1 c
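For the single-1 case there is also an alternative sketch with DataFrame.idxmax, which returns the label of the first column holding the row's maximum (here the lone 1):
# assumes exactly one 1 per row, as in the first frame above
df = pd.DataFrame({'a': [0,1,0],
                   'b': [1,0,0],
                   'c': [0,0,1]})
df['Final'] = df.idxmax(axis=1)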

Pandas Flag Rows with Complementary Zeros

Given the following data frame:
import pandas as pd
df = pd.DataFrame({'A': [0,4,4,4],
                   'B': [0,4,4,0],
                   'C': [0,4,4,4],
                   'D': [4,0,0,4],
                   'E': [4,0,0,0],
                   'Name': ['a','a','b','c']})
df
A B C D E Name
0 0 0 0 4 4 a
1 4 4 4 0 0 a
2 4 4 4 0 0 b
3 4 0 4 4 0 c
I'd like to add a new field called "Match_Flag" which labels unique combinations of rows if they have complementary zero patterns (as with rows 0, 1, and 2) AND have the same name (just for rows 0 and 1). It uses the name of the rows that match.
The desired result is as follows:
A B C D E Name Match_Flag
0 0 0 0 4 4 a a
1 4 4 4 0 0 a a
2 4 4 4 0 0 b NaN
3 4 0 4 4 0 c NaN
Caveat:
The patterns may vary, but should still be complementary.
Thanks in advance!
UPDATE
Sorry for the confusion.
Here is some clarification:
The reason why rows 0 and 1 are "complementary" is that they have opposite patterns of zeros in their columns: 0,0,0,4,4 vs. 4,4,4,0,0.
The number 4 is arbitrary; it could just as easily be 0,0,0,4,2 and 65,770,23,0,0. So if two such rows are indeed complementary and they have the same name, I'd like them to be flagged with that name in the "Match_Flag" column.
You can identify a complement when the element-wise product of two rows is zero everywhere (they do not overlap) and their element-wise sum is nowhere zero (together they are complete).
import numpy as np

def complements(df):
    v = df.drop('Name', axis=1).values
    n = v.shape[0]
    row, col = np.triu_indices(n, 1)
    # ensure two rows are complete:
    # their sum contains no zeros
    c = ((v[row] + v[col]) != 0).all(1)
    complete = set(row[c]).union(col[c])
    # ensure two rows do not overlap:
    # their product is zero everywhere
    o = (v[row] * v[col] == 0).all(1)
    non_overlap = set(row[o]).union(col[o])
    # we are a complement iff we do
    # not overlap and we are complete
    complement = list(non_overlap.intersection(complete))
    # return the Name slice for the matching rows
    return df.Name.iloc[complement]
Then groupby('Name') and apply our function
df['Match_Flag'] = df.groupby('Name', group_keys=False).apply(complements)
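Checking against the example frame, the result matches the desired output above (rows 0 and 1 flagged with their shared name):
print(df)
#    A  B  C  D  E Name Match_Flag
# 0  0  0  0  4  4    a          a
# 1  4  4  4  0  0    a          a
# 2  4  4  4  0  0    b        NaN
# 3  4  0  4  4  0    c        NaN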

How to reconstruct strings in "edit_distance_problem"?

Suppose you are given the dp table for string X = "AGGGCT" and string Y = "AGGCA", where
m = length of X + 1
n = length of Y + 1
dp[m][n] =
0 1 2 3 4 5
1 0 1 2 3 4
2 1 0 1 2 3
3 2 1 0 1 2
4 3 2 1 1 2
5 4 3 2 1 2
6 5 4 3 2 2
and you want to reconstruct three strings as follows
string row1 = "AGGGCT" ;
string row2 = "||| | " ;
string row3 = "AGG-CA" ;
How do I reconstruct strings row1, row2 and row3? If possible, post code in C/C++/Java.
I think this page can be a good starting point:
http://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Levenshtein_distance#Java
You have to make a few modifications, but the core idea is to store, for each (i,j), which case was chosen for the "min", and before returning to walk back through the matrix step by step, starting from distance[str1.length()][str2.length()]. If along a diagonal step the distance is unchanged, the characters match and you show a |; if the distance increased on a diagonal step, it was a substitution; a vertical/horizontal step is a deletion/insertion.
You can store this "backwards" information in a string and later display it in reverse order.
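The question asks for C/C++/Java, but the backward walk is language-agnostic; here is a rough Python sketch of the idea, assuming dp is the filled (len(X)+1) x (len(Y)+1) table above. Ties between equally good predecessor cells can be broken in any order; checking the deletion case first happens to reproduce the alignment shown in the question:
def reconstruct(x, y, dp):
    # walk dp backwards from (len(x), len(y)) to (0, 0), collecting the
    # three display rows in reverse order
    i, j = len(x), len(y)
    row1, row2, row3 = [], [], []
    while i > 0 or j > 0:
        if i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            # vertical step: x[i-1] was deleted
            row1.append(x[i - 1]); row2.append(' '); row3.append('-')
            i -= 1
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (x[i - 1] != y[j - 1]):
            # diagonal step: match ('|') or substitution (' ')
            row1.append(x[i - 1])
            row2.append('|' if x[i - 1] == y[j - 1] else ' ')
            row3.append(y[j - 1])
            i -= 1; j -= 1
        else:
            # horizontal step: y[j-1] was inserted
            row1.append('-'); row2.append(' '); row3.append(y[j - 1])
            j -= 1
    return [''.join(reversed(r)) for r in (row1, row2, row3)]

# fill the standard Levenshtein table for the example, then print the rows
X, Y = "AGGGCT", "AGGCA"
dp = [[0] * (len(Y) + 1) for _ in range(len(X) + 1)]
for i in range(len(X) + 1):
    for j in range(len(Y) + 1):
        if i == 0 or j == 0:
            dp[i][j] = i + j
        else:
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (X[i - 1] != Y[j - 1]))

for r in reconstruct(X, Y, dp):
    print(r)   # AGGGCT / "||| | " / AGG-CA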
