Keeping columns of pandas dataframe whose substring is in the list - python-3.x

I have a dataframe with many columns. I only want to retain those columns whose name contains a substring from the list. For example, the list and dataframe are:
lst = ['col93','col71']
sample_id. col9381.3 col8371.8 col71937.9 col19993.1
1
2
3
4
Based on the substrings, the resulting dataframe will look like:
sample_id. col9381.3 col71937.9
1
2
3
4
I have code that goes through the list and filters the columns whose names match a substring, but I don't know how to build a single dataframe from the results. The code so far:
for i in lst:
    df2 = df1.filter(regex=i)
    if df2.shape[1] > 0:
        print(df2)
The above code is able to filter the columns, but I don't know how to combine all of these into one dataframe. Insights will be appreciated.

Try startswith, which accepts a tuple of options:
df.loc[:, df.columns.str.startswith(('sample_id.',)+tuple(lst))]
Or filter, which accepts a regex, as you were trying:
df.filter(regex='|'.join(['sample_id']+lst))
Output:
sample_id. col9381.3 col71937.9
0 1 NaN NaN
1 2 NaN NaN
2 3 NaN NaN
3 4 NaN NaN
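If you want to stay closer to your original loop, you can collect the per-substring filters in a list and join them side by side with pd.concat. A minimal sketch, using hypothetical data shaped like the question (only sample_id. has values):

```python
import pandas as pd

# Hypothetical frame mirroring the question's columns.
df1 = pd.DataFrame({
    "sample_id.": [1, 2, 3, 4],
    "col9381.3": [float("nan")] * 4,
    "col8371.8": [float("nan")] * 4,
    "col71937.9": [float("nan")] * 4,
    "col19993.1": [float("nan")] * 4,
})
lst = ["col93", "col71"]

# One filtered frame per substring, then concatenate along columns.
parts = [df1[["sample_id."]]] + [df1.filter(regex=i) for i in lst]
df2 = pd.concat(parts, axis=1)
print(df2.columns.tolist())  # ['sample_id.', 'col9381.3', 'col71937.9']
```

This keeps the loop-per-pattern structure while producing one combined dataframe instead of several printed ones.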

Related

How to unmerge cells and create a standard dataframe when reading excel file?

I would like to convert this dataframe
into this dataframe
So far reading excel the standard way gives me the following result.
df = pd.read_excel(folder + 'abcd.xlsx', sheet_name="Sheet1")
Unnamed: 0 Unnamed: 1 T12006 T22006 T32006 \
0 Casablanca Global 100 97.27252 93.464538
1 NaN Résidentiel 100 95.883979 92.414063
2 NaN Appartement 100 95.425152 91.674379
3 NaN Maison 100 101.463607 104.039383
4 NaN Villa 100 102.45132 101.996932
Thank you
You can try the .fillna() method with parameter method='ffill'. According to the pandas documentation, ffill means: propagate last valid observation forward to next valid.
So, your code would be like:
df.fillna(method='ffill', inplace=True)
(In recent pandas versions the method= argument is deprecated; df.ffill(inplace=True) is the equivalent spelling.)
And rename the first two columns with these lines:
df.columns.values[0] = "City"
df.columns.values[1] = "Type"
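Putting the two steps together, a small self-contained sketch (with made-up data shaped like the read_excel result; df.rename is used instead of mutating df.columns.values, which is more reliable across pandas versions):

```python
import pandas as pd
import numpy as np

# Hypothetical frame shaped like the merged-cell read_excel output.
df = pd.DataFrame({
    "Unnamed: 0": ["Casablanca", np.nan, np.nan],
    "Unnamed: 1": ["Global", "Residentiel", "Appartement"],
    "T12006": [100, 100, 100],
})

df = df.ffill()  # modern spelling of fillna(method='ffill')
df = df.rename(columns={"Unnamed: 0": "City", "Unnamed: 1": "Type"})
print(df["City"].tolist())  # ['Casablanca', 'Casablanca', 'Casablanca']
```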

Pandas append returns DF with NaN values

I'm appending data from a list to pandas df. I keep getting NaN in my entries.
Based on what I've read I think I might have to mention the data type for each column in my code.
dumps = []
features_df = pd.DataFrame()
for i in range(int(len(ids)/50)):
    dumps = sp.audio_features(ids[i*50:50*(i+1)])
    for i in range(len(dumps)):
        print(list(dumps[0].values()))
        features_df = features_df.append(list(dumps[0].values()), ignore_index=True)
Expected results, something like-
[0.833, 0.539, 11, -7.399, 0, 0.178, 0.163, 2.1e-06, 0.101, 0.385, 99.947, 'audio_features', '6MWtB6iiXyIwun0YzU6DFP', 'spotify:track:6MWtB6iiXyIwun0YzU6DFP', 'https://api.spotify.com/v1/tracks/6MWtB6iiXyIwun0YzU6DFP', 'https://api.spotify.com/v1/audio-analysis/6MWtB6iiXyIwun0YzU6DFP', 149520, 4]
for one row.
Actual-
danceability energy ... duration_ms time_signature
0 NaN NaN ... NaN NaN
1 NaN NaN ... NaN NaN
2 NaN NaN ... NaN NaN
3 NaN NaN ... NaN NaN
4 NaN NaN ... NaN NaN
5 NaN NaN ... NaN NaN
For all rows
Calling append() in a tight loop isn't a great way to do this. Instead, you can construct an empty DataFrame and then use loc with the DataFrame index to specify an insertion point.
For example:
import pandas as pd
df = pd.DataFrame(data=[], columns=['n'])
for i in range(100):
    df.loc[i] = i
print(df)
time python3 append_df.py
n
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
real 0m13.178s
user 0m12.287s
sys 0m0.617s
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html
Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.
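Following that advice, the usual pattern is to accumulate the row dicts in a list and build the DataFrame once at the end; the dict keys then become the column names, which avoids the all-NaN frame. A sketch with hypothetical stand-in data (the real sp.audio_features responses are dicts of this shape):

```python
import pandas as pd

# Hypothetical stand-ins for the Spotify audio-features responses.
dumps = [
    {"danceability": 0.833, "energy": 0.539, "duration_ms": 149520},
    {"danceability": 0.700, "energy": 0.400, "duration_ms": 210000},
]

# Accumulate dicts in a list, then construct the DataFrame once.
rows = []
for d in dumps:
    rows.append(d)
features_df = pd.DataFrame(rows)
print(features_df.shape)  # (2, 3)
```

Note the original loop also always reads dumps[0] instead of the loop variable, so even with append it would repeat the first track's values.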

How to find all indices in a 2-D matrix where pandas data frame has non-NA values

I am using a large 2-D pandas dataframe for a collaborative filtering algorithm and want to create a list of lists giving the indices of all the non-NA values in the dataframe.
The 2-D dataframe is like:
0 1 2
0 1 NA 2
1 NA 5 NA
2 NA NA NA
3 1 3 4
4 4 NA 1
The expected output list is as follows (I made a mistake in the expected output in the original post and have corrected it):
[[0,0], [0,2], [1,1], [3,0], [3,1], [3,2], [4,0], [4,2]]
Your (corrected) expected output contains the indices of the non-NaN values.
To get them, use: np.argwhere(~np.isnan(df.values)).tolist()
Or, if you want the indices of the NaN elements, use:
np.argwhere(np.isnan(df.values)).tolist()
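One caveat: np.isnan only works on float data, so for mixed-dtype frames df.notna() is safer. A runnable sketch reproducing the question's frame:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame([[1, np.nan, 2],
                   [np.nan, 5, np.nan],
                   [np.nan, np.nan, np.nan],
                   [1, 3, 4],
                   [4, np.nan, 1]])

# df.notna() handles any dtype; argwhere returns [row, col] pairs.
idx = np.argwhere(df.notna().to_numpy()).tolist()
print(idx)  # [[0, 0], [0, 2], [1, 1], [3, 0], [3, 1], [3, 2], [4, 0], [4, 2]]
```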

Remove "x" number of characters from a string in a pandas dataframe?

I have a pandas dataframe df looking like this:
a b
thisisastring 5
anotherstring 6
thirdstring 7
I want to remove characters from the left of the strings in column a based on the number in column b. So I tried:
df["a"] = d["a"].str[df["b"]:]
But this will result in:
a b
NaN 5
NaN 6
NaN 7
Instead of:
a b
sastring 5
rstring 6
ring 7
Any help? Thanks in advance!
Using zip with string slice
df.a=[x[y:] for x,y in zip(df.a,df.b)]
df
Out[584]:
a b
0 sastring 5
1 rstring 6
2 ring 7
You can do it with apply, to apply this row-wise:
df.apply(lambda x: x.a[x.b:],axis=1)
0 sastring
1 rstring
2 ring
dtype: object
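To store the result back into the column, either answer's output can be assigned directly. A small self-contained sketch using the question's data:

```python
import pandas as pd

df = pd.DataFrame({"a": ["thisisastring", "anotherstring", "thirdstring"],
                   "b": [5, 6, 7]})

# Slice each string by its row's offset and assign back into column a.
df["a"] = df.apply(lambda x: x.a[x.b:], axis=1)
print(df["a"].tolist())  # ['sastring', 'rstring', 'ring']
```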

Pandas take value from columns if not NaN

Given the following data frame:
import pandas as pd
import numpy as np
df = pd.DataFrame({'A':['One','Two',np.nan],
'B':[np.nan,np.nan,'Three'],
})
df
A B
0 One NaN
1 Two NaN
2 NaN Three
I'd like to create one column ('C') that takes the value of either 'A' or 'B' if it is not NaN like this:
A B C
0 One NaN One
1 Two NaN Two
2 NaN Three Three
Thanks in advance!
You can use combine_first:
df['C'] = df.A.combine_first(df.B)
print(df)
A B C
0 One NaN One
1 Two NaN Two
2 NaN Three Three
Or fillna:
df['C']= df.A.fillna(df.B)
print(df)
A B C
0 One NaN One
1 Two NaN Two
2 NaN Three Three
Or np.where, supplying a fallback value (e.g. 1) for rows where both conditions are False:
df['C'] = np.where(df.A.notnull(), df.A,np.where(df.B.notnull(), df.B, 1))
print(df)
A B C
0 One NaN One
1 Two NaN Two
2 NaN Three Three
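As a runnable recap of the combine_first approach (with the question's data):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"A": ["One", "Two", np.nan],
                   "B": [np.nan, np.nan, "Three"]})

# combine_first takes A's value where present, else falls back to B.
df["C"] = df["A"].combine_first(df["B"])
print(df["C"].tolist())  # ['One', 'Two', 'Three']
```

df["A"].fillna(df["B"]) gives the same result on this data; combine_first differs only in how it aligns indices.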