How to access list of list values in columns in dataset - python-3.x

In my DataFrame, one column holds a list of values in each cell. For example, I have columns A, B, C, and an output column. Column A holds the value 12, column B holds 30, and column C holds a list of values like [0.01, 1.234, 2.31]. When I try to compute the mean over that column, I get an error saying the list object has no attribute mean. How can I replace each list in the DataFrame with its mean?

You can transform the column that contains the lists into another DataFrame and take the row-wise mean.
import pandas as pd
df = ... # Original df
pd.DataFrame(df['column_with_lists'].values.tolist()).mean(1)
This results in a pandas Series which looks like the following:
0 mean_of_list_row_0
1 mean_of_list_row_1
. .
. .
. .
n mean_of_list_row_n
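A minimal, self-contained sketch of this approach, using hypothetical column names modeled on the question (scalar columns A and B plus a list-valued column C):
import pandas as pd
# Hypothetical frame: A and B hold scalars, C holds a list of floats per row
df = pd.DataFrame({'A': [12, 15],
                   'B': [30, 40],
                   'C': [[0.01, 1.234, 2.31], [1.0, 2.0, 3.0]]})
# Expand the lists into their own columns, then take the row-wise mean
df['C_mean'] = pd.DataFrame(df['C'].tolist(), index=df.index).mean(axis=1)
print(df['C_mean'])
0    1.184667
1    2.000000
Name: C_mean, dtype: float64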

You can use apply(np.mean) on the column with the lists in it to get the mean. For example:
Build a dataframe:
import numpy as np
import pandas as pd
df = pd.DataFrame([[2,4],[4,6]])
df[3] = [[5,7],[8,9,10]]
print(df)
0 1 3
0 2 4 [5, 7]
1 4 6 [8, 9, 10]
Use apply(np.mean)
print(df[3].apply(np.mean))
0 6.0
1 9.0
Name: 3, dtype: float64
If you want to convert that column into the mean of the lists:
df[3] = df[3].apply(np.mean)
print(df)
0 1 3
0 2 4 6.0
1 4 6 9.0

Related

pandas expand dataframe column with tuples, into multiple columns and rows

I have a data frame where one column contains lists, each holding several tuples. I want to turn each tuple into its own row, with the tuple's elements spread across new columns. The code below shows what I mean and the solution I came up with:
import numpy as np
import pandas as pd
a = pd.DataFrame(data=[['a', 'b', [(1, 2, 3), (6, 7, 8)]],
                       ['c', 'd', [(10, 20, 30)]]], columns=['one', 'two', 'three'])
df2 = pd.DataFrame(columns=['one', 'two', 'A', 'B', 'C'])
print(a)
for index, item in a.iterrows():
    for xtup in item.three:
        temp = pd.Series(item)
        temp['A'] = xtup[0]
        temp['B'] = xtup[1]
        temp['C'] = xtup[2]
        temp = temp.drop('three')
        df2 = df2.append(temp)
print(df2)
The output is:
one two three
0 a b [(1, 2, 3), (6, 7, 8)]
1 c d [(10, 20, 30)]
one two A B C
0 a b 1 2 3
0 a b 6 7 8
1 c d 10 20 30
Unfortunately, my solution takes 2 hours to run on 55,000 rows! Is there a more efficient way to do this?
Explode the list column into rows, then expand each tuple into its own columns:
a=a.explode('three')
a=pd.concat([a,pd.DataFrame(a.pop('three').tolist(),index=a.index)],axis=1)
one two 0 1 2
0 a b 1 2 3
0 a b 6 7 8
1 c d 10 20 30
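If you want the expanded columns to carry the A/B/C headers from the hand-rolled solution, a small follow-up sketch (the rename mapping is an assumption based on the question's expected output):
# Rename the positional columns produced by the expansion
a = a.rename(columns={0: 'A', 1: 'B', 2: 'C'})
print(a)
one two A B C
0 a b 1 2 3
0 a b 6 7 8
1 c d 10 20 30
Note that explode on a column requires pandas 0.25 or newer.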

Sort pandas dataframe by a column

I have a pandas dataframe as below:
import pandas as pd
import numpy as np
import datetime
# initialise data of lists
data = {'A': [1, 1, 1, 1, 2, 2, 2, 2],
        'B': [2, 3, 1, 5, 7, 7, 1, 6]}
# Create DataFrame
df = pd.DataFrame(data)
df
I want to sort 'B' within each group of 'A'.
Expected Output:
A B
0 1 1
1 1 2
2 1 3
3 1 5
4 2 1
5 2 6
6 2 7
7 2 7
You can sort a dataframe using the sort_values method. This will sort your dataframe with priority on A and then on B within each value of A, as requested.
df.sort_values(by=['A', 'B'])
See the pandas sort_values documentation for more options.
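The expected output also shows the index renumbered from 0 to 7. A small sketch of how to get that as well (ignore_index is available from pandas 1.0; reset_index works on older versions):
df = df.sort_values(by=['A', 'B'], ignore_index=True)
# Equivalent on older pandas versions:
# df = df.sort_values(by=['A', 'B']).reset_index(drop=True)
print(df)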

How to access the items in list from Dataframe against each index item? [duplicate]

Consider a DataFrame in which each row pairs an index label with a list of items. I need each item of the list placed against its index, as shown below. What are the possible ways of doing this in Python?
You can do this using the pandas library with the explode method. Here is how your code would look -
import pandas as pd
df = [["A", [1,2,3,4]],["B",[9,6,4]]]
df = pd.DataFrame(df, columns = ['Index', 'Lists'])
print(df)
df = df.explode('Lists').reset_index(drop=True)
print(df)
Your output would be -
Index Lists
0 A [1, 2, 3, 4]
1 B [9, 6, 4]
Index Lists
0 A 1
1 A 2
2 A 3
3 A 4
4 B 9
5 B 6
6 B 4
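explode requires pandas 0.25 or newer. On an older version, one possible sketch, applied to the frame before the explode call and assuming the same 'Index' and 'Lists' column names, repeats each label by the length of its list and flattens the lists:
import numpy as np
from itertools import chain
lengths = df['Lists'].map(len)  # how many rows each list expands to
flat = pd.DataFrame({'Index': np.repeat(df['Index'].values, lengths),
                     'Lists': list(chain.from_iterable(df['Lists']))})
print(flat)  # same result as the explode version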

List of Dataframe

I have 2 DataFrames that share some columns and differ in one data column. These DataFrames are added to a list with append.
How do I reorganise the list into a new DataFrame with that differing data placed in separate columns?
The 2 DataFrames are built and appended like this:
import pandas as pd
a=[]
r1 = {'date': ['2003-01-31', '2003-01-31'], 'name': ['mod', 'dom'], 'fib': [2, 3]}
df1 = pd.DataFrame(r1, columns=['date', 'name', 'fib'])
r2 = {'date': ['2003-01-31', '2003-01-31'], 'name': ['dom', 'mod'], 'bif': [5, 7]}
df2 = pd.DataFrame(r2, columns=['date', 'name', 'bif'])
a.append(df1)
a.append(df2)
a
Then I concatenate the list a into a new DataFrame:
z=pd.concat(map(pd.DataFrame,a))
z
How do I reorganise z so that it has only two rows?
The output I expect is:
r3={'date':['2003-01-31','2003-01-31'],'name' :['mod','dom'],'fib':[2,3],'bif':[7,5]}
pd.DataFrame(r3)
For z, I would do:
z=pd.concat([i.set_index(['date','name']) for i in a],axis=1).reset_index()
print(z)
date name fib bif
0 2003-01-31 dom 3 5
1 2003-01-31 mod 2 7
Alternatively, try using pd.merge:
df1.merge(df2, on=['name', 'date'])
Results:
date name fib bif
0 2003-01-31 mod 2 7
1 2003-01-31 dom 3 5
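If the list a ends up holding more than two frames, the same merge can be folded over the whole list; a sketch assuming every frame shares the date and name key columns:
from functools import reduce
# Merge the frames pairwise, left to right, on the shared keys
z = reduce(lambda left, right: left.merge(right, on=['date', 'name']), a)
print(z)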

Python/Pandas return column and row index of found string

I've searched previous answers relating to this, but those answers seem to utilize numpy because the array contains numbers. I am trying to search a dataframe for a keyword ('Timeframe') inside a sentence, where the full sentence is 'Timeframe for wave in ____', and would like to return the column and row index. For example:
df.iloc[34,0]
returns the string I am looking for, but I am avoiding a hard-coded position for dynamic reasons. Is there a way to return the [34, 0] when I search the dataframe for the keyword 'Timeframe'?
EDIT:
To find the index, you need str.contains with boolean indexing, but there are then three possible outcomes (no match, one match, or several matches):
df = pd.DataFrame({'A':['Timeframe for wave in ____', 'a', 'c']})
print (df)
A
0 Timeframe for wave in ____
1 a
2 c
def check(val):
    a = df.index[df['A'].str.contains(val)]
    if a.empty:
        return 'not found'
    elif len(a) > 1:
        return a.tolist()
    else:
        # only one value - return scalar
        return a.item()
print (check('Timeframe'))
0
print (check('a'))
[0, 1]
print (check('rr'))
not found
Old solution:
It seems you need numpy.where to check for the value Timeframe:
df = pd.DataFrame({'A': list('abcdef'),
                   'B': [4, 5, 4, 5, 5, 4],
                   'C': [7, 8, 9, 4, 2, 'Timeframe'],
                   'D': [1, 3, 5, 7, 1, 0],
                   'E': [5, 3, 6, 9, 2, 4],
                   'F': list('aaabbb')})
print (df)
A B C D E F
0 a 4 7 1 5 a
1 b 5 8 3 3 a
2 c 4 9 5 6 a
3 d 5 4 7 9 b
4 e 5 2 1 2 b
5 f 4 Timeframe 0 4 b
a = np.where(df.values == 'Timeframe')
print (a)
(array([5], dtype=int64), array([2], dtype=int64))
b = [x[0] for x in a]
print (b)
[5, 2]
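np.where returns positional indices. If you want the row and column labels instead, a small follow-up sketch:
row_pos, col_pos = a[0][0], a[1][0]
print(df.index[row_pos], df.columns[col_pos])
5 C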
In case you have multiple columns to look into, you can use the following code example:
import numpy as np
import pandas as pd
df = pd.DataFrame([[1,2,3,4],["a","b","Timeframe for wave in____","d"],[5,6,7,8]])
mask = np.column_stack([df[col].str.contains("Timeframe", na=False) for col in df])
find_result = np.where(mask==True)
result = [find_result[0][0], find_result[1][0]]
Then output for df and result would be:
>>> df
0 1 2 3
0 1 2 3 4
1 a b Timeframe for wave in____ d
2 5 6 7 8
>>> result
[1, 2]
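If more than one cell can contain the keyword and you want every position rather than just the first hit, the boolean mask can be turned into a list of (row, column) pairs; a sketch along those lines:
# Pair up the row and column indices of every True entry in the mask
matches = [(int(r), int(c)) for r, c in zip(*np.where(mask))]
print(matches)  # [(1, 2)] for the frame above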
