Search for value in a panel - python-3.x

I'm using the pandas library and have an instance of a Panel object. I want to find the number of elements that are equal to 0. I tried using the count method like this:
panel.count(0)
However, this returns the number of DataFrames along axis 0, whereas I want the number of elements within each DataFrame of the panel that are equal to zero. Is there a built-in command to do that? Can anyone help me?

You can use .sum() (and the axis argument controls which DataFrame slices you're summing over):
In [11]: p = pd.Panel([[[1, 1]], [[1, 2]], [[1, 2]]])
In [12]: (p == 1).sum(axis=0)
Out[12]:
   0  1
0  3  1
In [13]: (p == 1).sum(axis=1)  # this is the default: .sum()
Out[13]:
   0  1  2
0  1  1  1
1  1  0  0
In [14]: (p == 1).sum(axis=2)
Out[14]:
   0  1  2
0  2  1  1
You might then want to sum this result into a Series (I don't think you can do this part in one step):
In [15]: (p == 1).sum(axis=0).sum(axis=0)
Out[15]:
0    3
1    1
dtype: int64
To find the total number of matching elements, I'd use np.sum (though you could also chain .sum().sum().sum()):
In [21]: np.sum((p == 1).values)
Out[21]: 4
Note: surprisingly the .values is required here.
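Note that Panel has since been deprecated and removed from pandas (0.25+). As a minimal sketch of the same counting on a MultiIndex DataFrame, the usual replacement, assuming the same 3x1x2 data as p:
import numpy as np
import pandas as pd

# Same values as p, stored with (item, major) in the index and the minor axis as columns.
arr = np.array([[[1, 1]], [[1, 2]], [[1, 2]]])
df = pd.DataFrame(arr.reshape(-1, arr.shape[2]),
                  index=pd.MultiIndex.from_product([range(3), range(1)],
                                                   names=["item", "major"]))

per_item = (df == 1).groupby(level="item").sum()  # counts of 1s per item
total = (df == 1).to_numpy().sum()                # 4, matching np.sum((p == 1).values)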

Related

Pandas Adding Column Maximum to the Original Dataframe [duplicate]

I have a dataframe with columns A and B. I need to create a column C such that for every row, C = max(A, B). How should I go about doing this?
You can get the maximum like this:
>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1,2,3], "B": [-2, 8, 1]})
>>> df
   A  B
0  1 -2
1  2  8
2  3  1
>>> df[["A", "B"]]
   A  B
0  1 -2
1  2  8
2  3  1
>>> df[["A", "B"]].max(axis=1)
0    1
1    8
2    3
and so:
>>> df["C"] = df[["A", "B"]].max(axis=1)
>>> df
   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3
If you know that "A" and "B" are the only columns, you could even get away with
>>> df["C"] = df.max(axis=1)
And you could use .apply(max, axis=1) too, I guess.
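As a quick sketch of that variant (same df as above; typically slower than .max(axis=1), since it calls Python's max once per row):
>>> df["C"] = df[["A", "B"]].apply(max, axis=1)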
@DSM's answer is perfectly fine in almost any normal scenario. But if you're the type of programmer who wants to go a little deeper than the surface level, you might be interested to know that it is a little faster to call numpy functions on the underlying .to_numpy() (or .values for pandas < 0.24) array than to call the (cythonized) functions defined on the DataFrame/Series objects.
For example, you can use ndarray.max() along axis 1 (i.e., across the columns).
# Data borrowed from @DSM's post.
df = pd.DataFrame({"A": [1,2,3], "B": [-2, 8, 1]})
df
   A  B
0  1 -2
1  2  8
2  3  1
df['C'] = df[['A', 'B']].values.max(1)
# Or, assuming "A" and "B" are the only columns,
# df['C'] = df.values.max(1)
df
   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3
If your data has NaNs, you will need numpy.nanmax:
df['C'] = np.nanmax(df.values, axis=1)
df
   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3
You can also use numpy.maximum.reduce. numpy.maximum is a ufunc (Universal Function), and every ufunc has a reduce:
df['C'] = np.maximum.reduce(df[['A', 'B']].values, axis=1)
# df['C'] = np.maximum.reduce(df[['A', 'B']], axis=1)
# df['C'] = np.maximum.reduce(df, axis=1)
df
   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3
np.maximum.reduce and np.max appear to be more or less the same (for most normal sized DataFrames)—and happen to be a shade faster than DataFrame.max. I imagine this difference roughly remains constant, and is due to internal overhead (indexing alignment, handling NaNs, etc).
The graph was generated using perfplot. Benchmarking code, for reference:
import numpy as np
import pandas as pd
import perfplot

np.random.seed(0)
df_ = pd.DataFrame(np.random.randn(5, 1000))

perfplot.show(
    setup=lambda n: pd.concat([df_] * n, ignore_index=True),
    kernels=[
        lambda df: df.assign(new=df.max(axis=1)),
        lambda df: df.assign(new=df.values.max(1)),
        lambda df: df.assign(new=np.nanmax(df.values, axis=1)),
        lambda df: df.assign(new=np.maximum.reduce(df.values, axis=1)),
    ],
    labels=['df.max', 'np.max', 'np.nanmax', 'np.maximum.reduce'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N (* len(df))',
    logx=True,
    logy=True)
To find the single maximum value among multiple columns:
df[['A','B']].max(axis=1).max(axis=0)
Example:
df =
                             A          B
timestamp
2019-11-20 07:00:16  14.037880  15.217879
2019-11-20 07:01:03  14.515359  15.878632
2019-11-20 07:01:33  15.056502  16.309152
2019-11-20 07:02:03  15.533981  16.740607
2019-11-20 07:02:34  17.221073  17.195145
print(df[['A','B']].max(axis=1).max(axis=0))
17.221073
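If you just need that one overall number, a minimal sketch of an equivalent that avoids chaining two .max calls (assuming the same df):
print(df[['A', 'B']].to_numpy().max())   # same result, via the underlying array
# or: df[['A', 'B']].max().max()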

How to get duplicated values in a data frame when the column is a list?

Good morning!
I have a data frame with several columns. One of these columns, data, contains lists. Below I show a little example (id is just an example with random information):
df =
   id       data
0   a  [1, 2, 3]
1   h  [3, 2, 1]
2  bf  [1, 2, 3]
What I want is to get the rows with duplicated values in the data column; in this example I should get rows 0 and 2, because their data values are the same (the list [1, 2, 3]). However, this can't be achieved with df.duplicated(subset=['data']) because list is an unhashable type.
I know it could be done by taking pairs of rows and comparing their data directly, but my real data frame can have 1000 rows or more, so I can't compare them one by one.
Hope someone knows how to do it!
Thank you very much in advance!
IIUC, we can create a new DataFrame from df['data'] and then check with DataFrame.duplicated.
You can use:
m = pd.DataFrame(df['data'].tolist()).duplicated(keep=False)
df.loc[m]
   id       data
0   a  [1, 2, 3]
2  bf  [1, 2, 3]
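An alternative sketch that skips building the second DataFrame: convert each list to a hashable tuple and reuse duplicated directly (assuming every entry in data is a list):
m = df['data'].map(tuple).duplicated(keep=False)
df.loc[m]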
Expanding on Quang's comment:
Try
In [2]: import pandas as pd
   ...: elements = [(1,2,3), (3,2,1), (1,2,3)]
   ...: df = pd.DataFrame.from_records(elements)
   ...: df
Out[2]:
   0  1  2
0  1  2  3
1  3  2  1
2  1  2  3
In [3]: # Add a new column of tuples
...: df["new"] = df.apply(lambda x: tuple(x), axis=1)
...: df
Out[3]:
   0  1  2        new
0  1  2  3  (1, 2, 3)
1  3  2  1  (3, 2, 1)
2  1  2  3  (1, 2, 3)
In [4]: # Remove duplicate rows (Keeping the first one)
...: df.drop_duplicates(subset="new", keep="first", inplace=True)
...: df
Out[4]:
   0  1  2        new
0  1  2  3  (1, 2, 3)
1  3  2  1  (3, 2, 1)
In [5]: # Remove the new column if not required
...: df.drop("new", axis=1, inplace=True)
...: df
Out[5]:
   0  1  2
0  1  2  3
1  3  2  1
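The same idea as a compact sketch, assuming the original df from In [2]: the tuple key is built on the fly instead of being stored in a column.
df = df[~df.apply(tuple, axis=1).duplicated(keep="first")]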

Drop a column in pandas if all values equal 1?

How do I drop columns in pandas where all values in that column are equal to a particular number? For instance, consider this dataframe:
df = pd.DataFrame({'A': [1, 1, 1, 1],
                   'B': [0, 1, 2, 3],
                   'C': [1, 1, 1, 1]})
print(df)
Output:
   A  B  C
0  1  0  1
1  1  1  1
2  1  2  1
3  1  3  1
How would I drop the all-1 columns so that the output is:
   B
0  0
1  1
2  2
3  3
Use DataFrame.loc and test whether each column has at least one value not equal to 1, via DataFrame.ne with DataFrame.any:
df1 = df.loc[:, df.ne(1).any()]
Or test for 1 with DataFrame.eq and DataFrame.all (all True per column), then invert the mask with ~:
df1 = df.loc[:, ~df.eq(1).all()]
print (df1)
   B
0  0
1  1
2  2
3  3
EDIT:
One consideration: what do you want to happen if a column contains only NaN and 1?
Then replace NaN with 0 using DataFrame.fillna and apply the same solutions as before:
df1 = df.loc[:, df.fillna(0).ne(1).any()]
df1 = df.loc[:, ~df.fillna(0).eq(1).all()]
You can use any:
df.loc[:, df.ne(1).any()]
One consideration is what you want to happen if a column contains only NaN and 1. If you want to drop the column in that case as well, you will need to either fillna with 1 or add a new condition.
import numpy as np

df = pd.DataFrame({'A': [1, 1, 1, 1],
                   'B': [0, 1, 2, 3],
                   'C': [1, 1, 1, np.nan]})
print(df)
   A  B    C
0  1  0  1.0
1  1  1  1.0
2  1  2  1.0
3  1  3  NaN
Both of these leave that column (containing only NaN and 1s) in place:
df.loc[:, df.ne(1).any()]
df.loc[:, ~df.eq(1).all()]
So you can add this condition to drop that column as well:
df.loc[:, ~(df.eq(1) | df.isna()).all()]
Output:
   B
0  0
1  1
2  2
3  3
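If you prefer working with column labels rather than a boolean mask, a small sketch using DataFrame.drop (same df as above; add the fillna/isna handling discussed above if NaN should count as 1):
cols_all_ones = df.columns[df.eq(1).all()]
df1 = df.drop(columns=cols_all_ones)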

Replace values in specified list of columns based on a condition

The actual use case is that I want to replace all of the values in some named columns with zero whenever they are less than zero, but leave other columns alone. Let's say that in the dataframe below, I want to floor all of the values in columns a and b at zero, but leave column d alone.
df = pd.DataFrame({'a': [0, -1, 2], 'b': [-3, 2, 1],
                   'c': ['foo', 'goo', 'bar'], 'd': [1, -2, 1]})
df
   a  b    c  d
0  0 -3  foo  1
1 -1  2  goo -2
2  2  1  bar  1
The second paragraph of the accepted answer to this question, How to replace negative numbers in Pandas Data Frame by zero, does provide a workaround: I can set the datatype of column d to be non-numeric and then change it back again afterwards:
df['d'] = df['d'].astype(object)
num = df._get_numeric_data()
num[num <0] = 0
df['d'] = df['d'].astype('int64')
df
   a  b    c  d
0  0  0  foo  1
1  0  2  goo -2
2  2  1  bar  1
but this seems really messy, and it means I need to know the list of columns I don't want to change rather than the list I do want to change.
Is there a way to just specify the column names directly?
You can use mask and column filtering:
df[['a','b']] = df[['a','b']].mask(df<0, 0)
df
Output
   a  b    c  d
0  0  0  foo  1
1  0  2  goo -2
2  2  1  bar  1
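Since the condition here is simply "floor at zero", a sketch using clip would also work on the selected columns (assuming the same df), without building an explicit boolean mask:
df[['a', 'b']] = df[['a', 'b']].clip(lower=0)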
Using np.where
cols_to_change = ['a', 'b', 'd']
df.loc[:, cols_to_change] = np.where(df[cols_to_change]<0, 0, df[cols_to_change])
   a  b    c  d
0  0  0  foo  1
1  0  2  goo  0
2  2  1  bar  1

Sum and collapse two rows in pandas if two values are equal (order does not matter)

I am analyzing a dataset that has an Origin ID (column A), a Destination ID (column B), and how many trips have happened between them (column Count). Now I want to sum the A-B trips with the B-A trips; this sum is the total number of trips between A and B.
Here is what my data looks like (it is not necessarily ordered this way):
In [1]: group_station = pd.DataFrame([[1, 2, 100], [2, 1, 200], [4, 6, 5], [6, 4, 10], [1, 4, 70]],
   ...:                              columns=['A', 'B', 'Count'])

In [2]: group_station
Out[2]:
   A  B  Count
0  1  2    100
1  2  1    200
2  4  6      5
3  6  4     10
4  1  4     70
And I want the following output:
   A  B    C
0  1  2  300
1  4  6   15
4  1  4   70
I have tried groupby and setting the index to both variables with no success. Right now I am doing a very inefficient double loop, that is too slow for the size of my dataset.
If it helps this is the code for the double loop (I removed some efficiency modifications to make it more clear):
# group_station is the dataframe, here assumed to be indexed by ['A', 'B']
# with a single 'Count' column (so row[0] is the (A, B) index pair).
collapsed_group_station = np.zeros((len(group_station), 3))
for i, row in enumerate(group_station.iterrows()):
    start_id = row[0][0]
    end_id = row[0][1]
    count = row[1][0]
    for check_row in group_station.iterrows():
        check_start_id = check_row[0][0]
        check_end_id = check_row[0][1]
        check_count = check_row[1][0]
        if start_id == check_end_id and end_id == check_start_id:
            collapsed_group_station[i][0] = start_id
            collapsed_group_station[i][1] = end_id
            collapsed_group_station[i][2] = count + check_count
            break
I have ideas of how to make this code more efficient, but I wanted to know if there is a way of doing it without looping.
You can use np.sort with groupby.sum():
import numpy as np; import pandas as pd
group_station[['A','B']]=np.sort(group_station[['A','B']],axis=1)
group_station.groupby(['A','B'],as_index=False).Count.sum()
Out[175]:
   A  B  Count
0  1  2    300
1  1  4     70
2  4  6     15
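An equivalent sketch that avoids the in-place np.sort on the original columns: build an order-independent (min, max) key with assign and group on that (assuming the same group_station as above).
out = (group_station
       .assign(A=group_station[['A', 'B']].min(axis=1),
               B=group_station[['A', 'B']].max(axis=1))
       .groupby(['A', 'B'], as_index=False)['Count'].sum())
print(out)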
