I am trying to implement a formula that creates a new column in a DataFrame from an existing column, where the new value for each row is a summation from 0 up to the number present in that other column.
I was trying something like this:
dataset['B']=sum([1/i for i in range(dataset['A'])])
I know something like this would work
dataset['B']=sum([1/i for i in range(10)])
but I want to make this 10 dynamic based on some different column.
I keep getting this error:
TypeError: 'Series' object cannot be interpreted as an integer
First of all, I should admit that I could not understand your question completely. However, what I understood is that you want to iterate over the rows of a DataFrame and make a new column by doing some operation on each value.
If that is so, then I would recommend the following link.
Regarding TypeError: 'Series' object cannot be interpreted as an integer:
range() takes an integer as input, i.e. [i for i in range(10)] gives you [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]. However, you are passing it dataset['A'], which is a whole Series rather than a single integer, and that is exactly the error you are getting. Moreover, if you notice, the first value produced by range() is zero, so 1/i would raise a different error (ZeroDivisionError). As a result, you might have to rewrite the code as [1/i for i in range(1, row_value_of_dataset['A'])]
It would be highly appreciated if you could give an example of what your DataFrame looks like and what your desired output is. Then perhaps it is easier to post a solution.
BTW, I forgot to post what I understood from your question:
#assume the data:
>>>import pandas as pd
>>>data = pd.DataFrame({'A': (1, 2, 3, 4)})
#the data
>>>data
A
0 1
1 2
2 3
3 4
#doing operation on each of the rows
>>>data['B']=data.apply(lambda row: sum([1/i for i in range(1, row.A)] ), axis=1)
# Column B is the newly added data
>>>data
A B
0 1 0.000000
1 2 1.000000
2 3 1.500000
3 4 1.833333
Perhaps explicitly use cumsum, or even apply?
Anyway, you seem to be trying to move an array/list item directly into the DataFrame and treating it like a dictionary. Try something like this, I've not tested it:
import numpy as np
array_x = [(x, 1/x) for x in dataset['A'].tolist()]  # or `dataset.A.tolist()`
df = pd.DataFrame(data=np.asarray(array_x))
df.columns = ['A', 'B']
Here the idea is to break the Series apart into a list and feed that list into a DataFrame. In fact this can be done without the Series -> list -> DataFrame round trip, which is not very efficient.
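For what it's worth, here is a minimal hedged sketch of the cumsum idea mentioned above: precompute the running harmonic sums once, then map each value of A onto them. The column name A and the sample data are assumptions based on the question.
import numpy as np
import pandas as pd

dataset = pd.DataFrame({'A': [1, 2, 3, 4]})   # assumed sample data

# harmonic[k] = sum(1/i for i in 1..k), with harmonic[0] = 0
n_max = int(dataset['A'].max())
harmonic = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, n_max + 1))))

# sum over range(1, A) excludes A itself, hence the A - 1 lookup
dataset['B'] = harmonic[dataset['A'].to_numpy() - 1]
This should match the apply-based result shown earlier while only building the running sums once.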
Related
I have a problem where I would like to count the number of times the current value has not changed in a dataframe over rolling periods.
For example:
df = pd.DataFrame({'col':list('aaaabbab')})
would somehow give output of
0
1
2
3
0
1
0
0
I have been trying something along the following
df['col'] = df['col'] == df['col'].shift(1)
df.rolling(window=3).sum().reset_index(drop=True, level=0)
I have added in the rolling as I will want to look at the full data set in terms of rolling periods but even without having it over rolling periods I can not quite figure out the logic.
I am not sure if I am missing something simple, or whether this may not be possible using shift.
You need to generate a grouper for the change in values. For this compare each value with the previous one and apply a cumsum. This gives you groups in the itertools.groupby style ([1, 1, 1, 1, 2, 2, 3, 4]), finally group and apply a cumcount.
df['count'] = (df.groupby(df['col'].ne(df['col'].shift()).cumsum())
                 .cumcount()
              )
output:
col count
0 a 0
1 a 1
2 a 2
3 a 3
4 b 0
5 b 1
6 a 0
7 b 0
edit: for fun here is a solution using itertools (much faster):
from itertools import groupby, chain
df['count'] = list(chain(*(list(range(len(list(g))))
                           for _, g in groupby(df['col']))))
NB. this runs much faster (88 µs vs 707 µs on the provided example)
I can't comment, so just to add some more to @mozway's answer.
My goal was to count consecutive values efficiently across an entire, huge DataFrame.
The problem I encountered is that, by construction,
np.nan == np.nan
returns False, so you could have a whole column containing only NaN and yet the counter would stay at 0.
A simple workaround would be to replace all NaN in your df by a value not already in it.
For instance, in the case of a float dataset you could do
df.fillna('NA')
which will work, but it changes the dtype of your columns to object, and the following code then becomes much slower (20x on my setup).
I would rather advise something like:
import numpy as np
from itertools import groupby, chain

# pick a filler value that does not appear anywhere in the data
all_values = list(np.unique(np.array(df)))
all_values = [a for a in all_values if a == a]   # drop NaN (NaN != NaN)
unik_val = min(all_values) - 1

temp = df.fillna(unik_val).copy()
for col in temp.columns:
    temp[col] = list(chain(*(list(range(len(list(g))))
                             for _, g in groupby(temp[col]))))
count_df = temp
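For illustration, a small hedged usage example (the data here is made up); running the snippet above on this df should give a count_df where runs of NaN are counted just like any other value:
import pandas as pd

df = pd.DataFrame({'x': [np.nan, np.nan, 1.0, 1.0, 2.0],
                   'y': [5.0, 5.0, 5.0, np.nan, np.nan]})
# expected count_df:
#    x  y
# 0  0  0
# 1  1  1
# 2  0  2
# 3  1  0
# 4  0  1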
I have these columns: [Voltage, Bus, Load, load_Values, transmission, transmission_Values]. Each column whose name ends in Values contains the numerical value for its corresponding label column. The csv file looks like this:
Voltage Bus Load load_Values transmission transmission_Values
Voltage(1) 2 load(1) 3 transmission(1) 2
Voltage(2) 2 load(2) 4 transmission(2) 3
Voltage(5) 3 load(3) 5 transmission(3) 5
I have to fetch the value of Bus based on transmission and Load. For example:
To get the value of Bus, first I need to fetch the value of transmission(2), which is 3. Based on this value, I need to get the value of load(3), which is 5. Next, based on that value, I have to get the value of Voltage(5), which is 3.
I tried to get the value of a single column based on its corresponding column value:
total=df[df['load']=='load(1)']['load_Values']
next_total= df[df['transmission']=='transmission['total']']['transmission_Values']
v_total= df[df['Voltage']=='Voltage(5)']['Voltage_Values']
How can I get all these values automatically? For example, if I have 1100 values in every column, how can I fetch all 1100 values from these columns?
This is how the dataset looks:
So the goal is to get the value of VRES_LD, which is a new column. For that I have to look in the I__ND_LD column for I__ND_LD(1), whose corresponding value stored in I__ND_LD_Values is 10. Once I have 10, I have to look in the I__BS_ND column for I__BS_ND(10), whose value in I__BS_ND_Values is 5.0. Based on this value, I have to find the value of V_BS(5), which is 0.986009. This value should then be stored in the new column VRES_LD. Please let me know if you get it now.
I generalized your solution so you can work with as many values as you want.
I changed the name "Load_Value" to "load_value_name" to avoid confusion since there is a variable named "load_value" in lowercase.
You can start with as many values as you want; in our example we start with "1":
start_values = [1]
load_value_name = [f"^I__ND_LD({n})" for n in start_values]
#Output: but you'll have more than one if needed
['^I__ND_LD(1)']
Then we fetch all the values:
load_values = df[df['I__ND_LD'].isin(load_value_name)]['I__ND_LD_Values'].values.astype(int)
#output: again, more if needed
array([10])
let's get the bus names:
bus_names = [f"^I__BS_ND({n})" for n in load_values]
bus_values = df[df['I__BS_ND'].isin(bus_names)]['I__BS_ND_Values'].values.astype(int)
#output
array([5])
And finally voltage:
voltage_names = [f"^V_BS({n})" for n in bus_values]
voltage_values = df[df['V_BS'].isin(voltage_names)]['V_BS_Values'].values
#output
array([0.98974069])
Notes:
Instead of rounding I downcast to int, and the .isin() method looks for all occurrences, so you can fetch all of the values.
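To run this for many start values at once (the 1100-row case mentioned in the question), the three steps can be folded into one helper. This is only a sketch under the same assumptions as above; the column names and the name(number) label format are taken from the snippets, and I dropped the ^ prefix on the assumption that the labels in the data look like I__ND_LD(1) (keep it only if the data really contains a leading ^).
def chained_lookup(df, start_values):
    """Follow load -> bus -> voltage and return the voltage values."""
    # step 1: load values for the starting labels
    load_names = [f"I__ND_LD({n})" for n in start_values]
    load_values = df.loc[df['I__ND_LD'].isin(load_names), 'I__ND_LD_Values'].astype(int)

    # step 2: bus values looked up from the load values
    bus_names = [f"I__BS_ND({n})" for n in load_values]
    bus_values = df.loc[df['I__BS_ND'].isin(bus_names), 'I__BS_ND_Values'].astype(int)

    # step 3: voltage values looked up from the bus values
    voltage_names = [f"V_BS({n})" for n in bus_values]
    return df.loc[df['V_BS'].isin(voltage_names), 'V_BS_Values'].to_numpy()

# e.g. chained_lookup(df, range(1, 1101)) for 1100 start values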
If I understand correctly, you should be able to create key/value tables and use merge. The step to voltage is a little unclear, but the basic idea below should work, I think:
df = pd.DataFrame({'voltage': {0: 'Voltage(1)', 1: 'Voltage(2)', 2: 'Voltage(5)'},
'bus': {0: 2, 1: 2, 2: 3},
'load': {0: 'load(1)', 1: 'load(2)', 2: 'load(3)'},
'load_values': {0: 3, 1: 4, 2: 5},
'transmission': {0: 'transmission(1)',
1: 'transmission(2)',
2: 'transmission(3)'},
'transmission_values': {0: 2, 1: 3, 2: 5}})
load = df[['load', 'load_values']].copy()
trans = df[['transmission','transmission_values']].copy()
load['load'] = load['load'].str.extract(r'(\d+)').astype(int)
trans['transmission'] = trans['transmission'].str.extract(r'(\d+)').astype(int)
(df[['bus']].merge(trans, how='left', left_on='bus', right_on='transmission')
.merge(load, how='left', left_on='transmission_values', right_on='load'))
resulting in:
bus transmission transmission_values load load_values
0 2 2 3 3.0 5.0
1 2 2 3 3.0 5.0
2 3 3 5 NaN NaN
I think you need to do 3 things.
1. You need to put a number inside a string. You do it like this:
n_cookies = 3
f"I want {n_cookies} cookies"
#Output
I want 3 cookies
2. Let's say the values you need to fetch are:
transmission_values = [2,5,20]
You then need to fetch those load values:
load_values_to_fetch = [f"transmission({n})" for n in transmission_values]
#output
['transmission(2)', 'transmission(5)', 'transmission(20)']
3. Get all the voltage values from the df. Use the .isin() method:
voltage_value = df[df['Voltage'].isin(load_values_to_fetch)]['Voltage_Values'].values
I hope I understood the problem correctly. Try and let us know because I can't try the code without data
I have a Pandas dataframe named data_match. It contains columns '_worker_id', '_unit_id', and 'caption'. (Please see attached screenshot for some of the rows in this dataframe)
Let's say the index is not in ascending order and I want it to be 0, 1, 2, 3, 4...n. So I ran the following, attempting to reset the index column:
data_match=data_match.reset_index(drop=True)
I was able to get the function to return the correct output in my computer using Python 3.6. However, when my coworker ran that function in his computer using Python 3.6, the '_worker_id' column got removed.
Is this due to the drop=True argument passed to reset_index? But then I don't know why it worked on my computer and not on my coworker's. Can anybody advise?
As the saying goes, "What happens in your interpreter stays in your
interpreter". It's impossible to explain the discrepancy without seeing the
full history of commands entered into both Python interactive sessions.
However, it is possible to venture a guess:
df.reset_index(drop=True)
drops the current index of the DataFrame and replaces it with an index of
increasing integers. It never drops columns.
So, in your interactive session, _worker_id was a column. In your co-worker's
interactive session, _worker_id must have been an index level.
The visual difference can be somewhat subtle. For example, below, df has a
_worker_id column while df2 has a _worker_id index level:
In [190]: df = pd.DataFrame({'foo':[1,2,3], '_worker_id':list('ABC')}); df
Out[190]:
_worker_id foo
0 A 1
1 B 2
2 C 3
In [191]: df2 = df.set_index('_worker_id', append=True); df2
Out[191]:
foo
_worker_id
0 A 1
1 B 2
2 C 3
Notice that the name _worker_id appears one line below foo when it is an
index level, and on the same line as foo when it is a column. That is the only
visual clue you get when looking at the str or repr of a DataFrame.
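If that visual clue feels too subtle, a quick hedged check (using the df and df2 frames above) is to ask pandas directly whether the name lives in the columns or in the index:
'_worker_id' in df.columns        # True  -> it is a column in df
'_worker_id' in df.index.names    # False
'_worker_id' in df2.columns       # False
'_worker_id' in df2.index.names   # True  -> it is an index level in df2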
So to repeat: When _worker_id is a column, the column is unaffected by
df.reset_index(drop=True):
In [194]: df.reset_index(drop=True)
Out[194]:
_worker_id foo
0 A 1
1 B 2
2 C 3
But _worker_id is dropped when it is part of the index:
In [195]: df2.reset_index(drop=True)
Out[195]:
foo
0 1
1 2
2 3
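As a side note, if the goal is just to renumber the rows 0..n-1 while keeping _worker_id, the safer call is reset_index() without drop=True, which moves any index levels back into columns instead of discarding them. A small sketch using the df2 frame above:
df2.reset_index()       # drop=False is the default
# the former index levels come back as columns, roughly:
#    level_0 _worker_id  foo
# 0        0          A    1
# 1        1          B    2
# 2        2          C    3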
How can I find the row for which the value of a specific column is maximal?
df.max() will give me the maximal value for each column, I don't know how to get the corresponding row.
Use the pandas idxmax function. It's straightforward:
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
Alternatively you could also use numpy.argmax, such as numpy.argmax(df['A']) -- it provides the same thing, and appears at least as fast as idxmax in cursory observations.
idxmax() returns index labels, not integers.
Example: if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not row 'd').
If you want the integer position of that label within the Index, you have to get it manually (which can be tricky now that duplicate row labels are allowed).
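For example, a hedged sketch of doing that manually with Index.get_loc (note that with duplicate labels, get_loc returns a slice or boolean mask rather than a single integer):
label = df['A'].idxmax()        # row label of the max
pos = df.index.get_loc(label)   # integer position of that label in the index
row = df.iloc[pos]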
HISTORICAL NOTES:
idxmax() used to be called argmax() prior to 0.11
argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0
back as of Pandas 0.16, argmax used to exist and perform the same function (though appeared to run more slowly than idxmax).
argmax function returned the integer position within the index of the row location of the maximum element.
pandas moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common.
For example, consider this toy DataFrame with a duplicate row label:
In [19]: dfrm
Out[19]:
A B C
a 0.143693 0.653810 0.586007
b 0.623582 0.312903 0.919076
c 0.165438 0.889809 0.000967
d 0.308245 0.787776 0.571195
e 0.870068 0.935626 0.606911
f 0.037602 0.855193 0.728495
g 0.605366 0.338105 0.696460
h 0.000000 0.090814 0.963927
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
In [20]: dfrm['A'].idxmax()
Out[20]: 'i'
In [21]: dfrm.loc[dfrm['A'].idxmax()] # .ix instead of .loc in older versions of pandas
Out[21]:
A B C
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
This is exactly one of those nasty kinds of bug-prone behaviors in dynamically typed languages that makes this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.
So you're left hoping that your unit tests covered everything (they didn't, or more likely no one wrote any tests). Otherwise, most likely, you're just left waiting to see if you happen to smack into this error at runtime. In that case you probably have to drop many hours' worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to manually reproduce the problem, and finally figure out that it's because idxmax can only report the label of the max row. Then you're disappointed that no standard function automatically gets the positions of the max row for you, so you write a buggy implementation yourself, edit the code, and pray you don't run into the problem again.
You might also try idxmax:
In [5]: df = pandas.DataFrame(np.random.randn(10,3),columns=['A','B','C'])
In [6]: df
Out[6]:
A B C
0 2.001289 0.482561 1.579985
1 -0.991646 -0.387835 1.320236
2 0.143826 -1.096889 1.486508
3 -0.193056 -0.499020 1.536540
4 -2.083647 -3.074591 0.175772
5 -0.186138 -1.949731 0.287432
6 -0.480790 -1.771560 -0.930234
7 0.227383 -0.278253 2.102004
8 -0.002592 1.434192 -1.624915
9 0.404911 -2.167599 -0.452900
In [7]: df.idxmax()
Out[7]:
A 0
B 8
C 7
e.g.
In [8]: df.loc[df['A'].idxmax()]
Out[8]:
A 2.001289
B 0.482561
C 1.579985
Both of the above answers only return one index if there are multiple rows that take the maximum value. If you want all such rows, there does not seem to be a built-in function.
But it is not hard to do. Below is an example for Series; the same can be done for DataFrame:
In [1]: from pandas import Series, DataFrame
In [2]: s=Series([2,4,4,3],index=['a','b','c','d'])
In [3]: s.idxmax()
Out[3]: 'b'
In [4]: s[s==s.max()]
Out[4]:
b 4
c 4
dtype: int64
df.iloc[df['columnX'].argmax()]
argmax() would provide the index corresponding to the max value for the columnX. iloc can be used to get the row of the DataFrame df for this index.
A more compact and readable solution using query() is like this:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5,3), columns=['A','B','C'])
print(df)
# find row with maximum A
df.query('A == A.max()')
It also returns a DataFrame instead of Series, which would be handy for some use cases.
Very simple: we have df as below and we want to print a row with max value in C:
A B C
x 1 4
y 2 10
z 5 9
In:
df.loc[df['C'] == df['C'].max()] # condition check
Out:
A B C
y 2 10
If you want the entire row instead of just the id, you can use df.nlargest: pass in how many 'top' rows you want, and also which column(s) you want them ranked by.
df.nlargest(2,['A'])
will give you the rows corresponding to the top 2 values of A.
use df.nsmallest for min values.
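For instance, a quick hedged sketch mirroring the call above (column names assumed from the example):
df.nsmallest(2, ['A'])       # rows with the 2 smallest values of A
df.nlargest(2, ['A', 'B'])   # break ties in A using B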
The direct ".argmax()" solution does not work for me.
The previous example provided by @ely
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].argmax()
3
>>> df['B'].argmax()
4
>>> df['C'].argmax()
1
returns the following message:
FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
So my solution is:
df['A'].values.argmax()
mx.iloc[0].idxmax()
This one line of code gives you the column label holding the maximum value of a row; here mx is the DataFrame and iloc[0] selects the row at position 0.
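A tiny hedged illustration with made-up data:
import pandas as pd

mx = pd.DataFrame({'A': [3, 1], 'B': [7, 5]})   # hypothetical frame
mx.iloc[0].idxmax()   # 'B' -- the column holding the largest value in row 0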
Considering this dataframe
[In]: df = pd.DataFrame(np.random.randn(4,3),columns=['A','B','C'])
[Out]:
A B C
0 -0.253233 0.226313 1.223688
1 0.472606 1.017674 1.520032
2 1.454875 1.066637 0.381890
3 -0.054181 0.234305 -0.557915
Assuming one want to know the rows where column "C" is max, the following will do the work
[In]: df[df['C']==df['C'].max()]
[Out]:
A B C
1 0.472606 1.017674 1.520032
The idxmax of the DataFrame returns the label index of the row with the maximum value, and the behavior of argmax depends on the version of pandas (right now it returns a warning). If you want to use the positional index, you can do the following:
max_row = df['A'].values.argmax()
or
import numpy as np
max_row = np.argmax(df['A'].values)
Note that np.argmax(df['A']) behaves the same as df['A'].argmax().
Use:
data.iloc[data['A'].idxmax()]
data['A'].idxmax() - finds the index of the row with the max value in column A
data.iloc[...] - returns that row
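One hedged caveat: idxmax returns an index label, so pairing it with iloc only lines up when the index is the default 0..n-1 RangeIndex; with any other index, loc is the safer lookup. A small sketch with made-up data:
data = pd.DataFrame({'A': [1, 5, 3]}, index=['x', 'y', 'z'])   # hypothetical data
data.loc[data['A'].idxmax()]     # works: idxmax returns the label 'y'
# data.iloc[data['A'].idxmax()]  # would raise here, since 'y' is not a position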
If there are ties in the maximum values, then idxmax returns the index of only the first max value. For example, in the following DataFrame:
A B C
0 1 0 1
1 0 0 1
2 0 0 0
3 0 1 1
4 1 0 0
idxmax returns
A 0
B 3
C 0
dtype: int64
Now, if we want all indices corresponding to max values, then we could use max + eq to create a boolean DataFrame, then use it on df.index to filter out indexes:
out = df.eq(df.max()).apply(lambda x: df.index[x].tolist())
Output:
A [0, 4]
B [3]
C [0, 1, 3]
dtype: object
What worked for me is:
df[df['colX'] == df['colX'].max()]
You then get the row in your df with the maximum value of colX.
Then if you just want the index you can add .index at the end of the query.
What is the right way to multiply two sorted pandas Series?
When I run the following
import pandas as pd
x = pd.Series([1,3,2])
x.sort()
print(x)
w = [1]*3
print(w*x)
I get what I would expect - [1,2,3]
However, when I change it to a Series:
w = pd.Series(w)
print(w*x)
It appears to multiply based on the index of the two series, so it returns [1,3,2]
Your results are essentially the same, just sorted differently.
>>> w*x
0 1
2 2
1 3
>>> pd.Series(w)*x
0 1
1 3
2 2
>>> (w*x).sort_index()
0 1
1 3
2 2
The rule is basically this: Anytime you multiply a dataframe or series by a dataframe or series, it will be done by index. That's what makes it pandas and not numpy. As a result, any pre-sorting is necessarily ignored.
But if you multiply a dataframe or series by a list or numpy array of a conforming shape/size, then the list or array will be treated as having the exact same index as the dataframe or series. The pre-sorting of the series or dataframe can be preserved in this case because there can not be any conflict with the list or array (which don't have an index at all).
Both of these types of behavior can be very desirable depending on what you are trying to do. That's why you will often see answers here that do something like df1 * df2.values when the second type of behavior is desired.
In this example, it doesn't really matter because your list is [1,1,1] and gives the same answer either way, but if it was [1,2,3] you would get different answers, not just differently sorted answers.
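To make that last point concrete, here is a hedged sketch with w = [1, 2, 3] (using sort_values, since the in-place Series.sort from the question has since been removed from pandas):
import pandas as pd

x = pd.Series([1, 3, 2]).sort_values()   # index order becomes 0, 2, 1
w = [1, 2, 3]

x * w               # positional: 1*1, 2*2, 3*3 -> values 1, 4, 9
x * pd.Series(w)    # aligned on index: 1*1, 3*2, 2*3 -> values 1, 6, 6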