fast date based replacement of rows in Pandas - python-3.x

I am on a quest of finding the fastest replacement method based on index in Pandas.
I want to set all rows in a given index range (DateTimeIndex) to np.nan.
I tested various types of selection, but obviously, the bottleneck is setting the rows equal to a value (np.nan in my case).
Naively, I want to do this:
df['2017-01-01':'2018-01-01'] = np.nan
I tried and tested a performance of various other methods, such as
df.loc['2017-01-01':'2018-01-01'] = np.nan
And also creating a mask with NumPy to speed it up
df['DateTime'] = df.index
st = pd.to_datetime('2017-01-01', format='%Y-%m-%d').to_datetime64()
en = pd.to_datetime('2018-01-01', format='%Y-%m-%d').to_datetime64()
ge_start = df['DateTime'] >= st
le_end = df['DateTime'] <= en
mask = (ge_start & le_end )
and then
df[mask] = np.nan
#or
df.where(~mask)
But none of this brought a big improvement. My DataFrame (which I unfortunately cannot share) has a shape of roughly (200, 1500000), so it is fairly big, and the operation takes on the order of seconds of CPU time, which seems far too slow to me.
Would appreciate any ideas!
edit: after going through
Modifying a subset of rows in a pandas dataframe and Why dataframe.values is very slow, and unifying the datatypes for the operation, the problem is solved with roughly a 20x speedup.
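A minimal sketch of the dtype-unification idea from the edit above (the shape, column names, and date range below are illustrative, not the real data):
import numpy as np
import pandas as pd
# Illustrative frame: many float columns on a DatetimeIndex (the real data is far wider).
idx = pd.date_range('2016-01-01', periods=30_000, freq='h')
df = pd.DataFrame(np.random.rand(len(idx), 50), index=idx,
                  columns=[f'col{i}' for i in range(50)])
# Unify the dtypes first: with a single float64 block, the sliced assignment
# is one contiguous NumPy fill instead of a per-dtype, per-block operation.
df = df.astype('float64')
df.loc['2017-01-01':'2018-01-01'] = np.nan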

Related

Trying to use pandas to group all data in Column B [duplicate]

I have a large (about 12M rows) DataFrame df:
df.columns = ['word','documents','frequency']
The following ran in a timely fashion:
word_grouping = df[['word','frequency']].groupby('word')
MaxFrequency_perWord = word_grouping[['frequency']].max().reset_index()
MaxFrequency_perWord.columns = ['word','MaxFrequency']
However, this is taking an unexpectedly long time to run:
Occurrences_of_Words = word_grouping[['word']].count().reset_index()
What am I doing wrong here? Is there a better way to count occurrences in a large DataFrame?
df.word.describe()
ran pretty well, so I really did not expect this Occurrences_of_Words DataFrame to take very long to build.
I think df['word'].value_counts() should serve. By skipping the groupby machinery, you'll save some time. I'm not sure why count should be much slower than max; both take some time to skip over missing values (compare with size, which does not).
In any case, value_counts has been specifically optimized to handle object type, like your words, so I doubt you'll do much better than that.
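For concreteness, a tiny sketch of that suggestion on made-up data:
import pandas as pd
# Toy 'word' column; the real frame has roughly 12M rows.
df = pd.DataFrame({'word': ['the', 'cat', 'the', 'dog', 'the']})
Occurrences_of_Words = df['word'].value_counts()
print(Occurrences_of_Words)
# the    3
# cat    1
# dog    1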
When you want to count the frequency of categorical data in a column of a pandas DataFrame, use df['Column_Name'].value_counts() (source).
Just an addition to the previous answers. Let's not forget that when dealing with real data there might be null values, so it's useful to also include those in the counting by using the option dropna=False (default is True)
An example:
>>> df['Embarked'].value_counts(dropna=False)
S      644
C      168
Q       77
NaN      2
Other possible approaches to count occurrences could be to use (i) Counter from collections module, (ii) unique from numpy library and (iii) groupby + size in pandas.
To use collections.Counter:
from collections import Counter
out = pd.Series(Counter(df['word']))
To use numpy.unique:
import numpy as np
i, c = np.unique(df['word'], return_counts = True)
out = pd.Series(c, index = i)
To use groupby + size:
out = pd.Series(df.index, index=df['word']).groupby(level=0).size()
One very nice feature of value_counts that's missing in the above methods is that it sorts the counts. If having the counts sorted is absolutely necessary, then value_counts is the best method given its simplicity and performance (even though it still gets marginally outperformed by other methods especially for very large Series).
Benchmarks
(if having the counts sorted is not important):
If we look at runtimes, it depends on the data stored in the DataFrame columns/Series.
If the Series is of object dtype, then the fastest method for very large Series is collections.Counter, but in general value_counts is very competitive.
If the Series is of integer dtype, however, then the fastest method is numpy.unique.
Code used to produce the plots:
import perfplot
import numpy as np
import pandas as pd
from collections import Counter

def creator(n, dt='obj'):
    s = pd.Series(np.random.randint(2*n, size=n))
    return s.astype(str) if dt == 'obj' else s

def plot_perfplot(datatype):
    perfplot.show(
        setup=lambda n: creator(n, datatype),
        kernels=[
            lambda s: s.value_counts(),
            lambda s: pd.Series(Counter(s)),
            lambda s: pd.Series((ic := np.unique(s, return_counts=True))[1], index=ic[0]),
            lambda s: pd.Series(s.index, index=s).groupby(level=0).size()
        ],
        labels=['value_counts', 'Counter', 'np_unique', 'groupby_size'],
        n_range=[2 ** k for k in range(5, 25)],
        equality_check=lambda *x: (d := pd.concat(x, axis=1)).eq(d[0], axis=0).all().all(),
        xlabel='~len(s)',
        title=f'dtype {datatype}'
    )

plot_perfplot('obj')
plot_perfplot('int')

Which is the most efficient way to iterate in Python?

I have to iterate one by one over 1 million records stored in a list, and each value is also present in a Pandas DataFrame. For every record I first have to find its value in the DataFrame, then perform some arithmetic operation on it, and store the result in another Pandas DataFrame. But it takes too much time to complete. I stored the values in a tuple and the performance improved a bit, but not as much as expected. Is there any way to optimize this?
Below is sample code I have done.
c2=['Fruits','animals',...]
list1=[]
for j in c2:
    data2 = dataframe.loc[(dataframe['value'] == j)]
    data3 = data2.describe()
    range1 = data3.loc['max'] - data3.loc['min']
The most efficient way is to use vectorized operations. Typing this blind:
c2 = ['Fruits', 'animals', ...]
tmp = (dataframe[dataframe['value'].isin(c2)]
       .groupby('value')
       .agg(['min', 'max']))
# agg(['min', 'max']) yields MultiIndex columns of the form (column, stat)
df_range = tmp.xs('max', axis=1, level=1) - tmp.xs('min', axis=1, level=1)
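For concreteness, a runnable sketch of the same idea on toy data ('measure' is an illustrative column name, not from the question):
import pandas as pd
dataframe = pd.DataFrame({
    'value':   ['Fruits', 'Fruits', 'animals', 'animals', 'other'],
    'measure': [1.0, 4.0, 2.0, 7.0, 3.0],
})
c2 = ['Fruits', 'animals']
tmp = (dataframe[dataframe['value'].isin(c2)]
       .groupby('value')
       .agg(['min', 'max']))
df_range = tmp.xs('max', axis=1, level=1) - tmp.xs('min', axis=1, level=1)
print(df_range)  # per-group max - min for each numeric column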

How to query attribute table in QGIS to return the sum of a subset

I am using a Jupyter notebook to query a dataset that is too big to be opened in Excel, and my PC is too slow to perform the calculations directly in QGIS.
My logic is as follows, after importing pandas:
x = df[(df.OBJECTID == 4440) & (df.Landuse == 'Grass - Urban')]
want_area = x['Area'].sum() #returning the whole dataframe sum for that field!!
summed = x['Area'].sum()
ratio = round(want_area / summed, 2)
How can I tweak the code in order to obtain the sum of the 'Area' of the above subset only, and not for the whole dataframe (800+ thousand features)?
Hope my simple question makes sense and thank you very much!
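One way this is often handled is a boolean mask summed over only the matching rows (a minimal sketch on toy data; the column names follow the question, but the values and the choice of denominator for the ratio are assumptions):
import pandas as pd
# Toy attribute table; values are made up, column names follow the question.
df = pd.DataFrame({
    'OBJECTID': [4440, 4440, 4441],
    'Landuse':  ['Grass - Urban', 'Forest', 'Grass - Urban'],
    'Area':     [10.0, 30.0, 5.0],
})
# Boolean mask for the subset, then sum only the masked rows.
mask = (df['OBJECTID'] == 4440) & (df['Landuse'] == 'Grass - Urban')
want_area = df.loc[mask, 'Area'].sum()                      # 10.0, subset only
# Assumed denominator: total area of that OBJECTID across all land uses.
total_area = df.loc[df['OBJECTID'] == 4440, 'Area'].sum()   # 40.0
ratio = round(want_area / total_area, 2)                    # 0.25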

Use data in Spark Dataframe column as condition or input in another column expression

I have an operation that I want to perform within PySpark 2.0 that would be easy to perform as a df.rdd.map, but since I would prefer to stay inside the Dataframe execution engine for performance reasons, I want to find a way to do this using Dataframe operations only.
The operation, in RDD-style, is something like this:
def precision_formatter(row):
    formatter = "%.{}f".format(row.precision)
    return row + [formatter % (row.amount_raw / 10 ** row.precision)]
df = df.rdd.map(precision_formatter)
Basically, I have a column that tells me, for each row, what the precision for my string formatting operation should be, and I want to selectively format the 'amount_raw' column as a string depending on that precision.
I don't know of a way to use the contents of one or more columns as input to another Column operation. The closest I can come is suggesting the use of Column.when with an externally-defined set of boolean operations that correspond to the set of possible boolean conditions/cases within the column or columns.
In this specific case, for instance, if you can obtain (or better yet, already have) all possible values of row.precision, then you can iterate over that set and apply a Column.when operation for each value in the set. I believe this set can be obtained with df.select('precision').distinct().collect().
Because the pyspark.sql.functions.when and Column.when operations themselves return a Column object, you can iterate over the items in the set (however it was obtained) and keep 'appending' when operations to each other programmatically until you have exhausted the set:
import pyspark.sql.functions as PSF
from pyspark.sql.types import StringType

def format_amounts_with_precision(df, all_precisions_set):
    amt_col = PSF.when(df['precision'] == 0, df['amount_raw'].cast(StringType()))
    for precision in all_precisions_set:
        if precision != 0:  # this is a messy way of having a base case above
            fmt_str = '%.{}f'.format(precision)
            amt_col = amt_col.when(df['precision'] == precision,
                                   PSF.format_string(fmt_str, df['amount_raw'] / 10 ** precision))
    return df.withColumn('amount', amt_col)
You can do it with a python UDF. A UDF can take as many input values (values from columns of a Row) as you like and spit out a single output value. It would look something like this:
from pyspark.sql import types as T, functions as F

# Create example data frame
schema = T.StructType([
    T.StructField('precision', T.IntegerType(), False),
    T.StructField('value', T.FloatType(), False)
])
data = [
    (1, 0.123456),
    (2, 0.123456),
    (3, 0.123456)
]
rdd = sc.parallelize(data)
df = sqlContext.createDataFrame(rdd, schema)

# Define UDF and apply it
def format_func(precision, value):
    format_str = "{:." + str(precision) + "f}"
    return format_str.format(value)

format_udf = F.udf(format_func, T.StringType())
new_df = df.withColumn('formatted', format_udf('precision', 'value'))
new_df.show()
Also, if instead of the per-row precision value you wanted to use a global one, you could pass it in with the lit(..) function when you call the UDF, like this:
new_df = df.withColumn('formatted', format_udf(F.lit(2), 'value'))

Index by date ranges in Python Pandas

I am new to using python pandas. The script below pulls time series data in from an excel file and sets the dates as the index; I will then want to perform various calculations on the data, referencing rows by date. Script:
df = pd.read_excel("myfile.xls")
df = df.set_index(df.Date)
df = df.drop("Date",1)
df.index.name = None
df.head()
The output of that (to give you a sense of the data) is:
                  Px1    Px2   Px3    Px4         Px5         Px6         Px7
2015-08-12  19.850000  10.25  7.88  10.90  109.349998  106.650002  208.830002
2015-08-11  19.549999  10.16  7.81  10.88  109.419998  106.690002  208.660004
2015-08-10  19.260000  10.07  7.73  10.79  109.059998  105.989998  210.630005
2015-08-07  19.240000  10.08  7.69  10.92  109.199997  106.430000  207.919998
2015-08-06  19.250000  10.09  7.76  10.96  109.010002  106.010002  208.350006
When I try to retrieve data based on one date like df.loc['20150806'] that works, but when I try to retrieve a slice like df.loc['20150806':'20150812'] I return Empty DataFrame.
Again, the index is a DateTimeIndex with dtype = 'datetime64[ns]', length = 1412, freq = None, tz = None
Like I said, my ultimate goal is to be able to group the data by day, month, year, different periods, etc., and perform calculations on it. I want to give that context, but I don't even want to get into it here, since I'm clearly stuck on something more basic - perhaps a misunderstanding of how to operate with a DateTimeIndex.
Thank you.
EDIT: I meant to also mention that I think the main indexing problem I described has something to do with freq=0, because when I tried simpler examples with contiguous date series, I did not have this problem.
df.loc['2015-08-12':'2015-08-10'] and df.loc['2015-08-10':'2015-08-12':-1] both work. df = df.sort_index() followed by slicing the way I originally tried also works. Thank you all. I was missing the forest for the trees there, I think.
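A small sketch of both fixes on a toy frame with a descending DatetimeIndex (the values are illustrative):
import numpy as np
import pandas as pd
idx = pd.to_datetime(['2015-08-12', '2015-08-11', '2015-08-10',
                      '2015-08-07', '2015-08-06'])
df = pd.DataFrame({'Px1': np.arange(5.0)}, index=idx)
# Slice in the same (descending) order as the index...
print(df.loc['2015-08-12':'2015-08-06'])
# ...or sort the index ascending first and slice the natural way.
df_sorted = df.sort_index()
print(df_sorted.loc['2015-08-06':'2015-08-12'])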
