How to structure VLOOKUP calculations in Python? - python-3.x

I am having trouble understanding how to go about structuring, in Python, an Excel model that is basically a bunch of top-down customer calculations.
Excel calcs
The Excel model consists of many worksheets that look up values from different worksheets and perform calculations at the customer level.
Each customer starts off with an amount, a starting year, an end year, and a starting state.
Example of a calculation in Excel:
Customer 1:
amount 100, at starting state B, and starting year 3.
Multiplied by the matrix (worksheet1):
the matrix consists of ten 3-D arrays covering the 5 states (A-E); each of the 10 arrays represents a year (1-10).
I multiply the amount 100 by the matrix at year 3 and get an array, [800, 650, 400, 300, 840].
I then take this array and do another VLOOKUP calculation, from another worksheet.
Example: limits (worksheet2), which consists of years and a limit %:
Year | Limit%
1    | 0.32
2    | 0.23
3    | 0.11
4    | 0.21
I look up the customer's year (year 3 in this case) and then multiply [800, 650, 400, 300, 840] * 0.11.
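In pandas/Python terms, that lookup step is just indexing by year; a minimal sketch using the example numbers above (the limits dict is an illustration of worksheet2, not code from the model):
import numpy as np

limits = {1: 0.32, 2: 0.23, 3: 0.11, 4: 0.21}  # worksheet2 as a plain dict
values = np.array([800, 650, 400, 300, 840])
scaled = values * limits[3]  # "VLOOKUP" year 3, then multiply
# -> [88.  71.5 44.  33.  92.4]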
I then need to do a few more VLOOKUP calculations like the one above.
After that, I need to multiply the result by the matrix at year 4, do the VLOOKUP calcs for year 4 as I did for year 3, and continue until year 10 is reached.

It is very difficult to understand what the data looks like from your description. However, I would suggest creating a pd.DataFrame and pd.Series of the constant data, with the identifier as the index value. Then you can use .loc[] to retrieve the relevant row and use this data. If you feed the result back into your function until n = 0, you can then return the final value.
A simple example that might help you start:
matrix1 has columns as states ("A" and "B"), rows as years (1, 2, 3, 4, 5).
matrix2 has rows as year (1, 2, 3, 4, 5), and the data is the "limit%".
Code:
import pandas as pd
import numpy as np

# matrix1 = pd.DataFrame({'A': {1: 1, 2: 2, 3: 3, 4: 4, 5: 5},
#                         'B': {1: 2, 2: 3, 3: 4, 4: 5, 5: 6}})
matrix1 = pd.DataFrame(data=np.random.rand(5, 5, 5).tolist(),
                       columns=['A', 'B', 'C', 'D', 'E'],
                       index=[1, 2, 3, 4, 5])
matrix2 = pd.Series({1: 0.32, 2: 0.23, 3: 0.11, 4: 0.21, 5: 0.2})
def calculation(amount, year, starting_state, n, calcs=None):
    """
    This is the function that calculates everything.
    :input amount: initial amount as int
    :input year: starting year as int
    :input starting_state: state in ["A", "B"] as str
    :input n: (end_year - starting_year) where end_year <= 5 and n > 0 as int
    :input calcs: dictionary of calculations; None to start with an empty one
    :return: dictionary of the amount per year
    """
    # a fresh dict per call chain; a mutable default like calcs={} would be
    # shared between separate calls (e.g. between rows in .apply below)
    if calcs is None:
        calcs = {}
    # when there are no more years remaining
    if n == 0:
        return calcs
    # multiply by matrix1
    amount *= np.asarray(matrix1.loc[year, starting_state])
    # multiply by matrix2
    amount *= matrix2.loc[year]
    # more calculations ...
    # add the end result to the dictionary
    calcs[year] = amount
    # feed the new data back into the function
    return calculation(amount, year + 1, starting_state, n - 1, calcs)
calculation(10, 2, "A", 5-2)
#Out (using the original 2-D matrix1 shown commented out above): 1.27512
Running through the iterations (again with the 2-D matrix1):
#10 * 2 = 20
#20 * 0.23 = 4.6
#return amount=4.6, year=3, "A", 3-1
#4.6 * 3 = 13.8
#13.8 * 0.11 = 1.518
#return amount=1.518, year=4, "A", 2-1
#1.518 * 4 = 6.072
#6.072 * 0.21 = 1.27512
#return amount=1.27512, year=5, "A", 1-1
#as n is now 0
#return 1.27512
If you have an initial dataframe with the input data, you can then append the end result to the data:
people = pd.DataFrame({"amount": [10, 11, 12, 13],
                       "starting_year": [2, 2, 1, 3],
                       "end_year": [5, 5, 4, 5],
                       "state": ["A", "A", "B", "A"]})
people["output"] = people.apply(lambda x: calculation(
    x["amount"], x["starting_year"],
    x["state"], x["end_year"] - x["starting_year"]), axis=1)
people
#Out:
#   amount  starting_year  end_year state    output
#0      10              2         5     A  1.275120
#1      11              2         5     A  1.402632
#2      12              1         4     B  2.331648
#3      13              3         5     A  3.603600
EDIT
The changes made to reflect your additions:
matrix1 is now a 3-dimensional array converted to a dataframe. This changes the creation of the dataframe (obviously), and the multiplication by matrix1 now needs to convert the returned list to an np.array so that it can be multiplied.
There is now an additional input to the function, calcs, which defaults to None so that a fresh, empty dictionary is created at the start of each call chain, which is what you would want. Then, in the last line of the function before the return, I have added calcs[year] = amount, which appends the year and the array for that year to the dictionary. This means that the output of running for people is now a column of dictionaries. If you want to expand this into one column per year, you can add a line afterwards, as sketched below.
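A minimal sketch of that expansion, run after the people["output"] column has been filled:
# expand the per-year dictionaries in "output" into one column per year
people = pd.concat([people, people["output"].apply(pd.Series)], axis=1)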

Related

How to find the 90th percentile value?

I have this list:
def grades = [5,4,3,2,1,1]
Where the index is a grade and the value is the number of occurrences of that grade:
Grade | Occurrence
0     | 5
1     | 4
2     | 3
3     | 2
4     | 1
5     | 1
How can I calculate the 90th percentile for the grades?
The code below gets me a whole grade. I was hoping to find an exact value for the 90th percentile, but this will do for me.
def grades = [5, 4, 3, 2, 1, 1]
sum = grades.sum()
per_grade = 0
per_value = sum * 0.9
grades.eachWithIndex { grade_count, grade ->
    per_value -= grade_count
    if (per_value < 0) { per_grade = grade - 1 }
    if (per_value == 0) { per_grade = grade }
}
out.write(per_grade)
A total of 16 grades have been given. 90% of 16 is 14.4 grades, so discard the lowest 14 grades and take the smallest remaining (in your example it will be 4).
How to code? There are some ways:
You may count down through the occurrence array you have got: subtract 5 from 14 (= 9), then 4, then 3, then 2. Once you reach zero, the next grade up is the 90th percentile (see the sketch below).
Maybe easier to understand, but it will require a few more lines of code: put all 16 grades into an array (or list): [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5]. Since the array is sorted, the 90th percentile is found at index 14 (0-based).
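A short Python sketch of the counting approach described above (the question's code is Groovy, but the logic ports directly):
grades = [5, 4, 3, 2, 1, 1]         # index = grade, value = occurrences
remaining = int(sum(grades) * 0.9)  # int(16 * 0.9) = 14 grades to discard
for grade, count in enumerate(grades):
    if remaining < count:           # the next kept grade falls in this bucket
        break
    remaining -= count
print(grade)                        # 4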

Averaging n elements along 1st axis of 4D array with numpy

I have a 4D array containing daily time-series of gridded data for different years with shape (year, day, x-coordinate, y-coordinate). The actual shape of my array is (19, 133, 288, 620), so I have 19 years of data with 133 days per year over a 288 x 620 grid. I want to take the weekly average of each grid cell over the period of record. The shape of the weekly averaged array should be (19, 19, 288, 620), or (year, week, x-coordinate, y-coordinate). I would like to use numpy to achieve this.
Here I construct some dummy data to work with and an array of what the solution should be:
import numpy as np
a1 = np.arange(1, 10).reshape(3, 3)
a1days = np.repeat(a1[np.newaxis, ...], 7, axis=0)
b1 = np.arange(10, 19).reshape(3, 3)
b1days = np.repeat(b1[np.newaxis, ...], 7, axis=0)
c1year = np.concatenate((a1days, b1days), axis=0)
a2 = np.arange(19, 28).reshape(3, 3)
a2days = np.repeat(a2[np.newaxis, ...], 7, axis=0)
b2 = np.arange(29, 38).reshape(3, 3)
b2days = np.repeat(b2[np.newaxis, ...], 7, axis=0)
c2year = np.concatenate((a2days, b2days), axis=0)
dummy_data = np.concatenate((c1year, c2year), axis=0).reshape(2, 14, 3, 3)
solution = np.concatenate((a1, b1, a2, b2), axis=0).reshape(2, 2, 3, 3)
The shape of the dummy_data is (2, 14, 3, 3). Per the dummy data, I have two years of data, 14 days per year, over a 3 X 3 grid. I want to return the weekly average of the grid for both years, resulting in a solution with shape (2, 2, 3, 3).
You can reshape and take mean:
week_mean = dummy_data.reshape(2,-1,7,3,3).mean(axis=2)
# in your case .reshape(year, -1, 7, x_coord, y_coord)
# check:
(dummy_data.reshape(2,2,7,3,3).mean(axis=2) == solution).all()
# True
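For the real array the same one-liner applies; a sketch assuming the array is named data and has the (19, 133, 288, 620) shape from the question, where 133 days = 19 weeks x 7 days:
weekly = data.reshape(19, 19, 7, 288, 620).mean(axis=2)
# weekly.shape -> (19, 19, 288, 620), i.e. (year, week, x, y)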

Adding a row to existing dataframe [duplicate]

How do I create an empty DataFrame, then add rows, one by one?
I created an empty DataFrame:
df = pd.DataFrame(columns=('lib', 'qty1', 'qty2'))
Then I can add a new row at the end and fill a single field with:
df = df._set_value(index=len(df), col='qty1', value=10.0)
It works for only one field at a time. What is a better way to add a new row to df?
You can use df.loc[i], where the row with index i will be what you specify it to be in the dataframe.
>>> import pandas as pd
>>> from numpy.random import randint
>>> df = pd.DataFrame(columns=['lib', 'qty1', 'qty2'])
>>> for i in range(5):
...     df.loc[i] = ['name' + str(i)] + list(randint(10, size=2))
>>> df
     lib qty1 qty2
0  name0    3    3
1  name1    2    4
2  name2    2    8
3  name3    2    1
4  name4    9    6
In case you can get all data for the data frame upfront, there is a much faster approach than appending to a data frame:
Create a list of dictionaries in which each dictionary corresponds to an input data row.
Create a data frame from this list.
I had a similar task for which appending to a data frame row by row took 30 min, and creating a data frame from a list of dictionaries completed within seconds.
rows_list = []
for row in input_rows:
    dict1 = {}
    # get input row in dictionary format
    # key = col_name
    dict1.update(blah..)
    rows_list.append(dict1)

df = pd.DataFrame(rows_list)
When adding a lot of rows to a dataframe, I am interested in performance. So I tried the four most popular methods and checked their speed.
Performance
Using .append (NPE's answer)
Using .loc (fred's answer)
Using .loc with preallocating (FooBar's answer)
Using dict and create DataFrame in the end (ShikharDua's answer)
Runtime results (in seconds):
Approach              | 1000 rows | 5000 rows | 10 000 rows
.append               | 0.69      | 3.39      | 6.78
.loc without prealloc | 0.74      | 3.90      | 8.35
.loc with prealloc    | 0.24      | 2.58      | 8.70
dict                  | 0.012     | 0.046     | 0.084
So I use the dictionary approach myself.
Code:
import pandas as pd
import numpy as np
import time

# del df1, df2, df3, df4  # only needed when re-running in the same session

numOfRows = 1000

# append
startTime = time.perf_counter()
df1 = pd.DataFrame(np.random.randint(100, size=(5, 5)), columns=['A', 'B', 'C', 'D', 'E'])
for i in range(1, numOfRows - 4):
    df1 = df1.append(dict((a, np.random.randint(100)) for a in ['A', 'B', 'C', 'D', 'E']), ignore_index=True)
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df1.shape)

# .loc w/o prealloc
startTime = time.perf_counter()
df2 = pd.DataFrame(np.random.randint(100, size=(5, 5)), columns=['A', 'B', 'C', 'D', 'E'])
for i in range(1, numOfRows):
    df2.loc[i] = np.random.randint(100, size=(1, 5))[0]
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df2.shape)

# .loc with prealloc
df3 = pd.DataFrame(index=np.arange(0, numOfRows), columns=['A', 'B', 'C', 'D', 'E'])
startTime = time.perf_counter()
for i in range(1, numOfRows):
    df3.loc[i] = np.random.randint(100, size=(1, 5))[0]
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df3.shape)

# dict
startTime = time.perf_counter()
row_list = []
for i in range(0, 5):
    row_list.append(dict((a, np.random.randint(100)) for a in ['A', 'B', 'C', 'D', 'E']))
for i in range(1, numOfRows - 4):
    dict1 = dict((a, np.random.randint(100)) for a in ['A', 'B', 'C', 'D', 'E'])
    row_list.append(dict1)
df4 = pd.DataFrame(row_list, columns=['A', 'B', 'C', 'D', 'E'])
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numOfRows))
print(df4.shape)
P.S.: I believe my implementation isn't perfect, and maybe there is some optimization that could be done.
You could use pandas.concat(). For details and examples, see Merge, join, and concatenate.
For example:
def append_row(df, row):
    return pd.concat([
        df,
        pd.DataFrame([row], columns=row.index)]
    ).reset_index(drop=True)
df = pd.DataFrame(columns=('lib', 'qty1', 'qty2'))
new_row = pd.Series({'lib':'A', 'qty1':1, 'qty2': 2})
df = append_row(df, new_row)
NEVER grow a DataFrame!
Yes, people have already explained that you should NEVER grow a DataFrame, and that you should append your data to a list and convert it to a DataFrame once at the end. But do you understand why?
Here are the most important reasons, taken from my post here.
It is always cheaper/faster to append to a list and create a DataFrame in one go.
Lists take up less memory and are a much lighter data structure to work with, append, and remove.
dtypes are automatically inferred for your data. On the flip side, creating an empty frame of NaNs will automatically make them object, which is bad.
An index is automatically created for you, instead of you having to take care to assign the correct index to the row you are appending.
This is The Right Way™ to accumulate your data
data = []
for a, b, c in some_function_that_yields_data():
    data.append([a, b, c])

df = pd.DataFrame(data, columns=['A', 'B', 'C'])
These options are horrible
append or concat inside a loop
append and concat aren't inherently bad in isolation. The problem starts when you iteratively call them inside a loop - this results in quadratic memory usage.
# Creates empty DataFrame and appends
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df = df.append({'A': a, 'B': b, 'C': c}, ignore_index=True)

# This is equally bad:
# df = pd.concat(
#     [df, pd.Series({'A': a, 'B': b, 'C': c})],
#     ignore_index=True)
Empty DataFrame of NaNs
Never create a DataFrame of NaNs, as the columns are initialized with object (a slow, un-vectorizable dtype).
# Creates DataFrame of NaNs and overwrites values.
df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5))
for a, b, c in some_function_that_yields_data():
    df.loc[len(df)] = [a, b, c]
The Proof is in the Pudding
Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility.
Benchmarking code for reference.
It's posts like this that remind me why I'm a part of this community. People understand the importance of teaching folks to get the right answer with the right code, not the right answer with the wrong code. Now you might argue that it is not an issue to use loc or append if you're only adding a single row to your DataFrame. However, people often look to this question to add more than just one row - often the requirement is to iteratively add a row inside a loop using data that comes from a function (see related question). In that case it is important to understand that iteratively growing a DataFrame is not a good idea.
If you know the number of entries ex ante, you should preallocate the space by also providing the index (taking the data example from a different answer):
import pandas as pd
import numpy as np
# we know we're gonna have 5 rows of data
numberOfRows = 5
# create dataframe
df = pd.DataFrame(index=np.arange(0, numberOfRows), columns=('lib', 'qty1', 'qty2') )
# now fill it up row by row
for x in np.arange(0, numberOfRows):
    # loc or iloc both work here since the index is natural numbers
    df.loc[x] = [np.random.randint(-1, 1) for n in range(3)]
In[23]: df
Out[23]:
  lib qty1 qty2
0  -1   -1   -1
1   0    0    0
2  -1    0   -1
3   0   -1    0
4  -1    0    0
Speed comparison
In[30]: %timeit tryThis() # function wrapper for this answer
1000 loops, best of 3: 1.23 ms per loop
In[31]: %timeit tryOther() # function wrapper without index (see, for example, #fred)
100 loops, best of 3: 2.31 ms per loop
And, as noted in the comments, with a size of 6000 the speed difference becomes even larger:
Increasing the size of the array (12) and the number of rows (500) makes
the speed difference more striking: 313ms vs 2.29s
mycolumns = ['A', 'B']
df = pd.DataFrame(columns=mycolumns)
rows = [[1, 2], [3, 4], [5, 6]]
for row in rows:
    df.loc[len(df)] = row
You can append a single row as a dictionary using the ignore_index option.
>>> f = pandas.DataFrame(data={'Animal': ['cow', 'horse'], 'Color': ['blue', 'red']})
>>> f
  Animal Color
0    cow  blue
1  horse   red
>>> f.append({'Animal': 'mouse', 'Color': 'black'}, ignore_index=True)
  Animal Color
0    cow  blue
1  horse   red
2  mouse black
For efficient appending, see How to add an extra row to a pandas dataframe and Setting With Enlargement.
Add rows through loc on a non-existing key in the index (ix works the same way but is deprecated). For example:
In [1]: se = pd.Series([1,2,3])
In [2]: se
Out[2]:
0 1
1 2
2 3
dtype: int64
In [3]: se[5] = 5.
In [4]: se
Out[4]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64
Or:
In [1]: dfi = pd.DataFrame(np.arange(6).reshape(3,2),
.....: columns=['A','B'])
.....:
In [2]: dfi
Out[2]:
A B
0 0 1
1 2 3
2 4 5
In [3]: dfi.loc[:,'C'] = dfi.loc[:,'A']
In [4]: dfi
Out[4]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
In [5]: dfi.loc[3] = 5
In [6]: dfi
Out[6]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
For the sake of a Pythonic way:
res = pd.DataFrame(columns=('lib', 'qty1', 'qty2'))
res = res.append([{'qty1':10.0}], ignore_index=True)
print(res.head())
lib qty1 qty2
0 NaN 10.0 NaN
You can also build up a list of lists and convert it to a dataframe -
import pandas as pd

columns = ['i', 'double', 'square']
rows = []
for i in range(6):
    row = [i, i * 2, i * i]
    rows.append(row)

df = pd.DataFrame(rows, columns=columns)
giving
   i  double  square
0  0       0       0
1  1       2       1
2  2       4       4
3  3       6       9
4  4       8      16
5  5      10      25
If you always want to add a new row at the end, use this:
df.loc[len(df)] = ['name5', 9, 0]
I figured out a simple and nice way:
>>> df
A B C
one 1 2 3
>>> df.loc["two"] = [4,5,6]
>>> df
A B C
one 1 2 3
two 4 5 6
Note the caveat with performance as noted in the comments.
This is not an answer to the OP question, but a toy example to illustrate ShikharDua's answer which I found very useful.
While this fragment is trivial, in the actual data I had 1,000s of rows, and many columns, and I wished to be able to group by different columns and then perform the statistics below for more than one target column. So having a reliable method for building the data frame one row at a time was a great convenience. Thank you ShikharDua!
import pandas as pd
BaseData = pd.DataFrame({'Customer': ['Acme', 'Mega', 'Acme', 'Acme', 'Mega', 'Acme'],
                         'Territory': ['West', 'East', 'South', 'West', 'East', 'South'],
                         'Product': ['Econ', 'Luxe', 'Econ', 'Std', 'Std', 'Econ']})
BaseData

columns = ['Customer', 'Num Unique Products', 'List Unique Products']
rows_list = []
for name, group in BaseData.groupby('Customer'):
    RecordtoAdd = {}  # initialise an empty dict
    RecordtoAdd.update({'Customer': name})
    RecordtoAdd.update({'Num Unique Products': len(pd.unique(group['Product']))})
    RecordtoAdd.update({'List Unique Products': pd.unique(group['Product'])})
    rows_list.append(RecordtoAdd)

AnalysedData = pd.DataFrame(rows_list)
print('Base Data : \n', BaseData, '\n\n Analysed Data : \n', AnalysedData)
You can use a generator object to create a DataFrame, which is more memory-efficient than a list.
import pandas as pd

num = 10

# Generator function to produce a generator object
def numgen_func(num):
    for i in range(num):
        yield ('name_{}'.format(i), (i * i), (i * i * i))

# Generator expression producing a generator object (once the data has been consumed, it can not be reused)
numgen_expression = (('name_{}'.format(i), (i * i), (i * i * i)) for i in range(num))

df = pd.DataFrame(data=numgen_func(num), columns=('lib', 'qty1', 'qty2'))
To add a row to an existing DataFrame, you can use the append method.
df = df.append([{ 'lib': "name_20", 'qty1': 20, 'qty2': 400 }])
Instead of a list of dictionaries as in ShikharDua's answer (row-based), we can also represent our table as a dictionary of lists (column-based), where each list stores one column in row-order, given we know our columns beforehand. At the end we construct our DataFrame once.
In both cases, the dictionary keys are always the column names. Row order is stored implicitly as order in a list. For c columns and n rows, this uses one dictionary of c lists, versus one list of n dictionaries. The list-of-dictionaries method has each dictionary storing all keys redundantly and requires creating a new dictionary for every row. Here we only append to lists, which overall is the same time complexity (adding entries to list and dictionary are both amortized constant time) but may have less overhead due to being a simple operation.
import pandas as pd

# Current data
data = {"Animal": ["cow", "horse"], "Color": ["blue", "red"]}

# Adding a new row (be careful to ensure every column gets another value)
data["Animal"].append("mouse")
data["Color"].append("black")

# At the end, construct our DataFrame
df = pd.DataFrame(data)
#   Animal  Color
# 0    cow   blue
# 1  horse    red
# 2  mouse  black
Create a new record (data frame) and add to old_data_frame.
Pass a list of values and the corresponding column names to create a new_record (data_frame):
new_record = pd.DataFrame([[0, 'abcd', 0, 1, 123]], columns=['a', 'b', 'c', 'd', 'e'])
old_data_frame = pd.concat([old_data_frame, new_record])
Here is the way to add/append a row in a Pandas DataFrame:
def add_row(df, row):
    df.loc[-1] = row
    df.index = df.index + 1
    return df.sort_index()

add_row(df, [1, 2, 3])
It can be used to insert/append a row in an empty or populated Pandas DataFrame.
If you want to add a row at the end, append it as a list:
valuestoappend = [val1, val2, val3]
res = res.append(pd.Series(valuestoappend, index=['lib', 'qty1', 'qty2']), ignore_index=True)
Another way to do it (probably not very performant):
# add a row
def add_row(df, row):
    colnames = list(df.columns)
    ncol = len(colnames)
    assert ncol == len(row), "Length of row must be the same as width of DataFrame: %s" % row
    return df.append(pd.DataFrame([row], columns=colnames))
You can also enhance the DataFrame class like this:
import pandas as pd

def add_row(self, row):
    self.loc[len(self.index)] = row

pd.DataFrame.add_row = add_row
All you need is loc[df.shape[0]] or loc[len(df)]
# Assuming your df has 4 columns (str, int, str, bool)
df.loc[df.shape[0]] = ['col1Value', 100, 'col3Value', False]
or
df.loc[len(df)] = ['col1Value', 100, 'col3Value', False]
You can concatenate two DataFrames for this. I came across this problem when adding a new row to an existing DataFrame with a character index (not numeric).
So, I put the data for the new row in a dict() and the index in a list.
new_dict = {put input for new row here}
new_list = [put your index here]
new_df = pd.DataFrame(data=new_dict, index=new_list)
df = pd.concat([existing_df, new_df])
initial_data = {'lib': np.array([1, 2, 3, 4]), 'qty1': [1, 2, 3, 4], 'qty2': [1, 2, 3, 4]}
df = pd.DataFrame(initial_data)
df
   lib  qty1  qty2
0    1     1     1
1    2     2     2
2    3     3     3
3    4     4     4
val_1 = [10]
val_2 = [14]
val_3 = [20]
df.append(pd.DataFrame({'lib': val_1, 'qty1': val_2, 'qty2': val_3}))
   lib  qty1  qty2
0    1     1     1
1    2     2     2
2    3     3     3
3    4     4     4
0   10    14    20
You can use a for loop to iterate through values, or add whole arrays of values.
val_1 = [10, 11, 12, 13]
val_2 = [14, 15, 16, 17]
val_3 = [20, 21, 22, 43]
df.append(pd.DataFrame({'lib': val_1, 'qty1': val_2, 'qty2': val_3}))
   lib  qty1  qty2
0    1     1     1
1    2     2     2
2    3     3     3
3    4     4     4
0   10    14    20
1   11    15    21
2   12    16    22
3   13    17    43
Keep it simple, taking a list as input that will be appended as a row in the dataframe:
import pandas as pd

res = pd.DataFrame(columns=('lib', 'qty1', 'qty2'))
for i in range(5):
    res_list = list(map(int, input().split()))
    res = res.append(pd.Series(res_list, index=['lib', 'qty1', 'qty2']), ignore_index=True)
pandas.DataFrame.append
DataFrame.append(self, other, ignore_index=False, verify_integrity=False, sort=False) → 'DataFrame'
Code
df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'))
df.append(df2)
With ignore_index set to True:
df.append(df2, ignore_index=True)
If you have a data frame df and want to add a list new_list as a new row to df, you can simply do:
df.loc[len(df)] = new_list
If you want to add a new data frame new_df under data frame df, then you can use:
df.append(new_df)
We often see the construct df.loc[subscript] = … to assign to one DataFrame row. Mikhail_Sam posted benchmarks containing, among others, this construct as well as the method using dict and create DataFrame in the end. He found the latter to be the fastest by far.
But if we replace the df3.loc[i] = … (with the preallocated DataFrame) in his code with df3.values[i] = …, the outcome changes significantly: that method then performs similarly to the one using dict. So we should more often take df.values[subscript] = … into consideration. However, note that .values takes a zero-based subscript, which may be different from the DataFrame.index.
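A minimal sketch of that variant, adapted from the preallocation benchmark above (numOfRows and the column names are assumptions carried over from that code):
import numpy as np
import pandas as pd

numOfRows = 1000
df3 = pd.DataFrame(index=np.arange(0, numOfRows), columns=['A', 'B', 'C', 'D', 'E'])
for i in range(numOfRows):
    # position-based write into the underlying array; note that in recent
    # pandas versions .values may return a copy, in which case these writes
    # would not reach the DataFrame
    df3.values[i] = np.random.randint(100, size=5)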
Before adding a row, we have to convert the dataframe to a dictionary. There you see the columns of the dataframe as the keys of the dictionary, and the column values are again stored in sub-dictionaries, where the key of each entry is the index number in the dataframe.
That idea led me to write the code below.
df2 = df.to_dict()
values = ["s_101", "hyderabad", 10, 20, 16, 13, 15, 12, 12, 13, 25, 26, 25, 27, "good", "bad"]  # this is the whole row that we are going to add
i = 0
for x in df.columns:  # here df.columns gives us the main dictionary keys
    df2[x][101] = values[i]  # here 101 is our new index number; it is also the key of the sub-dictionary
    i += 1
df = pd.DataFrame(df2)  # convert the dictionary back into a dataframe
If all data in your Dataframe has the same dtype you might use a NumPy array. You can write rows directly into the predefined array and convert it to a dataframe at the end.
It seems to be even faster than converting a list of dicts.
import time
import pandas as pd
import numpy as np
from string import ascii_uppercase

startTime = time.perf_counter()
numcols, numrows = 5, 10000
npdf = np.ones((numrows, numcols))
for row in range(numrows):
    npdf[row, 0:] = np.random.randint(0, 100, (1, numcols))
df5 = pd.DataFrame(npdf, columns=list(ascii_uppercase[:numcols]))
print('Elapsed time: {:6.3f} seconds for {:d} rows'.format(time.perf_counter() - startTime, numrows))
print(df5.shape)
This code snippet uses a list of dictionaries to update the data frame. It adds on to ShikharDua's and Mikhail_Sam's answers.
import pandas as pd

colour = ["red", "big", "tasty"]
fruits = ["apple", "banana", "cherry"]
dict1 = {}
feat_list = []
for x in colour:
    for y in fruits:
        # print(x, y)
        dict1 = dict([('x', x), ('y', y)])
        # print(f'dict 1 {dict1}')
        feat_list.append(dict1)
        # print(f'feat_list {feat_list}')
feat_df = pd.DataFrame(feat_list)
feat_df.to_csv('feat1.csv')

How to get a kind of "maximum" in a matrix, efficiently

I have the following problem: I have a matrix opened with the pandas module, where each cell has a number between -1 and 1. What I want to find is the maximum "possible" value in a row that is also not the maximum value in another row.
If, for example, two rows have their maximum value in the same column, I compare both values and take the bigger one; then, for the row whose maximum is smaller than the other row's, I take its second maximum value (and repeat the same analysis again and again).
To explain myself better, consider my code:
import pandas as pd
import numpy as np

matrix = pd.read_csv("matrix.csv")
# this matrix has an id (or name) for each column
# ... and the first column has the id of each row
results = pd.DataFrame(np.empty((len(matrix), 3), dtype=object), columns=['id1', 'id2', 'max_pos'])
l = len(matrix.columns)  # number of columns
next = 1
while next == 1:
    next = 0
    for i in range(0, len(matrix)):
        max_column = str(0)
        for j in range(1, l):  # 1 because the first column is an id
            if matrix[max_column][i] < matrix[str(j)][i]:
                max_column = str(j)
        results['id1'][i] = str(i)  # I could put here also matrix['0'][i]
        results['id2'][i] = max_column
        results['max_pos'][i] = matrix[max_column][i]
    for i in range(0, len(results)):  # now I check if two or more rows have the same max column
        for ii in range(0, len(results)):
            # if two rows have their max in the same column, I keep the one with the biggest
            # ... max value and change the other to -1 to iterate again
            if (results['id2'][i] == results['id2'][ii]) and (results['max_pos'][i] < results['max_pos'][ii]):
                matrix[results['id2'][i]][i] = -1
                next = 1
Putting an example:
#consider
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[4, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 4 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
#at the first iteration I will have the following result
0 b 4 # this means that the row 0 has its maximum at column 'b' and its value is 4
1 b 5
2 a 5
3 c 2
#the problem is that column b is the maximum of row 0 and 1, but I know that the maximum of row 1 is bigger than row 0, so I take the second maximum of row 0, then:
0 c 3
1 b 5
2 a 5
3 c 2
#now I solved the problem for row 0 and 1, but I have that the column c is the maximum of row 0 and 3, so I compare them and take the second maximum in row 3
0 c 3
1 b 5
2 a 5
3 d 1
#now I'm done. In the case that two rows have the same column as maximum and also the same number, nothing happens and I keep with that values.
#what if the matrix would be
pd.DataFrame({'a':[1, 2, 5, 0], 'b':[5, 5, 1, 0], 'c':[3, 3, 4, 2], 'd':[1, 0, 0, 1]})
a b c d
0 1 5 3 1
1 2 5 3 0
2 5 1 4 0
3 0 0 2 1
#then, at the first iteration the result will be:
0 b 5
1 b 5
2 a 5
3 c 2
#then, given that the max value of row 0 and 1 is at the same column, I should compare the maximum values
# ... but in this case the values are the same (both are 5), this would be the end of iterating
# ... because I can't choose between row 0 and 1 and the other rows have their maximum at different columns...
This code works fine for me with a matrix of 100x100, for example. But if the matrix size grows to 50,000x50,000, the code takes too much time to finish. I know my code may be the most inefficient way to do this, but I don't know how to deal with it.
I have been reading about threads in Python, which could help, but spawning 50,000 threads doesn't help because my computer doesn't use more CPU. I also tried to use some functions such as .max(), but I'm not able to get the column of the max and compare it with the other maxes.
If anyone could help me or give me a piece of advice to make this more efficient, I would be very grateful.
Going to need more information on this. What are you trying to accomplish here?
This will help you get some of the way, but in order to fully achieve what you're doing I need more context.
We'll import numpy, random, and Counter from collections:
import numpy as np
import random
from collections import Counter
We'll create a random 50k x 50k matrix of numbers between -10M and +10M
mat = np.random.randint(-10000000,10000000,(50000,50000))
Now to get the maximums for each row we can just do the following list comprehension:
maximums = [max(mat[x,:]) for x in range(len(mat))]
Now we want to find out which ones are not maximums in any other rows. We can use Counter on our maximums list to find out how many of each there are. Counter returns a counter object that is like a dictionary with the maximum as the key, and the # of times it appears as the value.
We then do a dictionary comprehension keeping the entries whose value == 1. That gives us the maximums that only show up once. We use the .keys() method to grab the numbers themselves, and then turn the result into a list.
c = Counter(maximums)
{9999117: 15,
9998584: 2,
9998352: 2,
9999226: 22,
9999697: 59,
9999534: 32,
9998775: 8,
9999288: 18,
9998956: 9,
9998119: 1,
...}
k = list( {x: c[x] for x in c if c[x] == 1}.keys() )
[9998253,
9998139,
9998091,
9997788,
9998166,
9998552,
9997711,
9998230,
9998000,
...]
Lastly, we can do the following list comprehension over the original maximums list to get the indices of these rows.
indices = [i for i, x in enumerate(maximums) if x in k]
Depending on what else you're looking to do, we can go from here.
It's not the speediest program, but finding the maximums, the counter, and the indices takes 182 seconds on a 50,000 by 50,000 matrix that is already loaded.
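As an aside, the row maximums and the uniqueness check can also be done with vectorized NumPy calls instead of Python-level loops; a sketch of the same counting idea, assuming mat as created above:
maximums = mat.max(axis=1)  # row maximums, vectorized
vals, counts = np.unique(maximums, return_counts=True)
unique_vals = vals[counts == 1]  # maximums that occur in exactly one row
indices = np.where(np.isin(maximums, unique_vals))[0]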

Trouble printing specific columns with pandas dataframes using .iloc

I'm creating a 4x4 dataframe with pandas and trying to print the last two columns with all row data using print(df.iloc[:][2:]); however, it is printing the last two rows and all columns, the same as print(df.iloc[2:][:]). Am I misunderstanding how the brackets and colons are interpreted?
Here is the code I'm executing:
import pandas as pd
import numpy as np
data1 = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12],
                  [13, 14, 15, 16]])
index = ['Worst', 'Index', 'Ever', 'Dude']
columns = ['Best', 'Columns', 'Today', 'Bro']
sick_df = pd.DataFrame(data = data1, index = index, columns = columns)
print(sick_df)
print('\n', sick_df.iloc[:][2:])
Here is the output from the above code:
       Best  Columns  Today  Bro
Worst     1        2      3    4
Index     5        6      7    8
Ever      9       10     11   12
Dude     13       14     15   16

       Best  Columns  Today  Bro
Ever      9       10     11   12
Dude     13       14     15   16
I was expecting the 2nd print method to display all four rows with the last two columns. This output is what I expect to get from print('\n', sick_df.iloc[2:][:]) and indeed when I change the 2nd print method to this line I get the same exact output.
The correct syntax for iloc and loc is [row index, column index]:
sick_df.iloc[:, -2:]
The reason your code is returning a different result is due to chaining,
sick_df.iloc[:]
returns the entire dataframe. Now when you chain that with
sick_df.iloc[:][2:]
You get all the rows from index 2 till the end of the data frame.
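For the example above, the row/column form returns what the question expected, all four rows with the last two columns:
print(sick_df.iloc[:, -2:])
#        Today  Bro
# Worst      3    4
# Index      7    8
# Ever      11   12
# Dude      15   16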
