pandas, how to get close price from returns? - python-3.x

I'm trying to convert from returns to a price index to simulate close prices for the ffn library, but without success.
import pandas as pd
times = pd.to_datetime(pd.Series(['2014-07-4', '2014-07-15', '2014-08-25',
                                  '2014-08-25', '2014-09-10', '2014-09-15']))
strategypercentage = [0.01, 0.02, -0.03, 0.04, 0.5, -0.3]
df = pd.DataFrame({'llt_return': strategypercentage}, index=times)
df['llt_close'] = 1
df['llt_close'] = df['llt_close'].shift(1) * (1 + df['llt_return'])
df.head(10)
llt_return llt_close
2014-07-04 0.01 NaN
2014-07-15 0.02 1.02
2014-08-25 -0.03 0.97
2014-08-25 0.04 1.04
2014-09-10 0.50 1.50
2014-09-15 -0.30 0.70
How can I make this correct?

You can use the cumulative product of return-relatives.
A return-relative is one-plus that day's return.
>>> start = 1.0
>>> df['llt_close'] = start * (1 + df['llt_return']).cumprod()
>>> df
llt_return llt_close
2014-07-04 0.01 1.0100
2014-07-15 0.02 1.0302
2014-08-25 -0.03 0.9993
2014-08-25 0.04 1.0393
2014-09-10 0.50 1.5589
2014-09-15 -0.30 1.0912
This assumes the price index starts at start on the close of the trading day prior to 2014-07-04.
On 7-04, you have a 1% return and the price index closes at 1 * (1 + .01) = 1.01.
On 7-15, return was 2%; close price will be 1.01 * (1 + .02) = 1.0302.
Granted, this is not completely realistic given you're forming a price index from irregular-frequency data (missing dates), but hopefully this answers your question.
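If you want the simulated closes anchored at an actual starting price rather than 1.0, the same one-liner works; only start changes (the 250.0 below is a made-up illustrative value, not something from the question):
start = 250.0  # hypothetical last known close before 2014-07-04
df['llt_close'] = start * (1 + df['llt_return']).cumprod()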

Related

How to get the column name of a dataframe from values in a numpy array

I have a df with 15 columns:
df.columns:
0 class
1 name
2 location
3 income
4 edu_level
--
14 marital_status
After some transformations I got a numpy.ndarray with shape (15, 3) named loads:
0.52 0.33 0.09
0.20 0.53 0.23
0.60 0.28 0.23
0.13 0.45 0.41
0.49 0.9
... and so on.
So: 3 columns, each with 15 values.
What I need to do:
I want to get the df column names for which the value in the first column of loads is greater than 0.50.
For this example, the df columns related to the first column of loads with values higher than 0.5 should be:
0 class
2 location
Same for the second column of loads, should return:
1 name
3 income
4 edu_level
and the same logic to the 3rd column of loads.
I managed to get the numpy array loads the way I need it, but I am having a bad time with this last part. I know I can simply pick the columns manually, but that will be a hard task when df has more than 15 features.
Can anyone help me, please?
Given your threshold you can create a boolean mask in order to filter df.columns:
threshold = .5
for j in range(loads.shape[1]):
    print(df.columns[loads[:, j] > threshold])
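If you want to keep the results rather than just print them, a small variation of the same idea (my own sketch; the selected_columns name is made up) collects the matching column names into a dict keyed by the loads column index:
threshold = 0.5
# Reuses df and loads from above: one boolean mask per column of loads.
selected_columns = {j: df.columns[loads[:, j] > threshold].tolist()
                    for j in range(loads.shape[1])}
print(selected_columns)
# e.g. {0: ['class', 'location'], 1: ['name', 'income', 'edu_level'], 2: [...]}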

Sorting data from a large text file and convert them into an array

I have a text file that contains some data.
#this is a sample file
# data can be used for practice
total number = 5
t=1
dx= 10 10
dy= 10 10
dz= 10 10
1 0.1 0.2 0.3
2 0.3 0.4 0.1
3 0.5 0.6 0.9
4 0.9 0.7 0.6
5 0.4 0.2 0.1
t=2
dx= 10 10
dy= 10 10
dz= 10 10
1 0.11 0.25 0.32
2 0.31 0.44 0.12
3 0.51 0.63 0.92
4 0.92 0.72 0.63
5 0.43 0.21 0.14
t=3
dx= 10 10
dy= 10 10
dz= 10 10
1 0.21 0.15 0.32
2 0.41 0.34 0.12
3 0.21 0.43 0.92
4 0.12 0.62 0.63
5 0.33 0.51 0.14
My aim is to read the file, find the rows whose first column value is 1 or 5, and store them as multidimensional arrays. For 1 it will be a1=[[0.1, 0.2, 0.3],[0.11, 0.25, 0.32],[0.21, 0.15, 0.32]] and for 5 it will be a5=[[0.4, 0.2, 0.1],[0.43, 0.21, 0.14],[0.33, 0.51, 0.14]].
Here is my code that I have written,
import numpy as np

with open("position.txt", "r") as data:
    lines = data.read().split(sep='\n')

a1 = []
a5 = []
for line in lines:
    if line.startswith('1'):
        a1.append(list(map(float, line.split()[1:])))
    elif line.startswith('5'):
        a5.append(list(map(float, line.split()[1:])))
a1 = np.array(a1)
a5 = np.array(a5)
My code works perfectly with the sample file I have uploaded, but in the real case my file is much larger (2 GB). Handling that with my code raises a memory error. How can I solve this issue? I have 96 GB of RAM in my workstation.
There are several things to improve:
Don't attempt to load the entire text file in memory (that will save 2 GB).
Use numpy arrays, not lists, for storing numerical data.
Use single-precision floats rather than double-precision.
So, you need to estimate how big your arrays will be. It looks like there may be about 16 million records in 2 GB of input data; with 32-bit floats, you need 16e6*2*4 = 128 MB of memory. A 500 GB input would then fit in about 33 GB of memory (assuming the same 120-byte record size).
import numpy as np

nmax = int(20e6)  # take a bit of safety margin
a1 = np.zeros((nmax, 3), dtype=np.float32)
a5 = np.zeros((nmax, 3), dtype=np.float32)
n1 = n5 = 0

with open("position.txt", "r") as data:
    for line in data:
        if '0' <= line[0] <= '9':
            values = np.fromstring(line, dtype=np.float32, sep=' ')
            if values[0] == 1:
                a1[n1] = values[1:]
                n1 += 1
            elif values[0] == 5:
                a5[n5] = values[1:]
                n5 += 1

# trim the arrays (note: slicing does not release the unused memory)
a1 = a1[:n1]
a5 = a5[:n5]
Note that float equality comparisons (==) are generally not recommended, but in the case of values[0] == 1, we know it's a small integer, for which float representations are exact.
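A quick illustration of why the comparison is safe here but not in general (my own example, not from the answer):
import numpy as np
print(np.float32(5.0) == 5)    # True  -- small integers are exact in float32
print(np.float32(0.1) == 0.1)  # False -- 0.1 is not exactly representable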
If you want to economize on memory (for example if you want to run several python processes in parallel), then you could initialize the arrays as disk-mapped arrays, like this:
a1 = np.memmap('data_1.bin', dtype=np.float32, mode='w+', shape=(nmax, 3))
a5 = np.memmap('data_5.bin', dtype=np.float32, mode='w+', shape=(nmax, 3))
With memmap, the files won't contain any metadata on data type and array shape (or human-readable descriptions). I'd recommend that you convert the data to npz format in a separate job; don't run these jobs in parallel because they will load the entire array in memory.
n = 3  # number of rows actually written (n1 or n5 from the job above)
a1m = np.memmap('data_1.bin', dtype=np.float32, shape=(n, 3))
a5m = np.memmap('data_5.bin', dtype=np.float32, shape=(n, 3))
np.savez('data.npz', a1=a1m, a5=a5m, info='This is test data from SO')
You can load them like this:
data = np.load('data.npz')
a1 = data['a1']
Depending on the balance between cost of disk space, processing time, and memory, you could compress the data.
import zlib
zlib.Z_DEFAULT_COMPRESSION = 3 # faster for lower values
np.savez_compressed('data.npz', a1=a1m, a5=a5m, info='...')
If float32 has more precision than you need, you could truncate the binary representation for better compression.
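A sketch of what such truncation could look like (my own assumption about the approach, not from the answer): zero out the lowest mantissa bits so neighbouring values share more bytes and compress better, at the cost of a little precision:
import numpy as np

def truncate_mantissa(a, bits_to_drop=8):
    # float32 has 23 mantissa bits; dropping 8 still keeps ~4-5 significant digits.
    out = np.array(a, dtype=np.float32, copy=True)
    mask = np.uint32((0xFFFFFFFF >> bits_to_drop) << bits_to_drop)
    out.view(np.uint32)[:] &= mask  # clear the low mantissa bits in place
    return out

np.savez_compressed('data.npz', a1=truncate_mantissa(a1m), a5=truncate_mantissa(a5m))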
If you like memory-mapped files, you can save in npy format:
np.save('data_1.npy', a1m)
a1 = np.load('data_1.npy', mmap_mode='r+')
But then you can't use compression, and you'll end up with many separate files whose only metadata is the array size and data type.

How to populate subsequent rows based on previous row value and value from another column in Python Pandas?

I have the following df.
cases percent_change
100 0.01
NaN 0.00
NaN -0.001
NaN 0.05
For the subsequent rows (starting with the second row) of the cases column, each value is calculated as next cases = previous cases * (1 + previous percent_change); for the row below the 100, that is 100 * (1 + 0.01) = 101. Thus, it should populate like so
cases percent_change
100 0.01
101 0.00
101 -0.001
100.899 0.05
I want to leave the first row (the 100) as it is. Here is my code, which is not working:
df.loc[1:, 'cases'] = df['cases'].shift(1) * (1 + df['percent_change'].shift(1))
Tried this as well with no success
df.loc[1:, 'cases'] = df.loc[1:, 'cases'].shift(1) * (1 + df.loc[1:, 'percent_change'].shift(1))
You can take the cumulative product of (1 + percent_change), shifted by one row so each row uses the previous row's change, and anchor it at the first cases value:
df['cases'] = (df.percent_change.shift(1).fillna(0) + 1).cumprod() * df.at[0, 'cases']
print(df)
Prints:
cases percent_change
0 100.000 0.010
1 101.000 0.000
2 101.000 -0.001
3 100.899 0.050
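For clarity, the same recurrence written as an explicit loop (my own sketch, only to show what the one-liner computes; cases_loop is a made-up column name):
vals = [df.at[0, 'cases']]
for pct in df['percent_change'].iloc[:-1]:
    vals.append(vals[-1] * (1 + pct))
df['cases_loop'] = vals  # matches the cases column above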

Creating a new column into a dataframe based on conditions

For the dataframe df :
dummy_data1 = {'category': ['White', 'Black', 'Hispanic', 'White'], 'Pop': ['75', '85', '90', '100'],
               'White_ratio': [0.6, 0.4, 0.7, 0.35], 'Black_ratio': [0.3, 0.2, 0.1, 0.45],
               'Hispanic_ratio': [0.1, 0.4, 0.2, 0.20]}
df = pd.DataFrame(dummy_data1, columns=['category', 'Pop', 'White_ratio', 'Black_ratio', 'Hispanic_ratio'])
I want to add a new column to this data frame, 'pop_n', by first checking the category and then multiplying the value in 'Pop' by the corresponding ratio column. For the first row, the category is 'White', so it should multiply 75 by 0.60 and put 45 in the pop_n column.
I thought about writing something like :
df['pop_n']= (df['Pop']*df['White_ratio']).where(df['category']=='W')
this works but just for one category.
I will appreciate any help with this.
Thanks.
Using DataFrame.filter and DataFrame.lookup:
First we use filter to get the columns with ratio in the name. Then split and keep the first word before the underscore only.
Finally we use lookup to match the category values to these columns.
df['Pop'] = df['Pop'].astype(int)  # Pop is stored as strings; convert before multiplying
df2 = df.filter(like='ratio').rename(columns=lambda x: x.split('_')[0])
df['pop_n'] = df2.lookup(df.index, df['category']) * df['Pop']
category Pop White_ratio Black_ratio Hispanic_ratio pop_n
0 White 75 0.60 0.30 0.1 45.0
1 Black 85 0.40 0.20 0.4 17.0
2 Hispanic 90 0.70 0.10 0.2 18.0
3 White 100 0.35 0.45 0.2 35.0
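Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in pandas 2.0, so on current versions the same row-wise match can be done with plain numpy indexing (a sketch reusing df from above):
import numpy as np
ratios = df.filter(like='ratio').rename(columns=lambda x: x.split('_')[0])
col_pos = ratios.columns.get_indexer(df['category'])
df['pop_n'] = ratios.to_numpy()[np.arange(len(df)), col_pos] * df['Pop'].astype(int)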
Locate the columns that have underscores in their names:
to_rename = {x: x.split("_")[0] for x in df if "_" in x}
Find the matching factors:
stack = df.rename(columns=to_rename).set_index('category').stack()
factors = stack[[x[0] == x[1] for x in stack.index]].reset_index(drop=True)
Multiply the original data by the factors:
df['pop_n'] = df['Pop'].astype(int) * factors
# category Pop White_ratio Black_ratio Hispanic_ratio pop_n
#0 White 75 0.60 0.30 0.1 45
#1 Black 85 0.40 0.20 0.4 17
#2 Hispanic 90 0.70 0.10 0.2 18
#3 White 100 0.35 0.45 0.2 35

Pandas is converting month 10 into month 1. Is there a format issue here?

I have the following DataFrame
data inflation
0 2000.01 0.62
1 2000.02 0.13
2 2000.03 0.22
3 2000.04 0.42
4 2000.05 0.01
5 2000.06 0.23
6 2000.07 1.61
7 2000.08 1.31
8 2000.09 0.23
9 2000.10 0.14
Note that the year-month values use a dot as the separator.
When I try to convert to DateTime as in:
inflation.data = pd.to_datetime(inflation.data, format='%Y.%m')
I get both line 0 and line 9 as 2000-01-01
That means pandas is effectively turning .10 into .01.
Is that a bug, or just a format issue?
You're actually passing to_datetime the wrong kind of input rather than using the wrong format code.
Look at the pandas helpfile:
pandas.to_datetime(*args, **kwargs)
Convert argument to datetime.
Parameters:
arg : string, datetime, list, tuple, 1-d array, Series
You appear to be feeding it float64s when it probably expects strings here.
Try the following code.
(Note that converting the existing floats to strings with inflation.data.apply(str) won't fix it, because str(2000.10) is already '2000.1'; the data needs to come in as strings, as in the example below.)
f0=['2000.01',
'2000.02',
'2000.03',
'2000.04',
'2000.05',
'2000.06',
'2000.07',
'2000.08',
'2000.09',
'2000.10']
inflation = pd.DataFrame(f0, columns=['data'])
inflation.data = pd.to_datetime(inflation.data, format='%Y.%m')
output
Out[3]:
0 2000-01-01
1 2000-02-01
2 2000-03-01
3 2000-04-01
4 2000-05-01
5 2000-06-01
6 2000-07-01
7 2000-08-01
8 2000-09-01
9 2000-10-01
Name: data, dtype: datetime64[ns]
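As a side note (my own sketch, not part of either answer): the trailing zero is lost as soon as the value becomes a float, so an existing float column can also be rendered with two decimals before parsing:
s = pd.Series([2000.01, 2000.09, 2000.10])
print(str(2000.10))  # '2000.1' -- the zero is already gone at the float stage
print(pd.to_datetime(s.map('{:.2f}'.format), format='%Y.%m'))
# -> 2000-01-01, 2000-09-01, 2000-10-01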
This is an interesting problem. The float representation drops the trailing zero (2000.10 becomes 2000.1, which then parses as month 1), and you can't use any string split methods on the current float type.
Here is my take on this:
Use the Python math module's modf function, which returns the fractional and integer parts of x.
Then round the year and month parts and convert them to strings for to_datetime to interpret.
import math
df['Year'] = df.data.apply(lambda x: round(math.modf(x)[1])).astype(str)
df['Month'] = df.data.apply(lambda x: round(math.modf(x)[0] * 100)).astype(str)
df = df.drop('data', axis=1)
df['Date'] = pd.to_datetime(df.Year + ':' + df.Month, format='%Y:%m')
df = df.drop(['Year', 'Month'], axis=1)
You get
inflation Date
0 0.62 2000-01-01
1 0.13 2000-02-01
2 0.22 2000-03-01
3 0.42 2000-04-01
4 0.01 2000-05-01
5 0.23 2000-06-01
6 1.61 2000-07-01
7 1.31 2000-08-01
8 0.23 2000-09-01
9 0.14 2000-10-01
