simple Granger Causality test using statsmodels.tsa.grangercausalitytests - python-3.x

I have several time-series files (540 rows x 6 columns) on which I would like to run a simple Granger causality test using statsmodels.tsa.grangercausalitytests:
from statsmodels.tsa.stattools import grangercausalitytests
My pandas dataframe (df) contains the data in the format shown below.
I tried to run the tests using the Open and Close columns with the following:
print(grangercausalitytests([df['Open'], df['Close']], maxlag=15, addconst=True, verbose=True))
but it does not work. Is there a way to perform the Granger test on each column (Open, High, Low) against Close, i.e. Open and Close, High and Close, Low and Close?
Epochtime Open High Low Close Vol
1486094520, 808.11000, 808.11000, 808.11000, 808.11000, 100
1486094580, 809.45000, 809.45000, 809.45000, 809.45000, 100
1486094820, 809.99000, 809.99000, 809.99000, 809.99000, 100
1486095540, 811.45000, 811.45000, 811.45000, 811.45000, 100
1486095840, 811.30000, 811.30000, 811.01000, 811.01000, 300
1486095900, 810.76000, 810.76000, 810.76000, 810.76000, 100
1486096200, 812.00000, 812.00000, 812.00000, 812.00000, 100

It requires a 2-dimensional array; try this:
print(grangercausalitytests(df[['Open', 'Close']], maxlag=15, addconst=True, verbose=True))
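If you want to repeat the test for each of Open, High and Low paired with Close, a simple loop over the column pairs works. A minimal sketch (note that statsmodels tests whether the series in the second column Granger-causes the series in the first column, so swap the pair if you want the opposite direction):
from statsmodels.tsa.stattools import grangercausalitytests

for col in ['Open', 'High', 'Low']:
    print(f'--- does Close Granger-cause {col}? ---')
    grangercausalitytests(df[[col, 'Close']], maxlag=15, addconst=True, verbose=True)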

Related

How can I read and process 100 bytes at a time from a large CSV file?

The below csv is only a snippet of my main data file.
customer.csv
customer_id,order_id,number_of_items
10,4736,9
5,3049,1
1,4689,3
6,4114,9
1,4524,15
2,3727,16
3,3507,7
7,3988,3
5,4993,16
6,1945,4
7,3081,7
3,3707,2
5,1739,12
9,4167,17
7,3242,12
2,3109,10
10,2197,20
10,3528,13
8,4917,2
5,1713,19
8,4224,4
7,2160,2
10,2044,19
10,2956,8
3,3906,2
5,2288,16
7,1854,20
7,4404,2
9,1622,2
7,3685,2
10,2755,10
3,3390,10
6,1424,6
3,2127,15
4,1221,15
9,2994,14
1,1413,13
7,2771,7
3,4579,13
10,2208,4
CURRENTLY ALL I HAVE
import os
os.path.getsize("customer.csv")  # outputs 424 bytes
HOW I THINK I NEED TO PROCEED
I think I need to do something with opening the CSV and reading bytes? Then look at each row bitwise?
Please note, I am not looking specifically for someone to just give me an answer on how to do this (although that would be appreciated). Therefore, if someone could just point me in the right direction or give me some topics to look into that would be great. Side note, I know I am supposed to use encoding and decoding somewhere for this task.
This script uses the csv module to load the data from customer.csv and computes the averages using the built-in statistics module:
import csv
from statistics import mean

with open('customer.csv', newline='') as csvfile:
    data = csv.DictReader(csvfile)
    # group the customers by customer_id
    customers = {}
    for order in data:
        customers.setdefault(order['customer_id'], []).append(int(order['number_of_items']))

# print the average per customer
print('{:<15} {}'.format('customer_id', 'average'))
for k, v in customers.items():
    print('{:<15} {:.2f}'.format(k, mean(v)))
Prints:
customer_id average
10 11.86
5 12.80
1 10.33
6 6.33
2 13.00
3 8.17
7 6.88
9 11.00
8 3.00
4 15.00
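Since the title asks specifically about reading 100 bytes at a time, here is a minimal sketch of chunked byte reading, separate from the answer above; the decode call assumes the file is plain ASCII/UTF-8:
# read the file 100 bytes at a time and reassemble complete lines
buffer = b''
with open('customer.csv', 'rb') as f:
    while True:
        chunk = f.read(100)                    # at most 100 bytes per read
        if not chunk:
            break
        buffer += chunk
        *lines, buffer = buffer.split(b'\n')   # keep the trailing partial line
        for line in lines:
            print(line.decode('utf-8'))
if buffer:                                     # last line if no trailing newline
    print(buffer.decode('utf-8'))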

Resampling Time Series Data (Pandas Python 3)

Trying to convert data at daily frequency to weekly frequency.
In:
weeklyaapl = pd.DataFrame()
weeklyaapl['Open'] = aapl.Open.resample('W').iloc[0]
#here I am trying to take the first value of the aapl.Open,
#that falls within the week.
Out:
ValueError: .resample() is now a deferred operation
use .resample(...).mean() instead of .resample(...)
I want the true open (the first open that prints for the week) (the open of the first day in that week).
It instead wants me to take the mean of the daily open values for a given week using .mean(), which is not the information I need.
Can't seem to interpret the error, documentation isn't helping either.
I think you need:
aapl.resample('W').first()
Output:
Open High Low Close Volume
Date
2010-01-10 30.49 30.64 30.34 30.57 123432050
2010-01-17 30.40 30.43 29.78 30.02 115557365
2010-01-24 29.76 30.74 29.61 30.72 182501620
2010-01-31 28.93 29.24 28.60 29.01 266424802
2010-02-07 27.48 28.00 27.33 27.82 187468421
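If the goal is a full weekly OHLCV bar rather than the first row of every column, a common pattern is resample('W').agg(...). A sketch, assuming aapl has Open/High/Low/Close/Volume columns on a DatetimeIndex:
weekly = aapl.resample('W').agg({
    'Open': 'first',    # open of the first trading day in the week
    'High': 'max',
    'Low': 'min',
    'Close': 'last',
    'Volume': 'sum',
})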

How do read a SEC txt-file into a pandas dataframe?

I am trying to use SEC (U.S. Securities and Exchange Commission) data. The SEC provides useful data in a txt format. I am using
Financial Statement Data Sets for the second quarter of 2017. You can find the data I use here.
I am trying to read the txt files into a pandas dataframe. I tried the following ways:
sub = pd.read_fwf('sub.txt')
sub_1 = pd.read_csv('sub.txt')
I get no error when using pandas' read_fwf function - but the output is utter rubbish. Here is the head of the dataframe:
adsh cik name sic countryba stprba cityba zipba bas1 bas2 baph countryma stprma cityma zipma mas1 mas2 countryinc stprinc ein former changed afs wksi fye form period fy fp filed accepted prevrpt detail instance nciks aciks Unnamed: 1
0 0000002178-17-000038\t2178\tADAMS RESOURCES & ... NaN
1 0000002488-17-000107\t2488\tADVANCED MICRO DEV... NaN
I do get an error when using read_csv: Error tokenizing data. C error: Expected 2 fields in line 7, saw 3
Any ideas on how to read the data into a pandas dataframe?
It looks like the files are tab separated - that's why you're seeing \t in the results. pandas read_csv defaults to comma separated values, so you have to change the separator. This is controlled by the sep parameter. In addition, you will need to provide the proper encoding (errors are thrown when trying to read the num, pre, and tag files). Generally ISO-8859-1 is a good choice.
#import pandas
import pandas as pd
#read in the .txt file and choose a separator and encoding standard
df = pd.read_csv('sub.txt', sep='\t', encoding='ISO-8859-1')
#output the results
print(df)
adsh cik name \
0 0000002178-17-000038 2178 ADAMS RESOURCES & ENERGY, INC.
1 0000002488-17-000107 2488 ADVANCED MICRO DEVICES INC
2 0000002969-17-000019 2969 AIR PRODUCTS & CHEMICALS INC /DE/
3 0000002969-17-000024 2969 AIR PRODUCTS & CHEMICALS INC /DE/
4 0000003499-17-000010 3499 ALEXANDERS INC
5 0000003545-17-000043 3545 ALICO INC
6 0000003570-17-000073 3570 CHENIERE ENERGY INC
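If the other files in the quarterly archive are needed as well (num.txt, pre.txt and tag.txt are the names mentioned above), the same parsing options can be applied in a loop. A sketch; low_memory=False is only there to avoid mixed-dtype warnings on the larger files:
import pandas as pd

files = ['sub.txt', 'num.txt', 'pre.txt', 'tag.txt']
frames = {name.split('.')[0]: pd.read_csv(name, sep='\t', encoding='ISO-8859-1', low_memory=False)
          for name in files}
print({name: frame.shape for name, frame in frames.items()})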

Use matlab to search excel data file for time range and copy data into variable

In my Excel file I have a time column in 12 hr clock time and a bunch of data columns. I have pasted a snippet of it in this post as code since I can't attach a file. I am trying to build a GUI that will take an input from the user like so:
start time: 7:29:32 AM
End time: 7:29:51 AM
Then do the following:
calculate the time that has passed in seconds (should be just a row count, since data is gathered once a second)
copy the data in the time range from the "Data 3" column into a variable
perform other calculations on the copied data as needed
I am having some trouble figuring out what to do to search the time data and find its location, since it imports as text with xlsread. Any ideas?
The data looks like this:
Time Data 1 Data 2 Data 3 Data 4 Data 5
7:29:25 AM 0.878556385 0.388400561 0.076890401 0.93335277 0.884750618
7:29:26 AM 0.695838393 0.712762566 0.014814069 0.81264949 0.450303694
7:29:27 AM 0.250846937 0.508617941 0.24802015 0.722457624 0.47119616
7:29:28 AM 0.206189924 0.82970364 0.819163787 0.060932817 0.73455323
7:29:29 AM 0.161844331 0.768214077 0.154097877 0.988201094 0.951520263
7:29:30 AM 0.704242494 0.371877481 0.944482485 0.79207359 0.57390951
7:29:31 AM 0.072028024 0.120263127 0.577396985 0.694153791 0.341824004
7:29:32 AM 0.241817775 0.32573323 0.484644494 0.377938298 0.090122672
7:29:33 AM 0.500962945 0.540808907 0.582958676 0.043377373 0.041274613
7:29:34 AM 0.087742217 0.596508236 0.020250297 0.926901109 0.45960323
7:29:35 AM 0.268222071 0.291034947 0.598887588 0.575571111 0.136424853
7:29:36 AM 0.42880255 0.349597405 0.936733938 0.232128788 0.555528823
7:29:37 AM 0.380425154 0.162002488 0.208550466 0.776866494 0.79340504
7:29:38 AM 0.727940393 0.622546124 0.716007768 0.660480612 0.02463804
7:29:39 AM 0.582772435 0.713406643 0.306544291 0.225257421 0.043552277
7:29:40 AM 0.371156954 0.163821476 0.780515577 0.032460418 0.356949005
7:29:42 AM 0.484167263 0.377878242 0.044189636 0.718147456 0.603177625
7:29:43 AM 0.294017186 0.463360581 0.962296024 0.504029061 0.183131098
7:29:44 AM 0.95635086 0.367849494 0.362230918 0.984421096 0.41587606
7:29:45 AM 0.198645523 0.754955312 0.280338922 0.79706146 0.730373691
7:29:46 AM 0.058483961 0.46774544 0.86783339 0.147418954 0.941713252
7:29:47 AM 0.411193343 0.340857813 0.162066261 0.943124515 0.722124394
7:29:48 AM 0.389312994 0.129281042 0.732723258 0.803458815 0.045824426
7:29:49 AM 0.549633038 0.73956852 0.542532728 0.618321989 0.358525184
7:29:50 AM 0.269925317 0.501399748 0.938234302 0.997577871 0.318813506
7:29:51 AM 0.798825842 0.24038537 0.958224157 0.660124357 0.07469288
7:29:52 AM 0.963581196 0.390150081 0.077448543 0.294604314 0.903519943
7:29:53 AM 0.890540963 0.50284339 0.229976565 0.664538451 0.926438543
7:29:54 AM 0.46951573 0.192568637 0.506730373 0.060557482 0.922857391
7:29:55 AM 0.56552394 0.952136998 0.739438663 0.107518765 0.911045415
7:29:56 AM 0.433149875 0.957190309 0.475811126 0.855705733 0.942255155
and this is the code I am using:
[Data,Text] = xlsread('C:\Users\data.xlsx',2);
IndexStart=strmatch('7:29:29 AM',Text,'exact'); %start time
IndexEnd=strmatch('2:30:29 PM',Text,'exact'); %end time
seconds = IndexEnd-IndexStart;
TestData = Data([IndexStart: IndexEnd],:);
You probably need to:
Use strfind to find the relevant string in the data imported
Use datenum to convert the date to serial date numbers, to be able to calculate the elapsed time between the two points.
It would help if you posted your code so far though.
EDIT based on comments:
Here's what I would do for cycling through the list of start and end times:
[Data,Text] = xlsread('C:\Users\data.xlsx',2);
start_times = {'7:29:29 AM','7:29:35 AM','7:29:44 AM','7:29:49 AM'}; % etc...
end_times = {'2:30:29 PM','2:30:59 PM','2:31:22 PM','2:32:49 PM'}; % etc...
elapsed_time = zeros(length(start_times),1);
TestData = cell(length(start_times),1); % need a cell array because data can/will be of unequal lengths
for k=1:length(start_times)
    IndexStart = strmatch(start_times{k},Text,'exact'); %start time
    IndexEnd = strmatch(end_times{k},Text,'exact'); %end time
    elapsed_time(k) = IndexEnd-IndexStart;
    TestData{k} = Data([IndexStart: IndexEnd],:);
end
Use the "Import Data" from the Variable Tag in the Home menu. There you can set how you want the data to be imported like. With or without heading and the format.

How to determine a formula for execution time given quantitative data, Excel, trendlines, monte carlo simulation

Can I get your help on some Maths and possibly Excel?
I have benchmarked my app, increasing the number of iterations and the number of obligors and recording the time taken in seconds, with the following results:
200 400 600 800 1000 1200 1400 1600 1800 2000
20000 15.627681 30.0968663 44.7592684 60.9037558 75.8267358 90.3718977 105.8749983 121.0030672 135.9191249 150.3331682
40000 31.7202111 62.3603882 97.2085204 128.8111731 156.2443206 186.6374271 218.324317 249.2699288 279.6008184 310.9970803
60000 47.0708635 92.4599437 138.874287 186.0576007 231.2181381 280.541207 322.9836878 371.3076757 413.4058622 459.6208335
80000 60.7346238 120.3216303 180.471169 241.668982 300.4283548 376.9639188 417.5231669 482.6288981 554.9740194 598.0394434
100000 76.7535915 150.7479245 227.5125656 304.3908046 382.5900043 451.6034296 526.0730786 609.0358776 679.0268121 779.6887277
120000 90.4174626 179.5511355 269.4099593 360.2934453 448.4387573 537.1406039 626.7325734 727.6132992 807.4767327 898.307638
How can I now come up with a function for T (time taken in seconds) as an expression of the number of obligors O and the number of iterations I?
Thanks
I'm not quite sure of the data involved due to the question construction/presentation.
Assuming you're looking for y = f(x): if you load the data into Excel, you can use the SLOPE and INTERCEPT functions on the data ranges to derive an expression of the form
y = mx+c
and thus a linear function.
If you want a quadratic or cubic fit, you can use LINEST with additional columns of the data squared/cubed etc. to get quadratic/cubic parameters, and thus derive an appropriate higher-order function.
Spoke to one of the quants here; the function is of the form T = K*N*O, where T is time, K some constant, N the number of iterations, and O the number of obligors.
Rearrange for K = T/(N*O), plug this into my sample data, take the average over all sample points, and use the standard deviation for the error.
I did this for my data and get:
T = 3.81524E-06 * N * O (with 1.9% error), which is a pretty good approximation.
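For what it's worth, the same estimate can be reproduced outside Excel. A numpy sketch using the benchmark table from the question (the assignment of rows to iterations and columns to obligors is assumed, but only the product N*O enters, so it does not affect K):
import numpy as np

iterations = np.array([20000, 40000, 60000, 80000, 100000, 120000], dtype=float)
obligors = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], dtype=float)
times = np.array([
    [15.627681, 30.0968663, 44.7592684, 60.9037558, 75.8267358, 90.3718977, 105.8749983, 121.0030672, 135.9191249, 150.3331682],
    [31.7202111, 62.3603882, 97.2085204, 128.8111731, 156.2443206, 186.6374271, 218.324317, 249.2699288, 279.6008184, 310.9970803],
    [47.0708635, 92.4599437, 138.874287, 186.0576007, 231.2181381, 280.541207, 322.9836878, 371.3076757, 413.4058622, 459.6208335],
    [60.7346238, 120.3216303, 180.471169, 241.668982, 300.4283548, 376.9639188, 417.5231669, 482.6288981, 554.9740194, 598.0394434],
    [76.7535915, 150.7479245, 227.5125656, 304.3908046, 382.5900043, 451.6034296, 526.0730786, 609.0358776, 679.0268121, 779.6887277],
    [90.4174626, 179.5511355, 269.4099593, 360.2934453, 448.4387573, 537.1406039, 626.7325734, 727.6132992, 807.4767327, 898.307638],
])

# one estimate of K = T/(N*O) per table cell, then summarise
K = times / np.outer(iterations, obligors)
print('K =', K.mean(), '+/-', K.std())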
Create a chart in Excel, add a trendline, and select to have the equation displayed on the chart.
To clarify: You have tabular data below which you want to fit to some function f(O,I)=t?
200 400 600 800 1000 1200 1400 1600 1800 2000
20000 15.627681 30.0968663 44.7592684 60.9037558 75.8267358 90.3718977 105.8749983 121.0030672 135.9191249 150.3331682
40000 31.7202111 62.3603882 97.2085204 128.8111731 156.2443206 186.6374271 218.324317 249.2699288 279.6008184 310.9970803
60000 47.0708635 92.4599437 138.874287 186.0576007 231.2181381 280.541207 322.9836878 371.3076757 413.4058622 459.6208335
80000 60.7346238 120.3216303 180.471169 241.668982 300.4283548 376.9639188 417.5231669 482.6288981 554.9740194 598.0394434
100000 76.7535915 150.7479245 227.5125656 304.3908046 382.5900043 451.6034296 526.0730786 609.0358776 679.0268121 779.6887277
120000 90.4174626 179.5511355 269.4099593 360.2934453 448.4387573 537.1406039 626.7325734 727.6132992 807.4767327 898.307638
A rough guess looks like both O & I are linear. So f is in the form t = aO + bI + c. Plug in a few (O,I,t) and see what a,b,c should be.
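As a sketch of that suggestion, the plane t = aO + bI + c can be fitted with an ordinary least-squares solve; the four (O, I, t) points below are taken from the top-left corner of the table, with rows assumed to be iterations and columns obligors:
import numpy as np

O = np.array([200, 400, 200, 400], dtype=float)
I = np.array([20000, 20000, 40000, 40000], dtype=float)
t = np.array([15.627681, 30.0968663, 31.7202111, 62.3603882])

# least-squares fit of t = a*O + b*I + c
A = np.column_stack([O, I, np.ones_like(O)])
a, b, c = np.linalg.lstsq(A, t, rcond=None)[0]
print(f'a={a:.4g}, b={b:.4g}, c={c:.4g}')
The earlier answer's multiplicative form T = K*N*O is the natural alternative to compare against.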
