I would like to create a historical dataset to which I append all NEW records of an input dataset.
By NEW records I mean both brand-new and modified records: a row counts as unchanged only if it matches an existing row on every column except the 'reference_date' one.
Below is the piece of code that does the comparison on all columns, but I can't figure out how to exclude one column from the comparison.
Inputs:
historical (previous):

ID  A    B         dt_run
1   abc  football  2022-02-14 21:00:00
2   dba  volley    2022-02-14 21:00:00
3   wxy  tennis    2022-02-14 21:00:00
input_df (new data):

ID  A    B
1   abc  football
2   dba  football
3   wxy  tennis
7   abc  tennis
DESIRED OUTPUT (new records marked with *):

ID  A    B         dt_run
1   abc  football  2022-02-14 21:00:00
2   dba  volley    2022-02-15 21:00:00
3   wxy  tennis    2022-02-01 21:00:00
2   dba  football  2022-03-15 14:00:00  *
7   abc  tennis    2022-03-15 14:00:00  *
My code which doesn't work:
@incremental(snapshot_inputs=['input_df'])
@transform(historical=Output(....), input_df=Input(....))
def append(input_df, historical):
    input_df = input_df.dataframe().withColumn('dt_run', F.to_timestamp(F.lit(datetime.now())))
    historical = historical.write_dataframe(input_df.distinct()
                                            .subtract(historical.dataframe('previous', schema=input_df.schema)))
    return historical
I've tested the following script and it works. In this example you don't need to drop/select columns: withColumn creates the missing dt_run column in input_df and overwrites the existing one in historical with the same timestamp, so you can safely subtract the whole dataframes. Since the transform then appends the resulting rows, the old historical rows stay intact with their original timestamps.
from transforms.api import transform, Input, Output, incremental
from pyspark.sql import functions as F
from datetime import datetime


@incremental(snapshot_inputs=['input_df'])
@transform(
    historical=Output("...."),
    input_df=Input("....")
)
def append(input_df, historical):
    now = datetime.now()
    df_inp = input_df.dataframe().withColumn('dt_run', F.to_timestamp(F.lit(now)))
    df_hist = historical.dataframe('previous', df_inp.schema).withColumn('dt_run', F.to_timestamp(F.lit(now)))
    historical.write_dataframe(df_inp.subtract(df_hist))
You can use code similar to what is found here.
Once you have combined the previous output with the new input, you just need to use PySpark to determine which row is the newest and keep only that row, instead of line 19.
A possible implementation for this could use F.row_number, e.g.:
from datetime import datetime

import pyspark.sql.window as W
import pyspark.sql.functions as F
from transforms.api import transform, Input, Output, incremental


@incremental()
@transform(
    input_df=Input('/examples/input_df'),
    output_df=Output('/examples/output_df')
)
def incremental_group_by(input_df, output_df):
    # Get new rows and stamp them with the current run time
    new_input_df = input_df.dataframe().withColumn('dt_run', F.to_timestamp(F.lit(datetime.now())))

    # Union with the old rows from the previous output
    out_schema = new_input_df.schema
    both_df = new_input_df.union(
        output_df.dataframe('previous', schema=out_schema)
    )

    partition_cols = ["A", "B"]
    # Keep only the most recent row per partition
    both_df = both_df.withColumn(
        "row_number",
        F.row_number().over(W.Window.partitionBy(*partition_cols).orderBy(F.desc("dt_run")))
    ).where(F.col("row_number") == 1).drop("row_number")

    # To fully replace the output, we always set the output mode to 'replace'.
    # Checkpoint the dataframe before changing the output mode.
    both_df = both_df.localCheckpoint(eager=True)
    output_df.set_mode('replace')
    output_df.write_dataframe(both_df.select(out_schema.fieldNames()))
Edit: The main difference between my answer and the one above is whether you want to keep multiple rows in the output where 'A' and 'B' are the same. Which one is better depends on your use case!
I have used the unionByName() function along with dropDuplicates():
from datetime import datetime

import pyspark.sql.functions as fx


def append(df_input, df_hist):
    df_union = df_hist.unionByName(df_input, allowMissingColumns=True).dropDuplicates(['ID', 'A', 'B'])
    historical = df_union.withColumn('dt_run', fx.coalesce('dt_run', fx.to_timestamp(fx.lit(datetime.now()))))
    return historical


df_hist = spark.createDataFrame(
    [(1, 'abc', 'football', '2022-02-14 21:00:00'),
     (2, 'dba', 'volley', '2022-02-14 21:00:00'),
     (3, 'wxy', 'tennis', '2022-02-14 21:00:00')],
    schema=['ID', 'A', 'B', 'dt_run'])
df_hist = df_hist.withColumn('dt_run', fx.col('dt_run').cast('timestamp'))

df_input = spark.createDataFrame(
    [(1, 'abc', 'football'), (2, 'dba', 'football'), (3, 'wxy', 'tennis'), (7, 'abc', 'tennis')],
    schema=['ID', 'A', 'B'])

df_historical = append(df_input, df_hist)
df_historical.show(truncate=False)
I have run into a problem transforming a dataframe. I'm trying to widen a table grouped on a datetime column, but can't seem to make it work. I have tried to transpose it and pivot it, but can't really get it into the shape I want.
Example table:
datetime value
2022-04-29T02:00:00.000000000 5
2022-04-29T03:00:00.000000000 6
2022-05-29T02:00:00.000000000 5
2022-05-29T03:00:00.000000000 7
What I want to achieve is:
index date 02:00 03:00
1 2022-04-29 5 6
2 2022-05-29 5 7
The real data has one data point from 00:00 to 20:00 for each day. So I guess a loop would be the way to go to generate the columns.
Does anyone know a way to solve this, or can nudge me in the right direction?
Thanks in advance!
Judging from the details you have provided, I think you are dealing with timeseries data and you have data from different dates acquired at 02:00:00 and 03:00:00. Please correct me if I am wrong.
First we replicate your DataFrame object.
import datetime as dt
from io import StringIO
import pandas as pd
data_str = """2022-04-29T02:00:00.000000000 5
2022-04-29T03:00:00.000000000 6
2022-05-29T02:00:00.000000000 5
2022-05-29T03:00:00.000000000 7"""
df = pd.read_csv(StringIO(data_str), sep=" ", header=None)
df.columns = ["date", "value"]
Now we calculate the unique days on which you acquired data:
unique_days = df["date"].apply(lambda x: dt.datetime.strptime(x[:-3], "%Y-%m-%dT%H:%M:%S.%f").date()).unique()
Here I trimmed the last three zeros from your date, because %f only handles microseconds and the full nanosecond value would complicate parsing. We convert each string to a datetime object, take its date, and keep the unique values.
Now we create a new empty df in the desired form:
new_df = pd.DataFrame(columns=["date", "02:00", "03:00"])
After this we can populate the values:

for day in unique_days:
    new_row_data = [day]  # start the row of 3 elements, which will be inserted into the empty df
    new_row_data.append(df.loc[df["date"] == f"{day}T02:00:00.000000000", "value"].values[0])  # here we find data for 02:00 for that date
    new_row_data.append(df.loc[df["date"] == f"{day}T03:00:00.000000000", "value"].values[0])  # here we find data for 03:00 same day
    new_df.loc[len(new_df)] = new_row_data  # now we insert the row at the last position
this should give you:
date 02:00 03:00
0 2022-04-29 5 6
1 2022-05-29 5 7
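For comparison, a more compact way to get the same wide shape is pandas' built-in pivot. This is only a sketch, assuming the same df as built above (pd.to_datetime can parse the nanosecond timestamps directly, so no trimming is needed):

ts = pd.to_datetime(df["date"])  # parse the ISO timestamps, nanoseconds included
wide = (
    df.assign(day=ts.dt.date, time=ts.dt.strftime("%H:%M"))
      .pivot(index="day", columns="time", values="value")
      .reset_index()
)
print(wide)  # one row per day, one column per time of day (02:00, 03:00, ...)

If the real data has a value for every hour from 00:00 to 20:00, the pivot generates those columns automatically, without a loop.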
I am facing an issue while plotting a graph in matplotlib, as I am unable to convert my data into the exact form needed as input for matplotlib.
Here is my data
date,GOOG,AAPL,FB,BABA,AMZN,GE,AMD,WMT,BAC,GM,T,UAA,SHLD,XOM,RRC,BBY,MA,PFE,JPM,SBUX
1989-12-29,,0.117203,,,,0.352438,3.9375,3.48607,1.752478,,2.365775,,,1.766756,,0.166287,,0.110818,1.827968,
1990-01-02,,0.123853,,,,0.364733,4.125,3.660858,1.766686,,2.398184,,,1.766756,,0.173216,,0.113209,1.835617,
1990-01-03,,0.124684,,,,0.36405,4.0,3.660858,1.780897,,2.356516,,,1.749088,,0.194001,,0.113608,1.896803,
1990-01-04,,0.1251,,,,0.362001,3.9375,3.641439,1.743005,,2.403821,,,1.731422,,0.190537,,0.115402,1.904452,
1990-01-05,,0.125516,,,,0.358586,3.8125,3.602595,1.705114,,2.287973,,,1.722587,,0.190537,,0.114405,1.9121,
1990-01-08,,0.126347,,,,0.360635,3.8125,3.651146,1.714586,,2.326588,,,1.749088,,0.17668,,0.113409,1.9121,
1990-01-09,,0.1251,,,,0.353122,3.875,3.55404,1.714586,,2.273493,,,1.713754,,0.17668,,0.111017,1.850914,
1990-01-10,,0.119697,,,,0.353805,3.8125,3.55404,1.681432,,2.210742,,,1.722587,,0.173216,,0.11301,1.843264,
1990-01-11,,0.11471,,,,0.353805,3.875,3.592883,1.667222,,2.23005,,,1.731422,,0.169751,,0.111814,1.82032,
I have converted it into the following dataframe
AAPL
2016 0.333945
2017 0.330923
2018 0.321857
2019 0.312790
<class 'pandas.core.frame.DataFrame'>
by using the following code:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("portfolio.txt")
companyname = "AAPL"
frames = df.loc[:, df.columns.str.startswith(companyname)]
l1 = frames.loc['2015-6-1':'2019-6-10']
print(l1)
print(type(l1))

plt.plot(l1, label="Company Past Information")
plt.xlabel('Risk Aversion')
plt.ylabel('Optimal Investment Portfolio')
plt.title('Optimal Investment Portfolio For Low, Medium & High')
plt.legend()
plt.show()
After plotting with matplotlib, the output is correct for the companies whose data exists.
But for companies whose data is not available, the graph is plotted wrongly.
GOOG
2016 NaN
2017 NaN
2018 NaN
2019 NaN
Because of this I am unable to plot the graph correctly.
Please help me out with this.
Thanks in advance.
If you're reading your data in from a .csv using pandas you can:

import pandas as pd

df = pd.read_csv(your_csv, parse_dates=[0])  # 0 means your dates are in the first column
Otherwise you can convert your date column to datetime using:
import pandas as pd
df['date'] = pd.to_datetime(df['date'])
When using matplotlib you can then:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(df.iloc[:, 0], df.loc[:, some_column])
plt.show()
./test.csv looks like:
price datetime
1 100 2019-10-10
2 150 2019-11-10
...
import pandas as pd
from datetime import datetime
from datetime import timedelta

csv_df = pd.read_csv('./test.csv')
today = datetime.today()
csv_df['datetime'] = csv_df['expiration_date'].apply(lambda x: pd.to_datetime(x))  # convert `expiration_date` to a datetime Series

def days_until_exp(expiration_date, today):
    diff = (expiration_date - today)
    return [diff]

csv_df['days_until_expiration'] = csv_df['datetime'].apply(lambda x: days_until_exp(csv_df['datetime'], today))
I am trying to iterate over a specific column in my DataFrame labeled csv_df['datetime'], which in each cell has just one value, a date, and do a calculation defined by diff.
Then I want the single value diff to be put into the new Series csv_df['days_until_expiration'].
The problem is, it's calculating values for every row (673 rows) and putting all those values in a list in each row of csv_df['days_until_expiration']. I realize it may be due to the brackets around [diff], but without them I get an error.
In Excel, I would just do something like =SUM(datetime - price) and click and drag down the rows to have it populate a new column. However, I want to do this in Pandas as it's part of a bigger application.
csv_df['datetime'] is a Series, so the x in apply is each cell of that Series. You call apply with a lambda and days_until_exp(), but you don't pass x to it; you pass the whole column instead. Therefore, the result is wrong.
Anyway, without your sample data, I guess that you want to find the sum of csv_df['datetime'] - today(). To do this, you don't need apply. Just do a direct vectorized operation on the Series and sum.
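For completeness, if you did want to keep apply, a minimal sketch (assuming the same csv_df and today as in your question) would pass each cell x into the calculation rather than the whole column:

csv_df['days_until_expiration'] = csv_df['datetime'].apply(lambda x: (x - today).days)  # hypothetical per-cell fix

But the vectorized version below is simpler and faster.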
I made a 2-column dataframe as a sample:
csv_df:
datetime days_until_expiration
0 2019-09-01 NaN
1 2019-09-02 NaN
2 2019-09-03 NaN
The following returns a Series of deltas between csv_df['datetime'] and today(); I guess this is what you want:
td = datetime.datetime.today()
csv_df['days_until_expiration'] = (csv_df['datetime'] - td).dt.days
csv_df:
datetime days_until_expiration
0 2019-09-01 115
1 2019-09-02 116
2 2019-09-03 117
OR:
To find the sum of all deltas and assign the same sum value to csv_df['days_until_expiration']:
csv_df['days_until_expiration'] = (csv_df['datetime'] - td).dt.days.sum()
csv_df:
datetime days_until_expiration
0 2019-09-01 348
1 2019-09-02 348
2 2019-09-03 348
Following the parsing of a large PDF document, I end up with a string in the following format in Python:
Company Name;(Code) at End of Month;Reason for Alteration No. of Shares;Bond Symbol, etc.; Value, etc.; after Alteration;Remarks
Shares;Shares
TANSEISHA CO.,LTD.;(9743)48,424,071;0
MEITEC CORPORATION;(9744)31,300,000;0
TKC Corporation;(9746)26,731,033;0
ASATSU-DK INC.;(9747);42,155,400;Exercise of Subscription Warrants;0;May 2013 Resolution based 1;0Shares
May 2013 Resolution based 2;0Shares
Would it be possible to transform this into a pandas dataframe where the columns are delimited by the ";"? Looking at the above section of the string, my df should look like:
Company Name (Code) at End of Month Reason for Alteration ....
Value,etc after Alteration Remarks Shares .....
As an additional problem, my rows don't always have the same number of fields delimited by ";", so I would need a way to fix the columns (I don't mind setting up a dataframe with, say, 15 columns and deleting afterwards the ones I do not need).
Thanks
This is a nice opportunity to use StringIO to make your result look like an open file handle so that you can just use pd.read_csv:
In [1]: import pandas as pd
In [2]: from io import StringIO
In [3]: s = """Company Name;(Code) at End of Month;Reason for Alteration No. of Shares;Bond Symbol, etc.; Value, etc.; after Alteration;Remarks
...: Shares;Shares
...: TANSEISHA CO.,LTD.;(9743)48,424,071;0
...: MEITEC CORPORATION;(9744)31,300,000;0
...: TKC Corporation;(9746)26,731,033;0
...: ASATSU-DK INC.;(9747);42,155,400;Exercise of Subscription Warrants;0;May 2013 Resolution based 1;0Shares
...: May 2013 Resolution based 2;0Shares"""
In [4]: pd.read_csv(StringIO(s), sep=";")
Out[4]:
  Company Name  (Code) at End of Month  Reason for Alteration No. of Shares  Bond Symbol, etc.  Value, etc.  after Alteration  Remarks
0 Shares Shares NaN NaN NaN NaN NaN
1 TANSEISHA CO.,LTD. (9743)48,424,071 0 NaN NaN NaN NaN
2 MEITEC CORPORATION (9744)31,300,000 0 NaN NaN NaN NaN
3 TKC Corporation (9746)26,731,033 0 NaN NaN NaN NaN
4 ASATSU-DK INC. (9747) 42,155,400 Exercise of Subscription Warrants 0.0 May 2013 Resolution based 1 0Shares
5 May 2013 Resolution based 2 0Shares NaN NaN NaN NaN NaN
Note that it does look like there are some obvious data cleanup problems to tackle from here, but that should at least give you a start.
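For instance, one such cleanup step could be dropping the stray "Shares;Shares" continuation line that ends up as row 0. A minimal sketch, assuming the same s as above:

pd.read_csv(StringIO(s), sep=";", skiprows=[1])  # skip the second physical line ("Shares;Shares")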
I would split your read-in string into a list of lists. Possibly use regex to find the beginning of each record (or at least use something that you know where it shows up; it looks like the (Code) at End of Month field might work) and slice your way through. Something like this:
import re
import pandas as pd

# thestring is the raw text parsed from the PDF (as in the question)

# Start your list of lists off with your expected headers
mystringlist = [["Company Name",
                 "(Code) at End of Month",
                 "Reason for Alteration",
                 "Value,etc",
                 "after Alteration",
                 "Remarks Shares"]]

# This will be used to store the start index of each record
indexlist = []

# A recursive function to find the start location of each record.
# It expects a string of "1"s and "0"s.
def find_start(flags, startloc=0):
    if startloc >= len(flags):
        return
    foundindex = flags.find("1", startloc)
    if foundindex == -1:
        return
    indexlist.append(foundindex)
    return find_start(flags, foundindex + 1)

# Split on your delimiter
mystring = thestring.split(";")

# Use a list comprehension to make your string of "1"s and "0"s
# based on the location of a fixed, regular-expressible record field
stringloc = "".join(["1" if re.match(r"\(\d+\)\d+,\d+,\d+", x) else "0" for x in mystring])

find_start(stringloc)

# Make your list of lists based on the found indexes.
# We subtract 1 from the index position because we want the element
# that immediately precedes the element we find (it's an easier regex
# to write when it's a consistent structure).
for x in indexlist:
    if indexlist.index(x) + 1 != len(indexlist):
        mystringlist.append(mystring[x - 1:indexlist[indexlist.index(x) + 1] - 1])

# Turn mystringlist into a data frame
mydf = pd.DataFrame(mystringlist)