I'm using Microsoft Azure Machine Learning Studio to run an experiment where I use previous analytics captured about a user (at a time, on a day) to try to predict their next action (based on day and time) so that I can adjust the UI accordingly. So if a user normally visits a certain page every Thursday at 1pm, I would like to predict that behaviour.
Warning - I am a complete novice with ML, but have watched quite a few videos and worked through tutorials like the movie recommendations example.
I have a csv dataset with userid,action,datetime and would like to train a Matchbox recommendation model, which, from my research, appears to be the best model to use. However, I can't see a way to use date/time in the training. The idea is that if I could pass in a userid and the date, the recommendation model should be able to give me a probable result of what that user is most likely to do.
I get results from the predictive endpoint, but the training endpoint gives the following error:
{
  "error": {
    "code": "ModuleExecutionError",
    "message": "Module execution encountered an error.",
    "details": [
      {
        "code": "18",
        "target": "Train Matchbox Recommender",
        "message": "Error 0018: Training dataset of user-item-rating triples contains invalid data."
      }
    ]
  }
}
Here is a link to a public version of the experiment
Any help would be appreciated.
Thanks.
Maybe this answer could be helpful; you may also take a look at this one, where you can read:
The problem is probably with the range of rating data. There's an upper limit for rating range, because the training gets expensive if the range between smallest and largest rating is too large.
[...]
One option would be to scale the ratings to a narrower range.
According to this MSDN page, note that you cannot have a gap greater than 100 between the min and max rating.
So you have to pre-process your csv column data (userid, action, datetime, etc.) to keep every column in the [0, 99] range.
Please see below a Python implementation (to share the logic):
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
big_gap_arr = [-250,-2350,850,-120,-1235,3212,1,5,65,48,265,1204,65,23,45,895,5000,3,325,3244,5482] # data with a big gap
abs_min = abs(min(big_gap_arr)) # absolute value of the minimum
max_diff = max(big_gap_arr) + abs_min # maximal difference across the range
specific_range_arr = []
for each_value in big_gap_arr:
    new_value = 99.0 * (abs_min + each_value) / max_diff # corresponding value in the [0, 99] range
    specific_range_arr.append(new_value)
print(specific_range_arr) # computed data => all in range [0, 99]
Which gives you:
[26.54494382022472, 0.0, 40.449438202247194, 28.18820224719101, 14.094101123595506, 70.3061797752809, 29.71769662921348, 29.76825842696629, 30.526685393258425, 30.31179775280899, 33.05477528089887, 44.924157303370784, 30.526685393258425, 29.995786516853933, 30.27387640449438, 41.01825842696629, 92.90730337078652, 29.742977528089888, 33.813202247191015, 70.71067415730337, 99.0]
Note that all data are now in the [0,99] range
Following this process:
User id could be a float instead of an integer
Action is an integer (if you have fewer than 100 actions) or a float (if you have more than 100 actions)
Datetime will be split into two integers (or one integer and one float); please see below:
Concerning:
(A) way to use date/time in the training
You may split your datetime into two columns, something like:
one column for the weekday:
0: Sunday
1: Monday
2: Tuesday
[...]
6: Saturday
one column for the time in the day:
0: Between 00:00 & 00:15
1: Between 00:15 & 00:30
2: Between 00:30 & 00:45
[...]
95 : Between 23:45 & 00:00
If you need better granularity (here it is a 15-minute window), you may also use a float for the time column.
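Here is a minimal sketch of that split, assuming the csv has been loaded into a pandas DataFrame with a parsed datetime column (the file and column names are hypothetical):

import pandas as pd

# Hypothetical input: the userid,action,datetime csv described in the question
df = pd.read_csv("user_actions.csv", parse_dates=["datetime"])

# Weekday column; note pandas uses Monday=0 ... Sunday=6, not the
# Sunday=0 convention listed above, so remap if the convention matters
df["weekday"] = df["datetime"].dt.dayofweek

# Time-of-day column: index of the 15-minute window, in [0, 95]
df["timeslot"] = df["datetime"].dt.hour * 4 + df["datetime"].dt.minute // 15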
From messing with this for a while, I think I see where the issue may lie: the first three inputs of the Train Matchbox Recommender would need to be filled in for an accurate prediction. I'll include screenshots of the sample for recommending restaurants as well.
The first input would be the dataset consisting of the user, item, and rating.
The second input would be the features of each user.
And the third input would be the features of each item (restaurants in this case).
So to help with the date/time issue, I'm wondering if the data would need to be munged to match something similar to the restaurant and user data.
I know it's not much, but I hope it helps lead you down the right track.
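If it helps, here is a rough, hypothetical sketch of how the userid,action,datetime csv might be munged into the first two inputs; treating a scaled visit count as the rating is my assumption, not something taken from the restaurant sample:

import pandas as pd

df = pd.read_csv("user_actions.csv", parse_dates=["datetime"])  # hypothetical file name

# Input 1: user-item-rating triples, where the "rating" is the visit count
# per (user, action) pair, scaled into [0, 99] as discussed earlier
triples = df.groupby(["userid", "action"]).size().reset_index(name="rating")
triples["rating"] = (99 * triples["rating"] / triples["rating"].max()).round()

# Input 2: user features, e.g. the weekday each user is most active on
df["weekday"] = df["datetime"].dt.dayofweek
user_features = df.groupby("userid")["weekday"].agg(lambda s: s.mode()[0]).reset_index()

# Input 3 (item features) would be built the same way, per action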
I am interested in which processes/activities contribute most to the Life Cycle Impact Assessment (LCIA) that I am conducting. For this, I run a contribution analysis (see code below). To crosscheck the results of my contribution analysis and to ensure that I get everything right, I wanted to compare the returned contributions with the impact assessment result (lca.score).
The documentation of ca.annotated_top_processes(lca) says: "Returns a list of tuples: (lca score, supply, activity)."
In my understanding, lca.score should be the same value as the sum of all the first values in the tuples that are returned by ca.annotated_top_processes(lca) (the printed values). However, this is not the case. What am I missing? Is there some sort of cut-off applied or did I misunderstand something?
import bw2data
import bw2analyzer as bwa

random_act = db_ei381.random()  # db_ei381: database object defined elsewhere
lca = bw2data.LCA(
    {random_act: 1},
    ('ReCiPe Midpoint (H) V1.13', 'water depletion', 'WDP')
)
lca.lci()
lca.lcia()
print(lca.score)

# %% Contribution analysis
ca = bwa.ContributionAnalysis()
contributions = ca.annotated_top_processes(lca)
print(sum([i[0] for i in contributions]))
It is not well documented, but you can pass an argument limit that specifies the number of activities considered in the contribution analysis. The default value, I think, is 25. The list is sorted so that the most important activities come first. If you write something like this, you should see how the result converges to the total score as the number of activities increases:
import matplotlib.pyplot as plt

cutoff = [25, 50, 100, 500, 1000, 1200]
scores = []
for n in cutoff:
    contributions = ca.annotated_top_processes(lca, limit=n)
    contr_sum = sum([i[0] for i in contributions])
    scores.append(contr_sum)
plt.plot(cutoff, scores)
plt.axhline(lca.score, ls='--', color='r');
I am trying to create a month-frequency index for time series analysis.
The date variable is a timestamp running from 2018-10-18 10:50:10 to 2019-11-25 15:09:33. I am getting the error in the subject line.
I tried removing the nulls, but that did not help.
I'm new to Python programming; any help is appreciated.
import pandas as pd

df = pd.read_excel("Source_Data.xlsx", index_col="activation date", parse_dates=True)
df.rename(
    columns={
        "activation date": "Date",
        "Estimate Volumes in HL (ABI)": "Volume"
    },
    inplace=True
)
dataset = df.filter(["Date", "Volume"], axis=1)
dataset.index.freq = "MS"
Solution:
dataset = dataset.asfreq('MS')
Your dataset probably has missing months. You have to fill in every instance of the chosen frequency using asfreq; it will then set freq='MS' automatically. You can also set dataset.index.freq = "MS" yourself, but that is done for you once you apply asfreq('MS'). (Note that asfreq returns a new object, and a plain DatetimeIndex has no asfreq method, hence calling it on the DataFrame itself.)
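Here is a minimal sketch of what asfreq does, using toy data rather than your file:

import pandas as pd

# Toy series with a missing month (2019-02-01 is absent)
idx = pd.to_datetime(["2019-01-01", "2019-03-01", "2019-04-01"])
s = pd.Series([10.0, 30.0, 40.0], index=idx)
print(s.index.freq)  # None: the gap prevents a frequency from being set

s = s.asfreq("MS")   # inserts the missing month-start row (value becomes NaN)
print(s.index.freq)  # the MS (month start) frequency is now set automatically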
I'm trying to locate tremor, which is a type of earthquake with smaller amplitude. I use grid search, a method that finds the coordinate where the difference between the theoretical and observed differential times of seismic-wave arrivals becomes minimal.
The code I made is as follows. First I defined two functions: one that calculates the distance between the earthquake source and each grid point, and one that calculates the travel time of seismic waves using obspy.
import math

def distance(a, i):
    return math.sqrt(((ste[a].stats.sac.stla - la[i])**2) + ((ste[a].stats.sac.stlo - lo[i])**2))

def traveltime(a):
    return model.get_travel_times(source_depth_in_km=35, distance_in_degree=a,
                                  phase_list=["S"], receiver_depth_in_km=0)[0].time
Then I conducted the grid search using the following code.
di = [(la[i], lo[i], distance(a, i), distance(b, i)) for i in range(len(lo))
      for a in range(len(ste))
      for b in range(len(ste)) if a < b]
didf = pd.DataFrame(di)
latot = didf[0]
lotot = didf[1]
dia = didf[2]
dib = didf[3]
tt = []
for i in range(len(di)):
    try:
        tt.append((latot[i], lotot[i], traveltime(dia[i]) - traveltime(dib[i])))
    except IndexError:
        continue
ttdf = pd.DataFrame(tt)
final = [(win[j], ttdf[0][i], ttdf[1][i], (ttdf[2][i] - shift[j])**2) for i in range(len(ttdf))
         for j in range(len(ccdf))]
where la and lo are lists of latitude and longitude coordinates at 0.01-degree intervals, and ste is the list of east-component seismograms from each station. I have to build the list final to proceed to the next step.
However, the problem is that the three code segments above take far too long to run. Moreover, the result I get after tens of hours of computation is an 'out of memory' error. Is there any solution that can reduce both time and memory?
Without access to your dataset, it's a little difficult to debug, but here are a few suggestions for you.
for i in range(len(di)):
    try:
        tt.append((latot[i], lotot[i], traveltime(dia[i]) - traveltime(dib[i])))
    except IndexError:
        continue
• Given the size of these lists, I think the garbage collector might be slowing down this for loop; you might consider turning it off for the duration of the loop with gc.disable() (see the sketch after this list).
• In theory, the append call shouldn't be the source of your performance problems, since it over-allocates:
/* This over-allocates proportional to the list size, making room
* for additional growth. The over-allocation is mild, but is
* enough to give linear-time amortized behavior over a long
* sequence of appends() in the presence of a poorly-performing
* system realloc().
* The growth pattern is: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
*/
new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);
but you already know the size of the array, so you might consider using numpy.zeros() to preallocate it before the loop and then use the index to address each element directly (also shown in the sketch below). Alternatively, you could just use list comprehensions, as you did earlier, and avoid the problem altogether.
• I see that you've tagged the question with python-3.x, so range() shouldn't be an issue like it was in 2.x (otherwise you would want to consider using xrange()).
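Here is a minimal sketch combining both suggestions, reusing the names from your question (di, latot, lotot, dia, dib, traveltime); note that rows which raise IndexError stay as zeros instead of being skipped, which differs slightly from the append version:

import gc
import numpy as np

# Preallocate: the number of rows is known up front
tt = np.zeros((len(di), 3))  # columns: latitude, longitude, differential travel time

gc.disable()  # pause the garbage collector for the hot loop
for i in range(len(di)):
    try:
        tt[i] = (latot[i], lotot[i], traveltime(dia[i]) - traveltime(dib[i]))
    except IndexError:
        continue  # row i stays all-zero; filter such rows out afterwards if needed
gc.enable()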
If you update your question with more details, I could probably provide a more detailed answer...hope this helps.
I am having a bit of an issue with pandas's rolling function and I'm not quite sure where I'm going wrong. If I mock up two test series of numbers:
df_index = pd.date_range(start='1990-01-01', end ='2010-01-01', freq='D')
test_df = pd.DataFrame(index=df_index)
test_df['Series1'] = np.random.randn(len(df_index))
test_df['Series2'] = np.random.randn(len(df_index))
Then it's easy to have a look at their rolling annual correlation:
test_df['Series1'].rolling(365).corr(test_df['Series2']).plot()
which produces:
All good so far. If I then try to do the same thing using a datetime offset:
test_df['Series1'].rolling('365D').corr(test_df['Series2']).plot()
I get a wildly different (and obviously wrong) result:
Is there something wrong with pandas or is there something wrong with me?
Thanks in advance for any light you can shed on this troubling conundrum.
It's very tricky; I think the behavior of the window as an int and as an offset is different:
New in version 0.19.0 are the ability to pass an offset (or
convertible) to a .rolling() method and have it produce variable sized
windows based on the passed time window. For each time point, this
includes all preceding values occurring within the indicated time
delta.
This can be particularly useful for a non-regular time frequency index.
You should checkout the doc of Time-aware Rolling.
r1 = test_df['Series1'].rolling(window=365) # has default `min_periods=365`
r2 = test_df['Series1'].rolling(window='365D') # has default `min_periods=1`
r3 = test_df['Series1'].rolling(window=365, min_periods=1)
r1.corr(test_df['Series2']).plot()
r2.corr(test_df['Series2']).plot()
r3.corr(test_df['Series2']).plot()
This code will produce similarly shaped plots for r2.corr().plot() and r3.corr().plot(), but note that the calculated values still differ: compare r2.corr(test_df['Series2']) with r3.corr(test_df['Series2']) element-wise.
I think for a regular time-frequency index, you should just stick with r1.
This is mainly because the results of rolling(365) and rolling('365D') are different.
For example
sub = test_df.head()
sub['Series2'].rolling(2).sum()
Out[15]:
1990-01-01 NaN
1990-01-02 -0.355230
1990-01-03 0.844281
1990-01-04 2.515529
1990-01-05 1.508412
sub['Series2'].rolling('2D').sum()
Out[16]:
1990-01-01 -0.043692
1990-01-02 -0.355230
1990-01-03 0.844281
1990-01-04 2.515529
1990-01-05 1.508412
Since there are a lot of NaNs in rolling(365), the correlations of the two series computed the two ways are quite different.
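A quick way to see the min_periods effect is to count the NaNs each variant produces; the counts suggested in the comment are what I would expect from the defaults, not something I have verified across pandas versions:

# rolling(365) should leave the first 364 results as NaN (min_periods=365),
# while rolling('365D') should produce almost none (min_periods=1)
print(test_df['Series1'].rolling(365).corr(test_df['Series2']).isna().sum())
print(test_df['Series1'].rolling('365D').corr(test_df['Series2']).isna().sum())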
I'm trying to predict stock prices using sklearn. I'm new to prediction. I tried the example from sklearn for stock prediction with a Gaussian HMM. But predict gives a state sequence, which overlays the price, and it also takes points from the given input close prices. My question is: how do I generate the next 10 prices?
You will always use the last state to predict the next state, so let's add 10 days' worth of inputs by changing the end date to the 23rd:
date2 = datetime.date(2012, 1, 23)
You can double-check the rest of the code to make sure I am not actually using future data for the prediction. The rest of these lines can be added to the bottom of the file. First we want to find out what the expected return is for a given state. The model.means_ array has returns, but those were the returns that got us to this state, not the future returns, which are what you want. To get the future returns, we consider the probability of moving to any one of the 5 states and what the return of each of those states is. We get the probability of going to any particular state from the model.transmat_ matrix, and for the return of each state we use the model.means_ values. We take the dot product to get the expected return for a particular state. Then we remove the volume data (you can leave it in if you want, but you seemed most interested in future prices).
expected_returns_and_volumes = np.dot(model.transmat_, model.means_)
returns_and_volumes_columnwise = list(zip(*expected_returns_and_volumes))  # list() needed on Python 3
returns = returns_and_volumes_columnwise[0]
If you print the value of returns[0], you'll see the expected return in dollars for state 0, returns[1] for state 1, etc. Now, given a day and a state, we want to predict the price for tomorrow. You said 10 days, so let's use that for lastN.
predicted_prices = []
lastN = 10
for idx in range(lastN):  # range, not xrange, on Python 3
    state = hidden_states[-lastN + idx]
    current_price = quotes[-lastN + idx][2]
    current_date = datetime.date.fromordinal(dates[-lastN + idx])
    predicted_date = current_date + datetime.timedelta(days=1)
    predicted_prices.append((predicted_date, current_price + returns[state]))
print(predicted_prices)
If you were running this in "production" you would set date2 to the last date you have, and then lastN would be 1. Note that I don't take weekends into account for the predicted_date.
This is a fun exercise, but you probably wouldn't run this in production, hence the quotes. First, the time series is the raw price; it should really be percentage returns or log returns. Also, there is no justification for picking 5 states for the HMM, or for an HMM even being a good fit for this kind of problem, which I doubt. They probably just picked it as an example. I think the other sklearn example using PCA is much more interesting.
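For illustration, here is a minimal sketch of the log-return transformation mentioned above, on toy prices rather than the example's data:

import numpy as np

close = np.array([100.0, 101.5, 100.8, 102.3])  # toy closing prices
log_returns = np.diff(np.log(close))            # one fewer element than close

# Turning a predicted log return back into a price; the mean here is just
# a stand-in for whatever the model would actually predict
predicted_log_return = log_returns.mean()
next_price = close[-1] * np.exp(predicted_log_return)
print(next_price)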