Training a neural network and saving the result - python-3.x

I think this is a simple question, but not for me. There is a table in df:
Date        X1  X2  Y1
07.02.2019   5   1   1
08.02.2019   6   2   1
09.02.2019   1   3   0
10.02.2019   4   4   1
11.02.2019   1   1   0
12.02.2019   4   2   1
13.02.2019   5   5   1
14.02.2019   6   5   1
15.02.2019   1   1   0
16.02.2019   4   5   1
17.02.2019   1   2   0
18.02.2019   1   1
19.02.2019   2   1
20.02.2019   3   2
21.02.2019   4  14
I need to build a neural network that predicts Y1 from the parameters X1 and X2, then apply it to the rows with a date greater than 17.02.2019 and save the prediction result in a separate df2.
import pandas as pd
import numpy as np
import re
from sklearn.neural_network import MLPClassifier
df = pd.read_csv("ob.csv", encoding = 'cp1251', sep = ';')
df['Date'] = pd.to_datetime(df['Date'], format='%d.%m.%Y')
startdate = pd.to_datetime('2019-02-17')
X = ['X1', 'X2'] ????
y = ['Y1'] ????
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(x, y)
clf.predict(???????) ????? df2 = ????
Where ???? appears, I do not know how to set the conditions correctly.

import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("ob.csv", encoding='cp1251', sep=';')
df['Date'] = pd.to_datetime(df['Date'], format='%d.%m.%Y')
startdate = pd.to_datetime('2019-02-17')

# rows up to and including 17.02.2019 have a known Y1 and are used for training;
# the later rows are the ones to predict
train = df[df['Date'] <= startdate]
test = df[df['Date'] > startdate]

X_train = train[['X1', 'X2']]
y_train = train['Y1']  # a 1-D Series avoids the column-vector warning in fit()
X_test = test[['X1', 'X2']]

clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X_train, y_train)

df2 = pd.DataFrame(clf.predict(X_test))
df2.to_csv('prediction.csv')
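If you also want the dates next to the predicted values (an optional extension, not part of the answer above), one way, sketched here on the assumption that the column names are unchanged, is to build df2 from the test rows:
# hypothetical extension: store predictions together with their dates
df2 = test[['Date']].copy()
df2['Y1_pred'] = clf.predict(X_test)
df2.to_csv('prediction.csv', index=False)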

Related

Argument must be a string or a number issue, Not 'Type' - Pyspark

Update:
So I have been looking into the issue; the problem is with the scikit-multiflow DataStream. In the last quarter of the code, stream_clf.partial_fit(X, y, classes=stream.target_values) expects the class values in stream.target_values to be numbers or strings, but stream.target_values is returning a dtype instead. When I print or loop over stream.target_values I get this:
I have tried to do conversions etc., but still to no use. Can someone please help here?
Initial Problem
I am running a piece of code (took inspiration from here). It works perfectly well in a vanilla Python environment.
But if I run this code, after certain modifications, in Apache Spark using PySpark, I get the following error:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'type'
I have tried every possible way to trace the issue but everything looks alright. The error arises from the last line of the code, where the Hoeffding tree is called for prediction. It expects an ndarray, and the type of the X variable is also ndarray. I am not sure what is triggering the issue. Can someone please help or direct me to the right trace?
Complete error traceback:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-52-1310132c88db> in <module>
30 D3_win.addInstance(X,y)
31 xx = np.array(X,dtype='float64')
---> 32 y_hat = stream_clf.predict(xx)
33
34
~/conceptDrift/projectTest/lib/python3.5/site-packages/skmultiflow/trees/hoeffding_tree.py in predict(self, X)
1068 r, _ = get_dimensions(X)
1069 predictions = []
-> 1070 y_proba = self.predict_proba(X)
1071 for i in range(r):
1072 index = np.argmax(y_proba[i])
~/conceptDrift/projectTest/lib/python3.5/site-packages/skmultiflow/trees/hoeffding_tree.py in predict_proba(self, X)
1099 votes = normalize_values_in_dict(votes, inplace=False)
1100 if self.classes is not None:
-> 1101 y_proba = np.zeros(int(max(self.classes)) + 1)
1102 else:
1103 y_proba = np.zeros(int(max(votes.keys())) + 1)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'type'
Code
import findspark
findspark.init()
import pyspark as ps
import warnings
from pyspark.sql import functions as fn
import sys
from pyspark import SparkContext, SparkConf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score as AUC
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
from skmultiflow.trees.hoeffding_tree import HoeffdingTree
from skmultiflow.data.data_stream import DataStream
import time

def drift_detector(S, T, threshold=0.75):
    T = pd.DataFrame(T)
    #print(T)
    S = pd.DataFrame(S)
    # Give slack variable in_target which is 1 for old and 0 for new
    T['in_target'] = 0  # in target set
    S['in_target'] = 1  # in source set
    # Combine source and target with new slack variable
    ST = pd.concat([T, S], ignore_index=True, axis=0)
    labels = ST['in_target'].values
    ST = ST.drop('in_target', axis=1).values
    # You can use any classifier for this step. We advise it to be a simple one as we want to see whether source
    # and target differ not to classify them.
    clf = LogisticRegression(solver='liblinear')
    predictions = np.zeros(labels.shape)
    # Divide ST into two equal chunks
    # Train LR on a chunk and classify the other chunk
    # Calculate AUC for original labels (in_target) and predicted ones
    skf = StratifiedKFold(n_splits=2, shuffle=True)
    for train_idx, test_idx in skf.split(ST, labels):
        X_train, X_test = ST[train_idx], ST[test_idx]
        y_train, y_test = labels[train_idx], labels[test_idx]
        clf.fit(X_train, y_train)
        probs = clf.predict_proba(X_test)[:, 1]
        predictions[test_idx] = probs
    auc_score = AUC(labels, predictions)
    print(auc_score)
    # Signal drift if AUC is larger than the threshold
    if auc_score > threshold:
        return True
    else:
        return False

class D3():
    def __init__(self, w, rho, dim, auc):
        self.size = int(w*(1+rho))
        self.win_data = np.zeros((self.size, dim))
        self.win_label = np.zeros(self.size)
        self.w = w
        self.rho = rho
        self.dim = dim
        self.auc = auc
        self.drift_count = 0
        self.window_index = 0

    def addInstance(self, X, y):
        if(self.isEmpty()):
            self.win_data[self.window_index] = X
            self.win_label[self.window_index] = y
            self.window_index = self.window_index + 1
        else:
            print("Error: Buffer is full!")

    def isEmpty(self):
        return self.window_index < self.size

    def driftCheck(self):
        if drift_detector(self.win_data[:self.w], self.win_data[self.w:self.size], auc):  # returns true if drift is detected
            self.window_index = int(self.w * self.rho)
            self.win_data = np.roll(self.win_data, -1*self.w, axis=0)
            self.win_label = np.roll(self.win_label, -1*self.w, axis=0)
            self.drift_count = self.drift_count + 1
            return True
        else:
            self.window_index = self.w
            self.win_data = np.roll(self.win_data, -1*(int(self.w*self.rho)), axis=0)
            self.win_label = np.roll(self.win_label, -1*(int(self.w*self.rho)), axis=0)
            return False

    def getCurrentData(self):
        return self.win_data[:self.window_index]

    def getCurrentLabels(self):
        return self.win_label[:self.window_index]

def select_data(x):
    x = "/user/hadoop1/tellus/sea_1.csv"
    peopleDF = spark.read.csv(x, header=True)
    df = peopleDF.toPandas()
    scaler = MinMaxScaler()
    df.iloc[:, 0:df.shape[1]-1] = scaler.fit_transform(df.iloc[:, 0:df.shape[1]-1])
    return df

def check_true(y, y_hat):
    if(y == y_hat):
        return 1
    else:
        return 0

df = select_data("/user/hadoop1/tellus/sea_1.csv")
stream = DataStream(df)
stream.prepare_for_use()
stream_clf = HoeffdingTree()
w = int(2000)
rho = float(0.4)
auc = float(0.60)
# In[ ]:
D3_win = D3(w, rho, stream.n_features, auc)
stream_acc = []
stream_record = []
stream_true = 0
i = 0
start = time.time()
X, y = stream.next_sample(int(w*rho))
stream_clf.partial_fit(X, y, classes=stream.target_values)
while(stream.has_more_samples()):
    X, y = stream.next_sample()
    if D3_win.isEmpty():
        D3_win.addInstance(X, y)
        y_hat = stream_clf.predict(X)
The problem was with the select_data() function: the data types of the variables were being changed during execution. This issue is fixed now.
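The post does not show the actual change. As a hedged sketch of one plausible fix (assuming the root cause was that spark.read.csv loaded every column as a string, leaving object dtypes in the pandas DataFrame), the columns could be cast back to numeric inside select_data():
def select_data(path):
    # let Spark infer numeric column types instead of reading everything as strings
    peopleDF = spark.read.csv(path, header=True, inferSchema=True)
    df = peopleDF.toPandas()
    # make sure every column is numeric before scaling and streaming
    df = df.apply(pd.to_numeric, errors='coerce')
    scaler = MinMaxScaler()
    df.iloc[:, :-1] = scaler.fit_transform(df.iloc[:, :-1])
    return df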

I am using Python to implement linear regression on some dataset, but at this step I am continuously getting this error

I wrote this linear regression code and it is now giving me an error in the iterate_weights function:
IndexError: index 200 is out of bounds for axis 0 with size 200
I don't know what is wrong. Also, when I update my weights they come out the same as the random initial values above. I am using a Jupyter notebook.
Are there any mistakes?
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

# importing dataset
data = pd.read_csv('F:\WOC\linearreg.csv')
print(data.shape)
data.head()
data_arr = np.genfromtxt("F:\WOC\linearreg.csv", delimiter=",", skip_header=1)
print(data_arr)

# In[3]:
# collecting x and y
x_train = data_arr[:, 1:4]
y_train = data_arr[:, 4:5]
print(x_train)
print(y_train)

# In[4]:
weights_shape = y_train.shape
print(weights_shape)
r, c = x_train.shape
print(r, c)
w = np.random.randn(c, 1)
w_num = len(w)
print(w)

# In[5]:
h = np.dot(x_train, w)

def cost_function():
    print(h)
    j = (1/2*r)*((h-y_train)**2)
    print('j', j)

cost_function()

# In[6]:
def iterate_weights():
    L = 0.01
    iterations = 1000
    for iterations_proceed in range(1, 1001):
        for i in range(w_num):
            for m in range(1, 201):
                w[i, 0] = w[i, 0] - L*((1/r)*(sum(h-y_train)*(x_train[m, i])))
    print(w)

iterate_weights()

# In[7]:
h = np.dot(x_train, w)

def cost_function1():
    j = np.sum((1/2*r)*((h-y_train)**2))
    print(j)
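No answer was posted for this question. As a hedged guess at the reported IndexError: the inner loop uses range(1, 201) to index x_train, but with 200 rows the valid row indices are 0 through 199, so x_train[200, i] is out of bounds. A minimal, self-contained illustration of that off-by-one:
import numpy as np

x = np.zeros((200, 3))        # 200 rows -> valid row indices are 0..199
# x[200, 0]                   # would raise: index 200 is out of bounds for axis 0 with size 200
for m in range(x.shape[0]):   # range(200) stops at 199, so every access stays in bounds
    _ = x[m, 0]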

Cannot convert string into float

Sales Discount Profit Product ID
0 0.050090 0.000000 0.262335 FUR-ADV-10000002
1 0.110793 0.000000 0.260662 FUR-ADV-10000108
2 0.309561 0.864121 0.241432 FUR-ADV-10000183
3 0.039217 0.591474 0.260687 FUR-ADV-10000188
4 0.070205 0.000000 0.263628 FUR-ADV-10000190
5 0.697873 0.000000 0.281162 FUR-ADV-10000571
6 0.064918 0.000000 0.261285 FUR-ADV-10000600
7 0.091950 0.000000 0.262946 FUR-ADV-10000847
8 0.056013 0.318384 0.257952 FUR-ADV-10001283
9 0.304472 0.318384 0.265739 FUR-ADV-10001440
10 0.046234 0.318384 0.261058 FUR-ADV-10001659
I am using the K elbow method to find the right number of clusters.
Using the elbow method to find the optimal number of clusters
import matplotlib.pyplot as plt

def kelbow(final_df, k):
    from sklearn.cluster import KMeans
    x = []
    for i in range(1, k):
        kmeans = KMeans(n_clusters=i)
        kmeans.fit(final_df)
        x.append(kmeans.inertia_)
    plt.plot(range(1,k), 30)
    plt.title('The elbow method')
    plt.xlabel('The number of clusters')
    plt.ylabel('WCSS')
    plt.show()
    return x
I call the function:
kelbow(final_df, 30)
but the code throws this error:
ValueError: could not convert string to float: 'TEC-STA-10004927'
How can I find the clusters?
Make dummy variables.
final_df = pd.get_dummies(final_df, columns=['ProductID'], dtype='int64')
# get_dummies with columns= already replaces 'ProductID' with its indicator columns,
# so a separate drop of 'ProductID' afterwards is unnecessary (it would raise a KeyError).
This should work for you:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

def kelbow(df, k):
    x = []
    # one-hot encode every non-numeric (object) column so KMeans only sees numbers
    final_df = pd.get_dummies(df, columns=df.select_dtypes(['object']).columns)
    for i in range(1, k):
        kmeans = KMeans(n_clusters=i)
        kmeans.fit(final_df)
        x.append(kmeans.inertia_)
    plt.plot(range(1, k), x)  # plot the inertia values, not a constant
    plt.title('The elbow method')
    plt.xlabel('The number of clusters')
    plt.ylabel('WCSS')
    plt.show()
    return x

Trouble Creating Testing/Training Features To Oversample the Minority

I am trying to recreate a tutorial made by Nick Becker. It is located at https://beckernick.github.io/oversampling-modeling/
The code he has posted works when you copy and paste it into a Jupyter Notebook.
I am trying to recreate this with a different data set that is also highly imbalanced. It is an Airbnb data set provided by Inside Airbnb, which I have manipulated and re-uploaded here: https://drive.google.com/file/d/0B4EEyCnbIf1fLTd2UU5SWVNxV29oNHVkc3ZyY2JId3UyRWtv/view?usp=drivesdk
I have created a notebook in which I drop rows with null values, average the review score, and map 1, 2, 3 to 1 (Negative) and 4, 5 to 0 (Positive).
I then followed the exact steps provided in Nick Becker's model, and when I get to the "Creating the Training and Test Sets" portion I get an error.
**** I have added an additional question toward the end because the error was solved in the comments****
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-21-1c632a59b870> in <module>
1 training_features, test_features, \
----> 2 training_target, test_target, = train_test_split(price_relevant_enconded.drop(['average_review_score'], axis=1)
KeyError: "['average_review_score'] not found in axis"
The above is a shortened version of the full error message.
I did notice something in Nick's code: even though he includes "bad_loans" in his model_variables, which he then creates dummies for, when you actually look at the "price_relevant_encoded" dataframe there are no dummy columns created for "bad_loans". My equivalent to "bad_loans" is "average_review_score", and dummies are created for that. I believe that is my problem. The bad part for me is that I do not know how to get around it. My end goal is to get a more realistic prediction model for ratings depending on property type, room type, and neighborhood.
This is the code I have so far:
%matplotlib inline
import pandas as pd
import numpy as np
import nltk
import matplotlib.pyplot as plt
import plotly.express as px
import seaborn as sns
import warnings
import tensorflow as tf
import tensorflow_hub as hub
import bert
import imblearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
from scipy import stats
plt.style.use('seaborn')
warnings.filterwarnings(action='ignore')
output_dir = 'modelOutput'
airbnbdata = pd.read_excel('Z:\\Business\\AA Project\\listings_cleaned_v1.xlsm')
dfclean = airbnbdata
dfclean.iloc[0]
#drop rows with nulls in columns
dfclean = dfclean.dropna(subset=['id'])
dfclean = dfclean.dropna(subset=['listing_url'])
dfclean = dfclean.dropna(subset=['name'])
dfclean = dfclean.dropna(subset=['summary'])
dfclean = dfclean.dropna(subset=['space'])
dfclean = dfclean.dropna(subset=['description'])
dfclean = dfclean.dropna(subset=['host_id'])
dfclean = dfclean.dropna(subset=['host_name'])
dfclean = dfclean.dropna(subset=['host_listings_count'])
dfclean = dfclean.dropna(subset=['neighbourhood_cleansed'])
dfclean = dfclean.dropna(subset=['city'])
dfclean = dfclean.dropna(subset=['state'])
dfclean = dfclean.dropna(subset=['zipcode'])
dfclean = dfclean.dropna(subset=['country'])
dfclean = dfclean.dropna(subset=['latitude'])
dfclean = dfclean.dropna(subset=['longitude'])
dfclean = dfclean.dropna(subset=['property_type'])
dfclean = dfclean.dropna(subset=['room_type'])
dfclean = dfclean.dropna(subset=['price'])
dfclean = dfclean.dropna(subset=['number_of_reviews'])
dfclean = dfclean.dropna(subset=['review_scores_rating'])
dfclean = dfclean.dropna(subset=['average_review_score'])
dfclean = dfclean.dropna(subset=['reviews_per_month'])
#round score rating
dfclean['average_review_score'] = dfclean['average_review_score']/2
dfclean.average_review_score = dfclean.average_review_score.round()
dfclean.neighbourhood_cleansed=dfclean.neighbourhood_cleansed.replace(' ', '_', regex=True)
#pd.Series(' '.join(dfclean.neighbourhood_cleansed).split()).value_counts()[:20]
dfclean.average_review_score[dfclean['average_review_score']== 1] = '1'
dfclean.average_review_score[dfclean['average_review_score']== 2] = '1'
dfclean.average_review_score[dfclean['average_review_score']== 3] = '1'
dfclean.average_review_score[dfclean['average_review_score']== 4] = '0'
dfclean.average_review_score[dfclean['average_review_score']== 5] = '0'
dfclean['average_review_score'].value_counts()/dfclean['average_review_score'].count()
dfclean.average_review_score.value_counts()
model_variables = ['neighbourhood_cleansed', 'property_type','room_type','average_review_score']
price_data_relevent = dfclean[model_variables]
price_relevant_enconded = pd.get_dummies(price_data_relevent)
training_features, test_features, \
training_target, test_target = train_test_split(price_relevant_enconded.drop(['average_review_score'], axis=1),
                                                 price_relevant_enconded['average_review_score'],
                                                 test_size=.15,
                                                 random_state=12)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-21-1c632a59b870> in <module>
1 training_features, test_features, \
----> 2 training_target, test_target, = train_test_split(price_relevant_enconded.drop(['average_review_score'], axis=1),
3 price_relevant_enconded['average_review_score'],
4 test_size = .15,
5 random_state=12)
~\Anaconda3\lib\site-packages\pandas\core\frame.py in drop(self, labels, axis, index, columns, level, inplace, errors)
4115 level=level,
4116 inplace=inplace,
-> 4117 errors=errors,
4118 )
4119
~\Anaconda3\lib\site-packages\pandas\core\generic.py in drop(self, labels, axis, index, columns, level, inplace, errors)
3912 for axis, labels in axes.items():
3913 if labels is not None:
-> 3914 obj = obj._drop_axis(labels, axis, level=level, errors=errors)
3915
3916 if inplace:
~\Anaconda3\lib\site-packages\pandas\core\generic.py in _drop_axis(self, labels, axis, level, errors)
3944 new_axis = axis.drop(labels, level=level, errors=errors)
3945 else:
-> 3946 new_axis = axis.drop(labels, errors=errors)
3947 result = self.reindex(**{axis_name: new_axis})
3948
~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in drop(self, labels, errors)
5338 if mask.any():
5339 if errors != "ignore":
-> 5340 raise KeyError("{} not found in axis".format(labels[mask]))
5341 indexer = indexer[~mask]
5342 return self.delete(indexer)
KeyError: "['average_review_score'] not found in axis"
The output for
for col in price_relevant_enconded.columns:
    print(col)
neighbourhood_cleansed_Acton
neighbourhood_cleansed_Adams-Normandie
neighbourhood_cleansed_Agoura_Hills
neighbourhood_cleansed_Agua_Dulce
neighbourhood_cleansed_Alhambra
neighbourhood_cleansed_Alondra_Park
neighbourhood_cleansed_Altadena
neighbourhood_cleansed_Angeles_Crest
neighbourhood_cleansed_Arcadia
neighbourhood_cleansed_Arleta
neighbourhood_cleansed_Arlington_Heights
neighbourhood_cleansed_Artesia
neighbourhood_cleansed_Athens
neighbourhood_cleansed_Atwater_Village
neighbourhood_cleansed_Avalon
neighbourhood_cleansed_Avocado_Heights
neighbourhood_cleansed_Azusa
neighbourhood_cleansed_Baldwin_Hills/Crenshaw
neighbourhood_cleansed_Baldwin_Park
neighbourhood_cleansed_Bel-Air
neighbourhood_cleansed_Bell
neighbourhood_cleansed_Bell_Gardens
neighbourhood_cleansed_Bellflower
neighbourhood_cleansed_Beverly_Crest
neighbourhood_cleansed_Beverly_Grove
neighbourhood_cleansed_Beverly_Hills
neighbourhood_cleansed_Beverlywood
neighbourhood_cleansed_Boyle_Heights
neighbourhood_cleansed_Bradbury
neighbourhood_cleansed_Brentwood
neighbourhood_cleansed_Broadway-Manchester
neighbourhood_cleansed_Burbank
neighbourhood_cleansed_Calabasas
neighbourhood_cleansed_Canoga_Park
neighbourhood_cleansed_Carson
neighbourhood_cleansed_Carthay
neighbourhood_cleansed_Castaic
neighbourhood_cleansed_Castaic_Canyons
neighbourhood_cleansed_Central-Alameda
neighbourhood_cleansed_Century_City
neighbourhood_cleansed_Cerritos
neighbourhood_cleansed_Charter_Oak
neighbourhood_cleansed_Chatsworth
neighbourhood_cleansed_Chesterfield_Square
neighbourhood_cleansed_Cheviot_Hills
neighbourhood_cleansed_Chinatown
neighbourhood_cleansed_Citrus
neighbourhood_cleansed_Claremont
neighbourhood_cleansed_Commerce
neighbourhood_cleansed_Compton
neighbourhood_cleansed_Covina
neighbourhood_cleansed_Culver_City
neighbourhood_cleansed_Cypress_Park
neighbourhood_cleansed_Del_Aire
neighbourhood_cleansed_Del_Rey
neighbourhood_cleansed_Desert_View_Highlands
neighbourhood_cleansed_Diamond_Bar
neighbourhood_cleansed_Downey
neighbourhood_cleansed_Downtown
neighbourhood_cleansed_Duarte
neighbourhood_cleansed_Eagle_Rock
neighbourhood_cleansed_East_Hollywood
neighbourhood_cleansed_East_La_Mirada
neighbourhood_cleansed_East_Los_Angeles
neighbourhood_cleansed_East_Pasadena
neighbourhood_cleansed_East_San_Gabriel
neighbourhood_cleansed_Echo_Park
neighbourhood_cleansed_El_Monte
neighbourhood_cleansed_El_Segundo
neighbourhood_cleansed_El_Sereno
neighbourhood_cleansed_Elysian_Park
neighbourhood_cleansed_Elysian_Valley
neighbourhood_cleansed_Encino
neighbourhood_cleansed_Exposition_Park
neighbourhood_cleansed_Fairfax
neighbourhood_cleansed_Florence
neighbourhood_cleansed_Florence-Firestone
neighbourhood_cleansed_Gardena
neighbourhood_cleansed_Glassell_Park
neighbourhood_cleansed_Glendale
neighbourhood_cleansed_Glendora
neighbourhood_cleansed_Gramercy_Park
neighbourhood_cleansed_Granada_Hills
neighbourhood_cleansed_Green_Meadows
neighbourhood_cleansed_Green_Valley
neighbourhood_cleansed_Griffith_Park
neighbourhood_cleansed_Hacienda_Heights
neighbourhood_cleansed_Hancock_Park
neighbourhood_cleansed_Harbor_City
neighbourhood_cleansed_Harbor_Gateway
neighbourhood_cleansed_Harvard_Heights
neighbourhood_cleansed_Harvard_Park
neighbourhood_cleansed_Hasley_Canyon
neighbourhood_cleansed_Hawaiian_Gardens
neighbourhood_cleansed_Hawthorne
neighbourhood_cleansed_Hermosa_Beach
neighbourhood_cleansed_Highland_Park
neighbourhood_cleansed_Historic_South-Central
neighbourhood_cleansed_Hollywood
neighbourhood_cleansed_Hollywood_Hills
neighbourhood_cleansed_Hollywood_Hills_West
neighbourhood_cleansed_Huntington_Park
neighbourhood_cleansed_Hyde_Park
neighbourhood_cleansed_Industry
neighbourhood_cleansed_Inglewood
neighbourhood_cleansed_Irwindale
neighbourhood_cleansed_Jefferson_Park
neighbourhood_cleansed_Koreatown
neighbourhood_cleansed_La_Cañada_Flintridge
neighbourhood_cleansed_La_Crescenta-Montrose
neighbourhood_cleansed_La_Habra_Heights
neighbourhood_cleansed_La_Mirada
neighbourhood_cleansed_La_Puente
neighbourhood_cleansed_La_Verne
neighbourhood_cleansed_Ladera_Heights
neighbourhood_cleansed_Lake_Balboa
neighbourhood_cleansed_Lake_Hughes
neighbourhood_cleansed_Lake_Los_Angeles
neighbourhood_cleansed_Lake_View_Terrace
neighbourhood_cleansed_Lakewood
neighbourhood_cleansed_Lancaster
neighbourhood_cleansed_Larchmont
neighbourhood_cleansed_Lawndale
neighbourhood_cleansed_Leimert_Park
neighbourhood_cleansed_Lennox
neighbourhood_cleansed_Leona_Valley
neighbourhood_cleansed_Lincoln_Heights
neighbourhood_cleansed_Lomita
neighbourhood_cleansed_Long_Beach
neighbourhood_cleansed_Lopez/Kagel_Canyons
neighbourhood_cleansed_Los_Feliz
neighbourhood_cleansed_Lynwood
neighbourhood_cleansed_Malibu
neighbourhood_cleansed_Manchester_Square
neighbourhood_cleansed_Manhattan_Beach
neighbourhood_cleansed_Mar_Vista
neighbourhood_cleansed_Marina_del_Rey
neighbourhood_cleansed_Mayflower_Village
neighbourhood_cleansed_Maywood
neighbourhood_cleansed_Mid-City
neighbourhood_cleansed_Mid-Wilshire
neighbourhood_cleansed_Mission_Hills
neighbourhood_cleansed_Monrovia
neighbourhood_cleansed_Montebello
neighbourhood_cleansed_Montecito_Heights
neighbourhood_cleansed_Monterey_Park
neighbourhood_cleansed_Mount_Washington
neighbourhood_cleansed_North_El_Monte
neighbourhood_cleansed_North_Hills
neighbourhood_cleansed_North_Hollywood
neighbourhood_cleansed_North_Whittier
neighbourhood_cleansed_Northeast_Antelope_Valley
neighbourhood_cleansed_Northridge
neighbourhood_cleansed_Northwest_Antelope_Valley
neighbourhood_cleansed_Northwest_Palmdale
neighbourhood_cleansed_Norwalk
neighbourhood_cleansed_Pacific_Palisades
neighbourhood_cleansed_Pacoima
neighbourhood_cleansed_Palmdale
neighbourhood_cleansed_Palms
neighbourhood_cleansed_Palos_Verdes_Estates
neighbourhood_cleansed_Panorama_City
neighbourhood_cleansed_Paramount
neighbourhood_cleansed_Pasadena
neighbourhood_cleansed_Pico-Robertson
neighbourhood_cleansed_Pico-Union
neighbourhood_cleansed_Pico_Rivera
neighbourhood_cleansed_Playa_Vista
neighbourhood_cleansed_Playa_del_Rey
neighbourhood_cleansed_Pomona
neighbourhood_cleansed_Porter_Ranch
neighbourhood_cleansed_Quartz_Hill
neighbourhood_cleansed_Ramona
neighbourhood_cleansed_Rancho_Dominguez
neighbourhood_cleansed_Rancho_Palos_Verdes
neighbourhood_cleansed_Rancho_Park
neighbourhood_cleansed_Redondo_Beach
neighbourhood_cleansed_Reseda
neighbourhood_cleansed_Ridge_Route
neighbourhood_cleansed_Rolling_Hills
neighbourhood_cleansed_Rolling_Hills_Estates
neighbourhood_cleansed_Rosemead
neighbourhood_cleansed_Rowland_Heights
neighbourhood_cleansed_San_Dimas
neighbourhood_cleansed_San_Fernando
neighbourhood_cleansed_San_Gabriel
neighbourhood_cleansed_San_Marino
neighbourhood_cleansed_San_Pasqual
neighbourhood_cleansed_San_Pedro
neighbourhood_cleansed_Santa_Clarita
neighbourhood_cleansed_Santa_Fe_Springs
neighbourhood_cleansed_Santa_Monica
neighbourhood_cleansed_Sawtelle
neighbourhood_cleansed_Sepulveda_Basin
neighbourhood_cleansed_Shadow_Hills
neighbourhood_cleansed_Sherman_Oaks
neighbourhood_cleansed_Sierra_Madre
neighbourhood_cleansed_Signal_Hill
neighbourhood_cleansed_Silver_Lake
neighbourhood_cleansed_South_El_Monte
neighbourhood_cleansed_South_Gate
neighbourhood_cleansed_South_Park
neighbourhood_cleansed_South_Pasadena
neighbourhood_cleansed_South_San_Gabriel
neighbourhood_cleansed_South_San_Jose_Hills
neighbourhood_cleansed_South_Whittier
neighbourhood_cleansed_Southeast_Antelope_Valley
neighbourhood_cleansed_Stevenson_Ranch
neighbourhood_cleansed_Studio_City
neighbourhood_cleansed_Sun_Valley
neighbourhood_cleansed_Sun_Village
neighbourhood_cleansed_Sunland
neighbourhood_cleansed_Sylmar
neighbourhood_cleansed_Tarzana
neighbourhood_cleansed_Temple_City
neighbourhood_cleansed_Toluca_Lake
neighbourhood_cleansed_Topanga
neighbourhood_cleansed_Torrance
neighbourhood_cleansed_Tujunga
neighbourhood_cleansed_Tujunga_Canyons
neighbourhood_cleansed_Unincorporated_Catalina_Island
neighbourhood_cleansed_Unincorporated_Santa_Monica_Mountains
neighbourhood_cleansed_Unincorporated_Santa_Susana_Mountains
neighbourhood_cleansed_Universal_City
neighbourhood_cleansed_University_Park
neighbourhood_cleansed_Val_Verde
neighbourhood_cleansed_Valinda
neighbourhood_cleansed_Valley_Glen
neighbourhood_cleansed_Valley_Village
neighbourhood_cleansed_Van_Nuys
neighbourhood_cleansed_Venice
neighbourhood_cleansed_Vermont-Slauson
neighbourhood_cleansed_Vermont_Knolls
neighbourhood_cleansed_Vermont_Square
neighbourhood_cleansed_Vermont_Vista
neighbourhood_cleansed_Vernon
neighbourhood_cleansed_Veterans_Administration
neighbourhood_cleansed_View_Park-Windsor_Hills
neighbourhood_cleansed_Vincent
neighbourhood_cleansed_Walnut
neighbourhood_cleansed_Watts
neighbourhood_cleansed_West_Adams
neighbourhood_cleansed_West_Carson
neighbourhood_cleansed_West_Covina
neighbourhood_cleansed_West_Hills
neighbourhood_cleansed_West_Hollywood
neighbourhood_cleansed_West_Los_Angeles
neighbourhood_cleansed_West_Puente_Valley
neighbourhood_cleansed_West_Whittier-Los_Nietos
neighbourhood_cleansed_Westchester
neighbourhood_cleansed_Westlake
neighbourhood_cleansed_Westlake_Village
neighbourhood_cleansed_Westmont
neighbourhood_cleansed_Westwood
neighbourhood_cleansed_Whittier
neighbourhood_cleansed_Willowbrook
neighbourhood_cleansed_Wilmington
neighbourhood_cleansed_Windsor_Square
neighbourhood_cleansed_Winnetka
neighbourhood_cleansed_Woodland_Hills
property_type_Aparthotel
property_type_Apartment
property_type_Barn
property_type_Bed and breakfast
property_type_Boat
property_type_Boutique hotel
property_type_Bungalow
property_type_Bus
property_type_Cabin
property_type_Camper/RV
property_type_Campsite
property_type_Casa particular (Cuba)
property_type_Castle
property_type_Chalet
property_type_Condominium
property_type_Cottage
property_type_Dome house
property_type_Dorm
property_type_Earth house
property_type_Farm stay
property_type_Guest suite
property_type_Guesthouse
property_type_Hostel
property_type_Hotel
property_type_House
property_type_Houseboat
property_type_Hut
property_type_Island
property_type_Loft
property_type_Other
property_type_Resort
property_type_Serviced apartment
property_type_Tent
property_type_Tiny house
property_type_Tipi
property_type_Townhouse
property_type_Train
property_type_Treehouse
property_type_Villa
property_type_Yurt
room_type_Entire home/apt
room_type_Hotel room
room_type_Private room
room_type_Shared room
average_review_score_0
average_review_score_1
The output for
price_relevant_enconded.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 27557 entries, 1 to 35953
Columns: 306 entries, neighbourhood_cleansed_Acton to average_review_score_1
dtypes: uint8(306)
memory usage: 8.3 MB
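The fix from the comments is not shown in the post. As one hedged sketch of how the target could be kept out of the dummy encoding (so an 'average_review_score' column survives for the drop/select below), assuming dfclean still holds the cleaned data:
# hypothetical sketch: one-hot encode only the feature columns, keep the target as a single column
feature_cols = ['neighbourhood_cleansed', 'property_type', 'room_type']
price_relevant_enconded = pd.get_dummies(dfclean[feature_cols])
price_relevant_enconded['average_review_score'] = dfclean['average_review_score'].astype(int)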
I continued with the code as follows:
#Create Training and Test Sets
# imports assumed from an earlier notebook cell (not shown in the post)
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

training_features, test_features, \
training_target, test_target = train_test_split(price_relevant_enconded.drop(['average_review_score'], axis=1),
                                                 price_relevant_enconded['average_review_score'],
                                                 test_size=.15,
                                                 random_state=12)
#Oversample minority class on training data.
x_train, x_val, y_train, y_val = train_test_split(training_features, training_target,
                                                  test_size=.1,
                                                  random_state=12)
sm = SMOTE(random_state=12, ratio=1.0)
x_train_res, y_train_res = sm.fit_sample(x_train, y_train)
clf_rf = RandomForestClassifier(n_estimators=25, random_state=12)
clf_rf.fit(x_train_res, y_train_res)
print('Validation Results')
print('Mean Accuracy:', clf_rf.score(x_val, y_val))
print('Recall:', recall_score(y_val, clf_rf.predict(x_val)))
print('\nTest Results')
print('Mean Accuracy:', clf_rf.score(test_features, test_target))
print('Recall:', recall_score(test_target, clf_rf.predict(test_features)))
Validation Results
Mean Accuracy: 0.9709773794280837
Recall: 0.0625
Test Results
Mean Accuracy: 0.9775036284470247
Recall: 0.03225806451612903
Does anyone have any ideas on how I can better optimize my model, or what changes would give more accurate predictions from this data?

Index error index 14238 is out of bounds for axis 0 with size 2

%pylab inline
import numpy as np
import pandas as pd
import random
import time
import scipy
import sklearn.feature_extraction
import pickle
from sklearn.cross_validation import StratifiedKFold
from sklearn.svm import LinearSVC
from sklearn.externals import joblib
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix

bedsizes = {'None': 0,
            'Rest All': 1}
invbedsizes = {v: k for k, v in bedsizes.items()}
model = joblib.load('model_bed_size.pkl')
vocab = pickle.load(open('dictionary', 'rb'))
var = pd.read_csv('Train_variables.csv')
dtest = pd.read_csv('/home/ubuntu/test_null_new.csv', usecols=("Bed_size", "title", "short_description", "long_description", "primary_shelf.all_paths_str", "attributes.all_shelves.0", "attributes.all_shelves.1", "attributes.all_shelves.2", "attributes.all_shelves.3", "attributes.all_shelves.4", "attributes.type.0", "attributes.type.1", "attributes.type.2", "item_id", "last_updated_at"), encoding='ISO-8859-1')
lentest = len(dtest)
vocab = vocab["Vocabulary"].to_dict()
Xall = []
i = 1
for col in var['Variable']:
    vectorizer = CountVectorizer(min_df=1, vocabulary=(vocab[i]), token_pattern='\\b\\w+\\b')
    Xall.append(vectorizer.transform(dtest[col].astype(str)))
    j = i
    i = j + 1
    print(col, 'Done', shape(Xall[-1]))
Xspall = scipy.sparse.hstack(Xall)
X_test_final = scipy.sparse.csr_matrix(Xspall)
print(shape(X_test_final))
ypred = model.decision_function(X_test_final)
ypredc = model.classes_[np.argmax(ypred, axis=0)]
ypredcon = (np.max(ypred, axis=1) + 2.) / 8.
ypredcon[ypredcon < 0.] = 0.
ypredcon[ypredcon > 1.] = 1.
dfinal = pd.DataFrame()
dfinal['item_id '] = dtest['item_id']
dfinal['Predictions'] = ypredc
dfinal['Predictions'].replace(invbedsizes, inplace=True)
dfinal['confidence_score'] = ypredcon
The above code is giving an IndexError saying that index 14328 is out of bounds for axis 0 with size 2.
The error is coming at this line
ypredc = model.classes_[np.argmax(ypred, axis = 0)]
Can anyone help me on this?
Without knowing much about the variables in your code, the error indicates that at
ypred = model.decision_function(X_test_final)
ypredc = model.classes_[np.argmax(ypred, axis = 0)]
error: index 14328 is out of bounds for axis 0 and size 2
model.classes_ has one or more dimensions, and the first dimension has size 2, in other words 2 rows/classes, and possibly many columns.
ypred is probably quite large, and np.argmax(ypred, ...) is the index of its largest value (along axis 0), i.e. 14328.
Maybe the correct use is model.classes_[:, np.argmax...].
You need to look at the shape of ypred and model.classes_, and possibly other variables in this area.
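A short diagnostic sketch along those lines (hypothetical; it assumes ypred and model from the question are in scope, and the axis=1 and sign-of-score variants are only suggestions to try, not a confirmed fix):
print(ypred.shape)           # decision_function output: 1-D for a binary model, 2-D for multiclass
print(model.classes_.shape)  # apparently (2,), hence the "size 2" in the error
# if ypred is 2-D, taking the argmax per sample (axis=1) keeps indices inside model.classes_:
#     ypredc = model.classes_[np.argmax(ypred, axis=1)]
# if ypred is 1-D (binary decision_function), the sign of the score selects the class instead:
#     ypredc = model.classes_[(ypred > 0).astype(int)]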
