I am trying to use the keras-rl2 DQNAgent to solve the taxi problem in OpenAI Gym.
For a quick refresher, please find it in the Gym documentation, thank you!
https://www.gymlibrary.dev/environments/toy_text/taxi/
Here is my process:
0. Open the Taxi-v3 environment from gym
1. Build the deep learning model with the Keras Sequential API, using Embedding and Dense layers
2. Import the epsilon-greedy policy and sequential memory deque from keras-rl2's rl
3. Pass the model, policy, and memory to rl.agents.DQNAgent and compile the model
But when I fit the model (agent), this error pops up:
Training for 1000000 steps ...
Interval 1 (0 steps performed)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-180-908ee27d8389> in <module>
1 agent.compile(Adam(lr=0.001),metrics=['mae'])
----> 2 agent.fit(env, nb_steps=1000000, visualize=False, verbose=1, nb_max_episode_steps=99, log_interval=100000)
/usr/local/lib/python3.8/dist-packages/rl/core.py in fit(self, env, nb_steps, action_repetition, callbacks, verbose, visualize, nb_max_start_steps, start_step_policy, log_interval, nb_max_episode_steps)
179 observation, r, done, info = self.processor.process_step(observation, r, done, info)
180 for key, value in info.items():
--> 181 if not np.isreal(value):
182 continue
183 if key not in accumulated_info:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
I tried running the same code on the Cart Pole problem and no error came out. I am wondering if it is because the state in the taxi problem is just a scalar (one of 500 discrete values), unlike Cart Pole, whose state is an array with 4 elements? Please help, even a little advice will help a lot. Also, if you can help me extend the episodes beyond 200 steps, even better! (env._max_episode_steps=5000)
#import environment and visualization
import gym
from gym import wrappers
!pip install gym[classic_control]
#import Deep Learning api
import tensorflow as tf
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense, Flatten, Input, Embedding,Reshape
from tensorflow.keras.optimizers import Adam
#import rl agent library
!pip install gym
!pip install keras
!pip install keras-rl2
#data manipulation
import numpy as np
import pandas as pd
import random
#0
env = gym.make('Taxi-v3')
env.reset()
actions=env.action_space.n
states=env.observation_space.n
#1
def build_model(states,actions):
    model=Sequential()
    model.add(Embedding(states,10, input_length=1))
    model.add(Reshape((10,)))
    model.add(Dense(32,activation='relu'))
    model.add(Dense(32,activation='relu'))
    model.add(Dense(actions,activation='linear'))
    return model
#2
import rl
from rl.agents import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory
policy=EpsGreedyQPolicy()
memory=SequentialMemory(limit=100000,window_length=1)
#3
model1=build_model(states,actions)
agent=DQNAgent(model=model1,memory=memory,policy=policy,nb_actions=actions,nb_steps_warmup=500, target_model_update=1e-2)
agent.compile(Adam(lr=0.001),metrics=['mae'])
agent.fit(env, nb_steps=1000000, visualize=False, verbose=1, nb_max_episode_steps=99, log_interval=100000)
This ValueError comes from the way keras-rl handles the info dict returned by the environment. As you can see at https://github.com/keras-rl/keras-rl/blob/v0.4.2/rl/core.py#L181, it loops over each item of the info dict and calls np.isreal(value).
And quoting the Taxi documentation for gym:
In v0.25.0, info["action_mask"] contains a np.ndarray for each of the action specifying if the action will change the state.
You can run gym.__version__ to confirm that you have a version greater than or equal to 0.25.0.
To keep using the current keras-rl library (up to 0.4.2), you should install a gym version lower than 0.25.0, e.g. pip install "gym<0.25.0". Alternatively, you could submit a PR to keras-rl so it handles np.ndarray values without error.
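As a stopgap in your own code, you could also filter the offending entries out of info before keras-rl iterates over it, via a custom Processor. This is an untested sketch assuming keras-rl's rl.core.Processor hook; the class name IgnoreArrayInfoProcessor is my own:

import numpy as np
from rl.core import Processor

class IgnoreArrayInfoProcessor(Processor):
    # Drop non-scalar info entries (e.g. Taxi's action_mask ndarray)
    # so the np.isreal() check in rl/core.py never sees an array.
    def process_info(self, info):
        return {k: v for k, v in info.items() if np.isscalar(v)}

agent = DQNAgent(model=model1, memory=memory, policy=policy,
                 nb_actions=actions, nb_steps_warmup=500,
                 target_model_update=1e-2,
                 processor=IgnoreArrayInfoProcessor())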
I am doing everything in a single Jupyter notebook file.
I am trying to predict the category of new store descriptions using a logistic regression classification model and a count vectorizer.
All the code below is in sequence, whether it is used or unused code.
Below is my code:
from sklearn.feature_extraction.text import CountVectorizer
cv=CountVectorizer(stop_words='english', ngram_range=(1,1))
X_train_cv=cv.fit_transform(X_train.values.astype('str'))
X_test_cv=cv.transform(X_test.values.astype('str'))
from sklearn.linear_model import LogisticRegression
lr=LogisticRegression(solver='lbfgs')
lr.fit(X_train_cv,y_train)
y_pred_cv=lr.predict(X_test_cv)
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred_cv,target_names=['electronics','fashion','F&B','services']))
# I never use this code below, as I am not working across 2 notebooks
import os
import pickle
from datetime import datetime
model_path=['drive','mydrive','I125','models']
time=datetime.now().strftime("%Y-%m-%d")
filename='lr-{}.pkl'.format(time)
templist=[]
templist.append(filename)
path1=os.sep.join(model_path+templist)
filename='countvectorizer-{}.pkl'.format(time)
templist=[]
templist.append(filename)
path2=os.sep.join(model_path+templist)
with open(path1,'wb') as f1:
    pickle.dump(lr,f1)
with open(path2,'wb') as f2:
    pickle.dump(cv,f2)
I am trying to predict a new description using the classifier that I currently have. I only know how to use the current classifier to predict a new description when it is done in a separate notebook.
This is the code I have to predict a new description:
# I never use this code below, as I am not working across 2 notebooks
import os
import pickle
from google.colab import drive
drive.mount('/content/drive')
model_path=['drive','mydrive','I125','models']
filename=['lr-2022-10-10.pkl']
path1=os.sep.join(model_path+filename)
filename=['countvectorizer-2022-10-10.pkl']
path2=os.sep.join(model_path+filename)
with open(path2,'rb') as f:
    trained_cv=pickle.load(f)
with open(path1,'rb') as f:
    model=pickle.load(f)
# I used this code below
import re
import string
def preprocess(text):
    pattern_alphanumeric=r"\w*\d\w*"
    pattern_punctuation="["+re.escape(string.punctuation)+"]"
    text=re.sub(pattern_alphanumeric,'',text)
    text=re.sub(pattern_punctuation,'',text).lower()
    return text
new_text="This clothes so nice"
new_text_processed=preprocess(new_text)
def encode_text_to_vector(cv,text):
    text_vector = cv.transform([text])
    return text_vector
new_text_vector=encode_text_to_vector(trained_cv,new_text_processed) # <-- line with error
print(new_text_vector)
Error:
trained_cv is undefined. (trained_cv is supposed to be the saved count vectorizer, and model the saved logistic regression, as if I had used a different Jupyter notebook.)
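Since everything runs in one notebook session, a minimal sketch (assuming the cells above that fit cv and lr were already executed) is to skip the pickle round-trip and pass the in-memory objects to the same helpers:

# Reuse the fitted objects already in memory instead of the pickled copies
new_text = "This clothes so nice"
new_text_processed = preprocess(new_text)
new_text_vector = encode_text_to_vector(cv, new_text_processed)  # cv fitted above
predicted_category = lr.predict(new_text_vector)                 # lr fitted above
print(predicted_category)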
I am a new user of yellowbrick. While using a scikit-learn LogisticRegression with yellowbrick's ClassificationReport, I ran into an unusual error. I have tried many syntaxes as suggested by the yellowbrick official documentation as well as by data science community posts (Medium etc.), but I still get the same error. I do get the ClassificationReport, but the error is quite annoying.
#Using yellowbrick library
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score, recall_score, roc_curve, accuracy_score, auc, classification_report, plot_confusion_matrix, plot_roc_curve, precision_score, f1_score
from yellowbrick.classifier import ClassificationReport, discrimination_threshold, classification_report
classes = [0,1]
fig = plt.gcf()
ax = plt.subplot(111)
visualizer = ClassificationReport(log_model,classes=[0,1], size=(400,400),fontsize=15, cmap='GnBu', ax = ax)
ax.grid(False)
plt.title("Classification Report", fontsize=18)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
#visualizer.show() #I even tried this (this also gives me an error: LogisticRegression object has no attribute 'show')
The output I am getting is:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-89-f01efe3e9a6d> in <module>()
16 visualizer.fit(X_train, y_train)
17 visualizer.score(X_test, y_test)
---> 18 visualizer.poof()
2 frames
/usr/local/lib/python3.7/dist-packages/yellowbrick/utils/wrapper.py in __getattr__(self, attr)
40 def __getattr__(self, attr):
41 # proxy to the wrapped object
---> 42 return getattr(self._wrapped, attr)
AttributeError: 'LogisticRegression' object has no attribute 'fig'
I'd appreciate any suggestion on how to get rid of this error.
To add, I am currently using the following scikit-learn and yellowbrick versions:
print(sklearn.__version__)
print(yellowbrick.__version__)
0.24.2
0.9.1
scikit-learn version 0.22.2.post1 and yellowbrick version 0.9.1 fixed the issue for me. Install these by running:
pip install scikit-learn==0.22.2.post1
and
pip install yellowbrick==0.9.1
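If downgrading is not an option, another route (a sketch I have not verified against your exact environment) is to upgrade yellowbrick to 1.x, where poof() was renamed show(), and keep a recent scikit-learn:

# pip install --upgrade yellowbrick
from yellowbrick.classifier import ClassificationReport

visualizer = ClassificationReport(log_model, classes=[0, 1], cmap='GnBu')
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()  # show() replaces the deprecated poof() in yellowbrick >= 1.0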
I would like to check if a model is on CUDA. How can I do that?
import torch
import torchvision
model = torchvision.models.resnet18()
model.to('cuda')
It seems that model.is_cuda() is not working.
This code should do it:
import torch
import torchvision
model = torchvision.models.resnet18()
model.to('cuda')
next(model.parameters()).is_cuda
Out:
True
Note there is no is_cuda() method inside nn.Module.
Also note that model.to('cuda') is the same as model.cuda(), and both are in-place.
On the other hand, moving data is not in-place, so you typically call:
data = data.to('cuda')
to move the data to CUDA.
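As a small extension of the answer (a sketch, not part of the original): a parameter's .device attribute gives you both the device type and index, which generalizes beyond a plain True/False check:

import torch
import torchvision

model = torchvision.models.resnet18().to('cuda')

device = next(model.parameters()).device
print(device)                  # e.g. cuda:0
print(device.type == 'cuda')   # True when the model lives on a GPU

# Tensor moves return a copy, so reassign:
data = torch.randn(1, 3, 224, 224)
data = data.to(device)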
I want to use my TensorFlow algorithm in an Android app. The TensorFlow Android example starts by downloading a GraphDef that contains the model definition and weights (in a *.pb file). Now this should come from my Scikit Flow algorithm (part of TensorFlow).
At first glance it seems easy: you just have to say classifier.save('model/'). But the files saved to that folder are not *.ckpt, *.def, and certainly not *.pb. Instead you have to deal with a *.pbtxt file and a checkpoint file (without an extension).
I've been stuck there for quite a while. Here is a code example that exports something:
#imports
import tensorflow as tf
import tensorflow.contrib.learn as skflow
import tensorflow.contrib.learn.python.learn as learn
from sklearn import datasets, metrics
#skflow example
iris = datasets.load_iris()
feature_columns = learn.infer_real_valued_columns_from_input(iris.data)
classifier = learn.LinearClassifier(n_classes=3, feature_columns=feature_columns,model_dir="modeltest")
classifier.fit(iris.data, iris.target, steps=200, batch_size=32)
iris_predictions = list(classifier.predict(iris.data, as_iterable=True))
score = metrics.accuracy_score(iris.target, iris_predictions)
print("Accuracy: %f" % score)
The files you get are:
checkpoint
graph.pbtxt
model.ckpt-1.meta
model.ckpt-1-00000-of-00001
model.ckpt-200.meta
model.ckpt-200-00000-of-00001
Many possible workarounds I found would require having the GraphDef in a variable (I don't know how to get that with Scikit Flow), or a TensorFlow session, which doesn't seem to be exposed when using Scikit Flow.
To save as a .pb file, you need to extract the graph_def from the constructed graph. You can do that as follows:
import tensorflow as tf
from tensorflow.python.framework import tensor_shape, graph_util
from tensorflow.python.platform import gfile
sess = tf.Session()
final_tensor_name = 'results:0' # Replace final_tensor_name with the name of the final tensor in your graph
######### Build your graph and train ########
## Your tensorflow code to build the graph
#############################################
output_filename = 'output_graph.pb'
output_graph_def = sess.graph.as_graph_def()
with gfile.FastGFile(output_filename, 'wb') as f:
    f.write(output_graph_def.SerializeToString())
If you want to convert your trained variables to constants (to avoid using ckpt files to load the weights), you can use:
output_graph_def = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), [final_tensor_name])
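Presumably (this completes the snippet above and is my addition, not part of the original answer) you would then serialize the frozen graph the same way:

# Write the frozen graph (variables baked in as constants) to disk
with gfile.FastGFile('frozen_' + output_filename, 'wb') as f:
    f.write(output_graph_def.SerializeToString())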
Hope this helps!
This fairly simple example leads to the error below. What is the problem, and how can I solve it?
from sklearn import linear_model
from random import randrange
def model(x):
    return 2 * x

clf = linear_model.SGDRegressor()
for i in range(20000):
    x = randrange(-1000, 1000)
    clf.partial_fit([(1, x)], [model(x)])
ValueError: Floating-point under-/overflow occurred at epoch #1. Scaling input data with StandardScaler or MinMaxScaler might help.
Maybe the random generator threw a number that made the matrix singular. You could wrap the call in a try/except block to skip the failing iterations instead of letting the loop stop at the first error.
(On exception handling, see: https://docs.python.org/2/tutorial/errors.html)
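For completeness, here is an untested sketch of the scaling fix the error message itself suggests, combined with the try/except skip; the StandardScaler and the zip-based loop are additions for illustration, not from the original answer:

import numpy as np
from random import randrange
from sklearn import linear_model
from sklearn.preprocessing import StandardScaler

def model(x):
    return 2 * x

clf = linear_model.SGDRegressor()
X = np.array([(1, randrange(-1000, 1000)) for _ in range(20000)], dtype=float)
y = np.array([model(row[1]) for row in X])
X_scaled = StandardScaler().fit_transform(X)  # keep features in a stable range

for xi, yi in zip(X_scaled, y):
    try:
        clf.partial_fit([xi], [yi])
    except ValueError:
        continue  # skip a step that under-/overflows instead of aborting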