Using custom data instead of MNIST for GAN - keras

I'm trying to run the GAN code from this link in Colab with my own dataset. To do that, I need to replace the line that loads MNIST:
(X_train, _), (_, _) = mnist.load_data()
I know this question has been asked before, here. But when I tried the solution in csteel's comment, I got this error:
TypeError: unsupported operand type(s) for /: 'DirectoryIterator' and 'float'
As I understand it, that solution lets me read images from the directory I want, but it returns a DirectoryIterator rather than a numpy array, so the MNIST-style normalization (dividing by a float) fails with the error above. I don't understand exactly what to do.
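For illustration only (this is not csteel's solution, just a sketch of what the GAN code expects): loading the images yourself into a single numpy array, here from a hypothetical folder data/images using PIL, gives you something the MNIST-style normalization can divide by a float:
import os
import numpy as np
from PIL import Image

def load_images(directory, size=(28, 28)):
    # Read every image as grayscale, resize, and stack into one array
    arrays = []
    for name in sorted(os.listdir(directory)):
        img = Image.open(os.path.join(directory, name)).convert("L")
        arrays.append(np.asarray(img.resize(size), dtype=np.float32))
    return np.stack(arrays)

X_train = load_images("data/images")   # ndarray of shape (num_images, 28, 28)
X_train = X_train / 127.5 - 1.0        # works on an ndarray, unlike a DirectoryIterator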

Related

GluonCV, action recognition tutorial error

I've been trying to run the GluonCV tutorial for action recognition.
I didn't modify anything, but I'm getting an error at the very beginning of the script, when applying the transformation function to the image.
The error is:
AttributeError: 'list' object has no attribute 'shape'
To try and solve it, I wanted to replace the list with a single image, so I tried:
img = transform_fn(img.asnumpy())
plt.imshow(np.transpose(img, (1,2,0)))
plt.show()
but in this case, I get another error:
TypeError: Image data of dtype object cannot be converted to float
Any idea on how to fix it?
Thanks!
It turned out I had to update gluoncv to a later version.
A little weird, because I installed it following the instructions on the tutorial page, but it works after the update.
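For reference, the update is just the standard pip upgrade, run in the same environment the tutorial uses:
pip install --upgrade gluoncv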

"'numpy.ndarray' object has no attribute 'get_support'" error message after running SelectKBest in Scikit Learn

I ran into a problem related to this old question: The easiest way for getting feature names after running SelectKBest in Scikit Learn
When trying to use get_support() to get the selected features, I got this error message:
'numpy.ndarray' object has no attribute 'get_support'
I would greatly appreciate your kind help!
Jeff
You cannot get the support without fitting. Fit the selector first so it can analyze the data, then call get_support() on the selector itself, not on the output of fit_transform().
Currently you are doing something like:
selector = SelectKBest()
#fit_transform returns the data after selecting the best features
new_data = selector.fit_transform(old_data, labels)
#so you are trying to access get_support() on new data, which is not possible
new_data.get_support()
After you call fit() or fit_transform(), do this:
# get_support is a method of SelectKBest class
selector.get_support()
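Putting it together, here is a minimal self-contained sketch of the corrected flow (the feature names and random data below are made up for illustration):
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

feature_names = np.array(["f0", "f1", "f2", "f3"])
old_data = np.random.rand(100, 4)
labels = np.random.randint(0, 2, size=100)

selector = SelectKBest(f_classif, k=2)
new_data = selector.fit_transform(old_data, labels)  # data with only the best features

mask = selector.get_support()   # boolean mask over the input columns
print(feature_names[mask])      # names of the selected features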
I think I found the reason for the errors: I called get_support() on the result of fit() or fit_transform(), which is a numpy array, hence the error message.
I should have called get_support() on the selector itself (after first using the selector to fit() or fit_transform()).
Thanks!
Jeff

TFLearn's .fit() method with numpy.ndarrays causing TypeError

I get the error TypeError: unhashable type: 'numpy.ndarray' when executing the code below. I searched Stack Overflow but haven't found a way to fix the problem. The goal is to classify digits from the MNIST dataset. The error occurs in the modell.fit() method (from tflearn); I can attach the full error message if needed. I also tried the approach where you put the x and y labels in a dictionary and train with that, but it raised a different error. (Note: I excluded my predict function from this code.)
Code:
import tflearn.datasets.mnist as mnist
x,y,X,Y=mnist.load_data(one_hot=True)
x=x.reshape([-1,28,28,1])
X=X.reshape([-1,28,28,1])
import tflearn
class Neural_Network():
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.epochs = 60000

    def main(self):
        cnn = tflearn.layers.core.input_data(shape=[None, 28, 28, 1], name="input_layer")
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.conv.conv_2d(cnn, 32, 2, activation="relu")
        cnn = tflearn.layers.conv.max_pool_2d(cnn, 2)
        cnn = tflearn.layers.core.flatten(cnn)
        cnn = tflearn.layers.core.fully_connected(cnn, 1000, activation="relu")
        cnn = tflearn.layers.core.dropout(cnn, 0.85)
        cnn = tflearn.layers.core.fully_connected(cnn, 10, activation="softmax")
        cnn = tflearn.layers.estimator.regression(cnn, learning_rate=0.001)
        modell = tflearn.DNN(cnn)
        modell.fit(self.x, self.y)
        modell.save("mnist.modell")

nn = Neural_Network(x, y)
nn.main()
nn.predict(X[1])
print("Label for prediction:", Y[1])
So the problem fixed itself: I just restarted my Jupyter notebook and everything worked fine. With a few exceptions: 1. I have to restart the kernel every time I want to retrain the net; 2. I get another error when I try to load the saved model, so I can't continue (the error is NotFoundError: Key Conv2D_2/W not found in checkpoint). I will ask another question about that problem. Conclusion: try reloading your Jupyter notebook if something isn't working well, and if you want to retrain an ANN, restart your kernel first.
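A hedged side note, not part of the original answer: the need to restart the kernel between training runs is usually caused by the previous TensorFlow graph still being registered. With TFLearn on TensorFlow 1.x, resetting the default graph before rebuilding the network typically avoids it:
import tensorflow as tf

# Clear variables left over from the previous run, then rebuild and retrain
tf.reset_default_graph()
nn = Neural_Network(x, y)
nn.main()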

TensorFlow 0.12 tutorials produce warning: "Rank of input Tensor should be the same as output_rank for column"

I have some experience writing machine learning programs in Python, but I'm new to TensorFlow and am checking it out. My dev environment is a Lubuntu 14.04 64-bit virtual machine. I created a Python 3.5 conda environment with miniconda and installed TensorFlow 0.12 and its dependencies. I began running some example code from TensorFlow's tutorials and encountered this warning when calling fit() in the boston.py input-function example (source):
WARNING:tensorflow:Rank of input Tensor (1) should be the same as output_rank (2) for column. Will attempt to expand dims. It is highly recommended that you resize your input, as this behavior may change.
After some searching in Google, I found other people encountered this same warning:
https://github.com/tensorflow/tensorflow/issues/6184
https://github.com/tensorflow/tensorflow/issues/5098
Tensorflow - Boston Housing Data Tutorial Errors
However, they also experienced errors which prevent code execution from completing. In my case, the code executes with the above warning. Unfortunately, I couldn't find a single answer in those links regarding what caused the warning and how to fix the warning. They all focused on the error. How does one remove the warning? Or is the warning safe to ignore?
Cheers!
Extra info, I also see the following warnings when running the aforementioned boston.py example.
WARNING:tensorflow:*******************************************************
WARNING:tensorflow:TensorFlow's V1 checkpoint format has been deprecated.
WARNING:tensorflow:Consider switching to the more efficient V2 format:
WARNING:tensorflow:   'tf.train.Saver(write_version=tf.train.SaverDef.V2)'
WARNING:tensorflow:now on by default.
WARNING:tensorflow:*******************************************************
and
WARNING:tensorflow:From /home/kade/miniconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py:1053 in predict.: calling BaseEstimator.predict (from tensorflow.contrib.learn.python.learn.estimators.estimator) with x is deprecated and will be removed after 2016-12-01.
Instructions for updating:
Estimator is decoupled from Scikit Learn interface by moving into separate class SKCompat. Arguments x, y and batch_size are only available in the SKCompat class, Estimator will only accept input_fn.
Example conversion:
  est = Estimator(...) -> est = SKCompat(Estimator(...))
UPDATE (2016-12-22):
I've tracked the warning to this file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column_ops.py
and this code block:
except NotImplementedError:
  with variable_scope.variable_scope(
      None,
      default_name=column.name,
      values=columns_to_tensors.values()):
    tensor = column._to_dense_tensor(transformed_tensor)
    tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
    variable = [
        contrib_variables.model_variable(
            name='weight',
            shape=[tensor.get_shape()[1], num_outputs],
            initializer=init_ops.zeros_initializer(),
            trainable=trainable,
            collections=weight_collections)
    ]
    predictions = math_ops.matmul(tensor, variable[0], name='matmul')
Note the line: tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
The method signature is: _reshape_real_valued_tensor(input_tensor, output_rank, column_name=None)
The value 2 is hardcoded as the value of output_rank, but the boston.py example is passing in an input_tensor of rank 1. I will continue to investigate.
If you specify the shape of your tensor explicitly:
tf.constant(df[k].values, shape=[df[k].size, 1])
the warning should go away.
After specifying the shape of the tensor explicitly:
continuous_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1]) for k in CONTINUOUS_COLUMNS}
it works!

Copying parameters of a layer in Keras

I'm trying to take the last layer of a model (the old model) and build a new single-layer model (the new model) whose layer has exactly the same parameters as that last layer. I want to do this in a way that's agnostic to what the last layer of the old model happens to be. I'm trying it with this code, but I'm getting an error.
newModel = Sequential()
newModel.add(type(oldModel.layers[-1])(oldModel.layers[-1].output_shape,
                                       activation=oldModel.layers[-1].activation,
                                       input_shape=oldModel.layers[-1].input_shape))
That yields the following error:
TypeError: __init__() missing 1 required positional argument: 'output_dim'
If I check the last layer in oldModel, it shows me this:
full_model.model.layers[-1]
>>>> <keras.layers.core.Dense at 0x7fe22010e128>
I tried adding output_dim to the list of parameters I'm copying, but that didn't help; instead it gave me this error:
Exception: Input 0 is incompatible with layer dense_8: expected ndim=2, found ndim=3
Any idea what I'm doing wrong here?
Found the answer myself. If, instead of making the input_shape the same as the input_shape of the old model's last layer, I make it the output_shape of the old model's penultimate layer and keep only [1:] of that shape (dropping the batch dimension), it works. The working code is as follows:
newModel.add(type(oldModel.layers[-1])(oldModel.layers[-1].output_shape,
                                       activation=oldModel.layers[-1].activation,
                                       input_shape=oldModel.layers[-2].output_shape[1:]))
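A hedged alternative sketch that is also layer-type-agnostic, using Keras' config/weights API (assuming Keras 2 and a Sequential oldModel as in the question); unlike the constructor-argument approach, this also copies the trained weights:
from keras.models import Sequential

last = oldModel.layers[-1]
newModel = Sequential()
# Rebuild the layer from its own config, then fix the input shape to the
# penultimate layer's output shape and copy the trained weights across
newModel.add(type(last).from_config(last.get_config()))
newModel.build(input_shape=oldModel.layers[-2].output_shape)
newModel.layers[-1].set_weights(last.get_weights())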
