I have the following error for a parameter set. This is the error that I got on one of my runs:
Error message: "Parameter set cannot be resolved using the specified named parameters. One or more parameters issued cannot be used together or an insufficient number of parameters were provided."
this is my code
[CmdletBinding()]
Param (
    [Parameter(Mandatory = $true, ParameterSetName = 'Back')]
    [Parameter(Mandatory = $true, ParameterSetName = 'Delete')]
    [Parameter(Mandatory = $true, ParameterSetName = 'BackAndDelete')]
    [string]$targetName,

    [Parameter(Mandatory = $true, ParameterSetName = 'Delete')]
    [Parameter(Mandatory = $true, ParameterSetName = 'BackAndDelete')]
    [string]$PersonalName,

    [Parameter(Mandatory = $true, ParameterSetName = 'Back')]
    [switch]$backing,

    [Parameter(Mandatory = $true, ParameterSetName = 'Delete')]
    [switch]$Delete,

    [Parameter(Mandatory = $true, ParameterSetName = 'BackAndDelete')]
    [switch]$BackAndDelete,

    [Parameter(Mandatory = $false, ParameterSetName = 'BackAndDelete')]
    [Parameter(Mandatory = $false, ParameterSetName = 'Delete')]
    [string]$DeleteType
)
All of the parameters work apart from when I use $backing, which causes the error quoted above. I'm not sure how to solve this issue; maybe I should remove the [Parameter(...)] attributes from $targetName? Could you help me find a fix to make this error go away?
Parameter set cannot be resolved using the specified named parameters:
When I ran the script in my environment, I received the same error.
Problem cause: PowerShell cannot determine which parameter set to use. To avoid that, we need to declare a default parameter set.
I added DefaultParameterSetName to the CmdletBinding attribute. I've taken your script and modified it as shown below, and I was able to proceed successfully.
[CmdletBinding(DefaultParameterSetName = 'Back')]
Param (
    [Parameter(Mandatory = $true, ParameterSetName = 'Back')]
    [Parameter(Mandatory = $true, ParameterSetName = 'Delete')]
    [Parameter(Mandatory = $true, ParameterSetName = 'BackAndDelete')]
    [string]$targetName,

    [Parameter(Mandatory = $true, ParameterSetName = 'Delete')]
    [Parameter(Mandatory = $true, ParameterSetName = 'BackAndDelete')]
    [string]$PersonalName,

    [Parameter(Mandatory = $true, ParameterSetName = 'Back')]
    [switch]$backing,

    [Parameter(Mandatory = $true, ParameterSetName = 'Delete')]
    [switch]$Delete,

    [Parameter(Mandatory = $true, ParameterSetName = 'BackAndDelete')]
    [switch]$BackAndDelete,

    [Parameter(Mandatory = $false, ParameterSetName = 'BackAndDelete')]
    [Parameter(Mandatory = $false, ParameterSetName = 'Delete')]
    [string]$DeleteType
)
Related
I have created a CNN that does binary classification on images. The CNN is shown below:
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ModelCheckpoint

def neural_network():
    classifier = Sequential()
    # Adding a first convolutional layer
    classifier.add(Convolution2D(48, 3, input_shape = (320, 320, 3), activation = 'relu'))
    classifier.add(MaxPooling2D())
    # Adding a second convolutional layer
    classifier.add(Convolution2D(48, 3, activation = 'relu'))
    classifier.add(MaxPooling2D())
    # Flattening
    classifier.add(Flatten())
    # Fully connected
    classifier.add(Dense(256, activation = 'relu'))
    # Fully connected output layer
    classifier.add(Dense(1, activation = 'sigmoid'))
    # Compiling the CNN
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    classifier.summary()
    train_datagen = ImageDataGenerator(rescale = 1./255,
                                       horizontal_flip = True,
                                       vertical_flip = True,
                                       brightness_range = [0.5, 1.5])
    test_datagen = ImageDataGenerator(rescale = 1./255)
    training_set = train_datagen.flow_from_directory('/content/drive/My Drive/data_sep/train',
                                                     target_size = (320, 320),
                                                     batch_size = 32,
                                                     class_mode = 'binary')
    test_set = test_datagen.flow_from_directory('/content/drive/My Drive/data_sep/validate',
                                                target_size = (320, 320),
                                                batch_size = 32,
                                                class_mode = 'binary')
    es = EarlyStopping(
        monitor = "val_accuracy",
        patience = 15,
        mode = "max",
        baseline = None,
        restore_best_weights = True,
    )
    filepath = "/content/drive/My Drive/data_sep/weightsbestval.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor = 'val_accuracy', verbose = 1, save_best_only = True, mode = 'max')
    callbacks_list = [checkpoint]
    history = classifier.fit(training_set,
                             epochs = 50,
                             validation_data = test_set,
                             callbacks = callbacks_list)
    best_score = max(history.history['val_accuracy'])
    return best_score
The images in the folders are organized in the following way:
-train
    -healthy
    -patient
-validation
    -healthy
    -patient
Is there a way to calculate the metrics precision, recall, sensitivity and specificity, or at least the true positives, true negatives, false positives and false negatives, from this code?
import numpy as np
from sklearn import metrics
from sklearn.metrics import classification_report

# Rebuild the generator with shuffle=False so that test_set.classes
# lines up with the order of the predictions
test_set = test_datagen.flow_from_directory('/content/drive/My Drive/data_sep/validate',
                                            target_size = (320, 320),
                                            batch_size = 32,
                                            class_mode = 'binary',
                                            shuffle = False)
# 'model' is your trained classifier
predictions = model.predict_generator(
    test_set,
    steps = np.math.ceil(test_set.samples / test_set.batch_size),
)
# The network has a single sigmoid output, so threshold at 0.5;
# np.argmax(predictions, axis=1) would always return 0 here
predicted_classes = (predictions > 0.5).astype(int).ravel()
true_classes = test_set.classes
class_labels = list(test_set.class_indices.keys())
report = classification_report(true_classes, predicted_classes, target_names=class_labels)
accuracy = metrics.accuracy_score(true_classes, predicted_classes)
And if you do print(report), it will print everything.
And if your total number of files is not divisible by your batch size, then use batch_size = 1:
test_set = test_datagen.flow_from_directory('/content/drive/My Drive/data_sep/validate',
                                            target_size = (320, 320),
                                            batch_size = 1,
                                            class_mode = 'binary',
                                            shuffle = False)
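For the counts the question asks about, here is a minimal sketch building on the true_classes and predicted_classes arrays computed above; it treats class 1 ('patient' under alphabetical ordering) as the positive class:

from sklearn.metrics import confusion_matrix

# For binary labels, ravel() unpacks the 2x2 matrix as tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(true_classes, predicted_classes).ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)          # recall is the same as sensitivity
specificity = tn / (tn + fp)
print("TP=%d TN=%d FP=%d FN=%d" % (tp, tn, fp, fn))
print("precision=%.3f sensitivity=%.3f specificity=%.3f" % (precision, recall, specificity))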
I have created a CNN to do binary classification in keras with the following code:
import numpy as np
from sklearn import metrics
from sklearn.metrics import classification_report
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping, ModelCheckpoint

def neural_network():
    classifier = Sequential()
    # Adding a first convolutional layer
    classifier.add(Convolution2D(48, 3, input_shape = (320, 320, 3), activation = 'relu'))
    classifier.add(MaxPooling2D())
    # Adding a second convolutional layer
    classifier.add(Convolution2D(48, 3, activation = 'relu'))
    classifier.add(MaxPooling2D())
    # Flattening
    classifier.add(Flatten())
    # Fully connected
    classifier.add(Dense(256, activation = 'relu'))
    # Fully connected
    classifier.add(Dense(256, activation = 'sigmoid'))
    # Fully connected output layer
    classifier.add(Dense(1, activation = 'sigmoid'))
    # Compiling the CNN
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    classifier.summary()
    train_datagen = ImageDataGenerator(rescale = 1./255,
                                       shear_range = 0.2,
                                       horizontal_flip = True,
                                       vertical_flip = True,
                                       brightness_range = [0.5, 1.5])
    test_datagen = ImageDataGenerator(rescale = 1./255)
    training_set = train_datagen.flow_from_directory('/content/drive/My Drive/data_sep/train',
                                                     target_size = (320, 320),
                                                     batch_size = 32,
                                                     class_mode = 'binary')
    test_set = test_datagen.flow_from_directory('/content/drive/My Drive/data_sep/validate',
                                                target_size = (320, 320),
                                                batch_size = 32,
                                                class_mode = 'binary')
    es = EarlyStopping(
        monitor = "val_accuracy",
        mode = "max",
        patience = 15,
        baseline = None,
        restore_best_weights = True,
    )
    filepath = "/content/drive/My Drive/data_sep/weightsbestval.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor = 'val_accuracy', verbose = 1, save_best_only = True, mode = 'max')
    callbacks_list = [checkpoint, es]
    history = classifier.fit(training_set,
                             epochs = 10,
                             validation_data = test_set,
                             callbacks = callbacks_list)
    best_score = max(history.history['val_accuracy'])
    predictions = (classifier.predict(test_set) > 0.5).astype("int32")
    newlist = predictions.tolist()
    finallist = []
    for number in newlist:
        finallist.append(number[0])
    predicted_classes = np.asarray(finallist)
    true_classes = test_set.classes
    class_labels = list(test_set.class_indices.keys())
    report = classification_report(true_classes, predicted_classes, target_names=class_labels)
    accuracy = metrics.accuracy_score(true_classes, predicted_classes)
    print(true_classes)
    print(predicted_classes)
    print(class_labels)
    correct = 0
    for i in range(len(true_classes)):
        if (true_classes[i] == predicted_classes[i]):
            correct = correct + 1
    print(correct)
    print((correct * 1.0) / (len(true_classes) * 1.0))
    print(report)
    return best_score
When I run the model I get a validation accuracy of 81.90% from model.fit(), but after model.predict finishes, the validation accuracy is 40%. I have added a callback that restores the best weights. So what could be the problem here?
What fixed it for me was creating another ImageDataGenerator variable:
test2_datagen = ImageDataGenerator(rescale = 1./255)
test2_set = test2_datagen.flow_from_directory('/content/drive/My Drive/data_sep/validate',
                                              target_size = (320, 320),
                                              batch_size = 32,
                                              class_mode = 'binary',
                                              Shuffle = False)
But as you can see, I originally wrote Shuffle = False. I am posting this answer in case anyone has the same problem. I then used test2_set for the prediction:
test2_set = test2_datagen.flow_from_directory('/content/drive/My Drive/data_sep/validate',
                                              target_size = (320, 320),
                                              batch_size = 32,
                                              class_mode = 'binary',
                                              shuffle = False)
Emphasis on the lowercase shuffle parameter; with a capital S the call will fail.
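To make the fix concrete, here is a minimal sketch of the prediction step with the unshuffled generator, assuming the trained classifier from the question is still in scope:

# With shuffle=False, test2_set.classes is in the same order as the predictions
predictions = (classifier.predict(test2_set) > 0.5).astype("int32").ravel()
true_classes = test2_set.classes
print((predictions == true_classes).mean())  # should now be close to fit()'s val_accuracy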
Since you are saving the best model in this line:
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
please load this model in your code, and then predict:
from keras.models import load_model
loaded_model = load_model('/content/drive/My Drive/data_sep/weightsbestval.hdf5')
Then:
loaded_model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
score = loaded_model.evaluate(X_test, Y_test, verbose=0)  # X_test, Y_test: your held-out data
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
I have some problems when I use Keras Tuner on Colab.
This is my code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Dense, TimeDistributed
from tensorflow.keras.models import Model
import kerastuner as kt  # on newer installs: import keras_tuner as kt

def build_model(hp):
    input_ = Input(shape=(20, 20, 1))
    cnn_out1 = Conv2D(filters=16, kernel_size=hp.Int('kernel_size', 3, 7, step=2, default=3), name='con1',
                      use_bias=True, activation=tf.nn.relu, kernel_regularizer=keras.regularizers.l2(0.01))(input_)
    # cnn_out1 = keras.layers.BatchNormalization(epsilon=1e-6)
    cnn_out = MaxPooling2D(pool_size=(2, 2))(cnn_out1)
    cnn_out = keras.layers.Flatten()(cnn_out)
    cnn_out = Dense(64, activation=tf.nn.relu, name='fc1', use_bias=True)(cnn_out)
    cnn_out = keras.layers.Dropout(0.5)(cnn_out)
    cnn_out = Dense(64, activation=tf.nn.relu, name='fc2', use_bias=True)(cnn_out)
    # Name the CNN layers cnn_model so TimeDistributed can wrap them later
    cnn_model = Model(inputs=input_, outputs=cnn_out)
    cnn_model_map = Model(inputs=input_, outputs=cnn_out1)
    # cnn_model.summary()
    input_seq = Input(shape=(time_seq, 20, 20, 1))  # time_seq is defined elsewhere
    processed_sequences = TimeDistributed(cnn_model)(input_seq)
    # rnn_out = keras.layers.GRU(units=128, name='gru1', recurrent_dropout=0.5)(processed_sequences)
    rnn_out = keras.layers.GRU(units=128, name='gru1')(processed_sequences)
    # rnn_out = keras.layers.LSTM(units=128, name='lstm1')(processed_sequences)
    rnn_out = keras.layers.Dropout(0.5)(rnn_out)
    predictions = Dense(6, activation='softmax', name='fc3')(rnn_out)
    rnn_model = Model(inputs=input_seq, outputs=predictions)
    # rnn_model.summary()
    rnn_model.compile(loss='categorical_crossentropy', optimizer='Nadam', metrics=['accuracy'])
    return rnn_model

tuner = kt.Hyperband(
    build_model,
    objective='val_accuracy',
    max_epochs=10,
    hyperband_iterations=2
)
# data and label_ohe are defined elsewhere
tuner.search(data, label_ohe, validation_split=0.1, epochs=30, shuffle=True,
             callbacks=[tf.keras.callbacks.EarlyStopping(patience=1)])
I get the following output when I run this code:
INFO:tensorflow:Reloading Oracle from existing project ./untitled_project/oracle.json
INFO:tensorflow:Reloading Tuner from ./untitled_project/tuner0.json
INFO:tensorflow:Oracle triggered exit
Does anybody know how to solve this problem?
It is telling you that the project already exists. Add overwrite=True to kt.Hyperband:
tuner = kt.Hyperband(
    build_model,
    objective='val_accuracy',
    max_epochs=10,
    hyperband_iterations=2,
    overwrite=True
)
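If you would rather keep the old trials than overwrite them, you can point the tuner at a fresh location instead; directory and project_name are standard Keras Tuner arguments, and the names used here are only placeholders:

tuner = kt.Hyperband(
    build_model,
    objective='val_accuracy',
    max_epochs=10,
    hyperband_iterations=2,
    directory='tuning',       # where trial state is stored
    project_name='run_2'      # a fresh name avoids reloading ./untitled_project
)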
I am trying to apply k-fold cross validation to a CNN classification problem.
Let's say I have two classes, carA and carB, so I made the subfolders
car/trainCross/fold0 car/trainCross/fold1
car/validCross/fold0 car/validCross/fold1
and wrote the following code:
from keras.models import load_model
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint

# the datagens, df_test, image_size, batch_size, get_steps and the sample
# counts are defined elsewhere
model_path = '../carPrediction/model/' + 'saved.hdf5'
for i in range(2):
    print('training->', i, ' split')
    train_generator = train_datagen.flow_from_directory(TRAIN_CROPPED_PATH + 'fold' + str(i),
                                                        target_size=(image_size, image_size),
                                                        batch_size=batch_size,
                                                        class_mode='categorical',
                                                        seed=2019,
                                                        color_mode='rgb')
    print(VALID_CROPPED_PATH + 'fold' + str(i))
    validation_generator = valid_datagen.flow_from_directory(
        VALID_CROPPED_PATH + 'fold' + str(i),
        target_size=(image_size, image_size),
        batch_size=batch_size,
        class_mode='categorical',
        seed=2019,
        color_mode='rgb'
    )
    test_generator = test_datagen.flow_from_dataframe(
        dataframe=df_test,
        directory=TEST_CROPPED_PATH,
        x_col='img_file',
        y_col=None,
        target_size=(image_size, image_size),
        color_mode='rgb',
        class_mode=None,
        batch_size=batch_size,
        shuffle=False
    )
    try:
        model = load_model(model_path, compile=True)
    except OSError:  # no saved model yet on the first fold
        pass
    patient = 2
    callbacks1 = [
        EarlyStopping(monitor='val_loss', patience=patient, mode='min', verbose=1),
        ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=patient / 2, min_lr=0.00001, verbose=1, mode='min'),
        ModelCheckpoint(filepath=model_path, monitor='val_loss', verbose=1, save_best_only=True, mode='min'),
    ]
    history = model.fit_generator(
        train_generator,
        steps_per_epoch=get_steps(nb_train_sample, batch_size),
        epochs=2,
        validation_data=validation_generator,
        validation_steps=get_steps(nb_validation_sample, batch_size),
        verbose=1,
        callbacks=callbacks1
    )
But I am not sure whether this approach is correct. Any thoughts?
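For comparison, here is a minimal sketch of an alternative that avoids pre-split fold folders: index the files in a dataframe and let sklearn's KFold produce the splits. The df_all dataframe, its column names, and the folder layout are hypothetical, and a fresh model should be built for each fold:

import pandas as pd
from glob import glob
from sklearn.model_selection import KFold
from keras.preprocessing.image import ImageDataGenerator

# hypothetical layout: car/all/<class>/<image>.jpg, one row per image
paths = glob('car/all/*/*.jpg')
df_all = pd.DataFrame({'img_file': paths,
                       'class': [p.split('/')[-2] for p in paths]})

datagen = ImageDataGenerator(rescale=1./255)
kf = KFold(n_splits=2, shuffle=True, random_state=2019)
for fold, (train_idx, valid_idx) in enumerate(kf.split(df_all)):
    print('training fold', fold)
    train_gen = datagen.flow_from_dataframe(df_all.iloc[train_idx], x_col='img_file', y_col='class',
                                            target_size=(224, 224), class_mode='categorical')
    valid_gen = datagen.flow_from_dataframe(df_all.iloc[valid_idx], x_col='img_file', y_col='class',
                                            target_size=(224, 224), class_mode='categorical')
    # build a fresh model here for each fold, then fit on train_gen / valid_gen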
I am trying to run the code from this link:
Here is an example kernel where they use a pretrained VGG16 model as the encoder portion of a U-Net.
On the line
[t0_img], dm_img = next(train_gen)
I get the error ValueError: could not convert string to float: 'eb91b1c659a0_12'.
What can I do to fix this?
"""Using a pretrained model to segment
Here is an example kernel where we use a pretrained VGG16 model as the encoder portion of a U-Net and thus can
benefit from the features already created in the model and only focus on learning the specific decoding features.
The strategy was used with LinkNet by one of the top placers in the competition. I wanted to see how well it worked
in particular comparing it to standard or non-pretrained approaches, the code is setup now for VGG16 but can be
easily adapted to other problems"""
import os
from os.path import splitext
from glob import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import skimage.measure
from skimage.io import imread

base_dir = r'E:\Python\carvana-image-masking-challenge\\'
all_img_df = pd.DataFrame(dict(path=glob(os.path.join(base_dir, 'train', '*.*'))))
all_img_df['key_id'] = all_img_df['path'].map(lambda x: splitext(os.path.basename(x))[0])
all_img_df['car_id'] = all_img_df['key_id'].map(lambda x: x.split('_')[0])
all_img_df['mask_path'] = all_img_df['path'].map(lambda x: x.replace('train', 'train_masks').replace('.jpg', '_mask.gif'))
all_img_df['exists'] = all_img_df['mask_path'].map(os.path.exists)
print(all_img_df['exists'].value_counts())
print(all_img_df.sample(3))

def read_diff_img(c_row):
    t0_img = imread(c_row['path'])[:, :, 0:3]
    cg_img = imread(c_row['mask_path'], as_gray=True)
    return t0_img, cg_img

def make_change_figure(c_row):
    a, c = read_diff_img(c_row)
    fig, (ax1, ax3) = plt.subplots(1, 2, figsize=(21, 7))
    ax1.imshow(a)
    ax1.set_title('Before')
    ax1.axis('off')
    d = skimage.measure.label(c)
    ax3.imshow(d, cmap='nipy_spectral_r')
    ax3.set_title('Changes')
    ax3.axis('off')
    return fig

_, t_row = next(all_img_df.sample(1).iterrows())
make_change_figure(t_row).savefig('overview.png', dpi=300)
a, c = read_diff_img(t_row)
plt.imshow(c, cmap='nipy_spectral_r')
plt.show()
print(a.shape, c.shape)

"""Training and Validation Split
Here we split based on scene so the model doesn't overfit the individual images"""
from sklearn.model_selection import train_test_split

def train_test_split_on_group(in_df, col_id, **kwargs):
    group_val = np.unique(in_df[col_id])
    train_ids, test_ids = train_test_split(group_val, **kwargs)
    return in_df[in_df[col_id].isin(train_ids)], in_df[in_df[col_id].isin(test_ids)]

train_df, valid_df = train_test_split_on_group(all_img_df, col_id='car_id', random_state=2018, test_size=0.2)
valid_df, test_df = train_test_split_on_group(valid_df, col_id='car_id', random_state=2018, test_size=0.5)
print(train_df.shape[0], 'training images')
print(valid_df.shape[0], 'validation images')
print(test_df.shape[0], 'test images')

# Augmenting Data
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input

dg_args = dict(featurewise_center=False,
               samplewise_center=False,
               rotation_range=5,
               width_shift_range=0.01,
               height_shift_range=0.01,
               shear_range=0.01,
               zoom_range=[0.9, 1.1],
               horizontal_flip=True,
               vertical_flip=False,  # no upside down cars
               fill_mode='nearest',
               data_format='channels_last',
               preprocessing_function=preprocess_input)
IMG_SIZE = (512, 512)  # slightly smaller than vgg16 normally expects
default_batch_size = 8
core_idg = ImageDataGenerator(**dg_args)
mask_args = dg_args.copy()
mask_args['preprocessing_function'] = lambda x: x/255.0
mask_idg = ImageDataGenerator(**mask_args)

def flow_from_dataframe(img_data_gen, in_df, path_col, y_col, **dflow_args):
    # base_dir = E:\Python\carvana-image-masking-challenge\train
    base_dir = os.path.dirname(in_df[path_col].values[0])
    print('## Ignore next message from keras, values are replaced anyways')
    df_gen = img_data_gen.flow_from_directory(base_dir, class_mode='sparse', **dflow_args)
    df_gen.filenames = in_df[path_col].values
    df_gen.classes = np.stack(in_df[y_col].values)
    df_gen.samples = in_df.shape[0]
    df_gen.n = in_df.shape[0]
    df_gen._set_index_array()
    df_gen.directory = ''  # since we have the full path
    print('Reinserting dataframe: {} images'.format(in_df.shape[0]))
    return df_gen

def make_gen(img_gen, mask_gen, in_df, batch_size=default_batch_size, seed=None, shuffle=True):
    if seed is None:
        seed = np.random.choice(range(9999))
    flow_args = dict(target_size=IMG_SIZE, batch_size=batch_size, seed=seed, shuffle=shuffle, y_col='key_id')
    t0_gen = flow_from_dataframe(img_gen, in_df, path_col='path', color_mode='rgb', **flow_args)
    dm_gen = flow_from_dataframe(mask_gen, in_df, path_col='mask_path', color_mode='grayscale', **flow_args)
    for (t0_img, _), (dm_img, _) in zip(t0_gen, dm_gen):
        yield [t0_img], dm_img

train_gen = make_gen(core_idg, mask_idg, train_df)
valid_gen = make_gen(core_idg, mask_idg, valid_df, seed=0, shuffle=False)
test_gen = make_gen(core_idg, mask_idg, test_df, seed=0, shuffle=False, batch_size=2*default_batch_size)
[t0_img], dm_img = next(train_gen)
print(t0_img.shape, t0_img.max())
print(dm_img.shape, dm_img.max(), dm_img.mean())
I have fixed this problem; here is how you should change the code.
You don't need the flow_from_dataframe function anymore, so comment it out or delete it, because I've included the necessary calls in the make_gen function.
Just fix the following part:
dg_args = dict(featurewise_center=False,
               samplewise_center=False,
               rotation_range=5,
               width_shift_range=0.01,
               height_shift_range=0.01,
               shear_range=0.01,
               zoom_range=[0.9, 1.1],
               horizontal_flip=True,
               vertical_flip=False,  # no upside down cars
               fill_mode='nearest',
               data_format='channels_last',
               preprocessing_function=preprocess_input)
IMG_SIZE = (512, 512)  # slightly smaller than vgg16 normally expects
default_batch_size = 8
core_idg = ImageDataGenerator(**dg_args)
mask_args = dg_args.copy()
mask_args['preprocessing_function'] = lambda x: x/255.0
mask_idg = ImageDataGenerator(**mask_args)

def make_gen(img_gen, mask_gen, in_df, batch_size=default_batch_size, seed=None, shuffle=True):
    if seed is None:
        seed = np.random.choice(range(9999))
    base_dir = os.path.dirname(train_df['path'].values[0])
    mask_dir = os.path.dirname(train_df['mask_path'].values[0])
    flow_args = dict(batch_size=batch_size,
                     shuffle=shuffle,
                     seed=seed)
    t0_gen = img_gen.flow_from_dataframe(in_df, directory=base_dir, x_col='path', y_col='key_id',
                                         target_size=IMG_SIZE,
                                         color_mode='rgb', class_mode='sparse', **flow_args)
    dm_gen = mask_gen.flow_from_dataframe(in_df, directory=mask_dir, x_col='mask_path', y_col='key_id',
                                          target_size=IMG_SIZE,
                                          color_mode='grayscale', class_mode='sparse',
                                          validate_filenames=False, **flow_args)
    for (t0_img, _), (dm_img, _) in zip(t0_gen, dm_gen):
        yield [t0_img], dm_img

train_gen = make_gen(core_idg, mask_idg, train_df)
valid_gen = make_gen(core_idg, mask_idg, valid_df, seed=0, shuffle=False)
test_gen = make_gen(core_idg, mask_idg, test_df, seed=0, shuffle=False, batch_size=2*default_batch_size)
[t0_img], dm_img = next(train_gen)
print(t0_img.shape, t0_img.max())
print(dm_img.shape, dm_img.max(), dm_img.mean())
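Since make_gen yields batches forever, Keras needs explicit step counts when training on it. A minimal sketch of the training call, where unet stands in for the VGG16 U-Net model built later in the kernel (not shown here):

# derive step counts from the dataframe sizes, since the generators never stop
steps = train_df.shape[0] // default_batch_size
val_steps = valid_df.shape[0] // default_batch_size
unet.fit(train_gen,
         steps_per_epoch=steps,
         validation_data=valid_gen,
         validation_steps=val_steps,
         epochs=5)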