How to solve a problem running MTCNN in JupyterLab - python-3.x

The function detect_faces() fails in JupyterLab:
from PIL import Image
from numpy import asarray
from mtcnn import MTCNN

image = Image.open(filename)
imageRGB = image.convert('RGB')
pixels = asarray(imageRGB)
detector = MTCNN()
results = detector.detect_faces(pixels)
mtcnn version: 0.1.0
The error:
AbortedError: Operation received an exception:Status: 2, message:
could not create a descriptor for a softmax forward propagation
primitive, in file tensorflow/core/kernels/mkl/mkl_softmax_op.cc:306
[[node model/softmax/Softmax (defined at
/home/rikkatti/anaconda3/envs/poi/lib/python3.9/site-packages/mtcnn/mtcnn.py:342)
]] [Op:__inference_predict_function_828]
Function call stack: predict_function

That's probably due to a conflict between the Keras and TensorFlow versions.
I solved it by doing these steps:
Uninstall TensorFlow from the Anaconda venv, then reinstall a pinned version:
pip install tensorflow==2.9.0
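After reinstalling, a quick sanity check from the notebook (just a sketch; the exact builds on your machine may differ):

import tensorflow as tf
import mtcnn

print(tf.__version__)     # should now report 2.9.0
print(mtcnn.__version__)  # 0.1.0, as in the question (use "pip show mtcnn" if this attribute is missing)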

Try it this way using cv2
from mtcnn import MTCNN
import os
import cv2

detector = MTCNN()
dest_dir = r'C:\Temp\people\storage\cropped'  # specify where to save the cropped images
filename = r'C:\Temp\people\storage\34.png'   # specify the full path to the source image
try:
    img = cv2.imread(filename)  # filename must be the full path to the image
    shape = img.shape           # raises an exception if the image was not read properly
    data = detector.detect_faces(img)
    if data == []:
        print('no faces were detected for file', filename)
    else:
        for i, faces in enumerate(data):
            box = faces['box']
            if box != []:
                box[0] = 0 if box[0] < 0 else box[0]
                box[1] = 0 if box[1] < 0 else box[1]
                cropped_img = img[box[1]: box[1] + box[3], box[0]: box[0] + box[2]]
                fname = os.path.split(filename)[1]
                index = fname.rfind('.')
                fileid = fname[:index]
                fext = fname[index:]
                fname = fileid + '-' + str(i) + fext
                save_path = os.path.join(dest_dir, fname)
                cv2.imwrite(save_path, cropped_img)
except Exception as e:
    print('an error occurred:', e)
This will detect all faces in the image and save them as cropped images in dest_dir. I tested it with an image containing multiple faces and it works fine.
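One caveat with this approach: cv2.imread() returns pixels in BGR order, while the mtcnn package's own examples feed detect_faces() RGB arrays, so if detections look off it may help to convert first. A minimal sketch (reusing detector and filename from the code above):

import cv2

img = cv2.imread(filename)                      # BGR, as loaded by OpenCV
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # RGB, as in the mtcnn examples
data = detector.detect_faces(img_rgb)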

Related

module 'torch' has no attribute 'frombuffer' in Google Colab

data_root = os.path.join(os.getcwd(), "data")
transform = transforms.Compose(
    [
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)
fashion_mnist_dataset = FashionMNIST(data_root, download=True, train=True, transform=transform)
Error Message
/usr/local/lib/python3.7/dist-packages/torchvision/datasets/mnist.py in read_sn3_pascalvincent_tensor(path, strict)
524 # we need to reverse the bytes before we can read them with torch.frombuffer().
525 needs_byte_reversal = sys.byteorder == "little" and num_bytes_per_value > 1
--> 526 parsed = torch.frombuffer(bytearray(data), dtype=torch_type, offset=(4 * (nd + 1)))
527 if needs_byte_reversal:
528 parsed = parsed.flip(0)
AttributeError: module 'torch' has no attribute 'frombuffer'
What can I do about this error in Colab?
I tried your code in my Google Colab after adding the imports below, and it works without errors.
import os
from torchvision import transforms
from torchvision.datasets import FashionMNIST
I used
torchvision 0.13.0+cu113
google-colab 1.0.0
Runtime: GPU (it also works when set to "None")
Do you get errors when you use the same code above? Are you using different versions?
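If it still fails, it is worth checking the installed versions: torch.frombuffer only exists in newer PyTorch releases (1.10+, if I remember correctly), so an older torch paired with a newer torchvision in Colab would explain the AttributeError. A quick check:

import torch
import torchvision

print(torch.__version__, torchvision.__version__)
print(hasattr(torch, "frombuffer"))  # False means torch is too old for this torchvision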

Getting ValueError for Keras model when running in Flask - Python 3

I have a facial recognition model which works fine when an image is passed to it, whether as a separate file or a webcam capture. However, I moved it to a browser-based application and sent the webcam image to the server, which is Python code running Flask.
I tested to see if the image is properly processed and ran cv2.imread() on it, which worked fine. But when I passed that image to the model for prediction, I got this error:
ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("dense_4/Softmax:0", shape=(?, 7),
dtype=float32) is not an element of this graph.
This wasn't happening otherwise, i.e. when I ran the code as pure Python (from the terminal, NOT as a browser application). This is the code:
@app.route('/image', methods=['POST'])
def image():
    json_file = open(f'{model_path}/fer.json', 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    loaded_model = model_from_json(loaded_model_json)
    loaded_model.load_weights(f"{model_path}/fer.h5")
    i = request.files['image']
    f = ('%s.jpeg' % time.strftime("%Y%m%d-%H%M%S"))
    i.save('%s/%s' % (path, f))
    full_size_image = cv2.imread(f"{path}/{f}")
    try:
        gray = cv2.cvtColor(full_size_image, cv2.COLOR_RGB2GRAY)
    except Exception as e:
        gray = full_size_image
    face = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = face.detectMultiScale(gray, 1.3, 10)
    artist, track = None, None
    for (x, y, w, h) in faces:
        print("face detected")
        roi_gray = gray[y:y + h, x:x + w]
        cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray, (48, 48)), -1), 0)
        cv2.normalize(cropped_img, cropped_img, alpha=0, beta=1, norm_type=cv2.NORM_L2, dtype=cv2.CV_32F)
        yhat = loaded_model.predict(cropped_img)  # THIS IS WHERE THE ERROR OCCURS
        break
    return Response("%s saved" % f)
I've put a comment next to the line where the error occurs in the above code.
The function hasn't been changed at all; only the @app.route part is new.
How do I resolve this error?
I found a fix in this, which says that a Flask web request creates its own TensorFlow session rather than using the default one, so you need to make the prediction use the default session.
However, this created another error along the lines of "Container location does not exist". The fix for this was found here, which, combined with the solution from the first link, solved my problem.
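For reference, a minimal sketch of that pattern from the TF 1.x / standalone Keras era (predict_emotion is a hypothetical helper and the file paths are placeholders; this is not the exact code from the linked answers):

import tensorflow as tf
from keras import backend as K
from keras.models import model_from_json

# Load the model once at startup and remember the graph/session it lives in.
sess = tf.Session()
K.set_session(sess)
with open('fer.json') as json_file:      # placeholder path, as in the question
    loaded_model = model_from_json(json_file.read())
loaded_model.load_weights('fer.h5')      # placeholder path, as in the question
graph = tf.get_default_graph()

def predict_emotion(cropped_img):
    # Call this from inside the Flask view: switch back to the graph/session
    # the model was loaded in before predict(), so its tensors are elements
    # of the active graph.
    with graph.as_default():
        K.set_session(sess)
        return loaded_model.predict(cropped_img)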

Is it possible to replace keras Lambda layers with native CoreML ones to convert the model?

For the first time I faced a need to convert my Keras model to CoreML. This can be done via the coremltools package:
import coremltools
import keras

model = Model(...)  # keras
coreml_model = coremltools.converters.keras.convert(model,
    input_names="input_image_NHWC",
    output_names="output_image_NHWC",
    image_scale=1.0,
    model_precision='float32',
    use_float_arraytype=True,
    custom_conversion_functions={"Lambda": convert_lambda},
    input_name_shape_dict={'input_image_NHWC': [None, 384, 384, 3]}
)
However, I have two Lambda layers: the first one is depth-to-space (pixel shuffle) and the other one is a scaler:
def tf_upsampler(x):
    return tf.nn.depth_to_space(x, 4)

def mulfunc(x, beta=0.2):
    return beta * x
...
x = Lambda(tf_upsampler)(x)
...
x = Lambda(mulfunc)(x)
The only advice I found, as far as I understand, was to use a custom layer, with the need to implement my layers in Swift code later. Something like this, with MyPixelShuffle and MyScaleLayer to be implemented somehow as classes in an Xcode project (?):
def convert_lambda(layer):
    # Only convert this Lambda layer if it wraps one of our functions.
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyPixelShuffle"
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "pixelshuffle"
        params.parameters["blockSize"].intValue = 4
        return params
    elif layer.function == mulfunc:
        # https://stackoverflow.com/questions/47987777/custom-layer-with-two-parameters-function-on-core-ml
        params = NeuralNetwork_pb2.CustomLayerParams()
        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyScaleLayer"
        params.description = "scaling input"
        # HERE!! This is important.
        params.parameters["scale"].doubleValue = 0.2
        # The description is shown in Xcode's mlmodel viewer.
        params.description = "multiplication by constant"
        return params
However, I found that CoreML actually has the layers I need; they can be found as ScaleLayer and ReorganizeDataLayer.
How can I use those native layers to replace the Lambdas in the Keras model? Is it possible to edit the CoreML protobuf for the network? Or, if there are Swift/Obj-C classes for them, what are they called?
Can it be done by deleting/adding layers with coremltools.models.neural_network.NeuralNetworkBuilder?
UPDATE:
I found that the Keras converter actually invokes the neural network builder to add different layers. The builder has the builder.add_reorganize_data method I need. Now the question is how to replace the custom layers in the model. I can load it into the builder and inspect the layers:
coreml_model_path = 'mymodel.mlmodel'
spec = coremltools.models.utils.load_spec(coreml_model_path)
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers(last=10)
[Id: 417], Name: lambda_10 (Type: custom)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['lambda_10_output']
It's much simpler to do something like this:
def convert_lambda(layer):
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.ReorganizeDataLayerParams()
        params.fillInTheOtherPropertiesHere = someValue
        return params
    ...etc..
In other words, you don't have to return a custom layer if some existing layer type already does what you want.
Okay, it seems I have found a way to go. I created a virtual environment with a separate copy of coremltools and edited the _convert() method in _keras2_converter.py by adding the following code:
for iter, layer in enumerate(graph.layer_list):
    keras_layer = graph.keras_layer_map[layer]
    print("%d : %s, %s" % (iter, layer, keras_layer))
    if isinstance(keras_layer, _keras.layers.wrappers.TimeDistributed):
        keras_layer = keras_layer.layer
    converter_func = _get_layer_converter_fn(keras_layer, add_custom_layers)
    input_names, output_names = graph.get_layer_blobs(layer)
    # this may be none if we're using custom layers
    if converter_func:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
    else:
        if _is_activation_layer(keras_layer):
            import six
            if six.PY2:
                layer_name = keras_layer.activation.func_name
            else:
                layer_name = keras_layer.activation.__name__
        else:
            layer_name = type(keras_layer).__name__
        if layer_name in custom_conversion_functions:
            custom_spec = custom_conversion_functions[layer_name](keras_layer)
        else:
            custom_spec = None
        if layer.find('tf_up') != -1:
            print('TF_UPSCALE found')
            builder.add_reorganize_data(layer, input_names[0], output_names[0], mode='DEPTH_TO_SPACE', block_size=4)
        elif layer.find('mulfunc') != -1:
            print('SCALE found')
            builder.add_scale(layer, W=0.2, b=0, has_bias=False, input_name=input_names[0], output_name=output_names[0])
        else:
            builder.add_custom(layer, input_names, output_names, custom_spec)
The trigger is the layer name. In Keras I use model.load_weights(by_name=True) and the following naming of my Lambdas:
x = Lambda(mulfunc, name=scope+'mulfunc')(x)
x = Lambda(tf_upsampler,name='tf_up')(x)
Now the model at least has the layers I need:
[Id: 417], Name: tf_up (Type: reorganizeData)
Updatable: False
Input blobs: ['up1_output']
Output blobs: ['tf_up_output']
Now it's time to validate it on my VirtualBox macOS; I will post what I get.
UPDATE:
I don't see errors regarding my replaced Lambda layers, but there is another one that doesn't let me predict:
Layer 'concatenate_1' type 320 has 1 inputs but expects at least 2
I think it's related to the fact that Keras passes a single input to the concatenate layer, which is a list of inputs. Looking into how to fix this.
UPDATE 2:
Tried to hotfix this by using the builder's add_concat_nd(self, name, input_names, output_name, axis) function for my concatenate layers: got an error during inference that this layer type is unsupported (?!):
if converter_func:
    if layer.find('concatenate') != -1:
        print('CONCATENATE FOUND')
        builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
    else:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
Unsupported layer type (CoreML.Specification.NeuralNetworkLayer)
UPDATE 4:
Found a fix for this, and changed the way the builder is initialized:
builder = _NeuralNetworkBuilder(input_features, output_features, mode=mode,
                                use_float_arraytype=use_float_arraytype, disable_rank5_shape_mapping=True)
Now the error message is gone, but I have problems with the Xcode version: the model has version 4, while my Xcode supports 3. Will seek to update my VM.
The "CoreML Survival Guide" PDF suggests the following in this case:
pip install -U git+https://github.com/apple/coremltools.git
For example, if loading a model using coremltools gives an error such as the following, then
try installing the latest coremltools directly from the GitHub repo.
Error compiling model: "Error reading protobuf spec. validator error:
The .mlmodel supplied is of version 3, intended for a newer version of
Xcode. This version of Xcode supports model version 2 or earlier.
UPDATE:
Updated from git. Got an error:
No module named 'coremltools.libcoremlpython'
Looks like the latest git is broken :(
Damn, it seems I need macOS 10.15 and Xcode 11.
UPDATE 5:
Still fighting with a bug on 10.15. Found that coremltools somehow deduplicates the inputs to the Concatenate layer, so if you have something like Concatenate()([x, x]) in your Keras code, you will get a concatenate layer with 1 input in CoreML, and an error. To fix it I try to further modify the code above:
if layer.find('concatenate') != -1:
    print('CONCATENATE FOUND', len(input_names))
    if len(input_names) == 1:
        input_names = [input_names[0], input_names[0]]
    builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
I then faced this error: "input layer 'conv_in' of type 'Convolution' has input rank 3 but expects rank at least 4" (see: coremltools: how to properly use NeuralNetworkMultiArrayShapeRange?), which seems to be caused by CoreML making the input 3-dimensional CHW, while it must be 4-dimensional NHWC (?). Currently playing with the following:
spec = coreml_model._spec
# fixing input shape
# https://stackoverflow.com/questions/59765376/coreml-model-convert-imagetype-model-input-to-multiarray
spec.description.input[0].type.multiArrayType.shape.extend([1, 384, 384, 3])
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
coremltools.utils.save_spec(spec, "my.mlmodel")
Getting invalid blob shape from model internals

using opencv LineSegmentDetector to find line of an image

I have a color image and I should use the OpenCV LineSegmentDetector algorithm to detect the lines of the rectangles in the image.
Here is my image:
I'm using this code:
import cv2
img = cv2.imread("rectangles.jpg",0)
#Create default parametrization LSD
lsd = cv2.createLineSegmentDetector(0)
#Detect lines in the image
lines = lsd.detect(img)[0]
#Draw detected lines in the image
drawn_img = lsd.drawSegments(img,lines)
#Show image
cv2.imshow("LSD",drawn_img )
cv2.waitKey(0)
and I'm getting this error:
<ipython-input-18-93ae667b0648> in <module>()
3
4 #Create default parametrization LSD
----> 5 lsd = cv2.createLineSegmentDetector(0)
6
7 #Detect lines in the image
error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\lsd.cpp:143: error: (-213:The function/feature is not implemented) Implementation has been removed due original code license issues in function 'cv::LineSegmentDetectorImpl::LineSegmentDetectorImpl'
I checked the OpenCV 4.1 documentation for this method and here is the page, but I don't understand how I should use it.
Any help is appreciated.
Did you read the error message?
error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\lsd.cpp:143: error: (-213:The function/feature is not implemented)
Implementation has been removed due original code license issues in function 'cv::LineSegmentDetectorImpl::LineSegmentDetectorImpl'
The class is not available due to license issues.
You can see that here in the original source.
You can also use the Fast Line Detector, which is available in OpenCV 4.1 (it lives in the ximgproc module, i.e. the opencv-contrib-python package when installing via pip).
import cv2
img = cv2.imread("rectangles.jpg",0)
#Create default Fast Line Detector (FLD)
fld = cv2.ximgproc.createFastLineDetector()
#Detect lines in the image
lines = fld.detect(img)
#Draw detected lines in the image
drawn_img = fld.drawSegments(img,lines)
#Show image
cv2.imshow("FLD", drawn_img)
cv2.waitKey(0)
Result:

python opencv threshold red image

I am trying to threshold a BGR image after I separate the red channel, but my code always returns "Segmentation fault".
import numpy as np
import cv2

def mostrarVentana(titulo, imagen):
    print('Mostrando imagen')
    cv2.imshow(titulo, imagen)
    k = cv2.waitKey(0)
    if k == 27:  # wait for ESC key to exit
        cv2.destroyAllWindows()

img = cv2.imread('RepoImagenes/640x480/P5.jpg', 1)  # loading image in BGR
redImg = img[:, :, 2]  # extracting red channel
rbin, threshImg = cv2.threshold(redImg, 58, 255, cv2.THRESH_BINARY)  # thresholding
mostrarVentana('Binary image', threshImg)
I have read the documentation on how to use the threshold() function and I cannot figure out what's wrong. I only need to work on the red channel; how can I get this done?
I am using Python 3.4 and OpenCV 3.1.0.
First of all, OpenCV provides a simple API to split an n-channel image: cv2.split(), which returns a list of the image's channels.
There is also a bug in your mostrarVentana method: you never created a cv2.namedWindow() and are calling cv2.imshow() directly, but you cannot simply call cv2.imshow() without creating a cv2.namedWindow() first.
Also, you must make sure the image is properly loaded before accessing the desired channel, otherwise it leads to weird errors. Your code with some scenario handling would look like this:
import numpy as np
import cv2

def mostrarVentana(titulo, imagen):
    print('Mostrando imagen')
    cv2.namedWindow(titulo, cv2.WINDOW_NORMAL)
    cv2.imshow(titulo, imagen)
    k = cv2.waitKey(0)
    if k == 27:  # wait for ESC key to exit
        cv2.destroyAllWindows()

img = cv2.imread('RepoImagenes/640x480/P5.jpg', 1)  # loading image in BGR
if img is not None and len(img.shape) == 3 and img.shape[2] == 3:
    print(img.shape)  # this should not raise an error
    blue_img, green_img, red_img = cv2.split(img)  # extracting red channel
    rbin, threshImg = cv2.threshold(red_img, 58, 255, cv2.THRESH_BINARY)  # thresholding
    mostrarVentana('Binary image', threshImg)
else:
    if img is None:
        print("Sorry, the image path was not valid")
    else:
        print("Sorry, the image was not loaded in BGR, 3-channel format")
