Face authentication using ML Kit (Android Studio) - android-studio

Using ML Kit I am able to detect a face and also store it. Now I want to verify the stored face against the current face. How can I implement this with ML Kit? I googled but couldn't find anything, so please suggest how to implement it. I would prefer to use only ML Kit, not other libraries such as OpenCV.

ML Kit only supports face detection, not face recognition or authentication.

You can compare two detected faces by comparing the distance between their facial landmarks, the distance between their facial contours, and the difference in their classification results (for example the smiling probability). The threshold values below are arbitrary and may need to be adjusted depending on your use case.
import android.graphics.PointF
import android.util.Log
import com.google.mlkit.vision.face.Face
import com.google.mlkit.vision.face.FaceContour
import com.google.mlkit.vision.face.FaceLandmark
import kotlin.math.abs
import kotlin.math.hypot

private const val TAG = "FaceCompare"

// Euclidean distance between two points (in image pixel coordinates)
private fun distance(a: PointF, b: PointF): Float = hypot(a.x - b.x, a.y - b.y)

fun compareFaces(face1: Face, face2: Face) {
    // Compare a facial landmark: distance between the two left-eye positions.
    // Landmarks are only populated when the detector uses LANDMARK_MODE_ALL.
    val leftEye1 = face1.getLandmark(FaceLandmark.LEFT_EYE)?.position ?: return
    val leftEye2 = face2.getLandmark(FaceLandmark.LEFT_EYE)?.position ?: return
    val leftEyeDistance = distance(leftEye1, leftEye2)

    // Compare a facial contour: average point-wise distance along the bottom of the upper lip.
    // Contours require CONTOUR_MODE_ALL on the detector options.
    val upperLipBottom1 = face1.getContour(FaceContour.UPPER_LIP_BOTTOM)?.points ?: return
    val upperLipBottom2 = face2.getContour(FaceContour.UPPER_LIP_BOTTOM)?.points ?: return
    val upperLipBottomDistance = upperLipBottom1.zip(upperLipBottom2)
        .map { (p1, p2) -> distance(p1, p2) }
        .average()

    // Compare a classification result: difference in smiling probability.
    // Requires CLASSIFICATION_MODE_ALL on the detector options.
    val smilingProbability1 = face1.smilingProbability ?: return
    val smilingProbability2 = face2.smilingProbability ?: return
    val smilingProbabilityDiff = abs(smilingProbability1 - smilingProbability2)

    // Compare all the values and decide if the faces are similar or not.
    // The distances are in pixels, so these thresholds are arbitrary and must be tuned
    // to your image size; this is a similarity heuristic, not real face recognition.
    if (leftEyeDistance < 10f && upperLipBottomDistance < 10.0 && smilingProbabilityDiff < 0.1f) {
        Log.d(TAG, "The two faces are similar.")
    } else {
        Log.d(TAG, "The two faces are not similar.")
    }
}

Related

Pooling results from cox.zph? (multiple imputation)

Hi, I'm new to multiple imputation and have a question about running survival analyses on imputed datasets.
I've run the primary model in five imputed datasets using the mice package in R:
imp <- mice(data, maxit = 5,
            predictorMatrix = predM,
            method = meth, print = FALSE)
long <- mice::complete(imp, action = "long", include = TRUE)
long_mids <- as.mids(long)
coxmodel <- with(long_mids,
                 coxph(Surv(time, event) ~ predictor))
summary(pool(coxmodel))
And I want to test the PH assumption by running something like
test <- with(long_mids,
             cox.zph(coxph(Surv(time, event) ~ predictor)))
But I'm not sure how to get the "pooled" results.
summary(pool(test))
This gives me an error (Error: No glance method for objects of class cox.zph), and I understand that the cox.zph output has no standard errors, which pooling requires.
How should I test the PH assumption here, then? I don't think I can just look at the results from the five datasets separately and draw a conclusion, can I?
Thanks for any help with this!

Pyspark Kernel Density Estimation over multiple groups in parallel

Given a dataset consisting of money transactions, I am trying to use kernel density estimation to form clusters of transactions by their transaction amount. To do this, I identify the local minima of the density and use these as boundaries for the different clusters. I am able to do this on the whole dataset.
However, I now want to apply KDE to groups of data, i.e. estimate a separate kernel density for each group of transactions. The transactions are grouped by the counterparty bank account from which they are sent. Currently I use a naive approach where I simply loop over all counterparties. This is very inefficient, and since I am using Spark I would like to do this in parallel, but I am not sure how, as I am quite new to PySpark.
Any suggestions on how to do this?
Code that executes KDE over all data
import numpy as np
from pyspark.mllib.stat import KernelDensity
from pyspark.sql import functions as f
from scipy.signal import argrelextrema
from matplotlib.pyplot import plot
from bisect import bisect

# estimate the density of all transaction amounts
dat_rdd = sdf_pos.select("amount").rdd
dat_rdd_amounts = dat_rdd.map(lambda x: float(x[0]))
kd = KernelDensity()
kd.setBandwidth(10.0)
kd.setSample(dat_rdd_amounts)

# evaluate the density on a grid and find its local minima
s = np.linspace(0, 3000, num=50)
e = kd.estimate(s)
mi = argrelextrema(e, np.less)[0]
print("Minima:", s[mi])

# use the minima as boundaries to assign each amount to a group
minima_array = f.array([f.lit(i) for i in s[mi]])
user_func = f.udf(bisect)
sdf_pos = sdf_pos.withColumn("amount_group",
                             user_func(minima_array, f.col("amount")).cast('integer'))
Code that executes KDE separately for each group
# naive approach: loop over each counterparty on the driver and run KDE per group
counter_parties = sdf_pos.select("CP").distinct().collect()
sdf_pos = sdf_pos.withColumn("minima_array", f.array(f.lit(-1)))
dat_rdd = sdf_pos.select(["amount", "CP"]).rdd
for cp in counter_parties:
    dat_rdd_amounts = dat_rdd.filter(lambda y: y[1] == cp[0]).map(lambda x: float(x[0]))
    kd = KernelDensity()
    kd.setBandwidth(10.0)
    kd.setSample(dat_rdd_amounts)
    s = np.linspace(0, 3000, num=50)
    e = kd.estimate(s)
    mi = argrelextrema(e, np.less)[0]
    minima_array = f.array([f.lit(i) for i in s[mi]])
    sdf_pos = sdf_pos.withColumn("minima_array",
                                 f.when(f.col("CP") == cp[0], minima_array).otherwise(f.col("minima_array")))
user_func = f.udf(bisect)
sdf_pos = sdf_pos.withColumn("amount_group", user_func(f.col("minima_array"), f.col("amount")))

Keras data generator for multi-task learning with a non-image data format

I am working on a multi-task semantic segmentation problem with three decoders, so I need to feed three inputs and get three outputs. Furthermore, my datasets are not in image formats (.jpg, ...) but in .mat and .npy formats. My labels are maps with the same shape as my grayscale images, taking the three values 0, 1 and 2. With these two points in mind, I am trying to load the dataset using Keras generators, as my dataset is very large. Below is what I have tried based on the Keras documentation for generators, but to my knowledge the documentation assumes image data and a single-task network. How can I adjust my code so that I can use generators for multiple tasks and multiple non-image data formats?
def batch_generator(X_gen, Y_gen, map1_gen, map2_gen):
    while True:
        yield (X_gen.next(), Y_gen.next(), map1_gen.next(), map2_gen.next())
where map1_gen and map2_gen are supposed to be generators for the other two inputs (maps).
train_images_dir = ''
train_masks_dir = ''
train_map1_dir = ''
train_map2_dir = ''
val_images_dir = ''
val_masks_dir = ''
val_map1_dir = ''
val_map2_dir = ''
datagen = ImageDataGenerator()
train_images_generator = datagen.flow_from_directory(train_images_dir,target_size=(Img_Length,Img_Height),batch_size=batch_size,class_mode=None)
train_mask_generator = datagen.flow_from_directory(train_masks_dir,target_size=(Img_Length,Img_Height, num_classes),batch_size=1,class_mode='categorical')
train_map1_generator = datagen.flow_from_directory(train_map1_dir,target_size=(Img_Length,Img_Height),batch_size=batch_size,class_mode=None)
train_map2_generator = datagen.flow_from_directory(train_map2_dir,target_size=(Img_Length,Img_Height),batch_size=batch_size ,class_mode=None)
# validation augmentation
val_images_generator = datagen.flow_from_directory(val_images_dir,target_size=(Img_Length,Img_Height),batch_size=batch_size,class_mode=None)
val_masks_generator = datagen.flow_from_directory(val_masks_dir,target_size=(Img_Length,Img_Height, num_classes),batch_size=1,class_mode='categorical')
val_map1_generator = datagen.flow_from_directory(val_map1_dir,target_size=(Img_Length,Img_Height),batch_size=batch_size,class_mode=None)
val_map2_generator = datagen.flow_from_directory(val_map2_dir,target_size=(Img_Length,Img_Height),batch_size=batch_size,class_mode=None)
model = ...
model.fit_generator(batch_generator(train_images_generator,train_mask_generator, train_map1_generator, train_map2_generator), validation_data=batch_generator(val_images_generator,val_masks_generator, val_map1_generator, val_map2_generator),callbacks=...)
The output of each decoder is supposed to be a (Img_Length, Img_Height) segmentation map with the three labels 0, 1, 2; the map1 and map2 outputs are (Img_Length, Img_Height) maps of linear values, respectively.
You could try to implement a custom generator and drop the ImageDataGenerator completely, e.g.:
def batch_generator(batchsize):
    while True:
        inputs1 = []
        inputs2 = []
        inputs3 = []
        outputs1 = []
        outputs2 = []
        outputs3 = []
        for _ in range(batchsize):
            input1 = cv2.imread(img1)  # or np.load / scipy.io.loadmat for your non-image files
            inputs1.append(input1)
            inputs2.append(...)
            ...
        # you may have to convert the lists into numpy arrays
        yield ([inputs1, inputs2, inputs3], [outputs1, outputs2, outputs3])
Basically, you directly yield a list of all your inputs and outputs, each of them being a batch.
That means you have to read the files in manually, but I think that makes sense considering you have some non-image data types.
You can then pass this generator to model.fit_generator (or just model.fit since TensorFlow 2):
model.fit_generator(batch_generator(batchsize))
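To make that concrete for the .npy case, here is a minimal, hypothetical sketch: the file lists, the shapes, and the choice to treat the mask, map1 and map2 as the three decoder targets are all assumptions based on the question (for .mat files, scipy.io.loadmat could be used instead of np.load).
import numpy as np

def npy_batch_generator(image_paths, mask_paths, map1_paths, map2_paths, batchsize):
    while True:
        for start in range(0, len(image_paths), batchsize):
            end = start + batchsize
            # load one batch of grayscale images and the three per-pixel targets from .npy files
            images = np.stack([np.load(p) for p in image_paths[start:end]])
            masks = np.stack([np.load(p) for p in mask_paths[start:end]])   # labels 0, 1, 2
            maps1 = np.stack([np.load(p) for p in map1_paths[start:end]])   # linear values
            maps2 = np.stack([np.load(p) for p in map2_paths[start:end]])   # linear values
            # add a channel axis for the grayscale input; one output per decoder
            yield images[..., np.newaxis], [masks, maps1, maps2]
model.fit can then consume this generator directly, with the output order matching the order of the decoder heads in the model definition.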

OpenCV compare two face embeddings

I went through the PyImageSearch face recognition tutorial, but my application only needs to compare two faces. I have the embeddings of two faces; how do I compare them using OpenCV? The trained model used to extract an embedding from a face is described in the link. I want to know which methods I should try to compare two face embeddings.
(Note: I am new to this field.)
First of all, your case is similar to the given tutorial; instead of multiple images you have a single image that you need to compare with the test image, so you don't really need the training step here.
You can do
import cv2
import face_recognition

# read 1st image and store encodings
image = cv2.imread(args["image"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
boxes = face_recognition.face_locations(rgb, model=args["detection_method"])
encodings1 = face_recognition.face_encodings(rgb, boxes)

# read 2nd image and store encodings (second image path; adjust the key to your argparse setup)
image = cv2.imread(args["image2"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
boxes = face_recognition.face_locations(rgb, model=args["detection_method"])
encodings2 = face_recognition.face_encodings(rgb, boxes)

# now you can compare the two encodings:
# compare_faces takes a list of known encodings and one encoding to check;
# optionally you can pass a tolerance, by default it is 0.6
matches = face_recognition.compare_faces(encodings1, encodings2[0])
matches will be a list of True/False values (one per face found in the first image), telling you whether the faces match.
Based on the article you mentioned, you can actually compare if two faces are the same using only the face_recognition library.
You can use compare_faces to determine if two pictures contain the same face:
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
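If you want a numeric score rather than a boolean, face_recognition also exposes face_distance, which returns the Euclidean distance between encodings so you can choose your own threshold. A small sketch, reusing biden_encoding and unknown_encoding from the snippet above:
import face_recognition

# distance between the known encoding and the unknown one (lower means more similar)
distance = face_recognition.face_distance([biden_encoding], unknown_encoding)[0]
same_person = distance <= 0.6  # 0.6 is the library's default tolerance
print("distance = {:.3f}, same person: {}".format(distance, same_person))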

Sentiment-ranked nodes in dependency parse with Stanford CoreNLP?

I'd like to perform a dependency parse on a group of sentences and look at the sentiment ratings of individual nodes, as in the Stanford Sentiment Treebank (http://nlp.stanford.edu/sentiment/treebank.html).
I'm new to the CoreNLP API, and after fiddling around I still have no idea how I'd go about getting a dependency parse with ranked nodes. Is this even possible with CoreNLP, and if so, does anyone have experience doing it?
I modified the code of the included StanfordCoreNLPDemo.java file to suit our sentiment needs:
Imports:
import java.io.*;
import java.util.*;
import edu.stanford.nlp.io.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations;
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations.PredictedClass;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.*;
Initializing the pipeline. Properties include lemma and sentiment:
public class StanfordCoreNlpDemo {
  public static void main(String[] args) throws IOException {
    PrintWriter out;
    if (args.length > 1) {
      out = new PrintWriter(args[1]);
    } else {
      out = new PrintWriter(System.out);
    }
    PrintWriter xmlOut = null;
    if (args.length > 2) {
      xmlOut = new PrintWriter(args[2]);
    }
    Properties props = new Properties();
    props.put("annotators", "tokenize, ssplit, pos, lemma, parse, sentiment");
    props.setProperty("tokenize.options", "normalizeCurrency=false");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
Adding the text. These 3 sentences are taken from the live demo of the site you linked. I print the top level annotation's keys as well, to see what you can access from it:
    Annotation annotation;
    if (args.length > 0) {
      annotation = new Annotation(IOUtils.slurpFileNoExceptions(args[0]));
    } else {
      annotation = new Annotation("This movie doesn't care about cleverness, wit or any other kind of intelligent humor.Those who find ugly meanings in beautiful things are corrupt without being charming.There are slow and repetitive parts, but it has just enough spice to keep it interesting.");
    }
    pipeline.annotate(annotation);
    pipeline.prettyPrint(annotation, out);
    if (xmlOut != null) {
      pipeline.xmlPrint(annotation, xmlOut);
    }
    // An Annotation is a Map and you can get and use the various analyses individually.
    // For instance, this gets the parse tree of the first sentence in the text.
    out.println();
    // The toString() method on an Annotation just prints the text of the Annotation
    // But you can see what is in it with other methods like toShorterString()
    out.println("The top level annotation's keys: ");
    out.println(annotation.keySet());
For the first sentence, I print its keys and sentiment. Then I iterate through all its nodes. For each one, I print the leaves of that subtree (which is the part of the sentence the node refers to), the name of the node, its sentiment, its node vector (I don't know what that is) and its predictions.
Sentiment is an integer ranging from 0 to 4: 0 is very negative, 1 negative, 2 neutral, 3 positive and 4 very positive. Predictions is a vector with one probability per class, giving how likely the node is to belong to each of the aforementioned classes; the first value is for the very negative class, and so on. The class with the highest probability is the node's sentiment.
Not all nodes of the annotated tree have a sentiment. It seems that each word of the sentence has two nodes in the tree: you would expect words to be leaves, but each word node has a single child whose label lacks the prediction annotation in its keys and whose name is the same word.
That is why I check for the prediction annotation before calling the function that fetches it. An alternative would be to simply catch the resulting null pointer exception, but I chose to be explicit so the reader understands that no sentiment information is actually missing.
    List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
    if (sentences != null && sentences.size() > 0) {
      ArrayCoreMap sentence = (ArrayCoreMap) sentences.get(0);
      out.println("Sentence's keys: ");
      out.println(sentence.keySet());
      Tree tree2 = sentence.get(SentimentCoreAnnotations.AnnotatedTree.class);
      out.println("Sentiment class name:");
      out.println(sentence.get(SentimentCoreAnnotations.ClassName.class));
      Iterator<Tree> it = tree2.iterator();
      while (it.hasNext()) {
        Tree t = it.next();
        out.println(t.yield());
        out.println("nodestring:");
        out.println(t.nodeString());
        if (((CoreLabel) t.label()).containsKey(PredictedClass.class)) {
          out.println("Predicted Class: " + RNNCoreAnnotations.getPredictedClass(t));
        }
        out.println(RNNCoreAnnotations.getNodeVector(t));
        out.println(RNNCoreAnnotations.getPredictions(t));
      }
Lastly, some more output: the dependencies are printed. The dependencies here could also be accessed through accessors of the parse tree (tree or tree2):
out.println("The first sentence is:");
Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
out.println();
out.println("The first sentence tokens are:");
for (CoreMap token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
ArrayCoreMap aToken = (ArrayCoreMap) token;
out.println(aToken.keySet());
out.println(token.get(CoreAnnotations.LemmaAnnotation.class));
}
out.println("The first sentence parse tree is:");
tree.pennPrint(out);
tree2.pennPrint(out);
out.println("The first sentence basic dependencies are:");
out.println(sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class).toString(SemanticGraph.OutputFormat.LIST));
out.println("The first sentence collapsed, CC-processed dependencies are:");
SemanticGraph graph = sentence.get(SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation.class);
out.println(graph.toString(SemanticGraph.OutputFormat.LIST));
}
}
}
