Specify what faces to index with AWS Rekognition and Lambda - node.js

My apologies in advance for the long post.
I am quite new to AWS Rekognition and Lambda, but I took on a project to build a facial recognition system using AWS S3, Rekognition and Lambda. I managed to get a working solution using a few of the Rekognition APIs provided in the AWS JavaScript SDK documentation, but it only works when there is one face in the input image. I started playing around with images that have multiple faces, but it doesn't give the response I'm looking for. After doing some research I narrowed my problem down to the following:
I need to be able to specify what faces I want to index in an image with multiple faces using the indexFaces API.
NOTE: I'm using JavaScript.
My logic for a single face in an image is: I use the SearchFacesByImage API to first check whether I have already indexed the face of Person 1 into 'allFaces'. If I have, I don't need to index Person 1's face again; if I have not, I need to do that first.
Up until this point, everything works fine when I'm using an image with a single face as input. (See code example down below)
Here comes the problem: when I have an image with multiple faces, including the face of Person 1, it indexes all the faces in that image, Person 1's face included, and adds them to the 'allFaces' collection. What I want instead is for the system to pick up that Person 1 has been indexed in the past, skip Person 1, and index only the other people in the image.
That's how I refined my problem to being able to specify which faces I want to index in an image that contains multiple faces: if I can achieve that, I can say that Person 1 has already been indexed and continue with Person 2.
The indexFaces API lets you specify the MaxFaces and QualityFilter parameters. I have looked at them, but I don't believe they solve my problem, so I'm steering away from them unless they really are the answer.
I'm also not sure whether the issue is with my logic, or whether the logic is fine and my limited JavaScript knowledge is what's holding me back.
Here is what I've done thus far for a single face in an image:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({apiVersion: "2006-03-01"});
const rekognition = new AWS.Rekognition();

//-----------------------------Exports Function-----------------------
exports.handler = function(event, context) {
    const bucket = event.Records[0].s3.bucket.name;
    const key = event.Records[0].s3.object.key;
    console.log(bucket);
    console.log(key);
    searchingFacesByImage(bucket, key);
};
//--------------------------------------------------------------------
// Search for a face in an input image
function searchingFacesByImage(bucket, key) {
    let params = {
        CollectionId: "allFaces",
        FaceMatchThreshold: 95,
        Image: {
            S3Object: {
                Bucket: bucket,
                Name: key
            }
        },
        MaxFaces: 5
    };
    const searchingFace = rekognition.searchFacesByImage(params, function(err, searchdata) {
        if (err) {
            console.log(err, err.stack); // an error occurred
        } else {
            // console.log(JSON.stringify(searchdata, null, '\t'));
            // if searchdata.FaceMatches.length > 0: the face in the image already exists in the collection
            if (searchdata.FaceMatches.length > 0) {
                console.log("Face is a match");
                // Continue
            } else {
                console.log("Face is not a match");
                console.log("Start indexing face to 'allFaces'");
                indexToAllFaces(bucket, key);
            }
        }
    });
    return searchingFace;
}
//--------------------------------------------------------------------
// If face is not a match in 'allFaces', index face to 'allFaces' collection
function indexToAllFaces(bucket, key) {
    let params = {
        CollectionId: "allFaces",
        DetectionAttributes: ['ALL'],
        Image: {
            S3Object: {
                Bucket: bucket,
                Name: key
            }
        }
    };
    const indexFace = rekognition.indexFaces(params, function(err, data) {
        if (err) {
            console.log(err, err.stack); // an error occurred
        } else {
            console.log("INDEXING TO 'allFaces'");
            console.log(JSON.stringify(data, null, '\t'));
        }
    });
    return indexFace;
}
//--------------------------------------------------------------------
//--------------------------------------------------------------------
Like I said, this works fine when using images with a single face, so I'm hoping to add some logic to filter through the faces in an image with multiple faces, so that anyone whose face has been indexed in the past is not indexed again.
Thanks in advance for any feedback.

You will need to use DetectFaces() to obtain a list of all faces detected in the image.
Then, for each face returned:
Use the BoundingBox to copy and crop the image, so it only shows the given face
Use SearchFacesByImage() to determine whether it is already in the Face Collection
If it is not in the Face Collection, use IndexFaces() to add it to the Face Collection. It will add that single face to the collection.
From experience, it is also a good idea to associate an ExternalImageId with each image added. This can contain your own reference to the face and can be used in a database to store additional information about the face (e.g. id, name, or which picture it came from). The ExternalImageId will be returned with certain calls when the face is detected in images.
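To illustrate, here is a rough, untested sketch of that flow in Node.js. It assumes the image is small enough to hold in memory and uses the sharp library to crop each bounding box, which is not part of the answer above (any cropping approach will do); searchFacesByImage and indexFaces are then called with the cropped bytes rather than the whole S3 object:

const AWS = require('aws-sdk');
const sharp = require('sharp'); // assumption: used here only for cropping
const s3 = new AWS.S3();
const rekognition = new AWS.Rekognition();

async function indexNewFacesOnly(bucket, key, collectionId) {
    // 1. Detect every face in the image
    const detection = await rekognition.detectFaces({
        Image: { S3Object: { Bucket: bucket, Name: key } }
    }).promise();

    // Load the original image once so it can be cropped per face
    const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    const { width, height } = await sharp(original.Body).metadata();

    for (const face of detection.FaceDetails) {
        // 2. Crop to the face's bounding box (values are ratios of the image size)
        const box = face.BoundingBox;
        const cropped = await sharp(original.Body)
            .extract({
                left: Math.round(box.Left * width),
                top: Math.round(box.Top * height),
                width: Math.round(box.Width * width),
                height: Math.round(box.Height * height)
            })
            .toBuffer();

        // 3. Check whether this face is already in the collection
        const search = await rekognition.searchFacesByImage({
            CollectionId: collectionId,
            Image: { Bytes: cropped },
            FaceMatchThreshold: 95,
            MaxFaces: 1
        }).promise();

        // 4. Index only the faces that were not matched
        if (search.FaceMatches.length === 0) {
            await rekognition.indexFaces({
                CollectionId: collectionId,
                Image: { Bytes: cropped },
                MaxFaces: 1, // only the largest face in the crop, i.e. the one detected above
                // Your own reference back to the source image (sanitized to allowed characters)
                ExternalImageId: key.replace(/[^a-zA-Z0-9_.\-:]/g, '_')
            }).promise();
        }
    }
}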

Related

How do I query the vimeo api for a specific video title?

Hi, I'm querying for a specific video by title and at the moment I get mixed results.
My videos are all named with a consecutive number at the end, i.e. ANDNOW2022_00112, ANDNOW2022_00113, etc.
When I search /videos/?fields=uri,name&query=ANDNOW2022_00112 I get all of the videos returned.
I've also tried query_fields using
/me/videos?query_fields=title&sort=alphabetical&query=ANDNOW2022_00112
I just want the one I've searched for, or no results returned.
At the moment I get all of the videos with ANDNOW2022 in the title/name. Usually the one I searched for is at the top of the list, but not every time.
Any tips appreciated.
Okay I'm not going mad :)
This is from Vimeo and is here for those with the same issue. Basically, to get it to work you need to understand the following:
After speaking with our engineers, the current search capability is not an "exact" search.
When adding numbers or underscores, the search is split into parts, so "ANDNOW2022_00112" transforms the query into the parts "andnow2022", "andnow", "2022", and "00112". This is why you're seeing these results. Our engineering team is in the process of improving the search capabilities and hopes to provide a release in the near future.
Which means, for now, I'll have to rename my files.
Preface:
Vimeo does not currently offer an API endpoint for exact title search — but even if it did — it's possible to upload multiple videos and assign them identical titles. There's no way to use the API to positively identify a video by title — this is why every uploaded video is assigned a unique ID.
Solution:
Because the API returns data which includes an array of video objects, you can solve this problem in the same way you'd solve any similar problem in JavaScript where you have to find an element in an array: Array.prototype.find()
Here's how you can apply it to your problem:
Query the API using the parameters you described in your question.
You might also be interested in using the sort and direction parameters for greater control over a deterministic sort order.
Find the first item in the returned array of video objects that matches your expected text exactly, and return it (or undefined if it doesn't exist).
Here's a code example with some static data from the API that was used to search for the video Mercedes Benz from the user egarage — note that I've omitted quite a few (irrelevant) fields from the response in order to keep the example small:
// Mocking fetch for this example:
function fetch (_requestInfo, _init) {
    const staticJson = `{"total":2,"page":1,"per_page":25,"paging":{"next":null,"previous":null,"first":"/users/egarage/videos?query_fields=title&query=Mercedes%20Benz&sort=alphabetical&direction=asc&page=1","last":"/users/egarage/videos?query_fields=title&query=Mercedes%20Benz&sort=alphabetical&direction=asc&page=1"},"data":[{"uri":"/videos/61310450","name":"50th Anniversary of the Pagoda SL -- Mercedes-Benz Classic Vehicles","description":"Penned by designer Paul Bracq, the W113 SL had big shoes to fill: it had the incredible task of succeeding the original and instantly iconic 300 SL Gullwing. But you can't copy a legend, so Bracq designed one of his own. Straight lines replaced curves and a low-slung roof was replaced by a high top design that gave the car its nickname: the Pagoda.\\n\\nMUSIC: Developer Over Time","type":"video","link":"https://vimeo.com/61310450"},{"uri":"/videos/55837293","name":"Mercedes Benz","description":"To celebrate Mercedes Benz 125th birthday, the 2011 Pebble Beach Concours d’Elegance showcased the models that trace the lineage to Benz and Daimler —particularly Mercedes-Benz. This tribute chronicled early racing greats, coachbuilt classics, and preservation cars. Produced in association with DriveCulture.","type":"video","link":"https://vimeo.com/55837293"}]}`;
    return Promise.resolve(new Response(staticJson));
}

async function fetchVideoByTitle (token, userId, videoTitle) {
    const url = new URL(`https://api.vimeo.com/users/${userId}/videos`);
    url.searchParams.set("query_fields", "title");
    url.searchParams.set("query", videoTitle);
    url.searchParams.set("sort", "alphabetical");
    url.searchParams.set("direction", "asc");
    const headers = new Headers([
        ["Authorization", `Bearer ${token}`],
    ]);
    const response = await fetch(url.href, {headers});
    const parsedJson = await response.json();
    // Find the video that matches (if it exists):
    const maybeFirstVideoObj = parsedJson.data.find(video => video.name === videoTitle);
    return maybeFirstVideoObj;
}

async function main () {
    const video = await fetchVideoByTitle(
        "YOUR_ACTUAL_TOKEN",
        "egarage",
        "Mercedes Benz",
    );
    console.log(video); // {name: "Mercedes Benz", link: "https://vimeo.com/55837293", ...}
}

main();

HoloLens spatial mapping SpatialSurfaceMesh update problem - C++/WinRT

I'm working on the spatial mapping processing for my HoloLens project.
Somehow, calling "SpatialSurfaceMesh::TryComputeLatestMeshAsync" keeps returning the same mesh data over time.
Is there another process involved in updating the observer?
void SpatialMapping::AddOrUpdateSurface(winrt::Windows::Perception::Spatial::SpatialCoordinateSystem const& coordinateSystem)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;

    SpatialBoundingBox axisAlignedBoundingBox =
    {
        { 0.f, 0.f, 0.f },
        { 50.f, 50.f, 50.f },
    };
    SpatialBoundingVolume bounds = SpatialBoundingVolume::FromBox(coordinateSystem, axisAlignedBoundingBox);
    m_surfaceObserver.SetBoundingVolume(bounds);

    m_surfaceObserver.ObservedSurfacesChanged(
        winrt::Windows::Foundation::TypedEventHandler
            <SpatialSurfaceObserver, winrt::Windows::Foundation::IInspectable>
            ({ this, &SpatialMapping::Observer_ObservedSurfacesChanged })
    );
}
void SpatialMapping::Observer_ObservedSurfacesChanged(winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceObserver const& sender,
                                                      winrt::Windows::Foundation::IInspectable const& object)
{
    {
        using namespace winrt::Windows::Perception::Spatial::Surfaces;

        const auto mapContainingSurfaceCollection = sender.GetObservedSurfaces();

        // Process surface adds and updates?
        for (const auto& pair : mapContainingSurfaceCollection)
        {
            auto id = pair.Key();
            auto info = pair.Value();
            InsertAsync(id, info);
        }
    }
}
Concurrency::task<void> SpatialMapping::InsertAsync(winrt::guid /*const&*/ id, winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceInfo /*const&*/ newSurfaceInfo)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;

    return concurrency::create_task([this, id, newSurfaceInfo]
    {
        const auto surfaceMesh = newSurfaceInfo.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions).get();

        std::lock_guard<std::mutex> guard(m_meshCollectionLock);
        m_updatedSurfaces.emplace(id, surfaceMesh);
    });
}
Generation works, but updating does not.
A manual attempt has the same problem:
winrt::Windows::Foundation::IAsyncAction SpatialMapping::CollectSurfacesManuel()
{
    const auto mapContainingSurfaceCollection = m_surfaceObserver.GetObservedSurfaces();

    for (const auto& pair : mapContainingSurfaceCollection)
    {
        auto id = pair.Key();
        auto info = pair.Value();
        auto mesh{ co_await info.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions) };
        {
            std::lock_guard<std::mutex> guard(m_meshCollectionLock);
            m_updatedSurfaces.emplace(id, mesh);
        }
    }
}
MVCE:
Create a new project with the template
"Holographic DirectX 11 App (UWP) (C++/WinRT)"
Add the files:
https://github.com/lpnxDX/HL_MVCE_SpatialSurfaceMeshUpdateProblem.git
Replace m_main in AppView.h
We did some research and have some thoughts about your question; let me explain the findings.
Is your Observer_ObservedSurfacesChanged method actually being triggered? Adding output statements or breakpoints can help you check this. Since the SurfaceObserver should always be available, we usually need to check the availability of the surface observer in each frame and recreate a new one when necessary; for a sample code snippet, please see here.
Have you set m_surfaceMeshOptions? It is not visible in the code you posted. If it is missing, you can configure it with the following statement:
surfaceMeshOptions->IncludeVertexNormals = true;
Microsoft provides the Holographic spatial mapping sample, which shows how to acquire spatial mapping data from Windows Perception in real time. It is similar to your needs. To narrow down whether the issue is in your code, please try to run this sample on your device.
If, after the above steps, you still can't solve the problem, could you provide an
MVCE so that we can locate the issue or find a solution? Be careful to remove any privacy-related or other business code.
Your code has the following problems:
TryComputeLatestMeshAsync should be called from Observer_ObservedSurfacesChanged, not from concurrency::create_task.
TryComputeLatestMeshAsync returns a mesh with a matrix, vertices and indices. The indices should be stored somewhere safe on the first run; they don't change later. The vertices and matrix should be copied each time they are returned. You shouldn't keep the mesh object itself, because its data is updated from various threads.
ObservedSurfacesChanged shouldn't be called every frame; it is a long-running function.
There may be more to it. I would recommend starting from the sample mentioned earlier.

Optimized way of face recognition using Azure Face API

I need to implement face recognition using the Azure Face API. I have developed a program which is able to find similar faces using the .NET SDK. For my use case, I need to take a photo of a person from the webcam and find matching faces from images kept in Azure cloud storage. Now, there could be thousands of images in Azure cloud storage, and in my current implementation I'm iterating through all the images (kept in Azure cloud storage) and matching them against the webcam image.
The concern here is:
The Face API (provided by Azure) charges 1 dollar per thousand calls. Is there a way the search could be optimized so that I don't have to scan the faces which I have already scanned in previous searches?
public async Task<List<DetectedFaceAttributes>> FindSimiliarFacesWithAttributesFromContainer(IFaceClient client, string RECOGNITION_MODEL1, string sourceImageFileName)
{
    string url = BlobBaseURL;
    string sourceurl = sourceContainerURL;
    var imagesInNovotraxContainer = await _blobService.GetNames();
    IList<Guid?> targetFaceIds = new List<Guid?>();
    var faceList = new List<DetectedFaceAttributes>();

    // Detect faces from source image url.
    IList<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{sourceurl}{sourceImageFileName}", RECOGNITION_MODEL1);
    if (detectedFaces.Any())
    {
        foreach (var targetImageFileName in imagesInNovotraxContainer)
        {
            var faceattribute = new DetectedFaceAttributes();
            // Detect faces from target image url.
            var faces = await DetectFaceRecognizeWithAttributes(client, $"{url}{targetImageFileName}");
            // Add detected faceId to list of GUIDs.
            if (faces.Any())
            {
                targetFaceIds.Add(faces[0].FaceId.Value);
                faceattribute.DetectedFace = faces[0];
                faceattribute.ImageFileName = targetImageFileName;
                faceList.Add(faceattribute);
            }
        }

        // Find similar face(s) in the list of IDs. Comparing only the first in the list for testing purposes.
        IList<SimilarFace> similarResults = await client.Face.FindSimilarAsync(detectedFaces[0].FaceId.Value, null, null, targetFaceIds);
        var similiarFaceIDs = similarResults.Select(y => y.FaceId).ToList();
        var returnDataTypefaceList = faceList.Where(x => similiarFaceIDs.Contains(x.DetectedFace.FaceId.Value)).ToList();
        return returnDataTypefaceList;
    }
    else
    {
        throw new Exception("no face detected in captured photo");
    }
}

public async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string RECOGNITION_MODEL1)
{
    // Detect faces from image URL. Since only recognizing, use recognition model 1.
    IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: RECOGNITION_MODEL1);
    //if (detectedFaces.Any())
    //{
    //    Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}` with ID : {detectedFaces.First().FaceId}");
    //}
    return detectedFaces.ToList();
}
Your implementation is not totally clear to me in terms of the calls to the Face API and your storage (what's behind "DetectFaceRecognizeWithAttributes"). But I think you are right that you missed something, and your overall processing is overly costly.
What you should do depends on your target:
Is it face "identification"?
Or face "similarity"?
Both have the same logic, but they are using different API operations
Case 1 - Face identification
Process
The global process is the following: you will use a "Person Group" or "Large Person Group" (depending on the number of persons you have) to store data about the faces that you already know (the ones in your storage), and you will use this group to "identify" a new face. With that, you will do a "1-n" search, not "1-1" as you do right now.
Initial setup (group creation):
Choose whether you need a Person Group or a Large Person Group; here are the current limits depending on your pricing tier:
Person Group:
Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons.
S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons.
Large Person Group:
It can hold up to 1,000,000 persons.
Free-tier subscription quota: 1,000 large person groups.
S0-tier subscription quota: 1,000,000 large person groups.
Here I am using Person Group in the explanation, but the methods are the same.
Once you know which one you need, create it using the "Create" operation.
Then, for each person, you will have to create a "PersonGroup Person" using "PersonGroup Person - Create", and add the corresponding faces to it using "PersonGroup Person - Add Face". Once that is done, you never need to run the "Detect" operation on those faces again.
Then for the "run" part
When you have a new image that you want to compare:
Detect faces in your image with Detect endpoint of Face API
Get the face Ids of your result
Call Identify endpoint of Face API to try to identify those face Ids with your (large) person group
To limit the number of calls, you can even do batches of identification calls (up to 10 "input" face Ids in 1 call - see doc).
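For illustration, here is a rough sketch of that "run" flow against the Face REST API. It is written in Node.js (like the other snippets on this page) rather than the .NET SDK; the endpoint, key and group id are placeholders, and the same two steps map to DetectWithUrlAsync and IdentifyAsync in the SDK:

// Rough sketch of the "run" flow against the Face REST API (Node.js 18+ fetch).
// ENDPOINT, KEY and GROUP_ID are placeholders; error handling is omitted.
const ENDPOINT = 'https://YOUR_RESOURCE.cognitiveservices.azure.com';
const KEY = 'YOUR_FACE_API_KEY';
const GROUP_ID = 'allpersons';   // the (large) person group created during setup
const headers = { 'Ocp-Apim-Subscription-Key': KEY, 'Content-Type': 'application/json' };

async function identifyFromWebcamImage(imageUrl) {
    // 1. Detect faces in the new image; only the returned faceIds are needed here.
    const detectRes = await fetch(`${ENDPOINT}/face/v1.0/detect?returnFaceId=true`, {
        method: 'POST', headers, body: JSON.stringify({ url: imageUrl })
    });
    const faceIds = (await detectRes.json()).map(face => face.faceId);

    // 2. Identify them against the person group (up to 10 faceIds per call).
    //    Note: the group must have been trained ("PersonGroup - Train") after adding faces.
    const identifyRes = await fetch(`${ENDPOINT}/face/v1.0/identify`, {
        method: 'POST', headers,
        body: JSON.stringify({ faceIds, personGroupId: GROUP_ID, maxNumOfCandidatesReturned: 1 })
    });
    return identifyRes.json();   // [{ faceId, candidates: [{ personId, confidence }] }, ...]
}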
Case 2 - Face similarity
Here you can use a "Face List" or "Large Face List" to store the faces that you already know, and pass the id of this list when calling "Find Similar" operation. Example with FaceList:
Start with "FaceList - Create" to create your list (doc)
Use "FaceList - Add Face" to add all the faces that you have currently in your blob (doc)
Then, for the run, when you call "Find Similar", provide the ID of your FaceList in the "faceListId" parameter and the id of the face you want to compare (from the Face Detect call).
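And a matching sketch for the similarity case, again in Node.js with placeholder values, where the face list id replaces the per-request list of target face ids:

// Sketch of the similarity case: pass the face list id to Find Similar instead of
// sending every target faceId on each request (Node.js 18+, placeholder values).
async function findSimilarInFaceList(faceId) {
    const res = await fetch('https://YOUR_RESOURCE.cognitiveservices.azure.com/face/v1.0/findsimilars', {
        method: 'POST',
        headers: {
            'Ocp-Apim-Subscription-Key': 'YOUR_FACE_API_KEY',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            faceId,                         // from the Detect call on the webcam image
            faceListId: 'allknownfaces',    // the face list built once from blob storage
            maxNumOfCandidatesReturned: 10
        })
    });
    return res.json();                      // [{ persistedFaceId, confidence }, ...]
}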

Botframework responses with intent with lower score

I have a bot which uses two LUIS apps as LuisRecognizers to guess the client intent. My question is: why does the bot respond with the intent which has the lower score? I double-checked this, and if I manually check the score through the LUIS dashboard I receive something like IntentA with score 0.92 and IntentB with score 1. If I pass the same input through the Bot Framework, it responds with IntentA, which has the lower score. Am I missing something?
I tried playing with intentThreshold, recognizeMode and recognizeOrder, all mentioned in the docs, but did not get better results.
If you look at the C# code of the Bot Framework, you can see that the "best intent from" function is implemented as follows:
protected virtual IntentRecommendation BestIntentFrom(LuisResult result)
{
    return result.Intents.MaxBy(i => i.Score ?? 0d);
}
If you want to test this, you can override it in your LuisDialog to see the details of its mechanism (by logging the scores of the intents).
As you can see, the maximum score is chosen at the decision point.
You can also find the LUIS recognizer in Node.js:
LuisRecognizer.recognize(utterance, model, (err, intents, entities) => {
    if (!err) {
        result.intents = intents;
        result.entities = entities;

        // Return top intent
        var top: IIntent;
        intents.forEach((intent) => {
            if (top) {
                if (intent.score > top.score) {
                    top = intent;
                }
            } else {
                top = intent;
            }
        });
        if (top) {
            result.score = top.score;
            result.intent = top.intent;

            // Correct score for 'none' intent
            // - The 'none' intent often has a score of 1.0 which
            //   causes issues when trying to recognize over multiple
            //   model. Setting to 0.1 lets the intent still be
            //   triggered but keeps it from trompling other models.
            switch (top.intent.toLowerCase()) {
                case 'builtin.intent.none':
                case 'none':
                    result.score = 0.1;
                    break;
            }
        }
        cb(null, result);
    } else {
        cb(err, null);
    }
});
As in the C# code, the recognizer chooses the maximum score, provided an application model exists in LUIS.
Therefore, this problem does not come from the client.
Hence, a suggestion is to inspect the JSON response from LUIS that your client receives.
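For example, a quick way to inspect that raw JSON outside of the bot is to call the LUIS v2 prediction endpoint directly and log each intent with its score; the region, app id and key below are placeholders:

// Inspect the raw JSON that LUIS returns for an utterance, independent of the bot.
// The URL shape is the LUIS v2 prediction endpoint; substitute your own values.
const https = require('https');

const region = 'westus';                      // assumption: your LUIS region
const appId = 'YOUR_LUIS_APP_ID';             // placeholder
const key = 'YOUR_SUBSCRIPTION_KEY';          // placeholder
const utterance = encodeURIComponent('the same input you send to the bot');

const url = `https://${region}.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
            `?verbose=true&subscription-key=${key}&q=${utterance}`;

https.get(url, res => {
    let body = '';
    res.on('data', chunk => (body += chunk));
    res.on('end', () => {
        const result = JSON.parse(body);
        // Log every intent with its score so you can compare it with the dashboard
        result.intents
            .sort((a, b) => b.score - a.score)
            .forEach(i => console.log(i.intent, i.score));
    });
});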
Have you tried your published model from the LUIS dashboard? I had the same problem because LUIS wasn't publishing my model correctly at the time and didn't pick up the changes I made, so the trained model worked perfectly in the dashboard but the published one did not.
I tried the next day and it published everything correctly in both the dashboard and the Bot Framework.

Backbone.relational, real-time and handling large data

I'm building a real-time feed application using Backbone.js, node.js and socket.io.
My Feed is a collection of Update models. Displaying these, overriding Backbone.sync for integration with socket.io works fine.
The complication comes in that each Update has a set of comments associated with it. When I show each Update in the Feed view, I want to show a summary of the associated comments (the number of comments and a single 'most popular' comment), and also have the ability to click through to a different view that displays each Update on its own with a paginated list of comments with further data.
I'm using backbone-relational to model the relationship between the Update model and Comment model, as follows:
Feed (collection) -> Update (model) -(has many)-> Comment (model)
I've been following this backbone-relational tutorial, but it seems to assume that I'd want to have all related data in memory at once in my Feed view, which I don't as there are potentially thousands of comments updating in real-time:
http://antoviaque.org/docs/tutorials/backbone-relational-tutorial/
My questions are:
How can I bring in summary data for comments to each Update in my Feed view without loading all comment data, and also maintain the ability to show paginated full data in my Update view?
I'm using backbone.layoutmanager for rendering my views. How best should I break my views up to accomplish the above?
For Q1:
I'm assuming you're using something like ioSync to use socket.io in Backbone.sync instead of REST API, or a similar solution.
Include metadata (such as the number of comments) as an attribute on Update. If your Update object is heavyweight in itself, you could update the count using ioBind and custom server-side socket.io events instead of sending the whole object every time.
Include an attribute topComment as an additional one-to-one relation on Update (a sketch of the relation declaration follows at the end of this Q1 answer). When initially loading Update from the server, include topComment in the response, but not the other comments.
Lazy-load the rest of the comments using custom socket.io events. You will likely want a server-side handler that takes the parameters updateId, startIndex, maxComments and returns a list of comments for the given Update starting at the given index. If the result is sent to the client as JSON, it's easy to do something like this on the client:
// Assume `model` is an instance of `Update`.
socket.emit('get_comments_page', {
    updateId: model.get('id'),
    startIndex: 1,
    maxComments: 10
}, function(err, data) {
    if (err) {
        alert('Unable to fetch comments: ' + err);
    } else {
        model.get('messages').reset(data);
    }
});
Avoid sending IDs for all comments when fetching Update and then trying to use fetchRelated to resolve them. I learned this one the hard way :O/
You could also store the comments collection directly on the view without associating it as a relation of Update.
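As a sketch (assuming backbone-relational's usual relations syntax), the Update model could declare the topComment relation next to a plain commentCount attribute like this:

// Hypothetical model declarations; names are illustrative, not from the original post.
var Comment = Backbone.RelationalModel.extend({});

var Update = Backbone.RelationalModel.extend({
    relations: [{
        type: Backbone.HasOne,       // only the 'most popular' comment is embedded
        key: 'topComment',
        relatedModel: Comment
    }],
    defaults: {
        commentCount: 0              // summary metadata, updated via socket.io events
    }
});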
For Q2:
I don't have any experience with layoutmanager, as I use Backbone.Marionette for managing my views. Marionette has an async extension (disclaimer: I'm a co-maintainer). I encourage you to look at how Marionette.async does the delayed rendering, waiting for the data to arrive from the server.
The main idea is to use jQuery's Deferred objects that resolve when the data comes back from the server. Extending the above example with a deferred:
var MyView = Backbone.View.extend({
    // ... normal stuff that views need ...

    initialize: function() {
        var deferred = $.Deferred();
        // Assume `model` is an instance of `Update`.
        var that = this;
        socket.emit('get_comments_page', {
            updateId: that.model.get('id'),
            startIndex: that.options.pageNumber,
            maxComments: 10
        }, function(err, data) {
            if (err) {
                alert('Unable to fetch comments: ' + err);
            } else {
                that.model.get('messages').reset(data);
            }
            deferred.resolve();
        });
        this.promise = deferred.promise();
    },

    render: function() {
        var that = this;
        this.promise.done(function() {
            // Do your normal rendering code here, for instance:
            $(that.el).html(that.template(that.model.toJSON()));
        });
        return this;
    }
});
Note: the code snippets above are not tested as is.
