I'd like to load all existing images for a publisher, but there is no way to filter them by publisher.
Instead it requires a chain of calls:
PagedList<VirtualMachinePublisher> publishers = azure
        .virtualMachineImages()
        .publishers()
        .listByRegion("useast");

// then, for a single publisher taken from the list above:
final PagedList<VirtualMachineOffer> offers = publisher.offers().list();
offers.loadAll();
return offers.stream()
        .flatMap(offer -> {
            final PagedList<VirtualMachineSku> skus = offer.skus().list();
            return skus.stream();
        })
        .flatMap(sku -> {
            final PagedList<VirtualMachineImage> images = sku.images().list();
            return images.stream();
        })
        .collect(Collectors.toList());
Unfortunately, this takes too long; I suspect sku.images().list() loads the images one by one instead of in a single request.
Is there a more efficient way to do this?
If I'm not mistaken, the new version of the API (azure-resourcemanager) also doesn't support filtering by publisher.
I need to implement face recognition using the Azure Face API. I have developed a program that can find similar faces using the .NET SDK. For my use case, I need to take a photo of a person from the webcam and find matching faces among images kept in Azure cloud storage. There could be thousands of images in that storage, and in my current implementation I iterate through all of them and match each one against the webcam image.
The concern here is:
The Face API (provided by Azure) charges 1 dollar per thousand calls. Is there a way the search could be optimized so that I don't have to rescan faces that I have already scanned in previous searches?
public async Task<List<DetectedFaceAttributes>> FindSimiliarFacesWithAttributesFromContainer(IFaceClient client, string RECOGNITION_MODEL1, string sourceImageFileName)
{
    string url = BlobBaseURL;
    string sourceurl = sourceContainerURL;
    var imagesInNovotraxContainer = await _blobService.GetNames();
    IList<Guid?> targetFaceIds = new List<Guid?>();
    var faceList = new List<DetectedFaceAttributes>();

    // Detect faces from source image url.
    IList<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{sourceurl}{sourceImageFileName}", RECOGNITION_MODEL1);
    if (detectedFaces.Any())
    {
        foreach (var targetImageFileName in imagesInNovotraxContainer)
        {
            var faceattribute = new DetectedFaceAttributes();
            // Detect faces from target image url.
            var faces = await DetectFaceRecognizeWithAttributes(client, $"{url}{targetImageFileName}");
            // Add detected faceId to list of GUIDs.
            if (faces.Any())
            {
                targetFaceIds.Add(faces[0].FaceId.Value);
                faceattribute.DetectedFace = faces[0];
                faceattribute.ImageFileName = targetImageFileName;
                faceList.Add(faceattribute);
            }
        }

        // Find similar face(s) in the list of IDs. Comparing only the first in the list for testing purposes.
        IList<SimilarFace> similarResults = await client.Face.FindSimilarAsync(detectedFaces[0].FaceId.Value, null, null, targetFaceIds);
        var similiarFaceIDs = similarResults.Select(y => y.FaceId).ToList();
        var returnDataTypefaceList = faceList.Where(x => similiarFaceIDs.Contains(x.DetectedFace.FaceId.Value)).ToList();
        return returnDataTypefaceList;
    }
    else
    {
        throw new Exception("no face detected in captured photo");
    }
}
public async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string RECOGNITION_MODEL1)
{
    // Detect faces from image URL. Since only recognizing, use recognition model 1.
    IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: RECOGNITION_MODEL1);
    //if (detectedFaces.Any())
    //{
    //    Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}` with ID : {detectedFaces.First().FaceId}");
    //}
    return detectedFaces.ToList();
}
Your implementation is not totally clear to me in terms of calls to the Face API versus your storage (what's behind "DetectFaceRecognizeWithAttributes"). But I think you are right that you are missing something, and that your overall processing is overly costly.
What you should do depends on your target:
Is it face "identification"?
Or face "similarity"?
Both follow the same logic, but they use different API operations.
Case 1 - Face identification
Process
The global process is the following: you will use a "Person Group" or "Large Person Group" (depending on the number of persons you have) to store data about the faces you already know (the ones in your storage), and you will use this group to "identify" a new face. With that, you will do a "1-n" search, not "1-1" as you do right now.
Initial setup (group creation):
Choose whether you need a Person Group or a Large Person Group; here are the current limits depending on your pricing tier:
Person Group:
Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons.
S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons.
Large Person Group:
It can hold up to 1,000,000 persons.
Free-tier subscription quota: 1,000 large person groups.
S0-tier subscription quota: 1,000,000 large person groups.
Here I am using Person Group in the explanation, but the methods are the same for both.
When you know which one you need, create it using the "Create" operation.
Then, for each person, you will have to create a "PersonGroup Person" using "PersonGroup Person - Create", and add the corresponding faces to it using "PersonGroup Person - Add Face". Once that is done, you never need to run the "Detect" operation on those faces again.
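To illustrate the setup, here is a minimal sketch assuming the Microsoft.Azure.CognitiveServices.Vision.Face .NET SDK (the same IFaceClient you already use); the group id, person name, and image URL below are placeholders, not values from your code:
// Sketch only: assumes an authenticated IFaceClient; ids, names and URLs are placeholders.
const string groupId = "known-people";
await client.PersonGroup.CreateAsync(groupId, "Known people", recognitionModel: RecognitionModel.Recognition01);

// One "PersonGroup Person" per individual, with one or more faces from your blob storage.
Person person = await client.PersonGroupPerson.CreateAsync(groupId, "Person 1");
await client.PersonGroupPerson.AddFaceFromUrlAsync(groupId, person.PersonId, "https://<your-storage>/person1.jpg");

// Train once after adding faces; Identify works against the trained group.
await client.PersonGroup.TrainAsync(groupId);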
Then for the "run" part
When you have a new image that you want to compare:
Detect faces in your image with Detect endpoint of Face API
Get the face Ids of your result
Call Identify endpoint of Face API to try to identify those face Ids with your (large) person group
To limit the number of calls, you can even batch the identification calls (up to 10 "input" face Ids in one call, see the doc); a sketch follows below.
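As a rough sketch (same SDK assumption, reusing the hypothetical "known-people" group from the setup above):
// Sketch only: detectedFaces comes from a prior Face - Detect call on the new image.
IList<Guid> faceIds = detectedFaces
    .Where(f => f.FaceId.HasValue)
    .Select(f => f.FaceId.Value)
    .Take(10)                                   // Identify accepts at most 10 face ids per call
    .ToList();

IList<IdentifyResult> results = await client.Face.IdentifyAsync(faceIds, personGroupId: "known-people");

foreach (IdentifyResult result in results)
{
    // Each candidate carries the PersonId created during setup, plus a confidence score.
    IdentifyCandidate best = result.Candidates.FirstOrDefault();
    if (best != null)
        Console.WriteLine($"{result.FaceId} -> {best.PersonId} ({best.Confidence:P0})");
}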
Case 2 - Face similarity
Here you can use a "Face List" or "Large Face List" to store the faces that you already know, and pass the id of this list when calling the "Find Similar" operation. Example with FaceList:
Start with "FaceList - Create" to create your list (doc)
Use "FaceList - Add Face" to add all the faces that you have currently in your blob (doc)
Then, for the run, when you call "Find Similar", provide the ID of your FaceList in the "faceListId" parameter and the id of the face you want to compare (from the Face - Detect call), as sketched below.
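A minimal sketch of that call, assuming a face list with the hypothetical id "known-faces" has already been created and populated:
// Sketch only: sourceFaceId comes from a Face - Detect call on the webcam photo.
IList<SimilarFace> similar = await client.Face.FindSimilarAsync(
    sourceFaceId,
    faceListId: "known-faces",              // placeholder list id, created once and reused
    maxNumOfCandidatesReturned: 10);

foreach (SimilarFace match in similar)
{
    // Faces stored in a FaceList come back by their persisted face id.
    Console.WriteLine($"persisted face {match.PersistedFaceId} (confidence {match.Confidence:P0})");
}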
I am implementing Azure Search in my application to provide an autosuggestion feature like Google, Bing, and Amazon. I have implemented it based on the GitHub sample at the URL below. Everything works, but each term of the sentence takes more than 1.5 seconds to return results.
https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToAutocomplete
Currently I am using two indexes for searching, created in the Basic tier. Below is the code:
public ActionResult Suggest(bool highlights, bool fuzzy, string term)
{
    InitSearch();

    // Call suggest API and return results
    SuggestParameters sp = new SuggestParameters()
    {
        UseFuzzyMatching = fuzzy,
        Top = 5,
        Filter = "name eq 'testid'",
        OrderBy = new List<string>() { "Date desc" }
    };
    if (highlights)
    {
        sp.HighlightPreTag = "<b>";
        sp.HighlightPostTag = "</b>";
    }

    DocumentSuggestResult suggestResult = _indexClient1.Documents.Suggest(term, "index1", sp);
    if (suggestResult.Results.Count < 5)
    {
        SuggestParameters sp2 = new SuggestParameters()
        {
            UseFuzzyMatching = fuzzy,
            Top = 5 - suggestResult.Results.Count,
            Filter = "Product eq 'PAAS'",
            OrderBy = new List<string>() { "Count desc" }
        };
        if (highlights)
        {
            sp2.HighlightPreTag = "<b>";
            sp2.HighlightPostTag = "</b>";
        }
        DocumentSuggestResult suggestResult2 = _indexClient2.Documents.Suggest(term, "index2", sp2);
        suggestResult.Results = suggestResult.Results.Union(suggestResult2.Results).Distinct().ToList();
        // final = suggestResult.Results.GroupBy(s => s.Text, StringComparer.CurrentCultureIgnoreCase).ToList();
    }

    // Convert the suggest query results to a list that can be displayed in the client.
    List<string> suggestions = suggestResult.Results.Select(x => x.Text).Distinct().ToList();
    return new JsonResult
    {
        JsonRequestBehavior = JsonRequestBehavior.AllowGet,
        Data = suggestions
    };
}
To test it: when I type any word, it takes too long to populate the results, around 1.5 to 1.8 seconds, so it doesn't feel as responsive as other web apps' search boxes.
I am measuring the timing using the Chrome browser's inspect element (screenshot attached).
Please advise.
I answered a similar question on another post: Why is Azure Search taking 1400 miliiseconds to return query results for simple query
The main thing is, you shouldn't be using the Chrome timer to measure the performance of Azure Search. Use the "elapsed-time" field of the HTTP response you receive (take an average over multiple calls), since it accurately tells you how much time was spent getting your results from Azure Search. The Chrome timer can be affected by your network/machine configuration. If that doesn't help, you can follow the other tips I suggested in the post I linked above.
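For illustration, a hedged sketch of reading that header when calling the suggest REST endpoint directly (the service name, index, suggester name, API version, and key below are placeholders):
// Sketch only: run inside an async method; all names and the key are placeholders.
using var http = new HttpClient();
http.DefaultRequestHeaders.Add("api-key", "<your-query-key>");

string url = "https://<your-service>.search.windows.net/indexes/index1/docs/suggest"
           + "?api-version=2020-06-30&search=term&suggesterName=sg";

HttpResponseMessage response = await http.GetAsync(url);

// "elapsed-time" reports the milliseconds spent inside the search service,
// excluding network latency and client-side rendering.
if (response.Headers.TryGetValues("elapsed-time", out var values))
    Console.WriteLine($"service-side time: {values.First()} ms");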
I have a question about CQRS in data-centric processes. Let me explain.
Consider we have a SOAP/JSON/whatever service which transfers some data to our system during an integration process. It is said that in CQRS every state change must be achieved by means of commands (or events, if Event Sourcing is used).
When it comes to our integration process, we have a great deal of structured DATA instead of a set of commands/events, and I am wondering how to actually process that data.
// Some Façade service
class SomeService
{
    private $_someService;

    public function __construct(SomeService $someService)
    {
        $this->_someService = $someService;
    }

    // Magic function to make it all work
    public function process($dto)
    {
        // if I get it correctly, here I need to somehow
        // convert the incoming dto (xml/json/array/etc)
        // into a set of commands, i.e.
        $this->_someService->doSomeStuff($dto->someStuffData);
        // SomeStuffChangedEvent raised here
        $this->_someService->doSomeMoreStuff($dto->someMoreStuffData);
        // SomeMoreStuffChangedEvent raised here
    }
}
My question is whether this approach is suitable in the given case, or whether there are better ways to do what I need. Thank you in advance.
Agreed, a service may have a different interface. If you create a REST API to update employees, you may want to provide an UpdateEmployeeMessage which contains everything that can change. In a CRUD-style service, this message would probably mirror the database.
Inside the service, you can split the message into commands:
public void Update(UpdateEmployeeMessage message)
{
    bus.Send(new UpdateName
    {
        EmployeeId = message.EmployeeId,
        First = message.FirstName,
        Last = message.LastName,
    });

    bus.Send(new UpdateAddress
    {
        EmployeeId = message.EmployeeId,
        Street = message.Street,
        ZipCode = message.ZipCode,
        City = message.City
    });

    bus.Send(new UpdateContactInfo
    {
        EmployeeId = message.EmployeeId,
        Phone = message.Phone,
        Email = message.Email
    });
}
Or you could call the aggregate directly:
public void Update(UpdateEmployeeMessage message)
{
    var employee = repository.Get<Employee>(message.EmployeeId);

    employee.UpdateName(message.FirstName, message.LastName);
    employee.UpdateAddress(message.Street, message.ZipCode, message.City);
    employee.UpdatePhone(message.Phone);
    employee.UpdateEmail(message.Email);

    repository.Save(employee);
}
I need to write a service that connects to CRM and returns a list of all the entities available on the server (custom or otherwise).
How can I do this? To be clear, I am not looking to return all data for all entities. Just a list of every type, regardless of whether any actually exist.
You need to use RetrieveAllEntitiesRequest
RetrieveAllEntitiesRequest request = new RetrieveAllEntitiesRequest()
{
    EntityFilters = EntityFilters.Entity,
    RetrieveAsIfPublished = true
};

// service is the IOrganizationService
RetrieveAllEntitiesResponse response = (RetrieveAllEntitiesResponse)service.Execute(request);

foreach (EntityMetadata currentEntity in response.EntityMetadata)
{
    string logicalName = currentEntity.LogicalName;
    // your logic here
}
Note that you will also get system or hidden entities, like wizardpage or recordcountsnapshot; a possible way to filter them out is sketched below.
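If you want to trim those out, one possible filter (a sketch only; which EntityMetadata flags to use depends on what you consider "hidden") could be:
// Sketch only: example filters on EntityMetadata flags; adjust to your needs.
var interestingEntities = response.EntityMetadata
    .Where(e => e.IsIntersect != true)        // skip N:N intersect (relationship) entities
    .Where(e => e.IsCustomEntity == true)     // example: keep only custom entities
    .Select(e => e.LogicalName)
    .OrderBy(name => name)
    .ToList();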
You will probably find these sections of the MSDN useful:
Customize Entity Metadata (look out for the samples linked on that page).
Retrieve and Detect Changes to Metadata.
The task is to use the .NET library for the Google Data API to traverse Google Drive folders, find the required spreadsheets, and change the data of the selected spreadsheets.
Folder traversal is performed using Google.GData.Documents.FolderQuery and other classes of the Google.GData.Documents namespace. After the correct document is found, it must be managed via the Google.GData.Spreadsheets.Spreadsheet class. Currently I find the correspondence between a Google.GData.Documents.DocumentEntry instance and a Google.GData.Spreadsheets.Spreadsheet instance by extracting the document key from the document URL, iterating over all spreadsheets, extracting each spreadsheet's URL, and comparing the two keys. The code looks like this:
private string GetKey(string url) {
    string res = null;
    Match match = Regex.Match(url, @"\?key=([A-Za-z0-9]+)");
    if (match.Success) {
        res = match.Groups[1].Value;
    }
    return res;
}

private SpreadsheetEntry GetSpreadSheetForDocument(SpreadsheetsService serviceSS, DocumentEntry entrySS) {
    SpreadsheetEntry res = null;
    string strSSKey = GetKey(entrySS.AlternateUri.Content);
    Google.GData.Spreadsheets.SpreadsheetQuery query = new Google.GData.Spreadsheets.SpreadsheetQuery();
    SpreadsheetFeed feed = serviceSS.Query(query);
    foreach (SpreadsheetEntry entry in feed.Entries) {
        if (GetKey(entry.AlternateUri.Content) == strSSKey) {
            res = entry;
            break;
        }
    }
    return res;
}
Is there another, more elegant and correct, way to do this?
As best I can tell, not only is there no better way to do this, but even this technique will fail. As of recent(?) changes to Google Drive API, the keys for the SAME DOCUMENT retrieved by Document List versus Spreadsheets APIs are incompatible. Though using a spreadsheet URL constructed from a key returned by Document List API WILL get you a SpreadsheetEntry, spreadsheet operations on that entry are likely to produce "Invalid Token" Authentication Exceptions.
Your mileage may vary, depending on the authentication style you use. I am using the least recommended User Credentials method.