How do I query the vimeo api for a specific video title? - node.js

Hi, I'm querying for a specific video by title, and at the moment I get mixed results.
My videos are all named with a consecutive number at the end, i.e. ANDNOW2022_00112, ANDNOW2022_00113, etc.
When I search /videos/?fields=uri,name&query=ANDNOW2022_00112, I get all of the videos returned.
I've also tried the query_fields parameter using
/me/videos?query_fields=title&sort=alphabetical&query=ANDNOW2022_00112
I just want the one I've searched for, or no results returned.
At the moment I get all of the videos with ANDNOW2022 in the title/name. Now, 'usually' the one I searched for is at the top of the list, but not every time.
Any tips appreciated.

Okay, I'm not going mad :)
This is from Vimeo and is here for those with the same issue; basically, to get it to work you need to understand that:
After speaking with our engineers: the current search capability is not an "exact" search.
When adding numbers or underscores, the search query is split into parts, so "ANDNOW2022_00112" transforms the query into the parts "andnow2022", "andnow", "2022", and "00112". This is why you're seeing these results. Our engineering team is in the process of improving the search capabilities and hopes to provide a release in the near future.
Which means for now I'll have to rename my files.

Preface:
Vimeo does not currently offer an API endpoint for exact title search — but even if it did — it's possible to upload multiple videos and assign them identical titles. There's no way to use the API to positively identify a video by title — this is why every uploaded video is assigned a unique ID.
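As an aside: if you can store a video's ID at upload time, you can skip search entirely and request the video directly. A minimal sketch, assuming a valid access token and the public GET /videos/{video_id} endpoint (the error handling and function name are my own additions):
// Fetch a single video directly by its unique ID; no search involved.
async function fetchVideoById (token, videoId) {
  const response = await fetch(`https://api.vimeo.com/videos/${videoId}`, {
    headers: {"Authorization": `Bearer ${token}`},
  });
  if (!response.ok) {
    throw new Error(`Vimeo API responded with status ${response.status}`);
  }
  return response.json(); // the full video object: uri, name, link, etc.
}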
Solution:
Because the API returns data which includes an array of video objects, you can solve this problem in the same way you'd solve any similar problem in JavaScript where you have to find an element in an array: Array.prototype.find()
Here's how you can apply it to your problem:
Query the API using the parameters you described in your question.
You might also be interested in using the sort and direction parameters for greater control over a deterministic sort order.
Find the first item in the returned array of video objects that matches your expected text exactly, and return it (or undefined if none exists).
Here's a code example with some static data from the API that was used to search for the video Mercedes Benz from the user egarage — note that I've omitted quite a few (irrelevant) fields from the response in order to keep the example small:
// Mocking fetch for this example:
function fetch (_requestInfo, _init) {
  const staticJson = `{"total":2,"page":1,"per_page":25,"paging":{"next":null,"previous":null,"first":"/users/egarage/videos?query_fields=title&query=Mercedes%20Benz&sort=alphabetical&direction=asc&page=1","last":"/users/egarage/videos?query_fields=title&query=Mercedes%20Benz&sort=alphabetical&direction=asc&page=1"},"data":[{"uri":"/videos/61310450","name":"50th Anniversary of the Pagoda SL -- Mercedes-Benz Classic Vehicles","description":"Penned by designer Paul Bracq, the W113 SL had big shoes to fill: it had the incredible task of succeeding the original and instantly iconic 300 SL Gullwing. But you can't copy a legend, so Bracq designed one of his own. Straight lines replaced curves and a low-slung roof was replaced by a high top design that gave the car its nickname: the Pagoda.\\n\\nMUSIC: Developer Over Time","type":"video","link":"https://vimeo.com/61310450"},{"uri":"/videos/55837293","name":"Mercedes Benz","description":"To celebrate Mercedes Benz 125th birthday, the 2011 Pebble Beach Concours d’Elegance showcased the models that trace the lineage to Benz and Daimler —particularly Mercedes-Benz. This tribute chronicled early racing greats, coachbuilt classics, and preservation cars. Produced in association with DriveCulture.","type":"video","link":"https://vimeo.com/55837293"}]}`;
  return Promise.resolve(new Response(staticJson));
}
async function fetchVideoByTitle (token, userId, videoTitle) {
  const url = new URL(`https://api.vimeo.com/users/${userId}/videos`);
  url.searchParams.set("query_fields", "title");
  url.searchParams.set("query", videoTitle);
  url.searchParams.set("sort", "alphabetical");
  url.searchParams.set("direction", "asc");
  const headers = new Headers([
    ["Authorization", `Bearer ${token}`],
  ]);
  const response = await fetch(url.href, {headers});
  const parsedJson = await response.json();
  // Find the video whose name matches the search text exactly (if it exists):
  const maybeFirstVideoObj = parsedJson.data.find(video => video.name === videoTitle);
  return maybeFirstVideoObj;
}
async function main () {
  const video = await fetchVideoByTitle(
    "YOUR_ACTUAL_TOKEN",
    "egarage",
    "Mercedes Benz",
  );
  console.log(video); // {name: "Mercedes Benz", link: "https://vimeo.com/55837293", ...}
}
main();

Related

Optimized way of face recognition using Azure Face API

I need to implement face recognition using the Azure Face API. I have developed a program which is able to find similar faces using the .NET SDK. For my use case, I need to take a photo of a person from the webcam and find matching faces among images kept in Azure cloud storage. Now, there could be thousands of images in Azure cloud storage, and in my current implementation of face recognition I'm iterating through all the images (kept in Azure cloud storage) and then matching them against the webcam image.
The concern here is:
The Face API (provided by Azure) charges 1 dollar per thousand calls. Is there a way the search could be optimized so that I don't have to re-scan the faces which I have already scanned in previous searches?
public async Task<List<DetectedFaceAttributes>> FindSimiliarFacesWithAttributesFromContainer(IFaceClient client, string RECOGNITION_MODEL1, string sourceImageFileName)
{
    string url = BlobBaseURL;
    string sourceurl = sourceContainerURL;
    var imagesInNovotraxContainer = await _blobService.GetNames();
    IList<Guid?> targetFaceIds = new List<Guid?>();
    var faceList = new List<DetectedFaceAttributes>();
    // Detect faces from source image url.
    IList<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{sourceurl}{sourceImageFileName}", RECOGNITION_MODEL1);
    if (detectedFaces.Any())
    {
        foreach (var targetImageFileName in imagesInNovotraxContainer)
        {
            var faceattribute = new DetectedFaceAttributes();
            // Detect faces from target image url.
            var faces = await DetectFaceRecognizeWithAttributes(client, $"{url}{targetImageFileName}");
            // Add detected faceId to list of GUIDs.
            if (faces.Any())
            {
                targetFaceIds.Add(faces[0].FaceId.Value);
                faceattribute.DetectedFace = faces[0];
                faceattribute.ImageFileName = targetImageFileName;
                faceList.Add(faceattribute);
            }
        }
        // Find similar face(s) in the list of IDs. Comparing only the first in the list for testing purposes.
        IList<SimilarFace> similarResults = await client.Face.FindSimilarAsync(detectedFaces[0].FaceId.Value, null, null, targetFaceIds);
        var similiarFaceIDs = similarResults.Select(y => y.FaceId).ToList();
        var returnDataTypefaceList = faceList.Where(x => similiarFaceIDs.Contains(x.DetectedFace.FaceId.Value)).ToList();
        return returnDataTypefaceList;
    }
    else
    {
        throw new Exception("no face detected in captured photo");
    }
}
public async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string RECOGNITION_MODEL1)
{
    // Detect faces from image URL. Since only recognizing, use recognition model 1.
    IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: RECOGNITION_MODEL1);
    //if (detectedFaces.Any())
    //{
    //    Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}` with ID : {detectedFaces.First().FaceId}");
    //}
    return detectedFaces.ToList();
}
Your implementation is not totally clear to me in terms of what calls go to the Face API versus your storage (what's behind "DetectFaceRecognizeWithAttributes"), but I think you are right that you are missing something: your overall processing is overly costly.
What you should do depends on your target:
Is it face "identification"?
Or face "similarity"?
Both have the same logic, but they use different API operations.
Case 1 - Face identification
Process
The global process is the following: you will use a "Person Group" or "Large Person Group" (depending on the number of persons you have) to store data about the faces that you already know (the ones in your storage), and you will use this group to "identify" a new face. With that, you will do a "1-n" search, not the "1-1" search you are doing right now.
Initial setup (group creation):
Choose whether you need a Person Group or a Large Person Group; here are the current limits depending on your pricing tier:
Person Group:
Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons.
S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons.
Large Person Group:
It can hold up to 1,000,000 persons.
Free-tier subscription quota: 1,000 large person groups.
S0-tier subscription quota: 1,000,000 large person groups.
Here I am using Person Group in the explanation, but the methods are the same for both.
When you know which one you need, create it using the "Create" operation.
Then, for each person, create a "PersonGroup Person" using "PersonGroup Person - Create", and add the corresponding faces to it using "PersonGroup Person - Add Face". Once that is done, you never need to run the "detect" operation on those faces again.
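To make that setup concrete, here is a minimal sketch using the Face API v1.0 REST endpoints from Node.js (18+, for built-in fetch). The key, endpoint host, group ID, person name, and image URL are all placeholder assumptions; the .NET SDK used in the question wraps these same operations:
// One-time setup: create a person group, add a person, attach its known faces, then train.
// KEY and ENDPOINT are placeholders for your Face resource's key and endpoint.
const KEY = "YOUR_FACE_API_KEY";
const ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com/face/v1.0";
async function faceApi (method, path, body) {
  const response = await fetch(`${ENDPOINT}${path}`, {
    method,
    headers: {
      "Ocp-Apim-Subscription-Key": KEY,
      "Content-Type": "application/json",
    },
    body: body ? JSON.stringify(body) : undefined,
  });
  if (!response.ok) throw new Error(`Face API ${path} failed: ${response.status}`);
  const text = await response.text();
  return text ? JSON.parse(text) : null;
}
async function setUpGroup () {
  // "Create" operation for the group itself.
  await faceApi("PUT", "/persongroups/mygroup", {name: "my group"});
  // "PersonGroup Person - Create" for each known person.
  const person = await faceApi("POST", "/persongroups/mygroup/persons", {name: "person 1"});
  // "PersonGroup Person - Add Face", one call per stored image; never re-detected afterwards.
  await faceApi("POST", `/persongroups/mygroup/persons/${person.personId}/persistedFaces`,
    {url: "https://yourstorage.blob.core.windows.net/images/person1.jpg"});
  // Train the group so the Identify operation can use it.
  await faceApi("POST", "/persongroups/mygroup/train");
}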
Then for the "run" part
When you have a new image that you want to compare:
Detect faces in your image with Detect endpoint of Face API
Get the face Ids of your result
Call Identify endpoint of Face API to try to identify those face Ids with your (large) person group
To limit the number of calls, you can even batch the identification calls (up to 10 "input" face IDs in one call; see the docs).
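Continuing the sketch above (reusing the hypothetical faceApi helper), the per-photo run is then one Detect call plus one Identify call, instead of one Detect call per stored image:
// Per new webcam photo: one Detect call, then one Identify call against the trained group.
async function identifyPhoto (photoUrl) {
  const detected = await faceApi("POST", "/detect?returnFaceId=true", {url: photoUrl});
  const faceIds = detected.map(face => face.faceId).slice(0, 10); // max 10 per Identify call
  const results = await faceApi("POST", "/identify", {
    personGroupId: "mygroup",
    faceIds: faceIds,
    maxNumOfCandidatesReturned: 1,
  });
  return results; // each entry lists candidate personIds with confidence scores
}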
Case 2 - Face similarity
Here you can use a "Face List" or "Large Face List" to store the faces that you already know, and pass the ID of this list when calling the "Find Similar" operation. Example with FaceList:
Start with "FaceList - Create" to create your list (doc)
Use "FaceList - Add Face" to add all the faces that you currently have in your blob (doc)
Then, for the run, when you call "Find Similar", provide the ID of your FaceList in the "faceListId" parameter, together with the ID of the face you want to compare (from the Face Detect call).
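As a rough sketch of the similarity flow, again with the same hypothetical faceApi helper and placeholder list ID and URLs:
// One-time setup: create the face list and add every face currently in your blob storage.
async function setUpFaceList (imageUrls) {
  await faceApi("PUT", "/facelists/mylist", {name: "known faces"});
  for (const url of imageUrls) {
    await faceApi("POST", "/facelists/mylist/persistedFaces", {url: url});
  }
}
// Per new photo: detect once, then call Find Similar against the persisted list.
async function findSimilarInList (photoUrl) {
  const detected = await faceApi("POST", "/detect?returnFaceId=true", {url: photoUrl});
  return faceApi("POST", "/findsimilars", {
    faceId: detected[0].faceId,
    faceListId: "mylist",
  });
}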

How to display arbitrary, schemaless data in HTML with node.js / mongodb

I'm using mongodb to store application error logs as json documents. I want to be able to format the error logs as HTML rather than returning the plain json to the browser. The logs are properly schemaless - they could change at any time, so it's no use trying to do this (in Jade):
- var items = jsonResults
each item in items
  h3 Server alias: #{item.ServerAlias}
  p UUID: #{item.UUID}
  p Stack trace: #{item.StackTrace}
  h3 Session: #{item.Session}
  p URL token: #{item.Session.UrlToken}
  p Session messages: #{item.Session.SessionMessages}
as I don't know what's actually going to be in the JSON structure ahead of time. What I want is surely possible, though? Everything I'm reading says that the schema isn't enforced by the database but that your view code will outline your schema anyway; however, we've got hundreds of possible fields that could be removed or added at any time, so maintaining the views in this way is fairly unmanageable.
What am I missing? Am I making the wrong assumptions about the technology? Going at this the wrong way?
Edited with extra info following comments:
The json docs look something like this
{
  "ServerAlias": "GBIZ-WEB",
  "Session": {
    "urltoken": "CFID=10989&CFTOKEN=f07fe950-53926E3B-F33A-093D-3FCEFB&jsessionid=84303d29a229d1",
    "captcha": {},
    "sessionmessages": {},
    "sessionid": "84197a667053f63433672873j377e7d379101"
  },
  "UUID": "53934LBB-DB8F-79T6-C03937JD84HB864A338",
  "Template": "\/home\/vagrant\/dev\/websites\/g-bis\/code\/webroot\/page\/home\/home.cfm, line 3",
  "Error": {
    "GeneratedContent": "",
    "Mailto": "",
    "RootCause": {
      "Message": "Unknown tag: cfincflude.",
      "tagName": "cfincflude",
      "TagContext": [
        {
          "RAW_TRACE": "\tat cfhome2ecfm1296628853.runPage(\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/home\/home.cfm:3)",
          "ID": "CFINCLUDE",
          "TEMPLATE": "\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/home\/home.cfm",
          "LINE": 3,
          "TYPE": "CFML",
          "COLUMN": 0
        },
        {
          "RAW_TRACE": "\tat cfdisplay2ecfm1093821753.runPage(\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/display.cfm:6)",
          "ID": "CFINCLUDE",
          "TEMPLATE": "\/home\/vagrant\/dev\/websites\/nig-bis\/code\/webroot\/page\/display.cfm",
          "LINE": 6,
          "TYPE": "CFML",
          "COLUMN": 0
        }
      ]
    }
  }
}
... etc, but is likely to change depending on what the individual project that generates the log is configured to trigger.
What I want to end up with is a formatted HTML page with headers for each parent and the children listed below, iterating right through the data structure. The Jade sample above is effectively what we need to output, but without hard-coding that in the view.
Mike's analysis in the comments is bang-on: the problem is that of creating a table-like structure from a bunch of collections that haven't really got a lot in common. The data is relational, but only within individual documents, so hard-coding the schema into anything is virtually impossible, as it requires you to know what the data structure looks like first.
The basic idea is what @Gates VP described. I use underscore.js to iterate through the arrays/objects.
function formatLog (obj) {
  var log = "";
  _.each(obj, function (val, key) {
    // typeof never returns "array"; arrays and plain objects both report "object",
    // so one check covers both (guarding against null, which is also "object").
    if (val !== null && typeof val === "object") {
      // if we have a nested object or array, recurse into a new list
      log += "<ul>";
      log += formatLog(val);
      log += "</ul>";
    } else {
      // if we are at an endpoint
      log += "<li>";
      log += (key + ": " + val);
      log += "</li>";
    }
  });
  return log;
}
If you call formatLog() on the example data you gave, it returns nested <ul>/<li> markup containing (one list item per line here, nesting omitted):
ServerAlias: GBIZ-WEB
urltoken: CFID=10989&CFTOKEN=f07fe950-53926E3B-F33A-093D-3FCEFB&jsessionid=84303d29a229d1
sessionid: 84197a667053f63433672873j377e7d379101
UUID: 53934LBB-DB8F-79T6-C03937JD84HB864A338
Template: /home/vagrant/dev/websites/g-bis/code/webroot/page/home/home.cfm, line 3
GeneratedContent:
Mailto:
Message: Unknown tag: cfincflude.
tagName: cfincflude
RAW_TRACE: at cfhome2ecfm1296628853.runPage(/home/vagrant/dev/websites/nig-bis/code/webroot/page/home/home.cfm:3)
ID: CFINCLUDE
TEMPLATE: /home/vagrant/dev/websites/nig-bis/code/webroot/page/home/home.cfm
LINE: 3
TYPE: CFML
COLUMN: 0
RAW_TRACE: at cfdisplay2ecfm1093821753.runPage(/home/vagrant/dev/websites/nig-bis/code/webroot/page/display.cfm:6)
ID: CFINCLUDE
TEMPLATE: /home/vagrant/dev/websites/nig-bis/code/webroot/page/display.cfm
LINE: 6
TYPE: CFML
COLUMN: 0
How to format it then is up to you.
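For completeness, a minimal sketch of serving that output from Node.js, assuming Express and a hypothetical fetchLogById function standing in for your actual MongoDB lookup (note that in production you would HTML-escape the values before interpolating them):
var express = require("express");
var app = express();
app.get("/logs/:id", function (req, res) {
  // fetchLogById is a placeholder for however you load the document from MongoDB.
  fetchLogById(req.params.id, function (err, logDoc) {
    if (err) { return res.status(500).send("log lookup failed"); }
    // Wrap the generated <li> items in a top-level <ul>.
    res.send("<html><body><ul>" + formatLog(logDoc) + "</ul></body></html>");
  });
});
app.listen(3000);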
This is basically a recursive for loop.
To do this with Jade you will need to use mixins so that you can print nested objects by calling the mixin with a deeper level of indentation.
Note that this whole thing is a little ugly as you won't get guaranteed ordering of fields and you may have to implement some logic to differentiate looping on arrays vs. looping on JSON objects.
You can try util.inspect. In your template:
pre
  = util.inspect(jsonResults)

How to specify the class selector in web scraping using nodejs

I'm trying to implement scraping using Node.js. I'm sending a request to a particular URL, and when I get the response I store each product on the page into an array. For each product I'm trying to display details like the product name, price, and discount. I'm doing this with the following code.
var $products = $body.find('.fashion-item');
$products.each(function (i, item) {
  var $name = $(item).find('.info .title').text(),
      $price = $(item).find('span.price.regular').text().substr(6),
      $discount = $(item).find('span.price.percentoff').text().slice(0, 2);
  self.items[i] = {
    title: $name,
    price: $price,
    discount: $discount,
  };
});
console.log(self.items);
It works fine. If the class name is like 'fashion-item' or 'fashion-item-first', all of this works. But if the class name contains a space between the words, no products get stored in the array ($products), i.e. the length of $products is zero. My question is: how do I do the same thing for a class name like 'fashion-item first'? I've tried a lot, but I haven't found an answer. Please help me.
Class names cannot contain spaces. In your example, 'fashion-item first' is two classes assigned to the same element, which can be matched by either the .fashion-item or the .first selector.
If you want items that have both the info and title classes, select them with this (no space between the selectors):
$(item).find('.info.title')
If you want items with either of these classes, use this:
$(item).find('.info, .title')
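A small runnable sketch of the difference, using cheerio (the HTML is made up for illustration):
var cheerio = require("cheerio");
var $ = cheerio.load(
  "<div class='fashion-item first'>A</div>" +
  "<div class='fashion-item'>B</div>" +
  "<div class='first'>C</div>"
);
// Compound selector (no space): elements carrying BOTH classes.
console.log($(".fashion-item.first").length); // 1 -> only "A"
// Union selector (comma): elements carrying EITHER class.
console.log($(".fashion-item, .first").length); // 3 -> "A", "B", "C"
// A space means descendant, which is why ".fashion-item first" matches nothing here.
console.log($(".fashion-item first").length); // 0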

Editing Time Field in a Spotify App Playlist

I am having trouble removing the time field from a playlist in the Spotify App. I only want to display Song title, Artist and Album. Any help on this topic would be truly helpful.
You can specify which track attributes to display using the List object. An example is here: http://deceptacle.com/2011/12/26/spotify-app-api-some-things/
var list = new views.List(playlist, function (track) {
  return new views.Track(track, views.Track.FIELD.NAME |
                                views.Track.FIELD.ARTIST |
                                views.Track.FIELD.ALBUM);
});
$("#playlist").append(list.node);
Update
More fleshed out example:
var pl = m.Playlist.fromURI("spotify:user:rhino_records:playlist:6sSFeKDgDxVR81YqNOuPf2");
var list = new v.List(pl, function (track) {
  return new v.Track(track, v.Track.FIELD.NAME |
                            v.Track.FIELD.ARTIST |
                            v.Track.FIELD.ALBUM);
});
document.body.appendChild(list.node);
m is the models namespace, v is the views namespace. For some reason it's not quite working for me: the observer methods don't seem to be called at the moment, though that appears to be a Spotify backend issue; they do get called eventually after some prodding.

How do I optimize the loading time of Google Earth Web Plugin if I have a very large KML file?

My client has a very large KML file containing around 6000+ placemarks, and these placemarks have to appear as 3D models on the map. What's even worse is that the KML file does not contain the <Link> tag for the COLLADA file, which means I have to traverse the KML file and attach a COLLADA file to each placemark before it can be displayed on Google Earth. I can already traverse the file and add a 3D model for each placemark. I tested it with a very small KML file (fewer than 100 placemarks) and it worked as expected... but when I used the very large KML file with 6000+ placemarks, the plugin just crashes and I am advised to refresh the browser, only to find it crashing again... :(
Any advice on this?
Is there a limit to the number of placemarks I can put on the Google Earth web plugin?
How do I make the loading faster?
function render3DPoles (polesKmlLink) {
  google.earth.fetchKml(ge, polesKmlLink, function (object) {
    if (object) {
      var item = object.getFeatures().getChildNodes().item(0);
      var folder = item.getFeatures().getChildNodes();
      var latitude;
      var longitude;
      for (var i = 0; i < folder.getLength(); i++) {
        var placemark = folder.item(i);
        latitude = placemark.getGeometry().getLatitude();
        longitude = placemark.getGeometry().getLongitude();
        var model = ge.createModel('');
        var location = ge.createLocation('');
        model.setLocation(location);
        var link = ge.createLink('');
        link.setHref('insert URL of collada file here');
        model.setLink(link);
        location.setLatitude(latitude);
        location.setLongitude(longitude);
        model.setLocation(location);
        placemark.setGeometry(model);
        ge.getFeatures().appendChild(placemark);
      }
      var la = ge.createLookAt('');
      la.set(latitude, longitude, 25, ge.ALTITUDE_RELATIVE_TO_GROUND, 180, 60, 100);
      ge.getView().setAbstractView(la);
    } else {
      setTimeout(function () {
        alert('Bad or null KML.');
      }, 0);
    }
  });
}
I have dealt with tens of thousands of placemarks and other KML objects in the Earth plugin, and the best approach I've found is to manage the data in a JavaScript object; dealing with them as standard KML features would often crash things.
Parse the KML beforehand into something that you can handle as an object.
Start with:
var myModels = {};
For adding the first item, if the id is 'foo' (or a numeric value), do something like:
myModels['foo'] = {
  description: 'some description for foo',
  longitude: 100.12345,
  latitude: 45.4567,
  linkUrl: 'http://mydomain.com/mymodels/foo.dae',
  kmlObject: null
};
Now, looking at the code from
http://code.google.com/apis/ajax/playground/?exp=earth#creating_3d_models
you will have something similar to:
var placemark = ge.createPlacemark('');
placemark.setName('model');
var model = ge.createModel('');
.......
placemark.setGeometry(model);
Now go:
myModels['foo'].kmlObject = placemark;
This will give you a quick reference to the model if you need to change or remove it; just use
myModels['foo'].kmlObject
with no need to traverse the tree again.
One thing I found to be faster was to define the KML client side and use parseKml to load it, rather than creating individual objects and adding them one by one.
You can also build in logic to only add the DAE models that are within the area the user is looking at.
To loop through all the items:
var i, iModel;
for (i in myModels) {
  iModel = myModels[i];
  if (iModel.kmlObject == null) {
    // add an if statement here to check whether the model's coordinates
    // (iModel.longitude, iModel.latitude) are in the current view
  }
}
Also, rather than adding everything in one operation, add 10-20 models in a single batch, use setTimeout to start the next batch a second later, and leverage executeBatch:
https://developers.google.com/earth/documentation/reference/google_earth_namespace#ab26414915202d39cad12bcd5bd99e739
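A rough sketch of that batching pattern, assuming the myModels registry above and a hypothetical createPlacemarkFor helper that builds the placemark for one entry:
// Process the models in small batches so the plugin stays responsive.
function addModelsInBatches (ids, batchSize) {
  var batch = ids.slice(0, batchSize);
  var rest = ids.slice(batchSize);
  // executeBatch runs the callback's plugin calls as a single batched operation.
  google.earth.executeBatch(ge, function () {
    for (var j = 0; j < batch.length; j++) {
      var model = myModels[batch[j]];
      model.kmlObject = createPlacemarkFor(model); // hypothetical helper
      ge.getFeatures().appendChild(model.kmlObject);
    }
  });
  if (rest.length > 0) {
    // Schedule the next batch a second later.
    setTimeout(function () { addModelsInBatches(rest, batchSize); }, 1000);
  }
}
addModelsInBatches(Object.keys(myModels), 15);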
While there's no technical limit, the more you add, the more resources get used. Depending on the complexity of your models and other data you will be able to add more or less, but 6000 is quite a lot to display at once, even for simple markers.
One solution for a large/complex KML is to create regionated KMLs instead, such that you are loading just the data you need at any given time, based on visible regions (see http://code.google.com/apis/kml/documentation/regions.html )
