Discord.js - different variables in different servers - node.js

I have a bot that sets a variable to true when a radio station starts playing. I want that variable to be unique for each server the bot is used in, but I don't know how to do this.
This is my code for the system:
const Radio = {
    Predvaja: false,
    Radio1: false,
    RadioCity: false,
    RadioDJCity: false,
    RadioCenter: false,
    RadioCenter100: false,
    RadioMTV: false,
    RadioPtuj: false,
    RadioOtvoreni: false,
    RadioAktual: false,
    RadioVeseljak: false,
    RadioDalmacija: false
}

if (msg === 'radio 1') {
    Radio.Predvaja = true;
    Radio.Radio1 = true;
}

Servers Map
You should create a global Map object in your bot. Have a look at examples of how to use it.
The Map stores key-value pairs and is designed for efficient lookups by key, so you can use the server (guild) ID as the key for each server's information about which radio stations are playing right now.
let stationsPerServer = new Map();

// when a server adds your bot
client.on("guildCreate", guild => {
    stationsPerServer.set(guild.id, new Set());
});

// station activation per server
function activateStation(guildId, stationName) {
    const stations = stationsPerServer.get(guildId);
    stations.add(stationName);
    // add any other custom actions, like broadcasting to a voice channel
}

// station deactivation per server
function deactivateStation(guildId, stationName) {
    stationsPerServer.get(guildId).delete(stationName);
    // add any other custom actions, like leaving a voice channel
}

// is the station playing in this server?
function isStationActive(guildId, stationName) {
    return stationsPerServer.get(guildId).has(stationName);
}

// deactivate a station for all servers
function deactivateStationGlobally(stationName) {
    for (let stations of stationsPerServer.values()) {
        stations.delete(stationName);
        // add any other custom actions, like leaving the voice channel
    }
}
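As a rough illustration of how these helpers could be wired into your command handling (a minimal sketch assuming a discord.js client; the 'radio stop' command is made up for the example):

// minimal sketch, assuming `client` is your discord.js Client
client.on("message", message => {
    if (!message.guild) return; // ignore DMs, there is no server to track

    if (message.content === "radio 1") {
        activateStation(message.guild.id, "Radio1");
    } else if (message.content === "radio stop") { // hypothetical command
        deactivateStation(message.guild.id, "Radio1");
    }
});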
Stations vector
What you want to store is basically a binary vector. You have at least three options for how to do that:
Dedicated object
That's your current solution. An object full of predefined boolean values. Not very flexible.
Using a set
It is much better to keep a Set of the station names that are playing right now. A Set stores each value only once and is optimized for fast lookups. The boolean value in your object structure then becomes a query for whether a station is present in the set of active stations.
There are several advantages when using sets:
You can later add other stations easily, or remove existing ones if they stop broadcasting without changing the structure of your object.
You don't need to store the negative (false) values for each server. This would be storage-efficient if you had lots of stations and lots of servers.
If necessary, you can keep a single global set of all known stations for reference.
So per server you can do something like:
let activeStations = new Set(); //initialize the set
//add some stations
//(equals setting the station field to true)
activeStations.add('RadioPtuj');
activeStations.add('RadioDalmacia');
//now the set contains {'RadioPtuj', 'RadioDalmacia'}
//adding a station twice does not change the set
// a set stores each member just once,
// unlike an array that does not care about duplicates
activeStations.add('RadioDalmacia');
//the set still contains only {'RadioPtuj', 'RadioDalmacia'}
//find out if a station is active
//(equals checking the boolean station field)
activeStations.has('RadioPtuj'); //true
activeStations.has('RadioOtvoreni'); //false
//remove an inactive station
//(equals setting the station field to false)
activeStations.delete('RadioDalmacia');
//now the set contains {'RadioPtuj'}
//deletion of a station not present in the set has no effect
activeStations.delete('RadioMTV');
//the set still contains {'RadioPtuj'}
Enum
JavaScript does not support enums out of the box, but you could assign binary codes to the stations and use bitwise operations and masks to replace the set of strings with numbers. This would only become relevant if you had thousands of servers running your bot.
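For illustration, a minimal sketch of what such a bit-flag encoding could look like (the codes below are made up for the example):

// each station gets one bit; a single number then encodes the whole vector
const STATIONS = {
    Radio1:    1 << 0,
    RadioCity: 1 << 1,
    RadioMTV:  1 << 2,
    RadioPtuj: 1 << 3
};

let active = 0;                                   // no stations playing
active |= STATIONS.Radio1;                        // activate Radio1
active |= STATIONS.RadioPtuj;                     // activate RadioPtuj
const playing = (active & STATIONS.Radio1) !== 0; // true
active &= ~STATIONS.RadioPtuj;                    // deactivate RadioPtuj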
Adding more server data
The Map is great for keeping all the per-server data, not only the set of active stations. I assume you will need to store dispatcher objects once you start streaming the stations into voice channels, etc. You can pack all this into a server class and have the Map store an instance of that class for each server.
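A rough sketch of what such a per-server class might look like (the fields besides the station set are placeholders for whatever you end up needing):

class ServerData {
    constructor() {
        this.activeStations = new Set();
        this.dispatcher = null;   // stream dispatcher, once you start playing audio
        this.voiceChannel = null; // whatever else you need per server
    }
}

let servers = new Map();

client.on("guildCreate", guild => {
    servers.set(guild.id, new ServerData());
});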

Related

Optimized way of face recognition using Azure Face API

I need to implement face recognition using the Azure Face API. I have developed a program which is able to find similar faces using the .NET SDK. For my use case, I need to take a photo of a person from the webcam and find matching faces among images kept in Azure cloud storage. Now, there could be thousands of images in Azure cloud storage, and in my current implementation of face recognition I'm iterating through all the images (kept in Azure cloud storage) and then matching them with the webcam image.
The concern here is:
The Face API (provided by Azure) charges 1 dollar per thousand calls. Is there a way the search could be optimized such that I don't have to scan the faces which I have already scanned for previous searches?
public async Task<List<DetectedFaceAttributes>> FindSimiliarFacesWithAttributesFromContainer(IFaceClient client, string RECOGNITION_MODEL1, string sourceImageFileName)
{
string url = BlobBaseURL;
string sourceurl = sourceContainerURL;
var imagesInNovotraxContainer = await _blobService.GetNames();
IList<Guid?> targetFaceIds = new List<Guid?>();
var faceList = new List<DetectedFaceAttributes>();
// Detect faces from source image url.
IList<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{sourceurl}{sourceImageFileName}", RECOGNITION_MODEL1);
if (detectedFaces.Any())
{
foreach (var targetImageFileName in imagesInNovotraxContainer)
{
var faceattribute = new DetectedFaceAttributes();
// Detect faces from target image url.
var faces = await DetectFaceRecognizeWithAttributes(client, $"{url}{targetImageFileName}");
// Add detected faceId to list of GUIDs.
if (faces.Any())
{
targetFaceIds.Add(faces[0].FaceId.Value);
faceattribute.DetectedFace = faces[0];
faceattribute.ImageFileName = targetImageFileName;
faceList.Add(faceattribute);
}
}
// Find a similar face(s) in the list of IDs. Comparing only the first in the list for testing purposes.
IList<SimilarFace> similarResults = await client.Face.FindSimilarAsync(detectedFaces[0].FaceId.Value, null, null, targetFaceIds);
var similiarFaceIDs = similarResults.Select(y => y.FaceId).ToList();
var returnDataTypefaceList = faceList.Where(x => similiarFaceIDs.Contains(x.DetectedFace.FaceId.Value)).ToList();
return returnDataTypefaceList;
}
else
{
throw new Exception("no face detected in captured photo ");
}
}
public async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string RECOGNITION_MODEL1)
{
// Detect faces from image URL. Since only recognizing, use the recognition model 1.
IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: RECOGNITION_MODEL1);
//if (detectedFaces.Any())
//{
// Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}` with ID : {detectedFaces.First().FaceId}");
//}
return detectedFaces.ToList();
}
Your implementation is not totally clear to me in terms of calls to the Face API / your storage (what's behind "DetectFaceRecognizeWithAttributes"). But I think you are right that you have missed something and your overall processing is overly costly.
What you should do depends on your target:
Is it face "identification"?
Or face "similarity"?
Both have the same logic, but they use different API operations.
Case 1 - Face identification
Process
The global process is the following: you will use a "Person Group" or "Large Person Group" (depending on the number of persons you have) to store data about the faces that you already know (the ones in your storage), and you will use this group to "identify" a new face. With that, you will do a "1-n" search, not "1-1" as you do right now.
Initial setup (group creation):
Choose whether you need a Person Group or a Large Person Group; here are the current limits depending on your pricing tier:
Person Group:
Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons.
S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons.
Large Person Group:
It can hold up to 1,000,000 persons.
Free-tier subscription quota: 1,000 large person groups.
S0-tier subscription quota: 1,000,000 large person groups.
Here I am using Person Group in the explanation, but it's the same methods.
When you know which one you need, create it using the "Create" operation.
Then, for each person, you will have to create a "PersonGroup Person" using "PersonGroup Person - Create", and add the corresponding faces to it using "PersonGroup Person - Add Face". Once that is done, you never need to run the "Detect" operation on those faces again.
Then for the "run" part
When you have a new image that you want to compare:
Detect faces in your image with Detect endpoint of Face API
Get the face Ids of your result
Call Identify endpoint of Face API to try to identify those face Ids with your (large) person group
To limit the number of calls, you can even batch the identification calls (up to 10 "input" face Ids per call - see the doc).
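As a rough illustration of the "run" part against the Face API REST endpoints (a Node.js sketch; the endpoint, key, and person group id below are placeholders, and it assumes the group was already created, populated, and trained):

// minimal sketch, Node 18+ (global fetch); ENDPOINT, KEY and the group id are placeholders
const ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com";
const HEADERS = {
    "Ocp-Apim-Subscription-Key": "YOUR-FACE-API-KEY",
    "Content-Type": "application/json"
};

async function identify(imageUrl, personGroupId) {
    // 1. detect faces in the new image to get their face ids
    const detectRes = await fetch(`${ENDPOINT}/face/v1.0/detect?returnFaceId=true`, {
        method: "POST", headers: HEADERS, body: JSON.stringify({ url: imageUrl })
    });
    const faceIds = (await detectRes.json()).map(f => f.faceId);

    // 2. identify those face ids against the (trained) person group
    //    (up to 10 input face ids per call, as noted above)
    const identifyRes = await fetch(`${ENDPOINT}/face/v1.0/identify`, {
        method: "POST", headers: HEADERS,
        body: JSON.stringify({ personGroupId, faceIds: faceIds.slice(0, 10) })
    });
    return identifyRes.json(); // candidate persons per input face id
}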
Case 2 - Face similarity
Here you can use a "Face List" or "Large Face List" to store the faces that you already know, and pass the id of this list when calling "Find Similar" operation. Example with FaceList:
Start with "FaceList - Create" to create your list (doc)
Use "FaceList - Add Face" to add all the faces that you have currently in your blob (doc)
Then for the run, when you call "Find Similar", provide the ID of your FaceList in the "faceListId" parameter along with the id of the face you want to compare (from the Face - Detect call).
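Again only as a sketch (same placeholder endpoint and headers as above), the "run" call could look like:

// find faces in an existing face list that are similar to one detected face id
async function findSimilar(faceId, faceListId) {
    const res = await fetch(`${ENDPOINT}/face/v1.0/findsimilars`, {
        method: "POST", headers: HEADERS,
        body: JSON.stringify({ faceId, faceListId, maxNumOfCandidatesReturned: 10 })
    });
    return res.json(); // [{ persistedFaceId, confidence }, ...]
}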

Bringing Active Directory Users using JNDI in multiple threads

I have designed an application which brings the users from Active Directory into a MySQL database and shows them on a GUI. It also brings in the groups that each user is a member of.
So, my program works this way:
for(String domain : allConfiguredADomains) {
LdapContext domainCtx = getDomainCtx(domain);
// Bring all users from this domain and store them in DB
getAllUsersForDomain(domain, domainCtx);
// Bring all the groups for every user
getAllGroupsForUsersInTheDomain(domain, domainCtx)
}
void getAllUsersForDomain(String domain, LdapContext domainCtx) {
String filter = "(objectClass=User)";
NamingEnumeration<SearchResult> result = domainCtx.search(domain, filter, ..);
while(result.hasMoreElements()) {
SearchResult searchResult = (SearchResult) result.nextElement();
// Process and store in database
storeUserInDatabase(searchResult);
}
}
void getAllGroupsForUsersInTheDomain(String domain, LdapContext domainCtx) {
List<String> userDistinguishedNames = getAllUsersFromDatabase("distinguishedName");
for(String userDn : userDistinguishedNames) {
String filter = "(&(objectClass=Group)(distinguishedName=" + userDn + "))";
NamingEnumeration<SearchResult> result = domainCtx.search(domain, filter, ..);
List<String> allGroupsOfUser = new ArrayList<>();
while(result.hasMoreElements()) {
SearchResult searchResult = (SearchResult) result.nextElement();
String groupDistinguishedName = (String) searchResult.getAttributes().get("distinguishedName").get();
allGroupsOfUser.add(groupDistinguishedName);
}
// Store them in database
storeAllGroupsOfUserInDatabase(userDn, allGroupsOfUser);
}
}
This application, however, takes a lot of time when there are many users in Active Directory. So, I decided to introduce parallelism (using threading). I divided the work using a search filter on the distinguishedName of a user:
String filter = "(&(objectClass=User)(distinguishedName=a*))";
and so on, in each thread, while fetching users.
I got better performance, but still not good enough. Can someone suggest a better way?
Also, I don't have an idea how I can introduce parallelism while fetching groups.
If someone has any suggestions for doing this better with PowerShell or C#, please suggest them; I am open to other technologies.
Please note: reading user attribute memberOf does not provide all groups, hence I am fetching groups separately.
I'm not an Active Directory expert - just wanted to share some thoughts.
Threading by alphabet letter allows a maximum of 26 threads. Have you considered creating search threads by some other attribute, such as group membership? That might let you create more threads.
Review the Active Directory docs to see whether there is a way to improve search performance (for example, with a database we could create an index).

Data tracking in DocumentDB

I am trying to keep a history of data (at least one step back) in DocumentDB.
For example, say I have a property called Name in a document with the value "Pieter". If I change it to "Sam", I have to keep a record that it was "Pieter" previously.
As of now I am thinking of a pre-trigger. Any other solutions?
Cosmos DB (formerly DocumentDB) now offers change tracking via Change Feed. With Change Feed, you can listen for changes on a particular collection, ordered by modification within a partition.
Change feed is accessible via:
Azure Functions
DocumentDB (SQL) SDK
Change Feed Processor Library
For example, here's a snippet from the Change Feed documentation, on reading from the Change Feed, for a given partition (full code example in the doc here):
IDocumentQuery<Document> query = client.CreateDocumentChangeFeedQuery(
collectionUri,
new ChangeFeedOptions
{
PartitionKeyRangeId = pkRange.Id,
StartFromBeginning = true,
RequestContinuation = continuation,
MaxItemCount = -1,
// Set reading time: only show change feed results modified since StartTime
StartTime = DateTime.Now - TimeSpan.FromSeconds(30)
});
while (query.HasMoreResults)
{
FeedResponse<dynamic> readChangesResponse = query.ExecuteNextAsync<dynamic>().Result;
foreach (dynamic changedDocument in readChangesResponse)
{
Console.WriteLine("document: {0}", changedDocument);
}
checkpoints[pkRange.Id] = readChangesResponse.ResponseContinuation;
}
If you're trying to make an audit log, I'd suggest looking into Event Sourcing. Building your domain from events ensures a correct log. See https://msdn.microsoft.com/en-us/library/dn589792.aspx and http://www.martinfowler.com/eaaDev/EventSourcing.html

in a civicrm webform, create multiple 'groups' fields

In a CiviCRM webform, you can 'enable tags and groups'. Configuring those allows you to create option elements in the webform.
This creates one 'widget': one dropdown or one set of checkboxes. I have two field instances where I want the user to select a group - say, for example:
which mailing lists do you want to receive (a, b, c)
what food are you interested in (d, e, f)
a, b, c, d, e and f are all groups. I cannot change that.
How could I do that?
A technical suggestion below, but first, I'd suggest that your real solution is to not use groups for the second question. Groups are set up nicely to handle mailing lists, but if it's to track interests, you'd be better off setting those up as custom fields. It'll solve this immediate issue, and it'll make it easier to deal with tandem searches and so forth (on list b and likes food d).
Now if you must have them as groups, you can create a fake field and move checkboxes into it using jQuery. Create the fake field with one option that you don't care about, but label it "What food are you interested in", or equivalent. Then, edit the Groups field that CiviCRM generated: label it more specifically as "which mailing lists...", and choose Static Options so it doesn't start offering up just any group for someone to choose.
Now, add the following javascript:
// first remove the dummy checkboxes in your fake field
$('#yourdummyfield .form-item').each( function() { $(this).remove(); });
// now move things into the dummy field
$('#yourdummyfield').append( $('#groupsfield .form-item-d') );
$('#yourdummyfield').append( $('#groupsfield .form-item-e') );
$('#yourdummyfield').append( $('#groupsfield .form-item-f') );
From the form processing perspective, they'll all be evaluated as the "groups" field. However, they'll look separate. For better or worse, this will have to be adjusted as you add new groups fields.
After using Andrew Hunt's suggestion for a while, I finally solved this on the server side, in a custom module, using webform logic as described here:
http://www.webomelette.com/drupal-webform-submission-presave-hook
Basically, on presave, I look for two custom fields containing group ids (mailing and food in the example). Then I add these to the CiviCRM groups field.
I'll add the code below, which has some more logic:
To make it flexible, I use one hidden field to contain the field key of the CiviCRM groups selector that the other fields should be added to. That field is called 'the_groups_element' (but it is not the groups element itself; it contains the key of the groups element).
Only one food group is allowed, so before adding you to a food group, it removes all other food groups from the groups selector.
You could probably make it even more generic, but since I had different logic for the different groups, this was suitable for me.
function getFoodGroups() {
// return foodgroups
}
function getMailGroups() {
// return mailgroups
}
function MYMODULE_webform_submission_presave($node, &$submission) {
$debug = false; // set to true to see the drupal_set_message() debug output
$groupselm = '';
$groups_cid = false;
$foods_cid = false;
$mailings_cid = false;
// http://www.webomelette.com/drupal-webform-submission-presave-hook
foreach($node->webform['components'] as $cid=>$comp) {
if ($comp['form_key']=='the_groups_element') {
$groupselm = $comp['value'];
break;
}
}
if ($groupselm) {
foreach($node->webform['components'] as $cid=>$comp) {
if ($comp['form_key']==$groupselm) $groups_cid = $comp['cid'];
if ($comp['form_key']=='the_foods') $foods_cid = $comp['cid'];
if ($comp['form_key']=='the_mailings') $mailings_cid = $comp['cid'];
}
$group_gids = $submission->data[$groups_cid];
if (!$group_gids) $group_gids=array();
if ($foods_cid!==false && $submission->data[$foods_cid]) {
// remove all current foods
foreach ($group_gids as $gidx=>$group_gid) {
foreach (getFoodGroups() as $foodgroup) {
if ($group_gid==$foodgroup['gid']) {
if ($debug) drupal_set_message('removing foodgroup '.$foodgroup['gid']);
unset($group_gids[$gidx]);
}
}
}
// validate and add submitted regions
$foodsgids = $submission->data[$foods_cid];
if (!is_array($foodsgids)) $foodsgids = array($foodsgids);
foreach ($foodsgids as $foodsgid) {
foreach (getFoodGroups() as $foodgroup) {
if ($foodsgid==$foodgroup['gid']) {
$group_gids[]=$foodsgid;
break; // only one food allowed
}
}
}
}
if ($mailings_cid!==false && $submission->data[$mailings_cid]) {
// just add submitted mailings, dont remove any
$mailinggids = $submission->data[$mailings_cid];
if (!is_array($mailinggids)) $mailinggids = array($mailinggids);
foreach ($mailinggids as $mailinggid) {
foreach (getMailGroups() as $mailing) {
if ($mailinggid==$mailing['gid']) {
if ($debug) drupal_set_message('adding mailing '.$mailing['gid']);
$group_gids[]=$mailinggid;
}
}
}
}
$submission->data[$groups_cid] = array_unique($group_gids);
}
}

MongoDB Database Semaphores and Node.js Process.NextTick()

This may be a very bad idea, or a possible solution we have to a database concurrency problem.
We have a method that is called to do an update of a mongo record. We are seeing some concurrency problems - process A reads the record, process B reads the record, process A makes mods and saves the record, process B makes its mods and saves the record. Because B reads after A but before A writes, it doesn't know about the changes A made, and we lose the data from A.
I'm wondering if we could not use a database semaphore, basically a boolean field on the record. If we read the record at the start of the method and the field is true, it's being edited. At that point, re-call the method using process.nextTick() with the same data. Otherwise, set the semaphore and carry on.
There would still be a bit of time between the read and the save, but it should be/could be faster than what we are doing now.
It would be something like this. Any thoughts? Has anyone done anything like this? Will it even work?
function remove_source(service_id,session, next)
{
var User = Mongoose.model("User");
/* get the user, based on the session user id */
User.findById(session.me,function(err,user_info)
{
if (user_info.semaphore === true)
{
process.nextTick(function() { remove_source(service_id, session, next); });
}
else
{
user_info.semaphore = true;
user_info.save(function(err,user_new)
{
if (err) next(err,user_new);
else continue_on(null,user_new);
});
}
function continue_on(user_new)
{
    // etc.......
}
});
}
Edit: New Code:
The function now looks as follows. I'm doing individual updates to the arrays. This of course means that if the process fails between the first and second update, the data could end up out of sync. I'm thinking that I could simply resave the user object that I retrieved on entry into the function, overwriting my changes. I don't know whether Mongoose/Mongo will skip the save if I have not changed that object; I will have to try and see. Any more thoughts?
var User = Mongoose.model("User");
/* get the user, based on the session user id */
User.findById(session.me,function(err,user_info)
{
if (err)
{
next(err,user_info,null);
return;
}
if (!user_info || user_info.length === 0)
{
next(_e("ACCOUNT_NOT_FOUND"),"user_id: " + session.me);
return;
}
var source_service_info = _.where(user_info.credentials, {"source_service_id": service_id});
var source_service = source_service_info.source_service;
User.findByIdAndUpdate(session.me,{$pull: {"credentials": {"source_service_id": service_id}}},{},function(err,user_credential_removed)
{
if (err)
{
next(err,user_info,null);
return;
}
User.findByIdAndUpdate(session.me,{$pull: {"criteria": {"source_service": source_service}}},{},function(err,user_criteria_removed)
{
if (err)
{
next(err,user_info,null);
return;
}
else
{
next(null,user_criteria_removed);
}
});
});
});
};
The problem with your approach is that it just shortens the time during which the data could be read by a second process, it doesn't eliminate the problem.
The solution to this would be to set your semaphore in the same action as the read. I haven't used Mongoose, but in MongoDB you can use findAndModify to only return a User record if the semaphore is false, and if it is false, in one atomic operation, set the semaphore to true.
If you don't want to use findAndModify, you could first do an update that sets the semaphore to true (or to some specific ID value so you know that it is YOUR semaphore) only if the semaphore is not set. Then, if that update succeeds, you could do the find (perhaps passing your semaphore ID as a criterion in the find). However, findAndModify, if it is available in Mongoose, would do that in one step.
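A rough sketch of that idea in Mongoose (findOneAndUpdate wraps findAndModify; the field names follow the question, the rest is illustrative):

// atomically claim the record: only matches if the semaphore is currently false
User.findOneAndUpdate(
    { _id: session.me, semaphore: false },
    { $set: { semaphore: true } },
    { new: true },                 // return the updated document
    function (err, user_info) {
        if (err) return next(err);
        if (!user_info) {
            // someone else holds the semaphore; retry on the next tick
            return process.nextTick(function () {
                remove_source(service_id, session, next);
            });
        }
        // we own the record now; make the edits, then release the semaphore
        // ... apply changes to user_info ...
        user_info.semaphore = false;
        user_info.save(next);
    }
);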
A variation of that is described here: http://docs.mongodb.org/manual/tutorial/isolate-sequence-of-operations/ where you do a form of optimistic locking that checks that the old values are unchanged before changing them to the new values.
There is a variation on this that uses a separate table to simulate a two-phase commit: http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
Edited: Based on the interchange below, this seems to be a schema and updating issue. The question may become something like: I have some entries in an array, and the ordinal index of those entries relates to some other arrays as well. How do I perform deletes without having mismatches?
Three off-the-top-of-my-head possibilities occur, depending on how frequent this is in the real world vs. QA test scenarios.
Consider adding a deleted flag but keeping the records in the same order. If someone toggles, reuse the same record, but fix however you want.
Use an associative array (JS object) for each element (not a feature of the relational world). If you need an order, add an array that lists the keys in order. Both have syntax to update without touching anything other than what has changed, and will not overwrite changes to different fields.
Use an associative array where the keys are numbers. Actual deletion won't hurt retrieval.
stuff = {}
stuff[1] = {some:'details'}
stuff[2] = {some:'details2'}
The original answer was:
1) Are you making changes to the same field? Make that into an array, push changes onto it, and pop the latest to read the current value.
2) Are you changing different fields, but data is getting trounced? Then there is better syntax to use for the updating: you can update field by field (see the sketch after this list), e.g.
$set: { 'fielda': 'valuea' }
which won't lose edits on other fields.
3) Change your schema.
4) Change the timing of the processes so they don't overlap, or have them work in smaller subsets that you can manage to keep from overlapping.
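For example, a minimal sketch of a field-by-field update with $set in Mongoose (the field name is made up; only the named field is written, so concurrent edits to other fields are preserved):

User.update(
    { _id: session.me },
    { $set: { fielda: 'valuea' } },   // example field only
    function (err, result) {
        if (err) return next(err);
        // ...
    }
);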
I'd like to know, just out of interest, what multiple processes are needed to make updates on the same record? I don't work with anything that looks like that.
