Reading the highest priority child from a list in Firebase - priority-queue

I want to get the child with the highest priority in a Firebase list; how do I do so without having to keep all the children on the client?
One way I can think of is to maintain a linked-list-like data structure right in Firebase:
head: 0
nodes:
  0: {next: 10}, priority=1
  10: {next: 4}, priority=23
  4: {next: 2}, priority=45
  2: {next: null}, priority=67
Then let's say I update 0's priority to 46; this triggers an on('child_moved') event with prevChildName=4, and I then update the head pointer and the next pointers of nodes 4 and 0 accordingly...
The other way is to keep a list of IDs in 1 place and update it accordingly:
list: "0, 10, 4, 2"
But as the list grows, each update will be very costly in terms of bandwidth.
Is there a better way to do this?

Use limit() and endAt() to get the highest priority. endAt with no arguments starts at the highest value.
var ref = FB.limit(2).endAt();
Here's a fiddle showing it in action. Fun fun fun! : )

I might be missing something, but this sounds like exactly what our query functions are meant for. To get the item with the lowest priority number, you should be able to do:
ref.startAt().limit(1).once('child_added', function(childSnapshot) {
  // childSnapshot is the item with the lowest priority number (item 0 in your example).
});
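Putting the two answers together for the original "highest priority" case, a minimal sketch, assuming the legacy Firebase JS client used above (endAt()/limit() and DataSnapshot.name()); the database URL and the 'queue' path are placeholders, not taken from the question:
// Assumes the legacy Firebase JS client is loaded; URL and path are placeholders.
var queueRef = new Firebase('https://example.firebaseio.com/queue');
// endAt() with no arguments anchors the query at the top of the priority range,
// and limit(1) keeps only that single child synced to the client.
queueRef.endAt().limit(1).on('child_added', function (snapshot) {
  console.log('current head:', snapshot.name(), 'priority:', snapshot.getPriority());
});
Because the query is limited to one child, a new child_added (paired with a child_removed for the old head) should fire whenever a different child becomes the highest-priority one, so the client never has to hold the whole list.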

Related

How can I get an element by index from an OrderedTable in Nim?

Nim has OrderedTable; how can I get an element by its index, and not by its key?
If it's possible, is this an efficient operation, like O(log n) or better?
import tables
let map = {"a": 1, "b": 2, "c": 3}.toOrderedTable
Something like
map.getByIndex(1) # No such method
P.S.
I'm currently using both a seq and a Table to provide both keyed and indexed access, and wonder if they could be replaced by an OrderedTable:
type IndexedMap = ref object
  list*: seq[float]
  map*: Table[string, float]
There is no direct index access to ordered tables because of their internal structure. The typical way to access the elements in order is:
import tables
let map = {"a": 1, "b": 2, "c": 3}.toOrderedTable
for f in map.keys:
  echo $f
Basically, accessing the keys iterator. If you click through the source link in the documentation, you reach the actual iterator code:
let L = len(t)
forAllOrderedPairs:
  yield t.data[h].key
And if you follow the implementation of the forAllOrderedPairs template (it's recommended to use an editor with jump-to-implementation capabilities to inspect such code more easily):
if t.counter > 0:
  var h = t.first
  while h >= 0:
    var nxt = t.data[h].next
    if isFilled(t.data[h].hcode):
      yieldStmt
    h = nxt
No idea about performance there, but it won't be as fast as accessing a simple list/array, because the internal structure of OrderedTable contains a hidden data field with the actual keys and values, and it requires an extra conditional check to verify that the entry is actually being used. This implementation detail is probably a compromise to avoid reshuffling the whole list after a single item deletion.
If your accesses are infrequent, using the iterator to find the value might be enough. If benchmarking shows it's a bottleneck, you could try freezing the keys/values iterator into a local list and using that instead, as long as you don't want to mutate the OrderedTable further.
Or return to your original idea of keeping a separate list.

Python 3.x indexing and slicing

I have been confused by Python indexing and slicing of data structures (lists, etc.). Let me explain the problem. Suppose I have a Python list as shown below.
examplelist = ['ram', 'everest', 'apple', 32, 'cat', 'covid', 'vaccine', 19]
Example One
>> examplelist[-5 : 7 : -1]
>> [ ]
The result is an empty list, as shown above. The logic explained in the Python tutorial websites I have checked is: the start index (-5) indicates the item 32. The stop index 7 means the slice stops one item before it, i.e. at the item 'vaccine'. The step size is -1, which means we need to move right to left. But since our start item is 32 and our end item is 'vaccine', there won't be any items if we move leftwards from 32. Hence the resulting empty list. OK, agreed. Now let's see another example.
Example Two
>> examplelist[::-1]
>> [19, 'vaccine', 'covid', 'cat', 32, 'apple', 'everest', 'ram']
This is quite commonly used to reverse a list in Python. If we use the same logic as in Example One, how can this produce a reversed list? Logically, with a start index of 0 (meaning the start item is 'ram'), a stop index running all the way to the end of the list, and a step size of -1, here too we would need to move leftwards from the start item, i.e. 'ram'. By that logic this should also yield no items, but this example seems to work differently. Why? Is reversing a list an exception to the logic behind Python indexing/slicing?
Now let's see another example below.
Example Three
>> examplelist[:-3:-1]
>> [19, 'vaccine']
In this example the start index is 0 (so we begin at the first item, i.e. 'ram'), the stop index is -3, which refers to one item before it, i.e. the item 'cat', and with a step size of -1 we move leftwards from the start item towards the stop item. If we follow that logic there is no item to pick when moving leftwards from the start item to the end item. But the answer is quite different.
My Confusion
I feel that there is no coherent logic that works across all the examples. Why does the same logic fail for different problems? My understanding is that there is always a standard logical explanation in coding. I tried to figure out a standard rule that explains all types of indexing/slicing of Python lists, but with the examples listed above my confusion persists. Is there a hole in my understanding, or is there a standard explanation to this problem that I have not understood yet? Someone please rescue me.
Using a negative step reverses the default start and end values for a slice: an omitted start defaults to the last index, and an omitted end defaults to the position just before the first element.
These are all equal:
examplelist[-5:7:-1]
examplelist[len(examplelist)-5 : 7 : -1]
examplelist[3 : 7 : -1]
And so are all these:
examplelist[::-1]
examplelist[-1::-1]
examplelist[len(examplelist)-1 : -len(examplelist)-1 : -1]
examplelist[7 : -9 : -1]
As well as these:
examplelist[:-3:-1]
examplelist[len(examplelist)-1 : -3 : -1]
examplelist[7 : 5 : -1]
Once you have positive start and end values, it becomes clear what is happening.
In the second example the end value must be negative, because there is no other way to represent the position before the first element.

The optimal data structure for filtering for objects that match criteria

I'll try to present the problem as generally as I can, and a response in any language will do.
Suppose there are a few sets of varying sizes, each containing arbitrary values to do with a category:
var colors = ["red", "yellow", "blue"] // 3 items
var letters = ["A", "B", "C", ... ] // 26 items
var digits = [0, 1, 2, 3, 4, ... ] // 10 items
... // each set has fixed amount of items
Each object in the master list I already have (which I want to restructure somehow to optimize searching) has properties whose values are each drawn from one of these sets, like so:
var masterList = [
  { id: 1, color: "red", letter: "F", digit: 5, ... },
  { id: 2, color: "blue", letter: "Q", digit: 0, ... },
  { id: 3, color: "red", letter: "Z", digit: 3, ... },
  ...
]
The purpose of the search would be to create a new list of acceptable objects from the master list. The program would filter the master list by given search criteria that, for each property, contains a list of acceptable values.
var criteria = {
  color: ["red", "yellow"],
  letter: ["A", "F", "P"],
  digit: [1, 3, 5],
  ...
};
I'd like to think that some sort of tree would be most applicable. My understanding is that it would need to be balanced, so the root node would be the "median" object. I suppose each level would be defined by one of the properties so that as the program searches from the root, it would only continue down the branches that fit the search criteria, each time eliminating the objects that don't fit given the particular property for that level.
However, I understand that many of the objects in this master list will have matching property values. This connects them in a graph-like manner that could perhaps be conducive to a speedy search.
My current search algorithm is fairly intuitive and can be done with just the master list as it is. The program
1. iterates through the properties in the search criteria,
2. for each property, iterates over the master list, eliminating the objects that don't have an acceptable value for that property, and
3. eventually removes all the objects that don't fit the criteria.
There is surely some quicker filtering system that involves a more organized data structure.
Where could I go from here? I'm open to a local database instead of another data structure I suppose - GraphQL looks interesting. This is my first Stack Overflow question, so my apologies for any bad manners 😶
As I don't have context on the number of sets, nor on the number of elements in each set, I will suggest some very small changes which will at least make things relatively faster for you.
To keep things mathematical, I will define a few terms here:
number of sets - n
size of the master list - k
size of each property in the search criteria - p
So, from the algorithm that I believe you are using, you are doing n iterations over the search criteria, because there can be n possible keys in the search criteria.
Then in each of these n iterations, you are doing p iterations over the allowed values of that particular set. Finally, in each of these np iterations you are iterating over the master list, with k iterations, and checking whether this record should be allowed or not.
Thus, in the average case, you are doing this in O(npk) time complexity.
So, I won't suggest changing much here.
The best you can do is change the values in the search criteria to sets (hash sets) instead of keeping them as lists, and then iterate over the master list. Follow this Python code:
def is_possible(criteria, master_list_entry):
    for key, value in master_list_entry.items():  # O(n)
        # Skip properties the criteria don't constrain (e.g. 'id'); reject the
        # entry if a constrained property has a value outside the allowed set.
        if key in criteria and value not in criteria[key]:  # O(1) average
            return False
    return True

def search(master_list, criteria):
    ans = []
    for each_entry in master_list:  # O(k)
        if is_possible(criteria, each_entry):  # O(n), check above
            ans.append(each_entry)
    return ans
Just call the search function, and it will return the filtered master list.
Regarding the change, convert your search criteria to:
criteria = {
    color: {"red", "yellow"},  # This is a set, instead of a list
    letter: {"A", "F", "P"},
    digit: {1, 3, 5},
    ...
}
As you can see, I have mentioned the complexities along each line; thus we have reduced the problem to O(nk) in the average case.
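Since the question's data is in JavaScript, a rough equivalent of the same idea using Set for the O(1) average membership checks might look like the sketch below; the property names mirror the question, and masterList is assumed to be the array from the question.
var criteria = {
  color: new Set(["red", "yellow"]),
  letter: new Set(["A", "F", "P"]),
  digit: new Set([1, 3, 5])
};

// An entry is acceptable if, for every constrained property, its value is in the allowed set.
function isPossible(criteria, entry) {
  return Object.keys(criteria).every(function (key) { // O(n) constrained properties
    return criteria[key].has(entry[key]);             // O(1) average per check
  });
}

// Filter the master list down to the acceptable objects: O(nk) on average overall.
function search(masterList, criteria) {
  return masterList.filter(function (entry) {         // O(k) entries
    return isPossible(criteria, entry);
  });
}

var matches = search(masterList, criteria);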

ArangoDB - Aggregate sum of descendant attributes in DAG

I have a bill of materials represented in ArangoDB as a directed acyclic graph. The quantity of each part in the bill of materials is represented on the edges while the part names are represented by the keys of the nodes. I'd like to write a query which traverses down the DAG from an ancestor node and sums the quantities of each part by its part name. For example, consider the following graph:
Widget --(Qty: 2)--> Gadget
Gadget --(Qty: 1)--> Stuff
Gadget --(Qty: 4)--> Thing
Widget --(Qty: 1)--> Thing
Widget contains two Gadgets, each of which contains one Stuff and four Things. Widget also contains one Thing. Thus I'd like to write an AQL query which traverses the graph starting at Widget and returns:
{
  "Gadget": 2,
  "Stuff": 2,
  "Thing": 9
}
I believe collect aggregate may be my friend here, but I haven't quite found the right incantation yet. Part of the challenge is that all descendant quantities of a part need to be multiplied by their parent quantities. What might such a query look like that efficiently performs this summation on DAGs of depths around 10 layers?
Three possible options come to mind:
1.- Return the values from the path and then summarize the data in the app server:
FOR v, e, p IN 1..2 OUTBOUND 'test/4719491' testRel
    RETURN {v: v.name, p: p.edges[*].qty}
This returns Gadget [2], Stuff [2,1], Thing [2,4], Thing [1].
2.- Enumerate the edges on the path to get the results directly:
FOR v, e, p IN 1..2 OUTBOUND 'test/4719491' testRel
    LET e0 = p.edges[0].qty
    LET e1 = NOT_NULL(p.edges[1].qty, 1)
    COLLECT itemName = v.name AGGREGATE items = SUM(e0 * e1)
    RETURN {itemName: itemName, items: items}
This correctly returns Gadget 2, Stuff 2, Thing 9.
This obviously requires that you know the number of levels beforehand.
3.- Write a custom function "multiply" similar to the existing "SUM" function so that you can multiply the values of an array. The query would be similar to this:
LET vals = (
    FOR v, e, p IN 1..2 OUTBOUND 'test/4719491' testRel
        RETURN {itemName: v.name, items: SUM(p.edges[*].qty)}
)
FOR val IN vals
    COLLECT itemName = val.itemName AGGREGATE items = SUM(val.items)
    RETURN {itemName: itemName, items: items}
So your function would replace the SUM in the inner sub-select. Here is the documentation on custom functions
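For option 3, a minimal sketch of what registering such a "multiply" function could look like, assuming ArangoDB's @arangodb/aql/functions module run from arangosh; the name myfunctions::math::multiply is a placeholder chosen for illustration:
// A minimal sketch, run in arangosh; the function name is a placeholder.
var aqlfunctions = require("@arangodb/aql/functions");

aqlfunctions.register("myfunctions::math::multiply", function (values) {
  // Multiply all edge quantities along the path, treating missing values as 1.
  return values.reduce(function (acc, qty) {
    return acc * (qty === null || qty === undefined ? 1 : qty);
  }, 1);
}, true); // deterministic
The inner sub-select would then call MYFUNCTIONS::MATH::MULTIPLY(p.edges[*].qty) in place of SUM(p.edges[*].qty).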

Nodejs Order of properties guarantee

Note: I am using Node.js, which may or may not have subtle differences from the vanilla ECMAScript standard.
I have always heard that when using a for-each loop to iterate over the properties of an object, I should not count on the properties being in the same order (even though in practice I have never seen a case where the properties were iterated over in a different order). In production, we have what I believe to be a typo where an object is created with an overwritten property.
var obj = {
  a: 'a property',
  b: 'another property',
  c: 'yet another property',
  a: 'woah we have another a?'
}
In Nodejs, am I guaranteed that the second a property containing the string 'woah we have another a?' will ALWAYS shadow the first a property containing the string 'a property'?
(even though in practice I have never seen a case where the objects were iterated over in a different order)
The following should give you a different order, in V8 at least.
var obj = {
  "first": "first",
  "2": "2",
  "34": "34",
  "1": "1",
  "second": "second"
};
for (var i in obj) { console.log(i); }
// Order listed:
// "1"
// "2"
// "34"
// "first"
// "second"
As discussed here
ECMA-262 does not specify enumeration order. The de facto standard is to match insertion order, which V8 also does, but with one exception: V8 gives no guarantees on the enumeration order for array indices (i.e., a property name that can be parsed as a 32-bit unsigned integer). Remembering the insertion order for array indices would incur significant memory overhead.
Though the above says the enumeration order is not specified, that is about iteration after the object has been created. I think we can safely assume that insertion order should remain consistent, as there is no point for any engine to do otherwise and alter the insertion order.
var obj = {
  "first": "first",
  "2": "2",
  "34": "34",
  "1": "1",
  "second": "second",
  2: "two"
};
// gives result
{ '1': '1',
  '2': 'two',
  '34': '34',
  first: 'first',
  second: 'second' }
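To address the duplicate-key part of the question directly: in an object literal, property definitions are evaluated top to bottom, and a later duplicate key overwrites the value set by an earlier one, so the second a always wins (note that before ES2015, a duplicate key in an object literal was a SyntaxError in strict mode). A quick check in Node.js, reusing the object from the question:
var obj = {
  a: 'a property',
  b: 'another property',
  c: 'yet another property',
  a: 'woah we have another a?'
};

console.log(obj.a);                   // 'woah we have another a?'
console.log(Object.keys(obj).length); // 3 -- 'a' appears only once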
