Removing listener on removed nodes - node.js

I have a simple scenario where I have attached a listener to a dynamic path in Firebase:
a
.. b
.... c (multiple dynamic nodes)
...... c1
...... c2
ref.child(a).child(b).child(c).on('child_changed',onChildChange);
I am removing some of the c nodes based on certain conditions. Do I need to remove the listener from them, or will it be removed automatically?

No, it will not be removed automatically; detach it with the opposite of the .on call:
ref.child(a).child(b).child(c).off('child_changed',onChildChange);
https://www.firebase.com/docs/web/api/query/off.html
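Note that when you pass a callback to .off(), it must be the same function reference that was passed to .on(); calling .off() with no arguments removes all listeners at that location. A minimal sketch using the paths from the question:
function onChildChange(snapshot) {
  // React to the changed child here.
}

var cRef = ref.child(a).child(b).child(c);
cRef.on('child_changed', onChildChange);

// Later, when this c node is no longer interesting:
cRef.off('child_changed', onChildChange);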

You have to call remove() on a reference to remove its data.
firebase.database().ref('a/b/c').remove();
Child events are used to monitor the items in a list; you can use them to know when data has been added, changed, or removed.
In your case you should use child_removed. Note that listening for this event does not remove any data itself; it only tells you that a removal happened. Whenever you work with lists, it is recommended to use all three child events in conjunction.
firebase.database().ref('a/b/c').on('child_removed', function(data) {
  // Data has been deleted, do something here!
});
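Tying this back to the original question: since listeners are not detached automatically when the data is deleted, you can combine child_removed with .off() to clean up the child_changed listener of a removed c node. A sketch, assuming the onChildChange callback from the question:
firebase.database().ref('a/b').on('child_removed', function(oldSnapshot) {
  // The removed c node no longer needs its child_changed listener.
  firebase.database().ref('a/b').child(oldSnapshot.key).off('child_changed', onChildChange);
});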

Related

Firebase multi-path update set a parent node to null

I am trying to use Firebase Realtime Database multi-path updates.
However, trying to set a parent node to null as below results in an error.
const firebaseUpdate = {}
firebaseUpdate[`user/${uid}`] = null
db.ref().update(firebaseUpdate)
Error: Reference.update failed: First argument contains a path /user/USER_ID
that is ancestor of another path /user/USER_ID/creationTime
I was wondering if there is a way to use multi-path updates to set a parent node with multiple children to null.
I assume I could use the remove() or set() functions, but I'd rather use the multi-path update.
The error message indicates that you're trying to apply two conflicting updates to the database in one operation. As the message says, your update tries to:
write to /user/USER_ID
write to /user/USER_ID/creationTime
The second write is to a child of the first one. Since the order of writes in a multi-location update is unspecified, it's impossible to say what the outcome of the operation would be.
If you want to replace any data that currently exists at /user/USER_ID with the creationTime, you should update it like this:
db.ref().update({
  "/user/USER_ID": { creationTime: Date.now() }
})
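If the goal is instead to delete the user node entirely in a multi-path update, include only the parent path; update() treats a null value as a deletion of the data at that path:
db.ref().update({
  "/user/USER_ID": null
})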

Avoid duplicate entries when inserting Excel or CSV-like entries into a neo4j graph

I have the following .xlsx file:
My software, regardless of its language, will produce the following graph:
My software iterates line by line, and on each iteration executes the following query:
MERGE (A:POINT {x:{xa},y:{ya}}) MERGE (B:POINT {x:{xb},y:{yb}}) MERGE (C:POINT {x:{xc},y:{yc}}) MERGE (A)-[:LINKS]->(B)-[:LINKS]->(C) MERGE (C)-[:LINKS]->(A)
Will this avoid inserting duplicate entries?
According to this question, yes, it will avoid writing duplicate entries.
The query above will match any existing nodes rather than recreating them, so it avoids writing duplicates.
A good rule of thumb is to give each node that may be a duplicate its own MERGE statement, and afterwards write MERGE statements for each relationship between two nodes, as sketched below.
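Applied to the query above, that rule of thumb would look roughly like this (a sketch reusing the original parameter names; the B->C and C->A relationships follow the same pattern):
MERGE (:POINT {x:{xa}, y:{ya}});
MERGE (:POINT {x:{xb}, y:{yb}});
MERGE (:POINT {x:{xc}, y:{yc}});

MATCH (a:POINT {x:{xa}, y:{ya}}), (b:POINT {x:{xb}, y:{yb}})
MERGE (a)-[:LINKS]->(b);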
Update
After some experience: when using asynchronous technologies such as node.js, or even parallel threads, you must ensure that you only read the next line AFTER the previous one has been inserted. The reason is that performing multiple insertions asynchronously can leave you with multiple nodes in your graph that are actually the same one.
In a node.js project of mine I read the Excel file like this:
const iterateWorksheet = function(worksheet, maxRows, row, callback) {
  process.nextTick(function() {
    // Skip the first (header) row
    if (row == 1) {
      return iterateWorksheet(worksheet, maxRows, 2, callback);
    }

    if (row > maxRows) {
      return;
    }

    const alphas = _.range('A'.charCodeAt(0), config.excell.maxColumn.charCodeAt(0));
    let rowData = {};
    _.each(alphas, (column) => {
      column = String.fromCharCode(column);
      const item = column + row;
      const key = config.excell.columnMap[column];
      if (worksheet[item] && key) {
        rowData[key] = worksheet[item].v;
      }
    });

    // The callback performs the insertion into a neo4j db
    return callback(rowData, (error) => {
      if (!error) {
        return iterateWorksheet(worksheet, maxRows, row + 1, callback);
      }
    });
  });
}
As you can see, I only visit the next line once the previous one has been successfully inserted. I have not yet found a way to serialize the inserts the way most conventional RDBMSs do.
For web or server applications, another UNTESTED approach is to use a queue server such as RabbitMQ or similar in order to queue the queries. The code responsible for the insertion then reads from the queue, so the isolation is handled entirely by the queue.
Furthermore, ensure that all inserts for a row run inside a transaction, as in the sketch below.
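A minimal sketch of that last point, assuming the official neo4j-driver package with placeholder connection details (neither appears in the original code; note the newer $param syntax it uses):
const neo4j = require('neo4j-driver');

// Placeholder connection details; adjust for your setup.
const driver = neo4j.driver('bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password'));

// Run all MERGEs for one row atomically inside a single write transaction.
async function insertRow(rowData) {
  const session = driver.session();
  try {
    await session.writeTransaction(tx => tx.run(
      'MERGE (a:POINT {x: $xa, y: $ya}) ' +
      'MERGE (b:POINT {x: $xb, y: $yb}) ' +
      'MERGE (c:POINT {x: $xc, y: $yc}) ' +
      'MERGE (a)-[:LINKS]->(b) ' +
      'MERGE (b)-[:LINKS]->(c) ' +
      'MERGE (c)-[:LINKS]->(a)',
      rowData // assumed to contain xa, ya, xb, yb, xc, yc
    ));
  } finally {
    await session.close();
  }
}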

How do I identify specific entity within a FlxGroup from FlxG.collide?

How do I make it so that when a bullet from the bullet group collides with an enemy from the enemy group, only the two that hit each other are affected?
I tried doing this (in my PlayState):
if (FlxG.collide(bullet, enemy)) {
    bullet.kill();
    enemy.kill();
}
But the only thing this succeeded in doing was killing the entire group. How do I kill only the ones involved in the collision?
In the HaxeFlixel API docs:
collide(?ObjectOrGroup1:FlxBasic, ?ObjectOrGroup2:FlxBasic, ?NotifyCallback:Dynamic->Dynamic->Void):Bool
so I think you can use something like:
FlxG.collide(
  groupBullets,
  groupEnemies,
  function(bullet:FlxObject, enemy:FlxObject):Void {
    enemy.kill();
    bullet.kill();
  }
);
You want to pass in a notification callback:
https://github.com/HaxeFlixel/flixel/blob/24529ac96d4ad49a5f0c7e64799d0197cee9049e/flixel/FlxG.hx#L395
So something like this is what you want:
FlxG.collide(bulletGroup, enemyGroup, collideBulletEnemy);

function collideBulletEnemy(bullet:FlxObject, enemy:FlxObject):Void
{
    bullet.kill();
    enemy.kill();
}
Some more explanation:
The collide() function in flixel lets you pass either an object or a group as either parameter, and tells you whether those two things collide. In the case of two objects, you can directly follow that test with logic operating on those two objects. But if one of the arguments is a group, the test alone doesn't tell you which members collided, so you need to supply a callback of your own to get that specific information.

Drupal 6 - is node_submit() needed when saving node?

I'm trying to fix a problem in some legacy code which generates nodes of the custom content type "show", but only if a node of the same type with the same title doesn't already exist. The code looks like:
$program = node_load(array('title' => $xml_node->program_title, 'type' => 'show'));
if (!$program) {
    $program = new stdClass();
    $program->type = 'show';
    ...
    node_submit($program);
    node_save($program);
}
So the script first tries to load a node of the 'show' content type with that specific title, and if it fails it creates one.
The problem is that when it's called multiple times in a short period (inside a loop), it creates duplicate nodes: two shows with the same title created in the same second?!?
What could the problem be here?
I was looking at examples of how to save a node in Drupal 6. Some of them don't even call node_submit(). Is that call needed? If so, do I have to pass node_save() whatever node_submit() returned? Or does node_load() fail to load the existing node for some reason? Does some cache have to be cleared or something?
As far as I know, having used node_save() to create nodes programmatically, there is no need for the node_submit() function.
The reason duplicate nodes are created is that node_load() fires before the updates to the node_load() cache have completed. Try adding:
node_load(FALSE, NULL, TRUE);
after node_save($program). This will clear the node_load() cache.
see:
https://api.drupal.org/comment/12084#comment-12084

Azure RoleInstance Id when scaling out, RoleEnvironmentTopologyChange not fired

Will the first instance deployed always end with a zero, like "xxxx_IN_0"?
When scaling out to X instances, will the next instances always get 1, 2, 3, 4 as the last number? (I think so.)
What happens when I scale down again? I read that one of the instances is removed at random, so when scaling down to 1 instance I can't assume I know which ID is still running.
Has anyone played with the IDs while scaling up and down and knows how this behaves?
The reason I'm asking is that I have some extra logic that I want to run on exactly one of the instances, no less and no more. If I can assume that "xxx_IN_0" is always present, then I can do it with a simple check that the last character of the ID is zero. If not, I am considering checking all the IDs, and if the current instance has the lowest one, it will do its magic.
In the latter case, is there an event I can monitor to know when scaling up or down is done?
Update
From the answer:
if (RoleEnvironment.IsAvailable)
{
    RoleEnvironment.Changed += RoleEnvironment_Changed;
}

void RoleEnvironment_Changed(object sender, RoleEnvironmentChangedEventArgs e)
{
    var change = e.Changes.OfType<RoleEnvironmentTopologyChange>().FirstOrDefault();
    if (change != null)
    {
        Trace.TraceInformation("RoleEnvironmentTopologyChange at RoleName '{0}'", change.RoleName);
    }
}
I do not get any information in my trace log when I scale up and down.
An internal endpoint has to be defined for the events to be triggered:
<InternalEndpoint name="InternalEndpoint1" protocol="http" />
http://blogs.msdn.com/b/windowsazure/archive/2011/01/04/responding-to-role-topology-changes.aspx
You should listen to the RoleEnvironment.Changed event which occurs when the service configuration is changed.
When you receive this event, check the Changes property for an instance of RoleEnvironmentTopologyChange. This change is reported when the number of instances changes.
To determine which instance should run the extra logic, you could examine the list of roles and from there find all instances. Sort the instances by ID and select the first one to be the special one.
However, if it is critical that only a single server run this special logic at any time, you will need a more stable solution.
