API doc for the JointJS library is here: http://www.jointjs.com/api
I'm using the DEVS plugin for Elements with ports.
I need to restrict the number of connections from a port to a single one.
Once a link is made from a port, the user shouldn't be able to start another connection from the same port unless the existing connection is removed.
Is this possible without code changes in the library itself?
I was not able to find a hook/entry point to implement this requirement, even after looking into the API doc and the code itself.
Any help or pointers are appreciated.
PS:
Unfortunately, I'm not good at Backbone at the moment.
It was a matter of setting magnet="passive" on the port in question, I guess. I just don't know how to do it (the graph is dynamic; there are no predefined links between elements).
I've been struggling with this all day. Setting the magnets to passive was not a good enough solution for me. What I finally ended up with, after digging through the source, is using the validateMagnet function of the paper object. I get the port from the magnet, then get all the outbound links from the source model. If any of the links use the same port, I reject the validation. Here's the code:
validateMagnet: function(cellView, magnet) {
    // Prevent links from ports that already have a link
    var port = magnet.getAttribute('port');
    var links = graph.getConnectedLinks(cellView.model, { outbound: true });
    var portLinks = _.filter(links, function(o) {
        return o.get('source').port == port;
    });
    if (portLinks.length > 0) return false;
    // Note that this is the default behaviour. Just showing it here for reference.
    // Disable linking interaction for magnets marked as passive (see below `.inPorts circle`).
    return magnet.getAttribute('magnet') !== 'passive';
},
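For reference, validateMagnet is an option of the paper, so the function above is passed in when the paper is constructed. A minimal sketch, assuming a standard graph/paper setup (the container element and dimensions are placeholders):
var graph = new joint.dia.Graph();
var paper = new joint.dia.Paper({
    el: document.getElementById('paper'), // assumed container
    width: 800,
    height: 600,
    model: graph,
    validateMagnet: function(cellView, magnet) {
        // ... the function body shown above ...
        return magnet.getAttribute('magnet') !== 'passive';
    }
});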
It was as simple as getting the element from the graph and setting the right attribute on it.
var source = graph.getCell(sourceId);
source.attr('.outPorts circle/magnet', 'passive')
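To lift the restriction again when the existing link is deleted (as the original requirement asks), here is a rough, untested sketch of the reverse step, assuming the same graph and port selector:
// Sketch: re-activate the source port once its link is removed.
graph.on('remove', function(cell) {
    if (cell.isLink() && cell.get('source') && cell.get('source').id) {
        var source = graph.getCell(cell.get('source').id);
        if (source) {
            source.attr('.outPorts circle/magnet', true);
        }
    }
});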
To limit inbound and outbound ports to one connection each, check the existing connected links when validating the potential connection:
new dia.Paper({
    validateConnection: (cellViewS, magnetS, cellViewT, magnetT) => {
        return !Boolean([
            // Find existing outbound links from the source element.
            ...graph.getConnectedLinks(cellViewS.model, { outbound: true, inbound: false })
                .filter(link => link.get('source').port === magnetS.getAttribute('port')),
            // Find existing inbound links to the target element.
            ...graph.getConnectedLinks(cellViewT.model, { outbound: false, inbound: true }),
        ].length)
    },
})
For context, I am developing a synthetic monitoring tool using Node.js and Puppeteer.
For each step of a defined scenario, I capture a screenshot, a waterfall, and performance metrics.
My problem is with the waterfall: I previously used puppeteer-har, but this package is not able to capture requests outside of a navigation.
Therefore I use this piece of code to capture all the interesting requests:
const {harFromMessages} = require('chrome-har');
// Event types to observe for waterfall saving (probably overkill, I just subscribe to all Page and Network events)
const observe = [
'Page.domContentEventFired',
'Page.fileChooserOpened',
'Page.frameAttached',
'Page.frameDetached',
'Page.frameNavigated',
'Page.interstitialHidden',
'Page.interstitialShown',
'Page.javascriptDialogClosed',
'Page.javascriptDialogOpening',
'Page.lifecycleEvent',
'Page.loadEventFired',
'Page.windowOpen',
'Page.frameClearedScheduledNavigation',
'Page.frameScheduledNavigation',
'Page.compilationCacheProduced',
'Page.downloadProgress',
'Page.downloadWillBegin',
'Page.frameRequestedNavigation',
'Page.frameResized',
'Page.frameStartedLoading',
'Page.frameStoppedLoading',
'Page.navigatedWithinDocument',
'Page.screencastFrame',
'Page.screencastVisibilityChanged',
'Network.dataReceived',
'Network.eventSourceMessageReceived',
'Network.loadingFailed',
'Network.loadingFinished',
'Network.requestServedFromCache',
'Network.requestWillBeSent',
'Network.responseReceived',
'Network.webSocketClosed',
'Network.webSocketCreated',
'Network.webSocketFrameError',
'Network.webSocketFrameReceived',
'Network.webSocketFrameSent',
'Network.webSocketHandshakeResponseReceived',
'Network.webSocketWillSendHandshakeRequest',
'Network.requestWillBeSentExtraInfo',
'Network.resourceChangedPriority',
'Network.responseReceivedExtraInfo',
'Network.signedExchangeReceived',
'Network.requestIntercepted'
];
At the start of the step:
// list of events for converting to HAR
const events = [];
client = await page.target().createCDPSession();
await client.send('Page.enable');
await client.send('Network.enable');
observe.forEach(method => {
    client.on(method, params => {
        events.push({ method, params });
    });
});
At the end of the step:
waterfall = await harFromMessages(events);
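As a side note, the resulting object is a plain HAR structure, so it can be written straight to disk for use in DevTools or any HAR viewer (a small sketch; the file name is made up):
const fs = require('fs');
// Sketch: persist the generated waterfall as a .har file.
fs.writeFileSync('waterfall.har', JSON.stringify(waterfall), 'utf-8');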
It works well for navigation events, and also for navigation inside a web application.
However, the web application I'm trying to monitor has iframes containing the main content.
I would like to see the iframes' requests in my waterfall.
So, a few questions:
Why doesn't Network.responseReceived (or any other event) capture these requests?
Is it possible to capture such requests?
So far I've read the DevTools protocol documentation and found nothing I could use.
The closest to my problem I found is this question :
How can I receive events for an embedded iframe using Chrome Devtools Protocol?
My guess is that I have to enable the Network domain for each iframe I encounter.
I haven't found any way to do this. If there is a way to do it with the DevTools protocol, I should have no problem implementing it with Node.js and Puppeteer.
Thanks for your insights!
EDIT 18/08:
After more searching on the subject, mostly about out-of-process iframes, lots of people on the internet point to this response:
https://bugs.chromium.org/p/chromium/issues/detail?id=924937#c13
The answer in question states:
Note that the easiest workaround is the --disable-features flag.
That said, to work with out-of-process iframes over DevTools protocol,
you need to use Target [1] domain:
Call Target.setAutoAttach with flatten=true;
You'll receive Target.attachedToTarget event with a sessionId for the iframe;
Treat that session as a separate "page" in chrome-remote-interface. Send separate protocol messages with additional sessionId field:
{id: 3, sessionId: "", method: "Runtime.enable", params: {}}
You'll get responses and events with the same "sessionId" field which means they are coming from that frame. For example:
{sessionId: "", method: "Runtime.consoleAPICalled",
params: {...}}
However, I'm still not able to implement it.
I'm trying this, mostly based on Puppeteer:
const events = [];
const targets = await browser.targets();
const nbTargets = targets.length;
for (var i = 0; i < nbTargets; i++) {
    console.log(targets[i].type());
    if (targets[i].type() === 'page') {
        client = await targets[i].createCDPSession();
        await client.send("Target.setAutoAttach", {
            autoAttach: true,
            flatten: true,
            windowOpen: true,
            waitForDebuggerOnStart: false // is set to false in pptr
        });
        await client.send('Page.enable');
        await client.send('Network.enable');
        observe.forEach(method => {
            client.on(method, params => {
                events.push({ method, params });
            });
        });
    }
}
But I still don't get my expected output for navigation in a web application inside an iframe.
However, I am able to capture all the requests during the step where the iframe is loaded.
What I'm missing are the requests that happen outside of a proper navigation.
Does anyone have an idea how to integrate the Chromium response above into Puppeteer? Thanks!
I was looking on the wrong side all this time.
The Chrome network events are correctly captured, as I would have seen if I had checked the events variable earlier.
The problem comes from the chrome-har package that I use in:
waterfall = await harFromMessages(events);
The package expects the page and iframe main events to be present in the same batch of events as the requests. Otherwise the request "can't be mapped to any page at the moment".
Since some steps of my scenario are navigations within the same web application (i.e. no navigation event), I didn't have these events, so chrome-har couldn't map the requests and therefore produced an empty .har.
Hope it can help someone else; I messed up the debugging on this one...
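For anyone who still needs the iframe targets' events (per the Chromium response quoted above), here is an untested sketch of the flat-session approach in Puppeteer. CDPSession.connection() and Connection.session() are Puppeteer APIs, but check them against your Puppeteer version:
// Sketch: auto-attach to out-of-process iframes and funnel their
// Network events into the same `events` array used by chrome-har.
const client = await page.target().createCDPSession();
await client.send('Page.enable');
await client.send('Network.enable');
await client.send('Target.setAutoAttach', {
    autoAttach: true,
    flatten: true,
    waitForDebuggerOnStart: false
});
client.on('Target.attachedToTarget', async ({ sessionId, targetInfo }) => {
    if (targetInfo.type !== 'iframe') return;
    // In flatten mode, Puppeteer exposes the child session on the connection.
    const session = client.connection().session(sessionId);
    await session.send('Network.enable');
    observe.forEach(method => {
        session.on(method, params => events.push({ method, params }));
    });
});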
I am trying to control who can see events for a particular page, so I am determining and storing the IDs of the users who have access to each page. In channels.js, I can add users to the new page-level channel in app.on('connection') by iterating through the pages like this: app.channel('page-' + pageId).join(connection)
The problem is that a user doesn't start getting broadcasts from that page until after they refresh the browser and re-connect.
What I want is for all connections of an allowed user to start getting broadcasts when the page is created. Is there a way to do that in channels.js, or can I tell it who to start broadcasting to in a hook for the page creation?
Editing to add the last thing I tried. "Users-pages" is an associative entity that links Users and Pages.
app.service('users-pages').on('created', userPage => {
    const { connections } = app.channel(app.channels).filter(connection =>
        connection.user.id === userPage.userId
    )
    app.channel('page-' + userPage.pageId).join(connections)
})
The keeping channels updated documentation should have the answer you are looking for. It uses the users service, but it can be adapted for pages accordingly, which could look like this:
app.service('pages').on('created', page => {
    // Assuming there is a reference of `users` on the page object
    page.users.forEach(user => {
        // Find all connections for this user
        const { connections } = app.channel(app.channels).filter(connection =>
            connection.user._id === user._id
        );
        app.channel('page-' + page._id).join(connections);
    });
});
Using app.channel('page-' + page._id).join(connections) wasn't working for me. But looping through each connection works fine:
connections.forEach(connection => {
    app.channel('page-' + userPage.pageId).join(connection)
})
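Putting the two together, the users-pages handler from the question would then look like this (same userPage shape as above):
app.service('users-pages').on('created', userPage => {
    // Find every live connection belonging to the newly allowed user.
    const { connections } = app.channel(app.channels).filter(connection =>
        connection.user.id === userPage.userId
    );
    // Join each connection individually.
    connections.forEach(connection => {
        app.channel('page-' + userPage.pageId).join(connection);
    });
});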
Cheers guys and gals.
Having some problems with $batching requests to SharePoint from SPFx.
Some background: the SP structure has one site collection with lots of subsites. Each subsite has a list whose name is identical on all subsites. I need to access all of those lists.
A normal SPHttpClient call gives me the URLs of all of the sites. So far so good.
The plan was then to $batch the calls to get the data from the lists. Unfortunately, I only get an answer from one of the calls; the rest of the batched calls give me "InvalidClientQueryException". If I change the order of the calls, it seems like only the first call succeeds.
const spBatchCreationOptions: ISPHttpClientBatchCreationOptions = {
    webUrl: absoluteUrl
};
const spBatch: SPHttpClientBatch = spHttpClient.beginBatch(spBatchCreationOptions);
// Add three calls to the batch
const dan1 = spBatch.get("<endpoint1>", SPHttpClientBatch.configurations.v1);
const dan2 = spBatch.get("<endpoint2>", SPHttpClientBatch.configurations.v1);
const dan3 = spBatch.get("<endpoint3>", SPHttpClientBatch.configurations.v1);
// Execute the batch
spBatch.execute().then(() => {
    dan1.then((res1) => {
        return res1.json().then((res10) => {
            console.log(res10);
        });
    });
    dan2.then((res2) => {
        return res2.json().then((res20) => {
            console.log(res20);
        });
    });
    dan3.then((res3) => {
        return res3.json().then((res30) => {
            console.log(res30);
        });
    });
});
So in this case only the dan1 call succeeds. If, however, I change the second call to an endpoint identical to the first, they both succeed.
I can't really wrap my head around this, so if someone has any input it would be much appreciated.
//Dan
Make sure that your endpoints always target the same site within one batch. You cannot mix different sites in a single batch; in that case, only the first call(s), the ones from the same site, will succeed.
To overcome that, you might switch to a search call if you want to retrieve information (which you can do through the same site URL).
See my blog post on that for further information.
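For illustration, a rough sketch of such a search call with SPHttpClient. The KQL query text and the selected managed properties here are placeholders; restricting the query to your particular lists is the part you would have to tune:
// Sketch: one search query instead of a cross-site $batch.
const searchUrl = absoluteUrl +
    "/_api/search/query?querytext='contentclass:STS_ListItem'" +
    "&selectproperties='Title,Path,ListID'";
spHttpClient.get(searchUrl, SPHttpClient.configurations.v1)
    .then(response => response.json())
    .then(result => {
        const rows = result.PrimaryQueryResult.RelevantResults.Table.Rows;
        console.log(rows);
    });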
I'm using amqp.node for my RabbitMQ access in my Node code. I'm trying to use either the publish or sendToQueue method to include some metadata with my published message (namely timestamp and content type), using the options parameter.
But whatever I pass to options is completely ignored. I think I'm missing some formatting or a field name, but I cannot find any reliable documentation (beyond the one provided here, which does not seem to do the job).
Below is my publish function code:
var publish = function(queueName, message) {
    let content;
    let options = {
        persistent: true,
        noAck: false,
        timestamp: Date.now(),
        contentEncoding: 'utf-8'
    };
    if (typeof message === 'object') {
        content = Buffer.from(JSON.stringify(message));
        options.contentType = 'application/json';
    }
    else if (typeof message === 'string') {
        content = Buffer.from(message);
        options.contentType = 'text/plain';
    }
    else { // message is already a buffer?
        content = message;
    }
    return Channel.sendToQueue(queueName, content, options); // Channel defined and opened elsewhere
};
What am I missing?
Update:
It turns out that if you use a ConfirmChannel, you must provide the callback function as the last parameter; otherwise, the options object is ignored. So once I changed the code to the following, I started seeing the options correctly:
Channel.sendToQueue(queueName, content, options, (err, result) => {...});
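For completeness, a minimal sketch of the ConfirmChannel variant end to end, assuming an open connection (the queue name is a placeholder):
// Sketch: on a ConfirmChannel, the broker's ack/nack arrives in the callback,
// which must be the last argument for the options to be honored.
connection.createConfirmChannel().then(function(channel) {
    channel.sendToQueue('some.queue', Buffer.from('hello'), {
        persistent: true,
        contentType: 'text/plain',
        timestamp: Date.now()
    }, function(err, ok) {
        if (err) console.error('Message was nacked:', err);
        else console.log('Message was confirmed');
    });
});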
Somehow, I can't seem to get your example publish code to work, though I don't see anything particularly wrong with it.
But I was able to modify a version of my own amqplib intro code, and got it working with your options just fine.
Here is the complete code for my example:
// test.js file
var amqplib = require("amqplib");
var server = "amqp://test:password@localhost/test-app";
var connection, channel;
function reportError(err) {
    console.log("Error happened!! OH NOES!!!!");
    console.log(err.stack);
    process.exit(1);
}
function createChannel(conn) {
    console.log("creating channel");
    connection = conn;
    return connection.createChannel();
}
function sendMessage(ch) {
    channel = ch;
    console.log("sending message");
    var msg = process.argv[2];
    var message = Buffer.from(msg);
    var options = {
        persistent: true,
        noAck: false,
        timestamp: Date.now(),
        contentEncoding: "utf-8",
        contentType: "text/plain"
    };
    channel.sendToQueue("test.q", message, options);
    return channel.close();
}
console.log("connecting");
amqplib.connect(server)
    .then(createChannel)
    .then(sendMessage)
    .then(process.exit, reportError);
To run this, open a command line and do:
node test.js "example text message"
After running that, you'll see the message show up in your "test.q" queue (assuming you have that queue created) in your "test-app" vhost.
Here's a screenshot of the resulting message from the RMQ Management plugin:
side notes:
I recommend not using sendToQueue. As I say in my RabbitMQ Patterns email course / ebook:
It took a while for me to realize this, but I now see the "send to queue" feature of RabbitMQ as an anti-pattern.
Sure, it's built in to the library and protocol. And it's convenient, right? But that doesn't mean you should use it. It's one of those features that exists to make demos simple and to handle some specific scenarios. But generally speaking, "send to queue" is an anti-pattern.
When you're a message producer, you only care about sending the message to the right exchange with the right routing key. When you're a message consumer, you care about the message destination - the queue to which you are subscribed. A message may be sent to the same exchange, with the same routing key, every day, thousands of times per day. But, that doesn't mean it will arrive in the same queue every time.
As message consumers come online and go offline, they can create new queues and bindings and remove old queues and bindings. This perspective of message producers and consumers informs the nature of queues: postal boxes that can change when they need to.
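To make that concrete, publishing through an exchange with amqplib looks roughly like this (the exchange name, type, and routing key are made up for the example):
// Sketch: publish to an exchange with a routing key instead of sendToQueue.
channel.assertExchange('app.events', 'topic', { durable: true })
    .then(function() {
        channel.publish('app.events', 'user.created', Buffer.from('hello'), {
            persistent: true,
            contentType: 'text/plain',
            timestamp: Date.now()
        });
    });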
I also recommend not using amqplib directly. It's a great library, but it leaves a lot to be desired in terms of usability. Instead, look for a good library on top of amqplib.
I prefer wascally, by LeanKit. It's a much easier abstraction on top of amqplib and provides a lot of great features and functionality.
Lastly, if you're struggling with other details in getting RMQ up and running with Node.js, designing your app to work with it, etc., check out my RabbitMQ For Devs course - it goes from zero to hero, fast. :)
This may help others: the key name to use for the content type is contentType in the JavaScript code. The RabbitMQ web GUI, however, uses content_type as the key name. The same option goes by different key names, so make sure to use the right one in the right context.
I can get a room's client list with this code in socket.io 0.9:
io.sockets.clients(roomName)
How can I do this in socket.io 1.0?
Consider this rather more complete answer linked in a comment above on the question: https://stackoverflow.com/a/24425207/1449799
The clients in a room can be found at
io.nsps[yourNamespace].adapter.rooms[roomName]
This is an associative array whose keys are socket IDs. In our case, we wanted to know the number of clients in a room, so we did Object.keys(io.nsps[yourNamespace].adapter.rooms[roomName]).length
In case you haven't seen/used namespaces (like this guy [me]), you can learn about them here: http://socket.io/docs/rooms-and-namespaces/ (importantly: the default namespace is '/')
Updated (esp. for @Zettam):
Check out this repo to see this working: https://github.com/thegreatmichael/socket-io-clients
Using @ryan_Hdot's link, I made a small temporary function in my code, which avoids maintaining a patch. Here it is:
function getClient(roomId) {
    var res = [],
        room = io.sockets.adapter.rooms[roomId];
    if (room) {
        for (var id in room) {
            res.push(io.sockets.adapter.nsp.connected[id]);
        }
    }
    return res;
}
If using a namespace:
function getClient(ns, id) {
    return io.nsps[ns].adapter.rooms[id]
}
I use this as a temporary fix for io.sockets.clients(roomId), which becomes getClient(roomId).
EDIT:
Most of the time it is worth considering avoiding this method if possible.
What I do now is usually put each client in its own room (i.e. in a room whose name is its clientID). I find the code more readable that way, and I don't have to rely on this workaround anymore.
Also, I haven't tested this with a Redis adapter.
If you have to, also see this related question if you are using namespaces.
For those of you using namespaces, I made a function too that can handle different namespaces. It's quite similar to nha's answer.
function get_users_by_room(nsp, room) {
    var users = [];
    for (var id in io.of(nsp).adapter.rooms[room]) {
        users.push(io.of(nsp).adapter.nsp.connected[id]);
    }
    return users;
}
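For example, calling it with the default namespace (the room name is made up):
// Hypothetical usage with the default '/' namespace.
var users = get_users_by_room('/', 'room1');
console.log(users.length + ' client(s) in room1');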
As of at least 1.4.5, nha's method doesn't work anymore either, and there is still no public API for getting the clients in a room. Here is what works for me.
io.sockets.adapter.rooms[roomId] returns an object that has two properties, sockets and length. The first is another object whose keys are socket IDs and whose values are booleans:
Room {
sockets:
{ '/#vQh0q0gVKgtLGIQGAAAB': true,
'/#p9Z7l6UeYwhBQkdoAAAD': true },
length: 2 }
So my code to get clients looks like this:
var sioRoom = io.sockets.adapter.rooms[roomId];
if (sioRoom) {
    Object.keys(sioRoom.sockets).forEach(function(socketId) {
        console.log("sioRoom client socket Id: " + socketId);
    });
}
You can see this GitHub pull request for discussion on the topic; however, it seems that functionality has been stripped from the 1.0 pre-release candidate for Socket.IO.