Socket IO socket ids different on client after updating version (EasyRTC) - node.js

I'm using socket.io with easyrtc to build a p2p video chat. There is a working example at https://github.com/merictaze/enlargify with the following package versions:
"express": "^4.15.2",
"easyrtc": "1.0.x", // easyrtc#1.0.15
"socket.io": "^1.4.5"
The easyrtc logic used is at https://github.com/merictaze/enlargify/blob/master/public/resources/js/app.js
However, if I bump the easyrtc version up to 1.1, the code stops working. I've even tried the beta branch:
"express": "^4.15.2",
"easyrtc": "priologic/easyrtc#beta",
"socket.io": "^1.4.5"
I know this isn't helping much, so on further investigation I found out that it fails at this call:
easyrtc.call(self.partnerId, successCB, failureCB, acceptedCB);
The error code passed to failureCB is
MSG_REJECT_TARGET_EASYRTCID
On the server side the log shows
2017-12-07T07:02:40.477Z - debug - EasyRTC: [enlargify_app][fNhseVCWzi8XXhn5] EasyRTC command received with msgType [offer] undefined
2017-12-07T07:02:40.478Z - warning - EasyRTC: Attempt to request non-existent connection key: '0xv7UpIAlVeAzEedAAAA' undefined
2017-12-07T07:02:40.479Z - warning - EasyRTC: [enlargify_app][fNhseVCWzi8XXhn5] Could not send WebRTC signal to client [0xv7UpIAlVeAzEedAAAA]. They may no longer be online. undefined
However, reverting the easyrtc version in package.json makes it work again, as in the demo here: http://enlargify.herokuapp.com/
I want to update the easyrtc version because of the Safari support in the beta branch; I found the demos working smoothly.
P.S. I did update the socket.io version and replaced the deprecated calls, e.g.
partnerSocket = io.sockets.socket(socket.partnerId);
partnerSocket.emit("disconnect_partner", socket.id);
to
io.to(socket.partnerId).emit("disconnect_partner", socket.id);
Further investigation shows that the socket.id generated on the client side is different from the one on the server. That's why the two peers are unable to connect.
Any idea how I can get the success callback of easyrtc.connect to return the correct socket ID?

Answering my own question here so that if someone else stumbles upon this, they won't waste hours like I did.
The reason the IDs differ between client and server is that prior to easyrtc 1.1 (up to and including easyrtc#1.0.15), easyrtc relied on Socket.IO's ID and used it as the easyrtcid as well. This meant socket.id and easyrtcid could be referenced interchangeably, which is why the old versions worked.
As explained at https://github.com/priologic/easyrtc/issues/185, they changed this behaviour and now generate the easyrtcid from a new pattern. The signaling server (Socket.IO) emits to socket.id, while easyrtc uses the easyrtcid when initiating calls. Hence business logic is needed to pass easyrtcids between peers via the socket in order to make the easyrtc call.
In addition to the above, we also need to tell the easyrtc object to use the socket instance of the signaling server. I followed this example: https://demo.easyrtc.com/demos/demo_instant_messaging_selfconnect.html
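Roughly, the setup looks like this (useThisSocketConnection is the API the selfconnect demo uses; the share_easyrtcid / partner_easyrtcid event names here are placeholders for your own signaling events):
var socket = io.connect();
// Tell easyrtc to reuse our existing signaling socket instead of opening its own.
easyrtc.useThisSocketConnection(socket);
easyrtc.connect("enlargify_app", function(easyrtcid) {
    // Publish our easyrtcid to the partner over our own signaling channel.
    socket.emit("share_easyrtcid", easyrtcid);
}, function(errorCode, errorText) {
    console.error(errorCode, errorText);
});
// Once the partner's easyrtcid arrives, call with it rather than with socket.id.
socket.on("partner_easyrtcid", function(partnerEasyrtcid) {
    easyrtc.call(partnerEasyrtcid, successCB, failureCB, acceptedCB);
});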

Related

Vertx cluster member connection breaks on hazelcast uuid reset

I am working on a project that is composed of multiple Vert.x microservices, where each service runs in a different container on the OpenShift platform. The event bus is used for communication between services.
Sometimes when a request is made via the event bus there is no response, and it fails with the errors below:
[vert.x-eventloop-thread-2] DEBUG io.vertx.core.eventbus.impl.clustered.ConnectionHolder - tx.id=ea60ebe0-1d81-4041-80d5-79cbe1d2a11c Not connected to server
[vert.x-eventloop-thread-2] WARN io.vertx.core.eventbus.impl.clustered.ConnectionHolder - tx.id=97441ebe-8ce9-42b2-996d-35455a5b32f2 Connecting to server 65c9ab20-43f8-4c59-8455-ecca376b71ac failed
Whenever this happens, I can see the error below on the destination server to which the request was made:
message=WARNING: [192.168.33.42]:5701 [cdart] [5.0.3] Resetting local member UUID. Previous: 65c9ab20-43f8-4c59-8455-ecca376b71ac, new: 8dd74cdf-e4c4-443f-a38e-3f6c36721795
Could this be because the reset event raised by Hazelcast is not handled in Vert.x?
Vert.x 4.3.5 is used in this project.
This is a known issue that will be fixed in the forthcoming 4.4 release.

Syncing app state with clients using socketio

I'm running a node server with SocketIO which keeps a large object (app state) that is updated regularly.
All clients receive the object after connecting to the server and should keep it updated in real-time using the socket (read-only).
Here's what I have considered:
1: Emit a delta of changes to the clients using diff after updates (requires dealing with the reliability of delivery and lost updates).
2: Use the diffsync package (however, it allows clients to push changes to the server, and I need updates to be unidirectional, i.e. server --> clients).
I'm confident there should be a readily available solution to deal with this but I was not able to find a definitive answer.
The solution is very easy: modify the server so that it accepts updates only from trusted clients.
let Server = require('diffsync').Server;
let receiveEdit = Server.prototype.receiveEdit;
Server.prototype.receiveEdit = function(connection, editMessage, sendToClient) {
    // checkIsTrustedClient is your own authorization check.
    if (checkIsTrustedClient(connection)) {
        receiveEdit.call(this, connection, editMessage, sendToClient);
    }
};
But note this comment in diffsync's source:
// TODO: implement backup workflow
// has a low priority since `packets are not lost` - but don't quote me on that :P
console.log('error', 'patch rejected!!', edit.serverVersion, '->',
    clientDoc.shadow.serverVersion, ':',
    edit.localVersion, '->', clientDoc.shadow.localVersion);
The second option is to try to find another solution based on jsondiffpatch.
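For the jsondiffpatch route, a rough sketch of a unidirectional sync could look like this (updateState is a hypothetical wrapper you would route every server-side mutation through; clients only ever receive):
const io = require('socket.io')(3000);
const jsondiffpatch = require('jsondiffpatch').create();

let state = { /* large app state */ };

io.on('connection', (socket) => {
    // Each client gets the full state once, then deltas only.
    socket.emit('state:full', state);
});

// Hypothetical wrapper: route every server-side mutation through here.
function updateState(mutate) {
    const before = JSON.parse(JSON.stringify(state));
    mutate(state);
    const delta = jsondiffpatch.diff(before, state);
    if (delta) io.emit('state:delta', delta);
}

// On the client, apply deltas read-only:
//   socket.on('state:delta', (delta) => jsondiffpatch.patch(localState, delta));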

Channels keep increasing for every exchange.publish() in RabbitMQ with node-amqp library

I'm using the node-amqp library for my Node.js project. I also posted the issue to its GitHub project page.
It keeps creating new channels, and they stay idle forever. After an hour there were ~12000 channels. I checked the options for exchange and publish, but so far I'm not even close to a solution.
What's wrong with the code, and/or are there any options/settings on the RabbitMQ server for this issue?
Here is the sample code:
connection.exchange("brcks-wfa", {type: 'direct', durable: true}, function(exchange) {
    setInterval(function() {
        ...
        awS.forEach(function(wc) {
            ...
            nstbs.forEach(function(br) {
                ...
                BUpdate(brnewinfo, function(st) {
                    if (st) {
                        exchange.publish(route, brnewinfo, {contentType: "application/json"});
                    }
                });
            });
            ...
        });
    }, 4000);
});
There is a bug in node-amqp where channels are not closed. The RabbitMQ team no longer recommends using this library; instead they recommend amqp.node, which is a bit more low-level and lets/requires you to handle channels manually.
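For reference, a minimal sketch of the same publish loop on amqp.node (amqplib), reusing a single long-lived channel; the route and brnewinfo values are placeholders standing in for what the original loop builds:
const amqp = require('amqplib');

async function main() {
    const conn = await amqp.connect('amqp://localhost');
    const channel = await conn.createChannel();   // one long-lived channel, reused
    await channel.assertExchange('brcks-wfa', 'direct', { durable: true });

    setInterval(() => {
        const route = 'br.update';                // placeholder routing key
        const brnewinfo = { /* ... as built in the original loop ... */ };
        channel.publish('brcks-wfa', route,
            Buffer.from(JSON.stringify(brnewinfo)),
            { contentType: 'application/json' });
    }, 4000);
}

main().catch(console.error);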

Meteor: “Failed to receive keepalive! Exiting.”

I'm working on a project which uses the npm request package to make requests to an API server. When a response is received, the callback processes it. During this response processing I get the error: Failed to receive keepalive! Exiting. The following code will help you understand:
request({ url: 'http://api-link-from-where-data-is-to-be-fetched' },
    function (err, res, body) {
        // the code for processing the response
    });
Can anybody who knows how to resolve this issue please help?
This might help answer this for you:
https://github.com/meteor/meteor/issues/1302
The last post on that page says:
Note that this is just a behavior of the develop-mode meteor run (and any hosting environment that chooses to turn on the keepalive option, which probably isn't most of them), not a production issue. And in any case, if your Node process is churning CPU for seconds, it's not going to be able to respond to any network traffic.
This post might help you: Meteor error message: "Failed to receive keepalive! Exiting."
Removing autopublish with meteor remove autopublish and then writing my own publish and subscribe functions fixed the problem.
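For illustration, a minimal publish/subscribe pair might look like this (the items publication name and the Items collection are placeholders):
// server/publications.js
Meteor.publish('items', function () {
    // Publish only the documents and fields the client actually needs.
    return Items.find({}, { fields: { name: 1, updatedAt: 1 } });
});

// client/main.js
Meteor.subscribe('items');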

unable to connect to xmpp server using node-xmpp

I'm working on getting node-xmpp working with a Jabber server we have in-house. I was able to get it working with talk.google.com just fine, and I can connect to our internal server with Adium or iChat just fine.
Node v0.6.14
CentOS 6.2 / 2.6.32
node-xmpp 0.3.2
OpenSSL 1.0.0
connect code
var j = new xmpp.Client({
    jid: 'user#domain',
    password: 'pass',
    host: 'chat.domain'
});
After tracing through the code, it seems it gets stuck right after it tries to upgrade the connection to a secure connection. This occurs in starttls.js, in the starttls function.
The pair.on('secure') event is never called, and even after I print out pair after a setTimeout, it still doesn't appear to be authorized. At this point I don't see any data going in or out.
After a long time sitting there (several minutes) it prints out an error that looks like this:
throw arguments[1]; // Unhandled 'error' event
^
Error: 139644497663968:error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error:s23_clnt.c:674:
at CleartextStream._pusher (tls.js:508:24)
at CleartextStream._push (tls.js:334:25)
at SecurePair.cycle (tls.js:734:20)
at EncryptedStream.write (tls.js:130:13)
at Socket.ondata (stream.js:38:26)
at Socket.emit (events.js:67:17)
at TCP.onread (net.js:367:14)
The server is using a self signed cert if that matters.
Any ideas?
Thanks!
This looks like you're sending a TLS handshake when the server isn't expecting it, so the server isn't sending its handshake back.
One possibility is that you're talking old-style TLS (handshake-first) to a server that implements STARTTLS. In your real code, are you setting the legacySSL parameter? Are you sure you're talking to an XMPP server on the target box?
A wireshark trace would give us the data to be able to tell for sure.
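For what it's worth, a hedged sketch of that flag in use (legacySSL is the parameter mentioned above; port 5223 is the conventional legacy-SSL XMPP port and is an assumption here):
var j = new xmpp.Client({
    jid: 'user#domain',
    password: 'pass',
    host: 'chat.domain',
    port: 5223,        // conventional legacy-SSL XMPP port (assumption)
    legacySSL: true    // handshake-first TLS instead of STARTTLS
});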
I was experiencing the same issue: the connection hangs while trying to perform a TLS handshake with one particular Openfire XMPP server installation (though others worked fine).
After nearly losing my mind, I ended up modifying the starttls.js that ships with node-xmpp to use tls.connect(), forcing SSLv3, and to my surprise it worked.
Gist here: https://gist.github.com/jamescoletti/6591173
Hope this is useful to someone.
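The workaround roughly amounts to something like this (a sketch modeled on the newer options form of tls.connect; SSLv3_method only exists on OpenSSL builds that still ship SSLv3, plainSocket stands for the TCP socket being upgraded, and rejectUnauthorized is relaxed only because the server cert is self-signed):
var tls = require('tls');

// Inside node-xmpp's starttls.js, replacing the SecurePair logic:
var cleartext = tls.connect({
    socket: plainSocket,              // the existing TCP socket being upgraded
    secureProtocol: 'SSLv3_method',   // force SSLv3, as in the gist above
    rejectUnauthorized: false         // server uses a self-signed cert
}, function() {
    // Handshake complete; hand `cleartext` back to node-xmpp as the stream.
});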
