I want to connect from the client side to two different Node servers that are actually running on the same local IP address but on different ports. The problem is that the first io.connect() succeeds but the second one fails. I've read that in order to get it working, the second io.connect() call should include the 'force new connection' option set to true. I tried it, but without much success... Here is a pretty simplified version of my code:
dataSocket = io.connect('https://' + window.document.location.host);
...
socketOut = io.connect(data.url, {'force new connection': true});
Basically, the first connect opens a socket on https://192.168.1.129 (port 443) and the second one on https://192.168.1.129:3000, and it is this last one that fails. Both servers are running and accepting connections during these calls (obviously). The curious thing is that if I replace the private local address with localhost, it works. I'm definitely missing something...
Any suggestions?
It was a certificate issue (using a self-signed certificate for the CA). Just by connecting to https://192.168.1.129:3000 once via a web browser and accepting the risk alert, everything works.
I'm trying to write a test in Node for the behavior of a networking client when it fails to make a TCP connection to a given server. Ideally, I'd like this to be as close as possible to the ECONNREFUSED case rather than some other error like DNS lookup failure, connection closed before receiving a response, etc.
A method I've tried is to make a server that listens by binding to port 0, then close the server, then connect to the port that was chosen when the server listened. This mostly works, but in CI, when many tests are running in parallel, sometimes another test claims the port that was just bound and closed.
If this were C, I could just not set SO_REUSEADDR when binding the port, which should prevent the port from being quickly reused. But as far as I can tell, there's no way in Node to create a listening socket without SO_REUSEADDR.
Any thoughts about achieving this goal? Things I've thought about but not quite gotten to work include:
finding an npm package that lets me call setsockopt to turn off SO_REUSEADDR (though I suspect that once the bind has happened, it's too late?)
finding some other mechanism that isn't net.Server to bind to a port without SO_REUSEADDR
finding a different mechanism of tricking the client into thinking the connection was refused
(That said, some of the tests I'm writing involve "first the connection works and a later connection doesn't", so ideally something that lets me actually have a real server would be great --- i.e. my first idea, somehow!)
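For reference, here is a minimal sketch of the bind-then-close approach described above, using only Node's built-in net module (the helper name is illustrative, and it still has the race where another process can grab the port in the meantime):

const net = require('net');

// Bind to port 0 so the kernel picks a free port, then close the server
// so that a subsequent connection to that port should be refused.
function getRecentlyClosedPort(callback) {
  const server = net.createServer();
  server.listen(0, '127.0.0.1', () => {
    const port = server.address().port;   // kernel-assigned free port
    server.close(() => callback(port));   // port is now (probably) closed
  });
}

getRecentlyClosedPort((port) => {
  const socket = net.connect({ host: '127.0.0.1', port });
  socket.on('error', (err) => {
    console.log(err.code); // usually ECONNREFUSED, unless another test reused the port
  });
});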
I am working on a Node.js app with Socket.io, and I did a test in a single process using PM2 with no errors. Then I moved to our production environment (we use a Google Cloud Compute instance).
I run 3 app processes, and an iOS client connects to the server.
By the way, the iOS client doesn't keep the socket connection. It doesn't send a disconnect to the server, but it gets disconnected and reconnects to the server. This happens continuously.
I am not sure why the server disconnects the client.
If you have any hint or answer for this, I would appreciate it.
That's probably because requests end up on a different machine than the one they originated from.
Straight from Socket.io Docs: Using Multiple Nodes:
If you plan to distribute the load of connections among different processes or machines, you have to make sure that requests associated with a particular session id connect to the process that originated them.
What you need to do:
Enable session affinity, a.k.a. sticky sessions.
If you want to work with rooms/namespaces, you also need a centralised memory store to keep track of namespace information, such as Redis with the Redis adapter.
But I'd advise you to read the documentation piece I posted; things might have changed a bit since the last time I implemented something like this.
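For the rooms/namespaces part, a minimal sketch of wiring up the Redis adapter might look roughly like this (assuming the socket.io-redis package and a Redis instance on localhost; the port and host are illustrative):

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// Let all socket.io processes share room/namespace state through Redis.
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));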
By default, the socket.io client "tests out" the connection to its server with a couple of http requests. If you have multiple servers and those initial http requests don't go to the exact same server each time, then the socket.io connection will never get established properly, will not switch over to webSocket, and will keep attempting to use http polling.
There are two ways to fix this.
You can configure your clients to just assume the webSocket protocol will work. This will initiate the connection with one and only one http connection which will then be immediately upgraded to the webSocket protocol (with socket.io running on top of that). In socket.io, this is a transport option specified with the initial connection.
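On the client, a minimal sketch of that transport option might look like this (the URL is illustrative, and this assumes a socket.io 1.x+ client; older versions spell the option slightly differently):

// Skip the initial http long-polling and connect over webSocket directly.
var socket = io('https://example.com', {
    transports: ['websocket']
});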
You can configure your server infrastructure to be sticky so that a request from a given client always goes back to the exact same server. There are lots of ways to do this depending upon your server architecture and how the load balancing is done between your servers.
If your servers are keeping any client state local to the server (and not in a shared database that all servers access), then you will need even a dropped connection and reconnect to go back to the same server, and sticky connections will be your only solution. You can read more about sticky sessions on the socket.io website here.
Thanks for your replies.
I finally figured out the issue. It was caused by the TTL of the backend service in the Google Cloud Load Balancer. The default TTL was 30 seconds, which made each socket connection disconnect and try to reconnect.
So I updated the value to 3600 seconds, and then I could keep the connection.
My application is running on an EC2 instance. We are using Node.js for the server-side code, and socket.io and Express to connect with the client-side code. I have a requirement to capture the user's browser IP and send it to the server-side code.
I have tried the code below, but it is giving me the server IP details.
io.sockets.on('connection', function (socket) {
    var socketId = socket.id;
    var clientIp = socket.handshake.headers;
    console.log('connection :', socket.request.connection._peername);
    console.log(socket.request.client._peername.address);
    console.log(clientIp);
});
Is there any way to capture the user's browser IP? It would be a great help.
I appreciate your suggestions.
In your client side code, you cannot tell what IP address you will be connecting from.
On the server side (express server), you can easily grab the remote IP address from the request, like so:
console.log(req.connection.remoteAddress);
Note that, just like on any other server, this only tells you where the connection appears to be coming from - the user might be using a VPN, or be behind a corporate firewall, etc., in which case the IP address may not have much meaning. Whether this matters to you depends on why you are trying to collect it (logging or something more meaningful).
Don't forget that if your express app is behind a web server (like nginx), you may need to look at the X-Forwarded-For header instead - see the related S.O. question How to get remote client address.
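In a socket.io connection handler specifically, a minimal sketch might look like this (socket.handshake.address holds the remote address in recent socket.io versions; the X-Forwarded-For handling only matters if a proxy sits in front of the app):

io.sockets.on('connection', function (socket) {
    // Prefer the proxy-supplied header if present; the first entry is the original client.
    var forwardedFor = socket.handshake.headers['x-forwarded-for'];
    var clientIp = forwardedFor
        ? forwardedFor.split(',')[0].trim()
        : socket.handshake.address;   // direct connection: the remote address itself
    console.log('client connected from', clientIp);
});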
We are not yet using nginx; we are just running the server in the background on EC2 with the nohup command. I tried using the line below:
console.log(request.socket.remoteAddress)
It gives me the server-side IP, not the client-side IP. We are accessing the app at :8000/index.html from our local system, and I want to capture the local system's IP. console.log(req.connection.remoteAddress) gives me TypeError: Cannot read property 'remoteAddress' of undefined.
As my app is currently under development, my local computer is temporarily acting as the server. Using the service from no-ip.com, I have managed to establish an internet connection to the Node.js server at my home, which uses socket.io. However, although the HTTP connection is fine, every now and then the socket.io connection fails until I restart the server. I have been investigating the cause of this. I wonder whether, when the dynamic IP of the server changes, the socket.io instance listening on the ports fails. Could someone confirm this for me?
It's definitely not the answer to your question, but I think you're going to like it! Use ngrok: download and install it.
Once it's done:
launch your dev server
open cmd and type:
c:\>ngrok http 3000
3000 is your dev server port, so if it's something else, change it.
This will give you an address like https://xxx232xx.ngrok.io.
Use this address to access your app from any connected device.
One last thing: when you use socket.io with HTTPS domains, change your config and add the address.
Example
var ioSocket = io('https://xxx232xx.ngrok.io', {
    'reconnection delay': 2500,
    'secure': true,
    'max reconnection attempts': 10,
    'reconnection': true
});
Hope it helps!
I created a new Azure Redis Cache, which took almost 5 minutes to finish creating. I'm using the node-redis package; here's my code:
var client = redis.createClient(
    process.env.REDIS_PORT || 6379,
    process.env.REDIS_HOST || '127.0.0.1'
);
if (process.env.REDIS_HOST) {
    client.auth(process.env.REDIS_KEY);
}
Yes, those environment variables are properly set; it just hangs for a while and then raises an error: Redis connection to mycache.redis.cache.windows.net:6380 failed - read ECONNRESET.
Now, when I use redis-cli to try to connect with redis-cli -h myhost -p 6380 -a the-auth-key, it just hangs at the command line and no connection seems to be established, but there's no error either. It's just doing nothing. If I change the port, etc., I get a connection error. So I'm currently wondering what I'm doing wrong.
I've created another Redis cache in a different region and on a different plan (I took the biggest one, with the 99.9% SLA, etc.). Still, no connection is possible.
Any help would be appreciated.
New caches only have the SSL endpoint (port 6380) enabled by default. Some clients (like redis-cli) do not support SSL, so you would need to check whether node-redis supports SSL. You will get errors if you try to connect to the SSL port with a client that doesn't support SSL.
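If your node-redis version does support SSL (newer versions accept a tls option that is passed through to tls.connect), the connection might look roughly like this - a sketch reusing the host and key environment variables from the question:

var redis = require('redis');

// Connect to the SSL endpoint (6380) and pass a tls option so node-redis
// negotiates TLS; servername should match the cache hostname.
var client = redis.createClient(6380, process.env.REDIS_HOST, {
    auth_pass: process.env.REDIS_KEY,
    tls: { servername: process.env.REDIS_HOST }
});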
If you need to use a client that does not support SSL, there are two options. One is to create an SSL tunnel between the local machine and the Redis cache, using an application like stunnel. I know stunnel works well for ad-hoc scenarios like redis-cli, but I'm not sure how it would perform under production load.
The second option is to enable the non-SSL endpoint (port 6379) in the Azure portal. However, we don't recommend this for production caches since your access key and data will all be sent in plaintext.