I get the following error:
Connection timeout. No heartbeat received.
when accessing my Meteor app (http://127.0.0.1:3000). The application has been moved to a new PC with the same code base; the server runs fine with no errors, and I can access MongoDB. What would cause the above error?
The problem seems to occur when the collection is larger; however, I have it running on another computer which loads the collections instantaneously. The connection to the SockJS endpoint takes over a minute and grows in size before finally failing:
Meteor's DDP implements SockJS heartbeats, which are used to keep long-polling connections from being closed by proxies (which can be worse). The error is probably due to the DDP heartbeat's default timeout of 15 s: if an operation accesses a large amount of data and takes a long time (in your case over a minute), the heartbeat is blocked by the operation, DDP times out, closes the connection, and then tries to reconnect. This can go on forever, and the process may never complete.
As a workaround, you could try explicitly disconnecting and reconnecting within a short interval, before DDP closes the connection itself, and splitting the database access into shorter chunks so each iteration can pick up where the previous one left off. See if the problem persists with something along these lines (pseudocode):
// Pseudocode: process the data in chunks, reconnecting between iterations
while (cursorCount <= data) {
  Meteor.onConnection(dbOp);
  Meteor.setTimeout(this.disconnect, 1500); // adjust timeout here
  Meteor.reconnect();
  cursorCount++;
}

function dbOp(cursorCount) {
  // database operation here
  // pick up the operation at cursorCount where the last .disconnect() left off
}
Note that while disconnected, all live updating stops as well, but reconnecting explicitly and keeping each blocking operation small might make up for that.
See the discussion of this issue on the Meteor Google Group and on Meteor Hackpad.
We are using the following code to connect to our caches (in-memory and Redis):
settings
.WithSystemRuntimeCacheHandle()
.WithExpiration(CacheManager.Core.ExpirationMode.Absolute, defaultExpiryTime)
.And
.WithRedisConfiguration(CacheManagerRedisConfigurationKey, connectionString)
.WithMaxRetries(3)
.WithRetryTimeout(100)
.WithJsonSerializer()
.WithRedisBackplane(CacheManagerRedisConfigurationKey)
.WithRedisCacheHandle(CacheManagerRedisConfigurationKey, true)
.WithExpiration(CacheManager.Core.ExpirationMode.Absolute, defaultExpiryTime);
It works fine, but sometimes the machine is restarted (automatically by Azure, where we host it) and after the restart the connection to Redis fails with the following exception:
Connection to '{connection string}' failed.
at CacheManager.Core.BaseCacheManager`1..ctor(String name, ICacheManagerConfiguration configuration)
at CacheManager.Core.BaseCacheManager`1..ctor(ICacheManagerConfiguration configuration)
at CacheManager.Core.CacheFactory.Build[TCacheValue](String cacheName, Action`1 settings)
at CacheManager.Core.CacheFactory.Build(Action`1 settings)
According to the Azure Redis Cache FAQ (https://learn.microsoft.com/en-us/azure/redis-cache/cache-faq), section "Why was my client disconnected from the cache?", this can happen after a redeploy.
The questions are:
Is there any mechanism to restore the connection after a redeploy?
Is anything wrong in the way we initialize the connection?
We are sure the connection string is OK.
Most clients (including StackExchange.Redis) reconnect automatically after a connection break. However, your connect timeout setting needs to be large enough for the reconnect to succeed. Remember, you only connect once, so it's fine to give the system enough time to reconnect. A higher connect timeout is especially useful when a burst of connections or reconnections after a blip causes CPU to spike, and some connections would otherwise not complete in time.
In this case, I see RetryTimeout set to 100. If this is the connection timeout, check whether it is in milliseconds: 100 ms is too low. You might want to make it more like 10 seconds (remember, it's a one-time cost, so you want to give it enough time to connect).
tl;dr: mail-listener2 appears to time out, and I want to listen for emails continually without having to restart my script.
I'm using the mail-listener2 package ( https://github.com/chirag04/mail-listener2/ ) in my Node.js project. I would like to continually listen for emails arriving in a particular inbox and then parse them for further processing.
I have the connection established and my parsing all working; however, the IMAP connection appears to time out, or at least becomes unresponsive to newly arriving emails.
As the mail-listener2 package relies on the imap npm package, I have looked through its code and attempted to reduce the IDLE timer so that it sends a request to the IMAP (Gmail) servers every 10 seconds instead of once every 30 minutes.
This indeed improved the situation; however, on waking this morning and checking the logs I see the following:
<= 'IDLE OK IDLE terminated (Success)'
=> 'IDLE IDLE'
<= '+ idling'
=> DONE
<= 'IDLE OK IDLE terminated (Success)'
=> 'IDLE IDLE'
<= '+ idling'
[connection] Ended
[connection] Closed
The connection Ended & Closed messages appear to come from the core imap module. I thought sending an IDLE check would ensure that the disconnect does not happen, but as you can see this is not the case.
I have also tried looking into NOOP, but it appears to cause other issues, with mails being read twice.
I understand that if my timers are too low (e.g. every few seconds) this can cause mails to be parsed repeatedly, because the calls block the server responses, which may be why I am seeing the NOOP issue above.
Rather than going off and experimenting further, I'd like to know whether others have hit this issue and overcome it.
For anyone interested: I've pulled together a bunch of the mail-listener2 forks. Some of them had approaches to the reconnection issue, so I refactored these slightly into a single implementation. I've also pulled in a few other bits not relevant to this issue.
https://www.npmjs.com/package/mail-listener-fixed
https://github.com/Hardware-Hacks/mail-listener-fixed/
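If you stay on the original package, the usual workaround for these silent drops is to restart the listener whenever the connection closes. A minimal sketch, assuming mail-listener2's documented start()/stop() methods and 'server:disconnected' event (credentials and the retry delay are placeholders):

const MailListener = require('mail-listener2');

const mailListener = new MailListener({
  username: 'user@example.com', // placeholder credentials
  password: 'app-password',
  host: 'imap.gmail.com',
  port: 993,
  tls: true,
  mailbox: 'INBOX'
});

mailListener.on('mail', function (mail) {
  // parse the email for further processing here
});

mailListener.on('server:disconnected', function () {
  // the IMAP connection dropped: tear it down and start listening again
  mailListener.stop();
  setTimeout(function () { mailListener.start(); }, 5000);
});

mailListener.start();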
I'm having a very difficult time automatically closing Redis client connections. It is most certainly a problem with the redis package:
(https://www.npmjs.com/package/redis)
Redis will keep connections alive until you close them or until a timeout, which defaults to infinite, is reached.
I'm aware of this:
how do I kill idle redis clients
And even before reading that on SO, I had tried to set the timeout config both via the .conf file and via the command line, but neither worked.
What actually happens is very weird:
If I run CLIENT LIST after the N-second timeout:
The same number of idle connections are still there
The client 'idle' info restarts from 0 (for example, if I set 10 seconds of timeout, after 11 seconds the idle info is 1).
The addr changes: the port number changes, as new connections are created so that the client is not lost.
The client is actually attached to Node; it appears to come from the Node process. If I keep the Node app up, the connections won't die; if I close the app, all connections created by it close. So the redis module is probably 'restarting' the connection.
Any ideas on what I should do to not force the 'resurrection' of a killed client?
Note: I understand I can close the client's connection with quit(), but I must ensure that any idle client that wasn't closed by the application gets cleaned up after some time.
To prevent node_redis from reconnecting, I passed the retry_strategy key to createClient.
It must be a function that returns a non-number value (for example false) or an Error; node_redis then stops retrying.
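A minimal sketch of that, assuming the node_redis createClient options (the error message is illustrative):

const redis = require('redis');

// Returning a non-number (here an Error) from retry_strategy tells node_redis
// not to reconnect; queued commands are rejected with that error instead.
const client = redis.createClient({
  retry_strategy: function (options) {
    return new Error('Reconnection disabled');
  }
});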
I feel like this question would have been asked before, but I can't find one. Pardon me if this is a repeat.
I'm building a service on Node.js hosted in Heroku and using MongoDB hosted by Compose. Under heavy load, the latency is most likely to come from the database, as there is nothing very CPU-heavy in the service layer. Thus, when MongoDB is overloaded, I want to return an HTTP 503 promptly instead of waiting for a timeout.
I'm also using Redis, and Redis has a feature where you can check the number of queued commands (redisClient.command_queue.length). With this feature, I can know right away if Redis is backed up. Is there something similar for MongoDB?
The best option I have found so far is polling the server for status via this command, but (1) I'm hoping for something client side, as there could be spikes within the polling interval that cause problems, and (2) I'm not actually sure what part of the status response I want to act on. That second part brings me to a follow up question...
I don't fully understand how the MongoDB client works with the server. Is one connection shared per client instance (and in my case, per process)? Are queries and writes queued locally or on the server? Or is one connection opened for each query/write, until the database's connection pool is exhausted? If the latter is the case, it seems like I might want to keep an eye on the open connections. Does the MongoDB server return such information at other times, besides when polled for status?
Thanks!
MongoDB connection pool workflow:
Every MongoClient instance has a built-in connection pool. The client opens sockets on demand to support the number of concurrent MongoDB operations your application requires. There is no thread-affinity for sockets.
The client instance opens one additional socket per server in your MongoDB topology for monitoring the server’s state.
The size of each connection pool is capped at maxPoolSize, which defaults to 100.
When a thread in your application begins an operation on MongoDB, if all other sockets are in use and the pool has reached its maximum, the thread pauses, waiting for a socket to be returned to the pool by another thread.
You can increase maxPoolSize:
client = MongoClient(host, port, maxPoolSize=200)
By default, any number of threads are allowed to wait for sockets to become available, and they can wait any length of time. Override waitQueueMultiple to cap the number of waiting threads. E.g., to keep the number of waiters less than or equal to 500:
client = MongoClient(host, port, maxPoolSize=50, waitQueueMultiple=10)
Once the pool reaches its max size, additional threads are allowed to wait indefinitely for sockets to become available, unless you set waitQueueTimeoutMS:
client = MongoClient(host, port, waitQueueTimeoutMS=100)
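Since the question itself is about Node.js rather than PyMongo, roughly the same knobs exist there; a sketch assuming a 3.x-style driver and the standard maxPoolSize / waitQueueTimeoutMS connection-string options (exact option names vary between driver versions):

const MongoClient = require('mongodb').MongoClient;

// Cap the pool and fail fast when it is saturated, so the app can answer
// with a 503 instead of queueing requests indefinitely.
const uri = 'mongodb://localhost:27017/mydb?maxPoolSize=100&waitQueueTimeoutMS=100';

MongoClient.connect(uri, function (err, client) {
  if (err) throw err;
  const db = client.db('mydb');
  // operations that cannot obtain a socket within ~100 ms will now fail
  // with an error you can translate into an HTTP 503
});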
Reference for connection pooling:
http://blog.mongolab.com/2013/11/deep-dive-into-connection-pooling/
I'm running a Redis / Node.js server and got the following error:
[Error: Auth error: Error: ERR max number of clients reached]
My current setup is that I have a connection manager that adds connections until the maximum number of concurrent connections for my Heroku app (256, or 128 per dyno) is reached. Once it is, the manager just hands out an already existing connection. It's ultra fast and it works.
However, last night I got this error and I'm not able to reproduce it. It may be rare, but I'm not sleeping well knowing it's out there, because once the error is thrown my app is no longer reachable.
So my questions would be:
Is that kind of connection manager a good idea?
Would it be better to have the manager wait for 'idle' to be emitted and then close the connection, meaning I would have to re-establish a connection every time a request kicks in (this is what I wanted to avoid)?
How can I stop my app from going down? Should I just flush the connection pool whenever an error occurs?
What are your general strategies for handling multiple concurrent connections with a given maximum?
In case somebody is reading along:
The error was caused by a messed-up node_redis 0.8.x that I deployed to production:
https://github.com/mranney/node_redis/issues/251
I was smart enough to remove the failed connections from the connection pool, but forgot to call .quit() on them, so the connections were out there in the wild but still counted as connections.
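In other words, the fix was simply to quit the client before forgetting about it; a minimal sketch with illustrative names (not the actual connection-manager code):

function removeFromPool(pool, client) {
  client.quit(); // close the underlying connection gracefully first
  const index = pool.indexOf(client);
  if (index !== -1) {
    pool.splice(index, 1); // only then drop it from the pool
  }
}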