I am connecting to a Perforce server through the P4Perl API, and I want to know whether the connection attempt is aborted after a set period of time, or whether I have to handle the timeout myself in my Perl code.
use P4;
my $perforceObject = new P4;
$perforceObject->SetPort('test-1234:8080');
$perforceObject->SetUser('user1234');
# try to connect for 10s or abort unless the connection is aborted automatically
$perforceObject->Connect();
As far as I know, Perforce doesn't have its own built-in timeout and depends instead on the OS's TCP/IP implementation. I'm pretty sure most systems will fail the initial connection before 10 seconds have elapsed, so you should be fine without adding your own kill switch.
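If you do want a hard 10-second cap regardless of what the OS does, one option is to wrap the call in an alarm-based eval. This is only a minimal sketch, and SIGALRM may not interrupt a blocking C-level connect on every platform:

use P4;

my $p4 = new P4;
$p4->SetPort('test-1234:8080');
$p4->SetUser('user1234');

eval {
    local $SIG{ALRM} = sub { die "p4 connect timed out\n" };
    alarm 10;                                    # give Connect() at most 10 seconds
    $p4->Connect() or die "failed to connect\n";
    alarm 0;                                     # cancel the pending alarm on success
};
warn "Could not connect: $@" if $@;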
We are using the following code to connect to our caches (in-memory and Redis):
settings
.WithSystemRuntimeCacheHandle()
.WithExpiration(CacheManager.Core.ExpirationMode.Absolute, defaultExpiryTime)
.And
.WithRedisConfiguration(CacheManagerRedisConfigurationKey, connectionString)
.WithMaxRetries(3)
.WithRetryTimeout(100)
.WithJsonSerializer()
.WithRedisBackplane(CacheManagerRedisConfigurationKey)
.WithRedisCacheHandle(CacheManagerRedisConfigurationKey, true)
.WithExpiration(CacheManager.Core.ExpirationMode.Absolute, defaultExpiryTime);
It works fine, but sometimes the machine is restarted (automatically by Azure, where we host it), and after the restart the connection to Redis fails with the following exception:
Connection to '{connection string}' failed.
at CacheManager.Core.BaseCacheManager`1..ctor(String name, ICacheManagerConfiguration configuration)
at CacheManager.Core.BaseCacheManager`1..ctor(ICacheManagerConfiguration configuration)
at CacheManager.Core.CacheFactory.Build[TCacheValue](String cacheName, Action`1 settings)
at CacheManager.Core.CacheFactory.Build(Action`1 settings)
According to the Azure Redis Cache FAQ (https://learn.microsoft.com/en-us/azure/redis-cache/cache-faq), in the section "Why was my client disconnected from the cache?", this can happen after a redeploy.
The questions are:
Is there any mechanism to restore the connection after a redeploy?
Is there anything wrong with the way we initialize the connection?
We are sure the connection string is OK.
Most clients (including StackExchange.Redis) usually connect / re-connect automatically after a connection break. However, your connect timeout setting needs to be large enough for the re-connect to happen successfully. Remember, you only connect once, so it's alright to give the system enough time to reconnect. A higher connect timeout is especially useful when a burst of connections or re-connections after a blip causes CPU to spike, so that some connections might not complete in time.
In this case, I see RetryTimeout as 100. If this is the connection timeout, check whether it is in milliseconds. 100 milliseconds is too low; you might want to make this more like 10 seconds (remember, it's a one-time thing, so you want to give it enough time to connect).
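As a hedged illustration only (the host name and values below are made up, and the connection-string options come from StackExchange.Redis, which CacheManager uses underneath), the connect timeout can be raised in the connection string itself and the retry settings given more room:

// Hypothetical connection string; connectTimeout is in milliseconds, and
// abortConnect=false lets the multiplexer keep retrying in the background
// instead of failing outright at construction time.
var connectionString =
    "mycache.redis.cache.windows.net:6380,ssl=True,password=<secret>,abortConnect=false,connectTimeout=10000";

var cache = CacheManager.Core.CacheFactory.Build<object>("cache", settings => settings
    .WithRedisConfiguration(CacheManagerRedisConfigurationKey, connectionString)
    .WithMaxRetries(10)        // more attempts than the 3 used above
    .WithRetryTimeout(1000)    // presumably in ms: wait 1 s between attempts instead of 100 ms
    .WithJsonSerializer()
    .WithRedisCacheHandle(CacheManagerRedisConfigurationKey, true));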
I'm having a very difficult time getting Redis client connections to close automatically. It is almost certainly a problem with the redis package:
(https://www.npmjs.com/package/redis)
Redis will keep connections alive until you close them or until a timeout, which defaults to infinite, is reached.
I'm aware of this:
how do I kill idle redis clients
Even before reading that on SO, I had tried to set the timeout config both in the .conf file and from the command line, but neither of them worked.
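For reference, those two ways of setting the server-side idle timeout look like this (using the 10-second value from the experiment below):

# in redis.conf
timeout 10

# or at runtime, from the command line
redis-cli CONFIG SET timeout 10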
What actually happens is very weird:
If I run CLIENT LIST after the N seconds:
The same number of idle connections are still there
The client 'idle' info restarts from 0 (for example, if I set 10 seconds of timeout, after 11 seconds the idle info is 1).
The addr changes: the port number changes because it is creating new connections so as not to lose that client.
The client is actually attached to Node; it seems to be a new Node process. If I keep the Node app up, the connections won't die. If I close the app, all connections created by it close. So the redis module is probably 'restarting' the connection.
Any ideas on what I should do so as not to force the 'resurrection' of a killed client?
Note: I understand I can close the client's connection with quit(), but I must ensure that any idle client that wasn't closed by the application gets cleaned up after some time.
To prevent node_redis from reconnecting, I passed the retry_strategy key to createClient. It must be a function that returns false or throws an Error.
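A minimal sketch of that answer, assuming the node_redis v2/v3 createClient options (the connection details are placeholders):

const redis = require('redis');

const client = redis.createClient({
  host: '127.0.0.1',
  port: 6379,
  // A non-number return value stops node_redis from reconnecting, so a client
  // killed by the server-side `timeout` stays dead instead of being silently
  // re-created.
  retry_strategy: function (options) {
    return false;
  }
});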
We have a set of micro-services that I'd like to load test in a manner that is consistent with how they are accessed.
After settling on Locust as my tool of choice, I found out that the underlying HTTP transport uses connection pooling, because I keep seeing messages like these:
WARNING/requests.packages.urllib3.connectionpool: Connection pool is full, discarding connection:
As I understand it, this message is telling me that it discards a connection from the pool that it manages. I assume that it still creates a new connection, and adds it in the place of the one that it discarded.
Is that what it does?
Does it do this without the connection failing?
I don't think that our micro-services keep any sessions open. The connections are made, from a far end, to our services, which provide a result, and then the connection is closed. So the test is handling connections in a way that's different from how the services are actually used. Is there a way to get the requests library not to use a pool, and instead go through the work of setting up and tearing down every connection made through it?
Is there any reason why we wouldn't want to test this way?
If it is preferable to test with a connection pool, how should I anticipate the difference in load when it's not done this way in production?
That's correct. Unless you set the urllib3 pool to blocking, it will generate more connections than the pool is configured to hold, as needed, and then will discard them once the request is done.
This often happens when you have more threads using a pool than the number of connections the pool is configured to store. urllib3 takes a maxsize parameter (defaults to 1) which you can set to the number of threads you're running. For requests, you'll need to make a custom adapter to do this. See:
https://stackoverflow.com/a/18845952/187878
https://laike9m.com/blog/requests-secret-pool_connections-and-pool_maxsize,89/
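A rough sketch of that custom-adapter approach (the pool sizes and URL here are hypothetical; size pool_maxsize to roughly the number of concurrent Locust users or threads):

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()

# pool_connections = number of per-host pools to cache,
# pool_maxsize     = connections kept open per host.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=50)
session.mount('http://', adapter)
session.mount('https://', adapter)

response = session.get('http://my-service.local/health')  # hypothetical endpoint
print(response.status_code)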
That said, it's merely a warning which some people ignore, so it's not a failure. But if this happens a lot in production, that probably means you should tweak your configuration because creating/discarding new connections all the time is fairly costly.
In general, it's a good idea to re-use connections for this reason.
My suggestions would be in this order:
Re-use connections, or
Increase the number of connections that get pooled to match the number of threads, or
Disable the warning if you'd rather not deal with it.
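If you go with the last option, the message comes from the requests.packages.urllib3.connectionpool logger shown in the warning above, so raising that logger's level is enough (a minimal sketch):

import logging

# Silence only the pool-full warnings; other urllib3 messages still get through.
logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(logging.ERROR)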
I get the following error:
Connection timeout. No heartbeat received.
when accessing my Meteor app (http://127.0.0.1:3000). The application has been moved over to a new PC with the same code base; the server runs fine with no errors, and I can access MongoDB. What would cause the above error?
The problem seems to occur when the collection is larger; however, I have it running on another computer which loads the collections instantaneously. The connection to the SockJS socket takes over a minute and grows in size before finally failing.
Meteor's DDP implements SockJS's heartbeats, which are used for long-polling. This is probably due to the DDP heartbeat's default timeout of 15s. If you access a large amount of data and it takes a long time (in your case, a minute), the operation blocks long enough for the heartbeat to time out (the heartbeat exists to keep proxies from closing idle connections, which can be worse), and DDP then tries to reconnect. This can go on forever, and the process may never complete.
As a hypothetical workaround, you can try disconnecting and reconnecting within a short window, before DDP closes the connection, and dividing the database access into shorter chunks that you pick up on each iteration, then see whether the problem persists:
// Rough sketch only -- not drop-in code.
while (cursorCount <= data) {
  Meteor.onConnection(dbOp);
  Meteor.setTimeout(Meteor.disconnect, 1500); // adjust the timeout here
  Meteor.reconnect();
  cursorCount++;
}

function dbOp(cursorCount) {
  // database operation here
  // pick up the operation at cursorCount where the last disconnect() left off
}
However, while disconnected, all live-updating will stop as well, though explicitly reconnecting might make up for the smaller blocking operations.
See a discussion of this issue on the Meteor Google group and on Meteor Hackpad.
I'm running a redis / node.js server and got the following error:
[Error: Auth error: Error: ERR max number of clients reached]
My current setup is that I have a connection manager that adds connections until the maximum number of concurrent connections for my Heroku app (256, or 128 per dyno) is reached. Once that happens, it just hands out an already existing connection. It's ultra fast and it works.
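A rough sketch of that kind of manager (all names here are hypothetical, and the cap would be the 128 per dyno described above):

var redis = require('redis');

var MAX_CONNECTIONS = 128;   // per dyno
var pool = [];
var next = 0;

function getConnection() {
  if (pool.length < MAX_CONNECTIONS) {
    var client = redis.createClient();   // connection details omitted
    pool.push(client);
    return client;
  }
  // Cap reached: hand out an existing connection, round-robin.
  next = (next + 1) % pool.length;
  return pool[next];
}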
However, last night I got this error and I'm not able to reproduce it. It may be a rare error, but I'm not sleeping well knowing it's out there, because once the error is thrown, my app is no longer reachable.
So my questions would be:
is that kind of a connection manager a good idea?
would it be a better idea to use that manager to wait for 'idle' to be called and then close the connection, meaning I'd have to re-establish a connection every time a request kicks in (this is what I wanted to avoid)
how can I stop my app from going down? Should I just flush the connection pool whenever an error occurs?
What are your general strategies for handling multiple concurrent connections with a given maximum?
In case somebody is reading along:
The error was caused by a messed-up redis 0.8.x client that I deployed to production:
https://github.com/mranney/node_redis/issues/251
I was smart enough to remove the failed connections from the connection pool, but forgot to call '.quit()' on them, so the connections were out there in the wild but still counted as connections.
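For anyone hitting the same thing, a minimal sketch of the cleanup that was missing (removeFromPool is a hypothetical helper for whatever pool structure you use):

client.on('error', function (err) {
  removeFromPool(client);   // stop handing this client out
  client.quit();            // send QUIT so the server actually frees the slot
  // client.end();          // or force-close the socket if QUIT can't be delivered
});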