use rocket_db_pools::{deadpool_redis, Database};
#[derive(Database)]
#[database("redis_tests")]
pub struct TestRedisPool(deadpool_redis::Pool);
As above, we are connecting to Redis using deadpool_redis through rocket_db_pools. The dependencies are declared in Cargo.toml, and the connection URL, which uses TLS (rediss://) for the cluster endpoint, is specified in Rocket.toml.
The cluster consists of one shard with three nodes in it.
Cargo.toml
[dependencies]
async-std = "1.11.0"
futures = "0.3.21"
http = "0.2.6"
serde = "1.0.136"
redis = { version = "0.21.5", features = ["tls", "tokio-native-tls-comp", "async-std-tls-comp", "cluster"] }
[dependencies.rocket]
version = "0.5.0-rc.1"
features = ["json"]
[dependencies.rocket_db_pools]
version = "0.1.0-rc.2"
features = ["deadpool_redis"]
Rocket.toml
[default.databases.redis_tests]
url = 'rediss://<cluster endpoint>:6379'
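For reference, here is a minimal sketch of how such a pool is attached and how the failing HMGET call might look from a handler. The route, hash key, and field names are hypothetical placeholders, not taken from the original code:
use rocket::{get, launch, routes};
use rocket_db_pools::{Connection, Database};
use rocket_db_pools::deadpool_redis::redis::AsyncCommands;

#[get("/user/<id>")]
async fn user(mut db: Connection<TestRedisPool>, id: &str) -> String {
    // Passing several fields makes redis-rs issue HMGET under the hood;
    // "user:{id}", "name" and "email" are placeholder names.
    let values: Vec<Option<String>> = db
        .hget(format!("user:{}", id), vec!["name", "email"])
        .await
        .unwrap_or_default();
    format!("{:?}", values)
}

#[launch]
fn rocket() -> _ {
    rocket::build()
        .attach(TestRedisPool::init())
        .mount("/", routes![user])
}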
When I try to run HMGET in this state, I get the following error. It appears that the initial cluster connection is established, but connecting to the subsequent nodes fails. How can this be resolved?
error log
redis.hmget err: An error was signalled by the server: 853 <node address>:6379
It works fine when a shard of the cluster contains a single node; with more than one node, the error above occurs.
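One way to narrow the problem down could be to bypass rocket_db_pools/deadpool entirely and issue the same HMGET through redis-rs's own cluster client, which the "cluster" feature in Cargo.toml already enables. This is only a diagnostic sketch: the endpoint is the same placeholder as above, the key and fields are hypothetical, the cluster client in redis 0.21 is synchronous, and its TLS handling for cluster node connections may differ from the plain client:
use redis::cluster::ClusterClient;
use redis::Commands;

fn main() -> redis::RedisResult<()> {
    // Placeholder endpoint, as in Rocket.toml above.
    let nodes = vec!["redis://<cluster endpoint>:6379/"];
    let client = ClusterClient::open(nodes)?;
    let mut conn = client.get_connection()?;

    // Same HMGET shape as the failing call; key and field names are placeholders.
    let values: Vec<Option<String>> = conn.hget("user:1", vec!["name", "email"])?;
    println!("{:?}", values);
    Ok(())
}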
Related
I'm new to Cassandra. Our application uses cassandra-driver-core 3.1 to connect to Apache Cassandra. We have a requirement to connect to DSE 6.8. Can we use the cassandra-driver-core 3.11.2 classes to connect to DSE 6.8 instead of using the DSE-specific classes?
import java.net.InetSocketAddress;
import java.util.LinkedList;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

Cluster.Builder builder = Cluster.builder();
LinkedList<InetSocketAddress> localLinkedList = new LinkedList<InetSocketAddress>();
String host = "****";
String port = "9042";
localLinkedList.add(new InetSocketAddress(host, Integer.valueOf(port)));
// Register the contact point (host and port) with the builder
builder.addContactPointsWithPorts(localLinkedList);
// Build the cluster and connect to the "people" keyspace
Session session = builder.build().connect("people");
DISCLAIMER: I know this question has been asked multiple times before; none of the answers helped me.
In our app we implemented SSL for extra security, but we sometimes get the mentioned error. The error is not persistent, meaning that the same route can succeed at one time and throw this error at another.
Nothing is displayed on the server logs.
This is how I setup SSL configuration in the app:
(client.httpClientAdapter as DefaultHttpClientAdapter).onHttpClientCreate = (HttpClient _client) {
  // Trust only the certificate bundled with the app
  SecurityContext sc = SecurityContext();
  sc.setTrustedCertificatesBytes(sslCert.buffer.asInt8List());
  HttpClient httpClient = HttpClient(context: sc);
  // Reject every certificate that fails validation against the trusted one
  httpClient.badCertificateCallback = (X509Certificate cert, String host, int port) => false;
  httpClient.maxConnectionsPerHost = 10;
  return httpClient;
};
client being a Dio instance.
Our server is built using Node.js, with Apache, on CentOS.
AWS Lambda supports connection pooling, as shown in the link.
For my requirement I will use a trigger function with Kafka, but requests to the database will happen so frequently that they could use a high percentage of CPU. To avoid that, I want to use connection pooling, or some other way to reuse the same database context instance.
Creating a new C# SqlConnection object on each function invocation has no bad performance implications, because ADO.NET already manages a SQL connection pool for you. When you close a connection, it is just put back into the pool, which means you can use a connection like this:
using System.Data.SqlClient;

const string connectionString = "..."; // Better get this from a Key Vault

[FunctionName("MyFunction")]
public static void Run(...)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open(); // Get a connection object from the pool
        // Execute queries on connection
    }
}
Similarly, for Python and pyodbc, connection pooling is enabled by default, and a call to pyodbc.connect() can make use of the pool:
from typing import List

import azure.functions as func
import pyodbc

connectionstring = "DRIVER=[...];SERVER=[...]"

def main(events: List[func.EventHubEvent]):
    # pyodbc pools connections by default, so this draws from the pool
    connection = pyodbc.connect(connectionstring)
    with connection:
        with connection.cursor() as cursor:
            cursor.execute(f"SELECT [...] FROM [...]")
            columns = [column[0] for column in cursor.description]
            result = [dict(zip(columns, row)) for row in cursor.fetchall()]
            print(result)
    # Returns the underlying connection to pyodbc's pool
    connection.close()
There is also a JavaScript example here.
Edit: Changed this answer after a comment by @kiranpradeep
In my .NET Framework 4.6.1 application I am using StackExchange.Redis.StrongName 1.2.6 to connect to Azure Redis.
This is the code
public RedisContext(string connectionString = null)
{
    if (connectionString == null) return;

    Lazy<ConfigurationOptions> lazyConfiguration
        = new Lazy<ConfigurationOptions>(() => ConfigurationOptions.Parse(connectionString));
    var configuration = lazyConfiguration.Value;
    configuration.SslProtocols = SslProtocols.Tls12; // just added
    configuration.AbortOnConnectFail = false;

    Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(configuration));
    _connectionMultiplexer = lazyConnection.Value;

    LogProvider.IsDisabled = true;

    var connectionEndpoints = _connectionMultiplexer.GetEndPoints();
    _lockFactory = new RedisLockFactory(connectionEndpoints.Select(endpoint => new RedisLockEndPoint
    {
        EndPoint = endpoint,
        Password = configuration.Password,
        Ssl = configuration.Ssl
    }));
}
In Azure, I have changed the Redis resource to use TLS1.2 and in code I have added this line:
configuration.SslProtocols = SslProtocols.Tls12;//just added
And now, nothing works anymore. This is the error I get in Application Insights:
Error connecting to Redis. It was not possible to connect to the redis server(s); ConnectTimeout
I have also tried to add ",ssl=True,sslprotocols=tls12" to the redis connection string, but with the same result.
Try referencing StackExchange.Redis instead of StackExchange.Redis.StrongName. I have done that in a few of my projects and now it works. However, some third-party packages still reference the StrongName package rather than the normal Redis one. StackExchange.Redis.StrongName is now deprecated: https://github.com/Azure/aspnet-redis-providers/issues/107. I assume you are trying to connect to Azure Redis in relation to them stopping TLS 1.0 and 1.1 support?
I have set up a Redis cluster in Google Compute Engine using the click-to-deploy option. Now I want to connect to this Redis server from my Node.js code using 'ioredis'. Here is my code to connect to a single instance of Redis:
var Redis = require("ioredis");
var store = new Redis(6379, 'redis-ob0g');//to store the keys
var pub = new Redis(6379, 'redis-ob0g');//to publish a message to all workers
var sub = new Redis(6379, 'redis-ob0g');//to subscribe a message
var onError = function (err) {
console.log('fail to connect to redis ',err);
};
store.on('error',onError);
pub.on('error',onError);
sub.on('error',onError);
And it worked. Now I want to connect to Redis as a cluster, so I changed the code to:
/**
 * List of servers in the replica set
 * @type {{port: number, host: string}[]}
 */
var nodes = [
  { port: port, host: hostMaster },
  { port: port, host: hostSlab1 },
  { port: port, host: hostSlab2 }
];
var store = new Redis.Cluster(nodes);//to store the keys
var pub = new Redis.Cluster(nodes);//to publish a message to all workers
var sub = new Redis.Cluster(nodes);//to subscribe a message channel
Now it throws this error:
Here is my Redis cluster in my Google Compute console:
OK, I think there is some confusion here.
A Redis Cluster deployment is not the same thing as a number of standard Redis instances protected by Sentinel. They are two very different things.
The click-to-deploy option of GCE deploys a number of standard Redis instances protected by Sentinel, not Redis Cluster.
ioredis can handle both kinds of deployment, but you have to use the corresponding API. Here, you were trying to use the Redis Cluster API, which results in this error (cluster-related commands are not activated for standard Redis instances).
According to the ioredis documentation, you are supposed to connect with:
var redis = new Redis({
  sentinels: [
    { host: hostMaster, port: 26379 },
    { host: hostSlab1, port: 26379 },
    { host: hostSlab2, port: 26379 }
  ],
  name: 'mymaster'
});
Of course, check the Sentinel ports and the name of the master. ioredis will automatically manage the switch to a slave instance when the master fails, with Sentinel ensuring the slave is promoted to master just before.
Note that since you use pub/sub, you will need several Redis connections.