I am currently struggling to get an NServiceBus-hosted application up and running. I have an Azure Service Bus queue that a third party posts messages to, and I want my application (which is hosted locally at the moment) to receive those messages.
I have googled for answers on how to configure the endpoint, but I have had no luck finding a valid config. Has anyone done this before? I can find examples of how to connect to Azure Storage queues, but not Service Bus queues. (I need Azure Service Bus queues for other reasons.)
The config I have is below:
public void Init()
{
    Configure.With()
        .DefaultBuilder()
        .XmlSerializer()
        .UnicastBus()
        .AzureServiceBusMessageQueue()
        .IsTransactional(true)
        .MessageForwardingInCaseOfFault()
        .UseInMemoryTimeoutPersister()
        .InMemorySubscriptionStorage();
}
When the endpoint starts, it fails with:
Message=Exception when starting endpoint, error has been logged. Reason: Input queue [mytimeoutmanager#sb://[*].servicebus.windows.net/] must be on the same machine as this process. Source=NServiceBus.Host
My app.config is:
<configuration>
  <configSections>
    <section name="MessageForwardingInCaseOfFaultConfig" type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
    <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
    <section name="AzureServiceBusQueueConfig" type="NServiceBus.Config.AzureServiceBusQueueConfig, NServiceBus.Azure" />
    <section name="AzureTimeoutPersisterConfig" type="NServiceBus.Timeout.Hosting.Azure.AzureTimeoutPersisterConfig, NServiceBus.Timeout.Hosting.Azure" />
  </configSections>
  <AzureServiceBusQueueConfig IssuerName="owner" QueueName="testqueue" IssuerKey="[KEY]" ServiceNamespace="[NS]" />
  <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
  <!-- Use the following line to explicitly set the Timeout manager address -->
  <UnicastBusConfig TimeoutManagerAddress="MyTimeoutManager" />
  <!-- Use the following line to explicitly set the Timeout persister's connection string -->
  <AzureTimeoutPersisterConfig ConnectionString="UseDevelopmentStorage=true" />
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <requiredRuntime version="v4.0.20506" />
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
</configuration>
Try moving UnicastBus() to the end of your call, like this:
Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .AzureServiceBusMessageQueue()
    .IsTransactional(true)
    .MessageForwardingInCaseOfFault()
    .UseInMemoryTimeoutPersister()
    .InMemorySubscriptionStorage()
    .UnicastBus(); // <- Here
And about those third parties posting messages to the queue: keep in mind that they need to respect how NServiceBus handles serialization/deserialization. Here is how this is done in NServiceBus (the most important part is that the BrokeredMessage is initialized with a raw message, the result of a serialization using the BinaryFormatter):
private void Send(Byte[] rawMessage, QueueClient sender)
{
    var numRetries = 0;
    var sent = false;
    while (!sent)
    {
        try
        {
            var brokeredMessage = new BrokeredMessage(rawMessage);
            sender.Send(brokeredMessage);
            sent = true;
        }
        // back off when we're being throttled
        catch (ServerBusyException)
        {
            numRetries++;
            if (numRetries >= MaxDeliveryCount) throw;
            Thread.Sleep(TimeSpan.FromSeconds(numRetries * DefaultBackoffTimeInSeconds));
        }
    }
}
private static byte[] SerializeMessage(TransportMessage message)
{
    if (message.Headers == null)
        message.Headers = new Dictionary<string, string>();
    if (!message.Headers.ContainsKey(Idforcorrelation))
        message.Headers.Add(Idforcorrelation, null);
    if (String.IsNullOrEmpty(message.Headers[Idforcorrelation]))
        message.Headers[Idforcorrelation] = message.IdForCorrelation;
    using (var stream = new MemoryStream())
    {
        var formatter = new BinaryFormatter();
        formatter.Serialize(stream, message);
        return stream.ToArray();
    }
}
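Putting the two methods together, a third-party sender could look roughly like this (a sketch only; the connection string and queue name are placeholders, and Send/SerializeMessage are the methods above):

// Rough sketch of a third-party sender; Send() and SerializeMessage() are the
// methods shown above, and the connection string / queue name are placeholders.
var transportMessage = new TransportMessage();
// ... populate Body and Headers to match what the receiving endpoint expects

byte[] rawMessage = SerializeMessage(transportMessage);

var sender = QueueClient.CreateFromConnectionString(
    "Endpoint=sb://[NS].servicebus.windows.net/;...", // placeholder
    "testqueue");
Send(rawMessage, sender);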
If you want NServiceBus to deserialize the message correctly, make sure your third parties serialize it correctly.
I had exactly the same problem and spent several hours figuring out how to solve it. Basically, the Azure timeout persister is only supported for Azure-hosted endpoints that use NServiceBus.Hosting.Azure. If you use the NServiceBus.Host process to host your endpoints, it uses the classes in the NServiceBus.Timeout.Hosting.Windows namespace. That initializes a TransactionalTransport with MSMQ, and that is where this message comes from.
I used two methods to avoid it:
If you must use the As_Server endpoint configuration, you can call .DisableTimeoutManager() in your initialization (see the sketch after this list); it will skip the TimeoutDispatcher initialization completely
Use the As_Client endpoint configuration; it doesn't use transactional mode for the transport, so the timeout dispatcher is never initialized
There may be a way to inject an Azure timeout manager somehow, but I have not found it yet, and I actually need the As_Client behaviour, so this works fine for me.
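For reference, a minimal sketch of the first option, i.e. the configuration from the question with .DisableTimeoutManager() added (and the in-memory timeout persister dropped, since the timeout manager is skipped):

// Sketch of option 1: keep As_Server but skip the timeout manager entirely.
// All calls except DisableTimeoutManager() are taken from the question's config.
Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .AzureServiceBusMessageQueue()
    .IsTransactional(true)
    .DisableTimeoutManager() // prevents the MSMQ-based TimeoutDispatcher from starting
    .MessageForwardingInCaseOfFault()
    .InMemorySubscriptionStorage()
    .UnicastBus();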
I have a simple ASP.NET Core 3.1 app deployed to an Azure App Service, configured with a .NET Core 3.1 runtime. One of my endpoints is expected to receive a simple JSON payload with a single "data" property, which is a base64-encoded string of a file. It can be quite long; I'm running into the following issue when the JSON payload is 1.6 MB.
On my local workstation, when I call my API from Postman, everything works as expected: my breakpoint in the controller's action method is reached, the data is populated, all good. The problem only appears once I deploy the app (via Azure DevOps CI/CD pipelines) to the Azure App Service. Whenever I call the deployed API from Postman, no HTTP response is received, only this: "Error: write EPIPE".
I've tried modifying the web.config to include both the maxRequestLength and maxAllowedContentLength properties:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<location path="." inheritInChildApplications="false">
<system.web>
<httpRuntime maxRequestLength="204800" ></httpRuntime>
</system.web>
<system.webServer>
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="419430400" />
</requestFiltering>
</security>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath="dotnet" arguments=".\MyApp.API.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
</system.webServer>
</location>
</configuration>
In the app's code, I've added this to Startup.cs:
services.Configure<IISServerOptions>(options =>
{
    options.MaxRequestBodySize = int.MaxValue;
});
In the Program.cs, I've added:
.UseKestrel(options => { options.Limits.MaxRequestBodySize = int.MaxValue; })
In the controller, I've tried both of these attributes: [DisableRequestSizeLimit], [RequestSizeLimit(40000000)]
However, nothing has worked so far. I'm pretty sure it has to be something configured on the App Service itself, not in my code, since everything works locally; yet nothing in the web.config has helped either.
It turned out to be related to the fact that in my App Service, I had to allow incoming client certificates in the Configuration; it turns out client certificates and large payloads don't mix well in IIS (apparently for more than a decade now): https://learn.microsoft.com/en-us/archive/blogs/waws/posting-a-large-file-can-fail-if-you-enable-client-certificates
None of the workarounds proposed in the blog post above fixed my issue, so I had to work around it: I created an Azure Function (still using .NET Core 3.1 as the runtime stack) on a Consumption Plan, which is able to receive both the large payload and the incoming client certificate (I guess it doesn't use IIS under the hood?).
In my original backend, I added the original API's route to the App Service's "Certificate exclusion paths", so requests no longer get stuck and eventually time out with "Error: write EPIPE".
I've used Managed Identity to authenticate between my App Service and the new Azure Function (through a System Assigned identity in the Function).
The Azure Function takes the received certificate and adds it to a new "certificate" property in the JSON body, next to the original "data" property. That way my custom SSL validation can stay on the App Service, except the certificate is no longer taken from the X-ARR-ClientCert header but from the payload's "certificate" property.
The Function:
#r "Newtonsoft.Json"
using System.Net;
using System.IO;
using System.Net.Http;
using System.Text;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using System.Security.Cryptography.X509Certificates;
private static HttpClient httpClient = new HttpClient();
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    var requestBody = string.Empty;
    using (var streamReader = new StreamReader(req.Body))
    {
        requestBody = await streamReader.ReadToEndAsync();
    }
    dynamic deserializedPayload = JsonConvert.DeserializeObject(requestBody);
    var data = deserializedPayload?.data;

    var originalUrl = $"https://original-backend.azurewebsites.net/api/inbound";

    var certificateString = string.Empty;
    StringValues cert;
    if (req.Headers.TryGetValue("X-ARR-ClientCert", out cert))
    {
        certificateString = cert;
    }

    var newPayload = new
    {
        data = data,
        certificate = certificateString
    };

    var response = await httpClient.PostAsync(
        originalUrl,
        new StringContent(JsonConvert.SerializeObject(newPayload), Encoding.UTF8, "application/json"));
    var responseContent = await response.Content.ReadAsStringAsync();

    try
    {
        response.EnsureSuccessStatusCode();
        return new OkObjectResult(new { message = "Forwarded request to the original backend" });
    }
    catch (Exception e)
    {
        return new ObjectResult(new { response = responseContent, exception = JsonConvert.SerializeObject(e) })
        {
            StatusCode = 500
        };
    }
}
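On the App Service side, the custom SSL validation then rebuilds the certificate from the payload instead of the X-ARR-ClientCert header. A minimal sketch of what that controller could look like (the payload model and validator names are hypothetical, not from my original code):

// Hypothetical receiving model: "certificate" carries the base64 DER bytes
// that would otherwise have arrived in the X-ARR-ClientCert header.
public class InboundPayload
{
    public string Data { get; set; }
    public string Certificate { get; set; }
}

[HttpPost("inbound")]
public IActionResult Inbound([FromBody] InboundPayload payload)
{
    var clientCert = new System.Security.Cryptography.X509Certificates.X509Certificate2(
        Convert.FromBase64String(payload.Certificate));

    // ValidateClientCertificate is a stand-in for the existing custom SSL validation.
    if (!ValidateClientCertificate(clientCert))
        return Unauthorized();

    // ... process payload.Data as before
    return Ok();
}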
I want to know if it is possible to create an Azure queue (Service Bus or Storage queue) which can be placed in front of a web application and receives HTTP requests first.
Update
Thanks for the comments and answers.
I want to process the request without burdening IIS. I need to make it possible to process a request in a queue before it reaches IIS.
if it is possible to create an Azure queue (Service Bus or Storage queue) which can be placed in front of a web application and receives HTTP requests first
We can save the request message to a queue before handling the request in an Azure Web App by adding some code. I wrote a C# sample which records the request message to an Azure Storage queue. The steps below are for your reference.
Step 1. Add an HTTP module to your project. In this module, I registered the BeginRequest event of HttpApplication and do the message-recording job.
using System;
using System.IO;
using System.Text;
using System.Web;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class RequestToQueueModule : IHttpModule
{
    #region IHttpModule Members

    public void Dispose()
    {
        // clean-up code here
    }

    public void Init(HttpApplication context)
    {
        // Handle the BeginRequest event so every incoming request is recorded
        context.BeginRequest += new EventHandler(OnBeginRequest);
    }

    #endregion

    public void OnBeginRequest(Object source, EventArgs e)
    {
        HttpApplication context = source as HttpApplication;
        AddMessageToQueue(context.Request);
    }

    public void AddMessageToQueue(HttpRequest request)
    {
        // Rebuild the raw HTTP request (request line, headers, body) as text
        StringBuilder sb = new StringBuilder();
        sb.AppendLine(request.HttpMethod + " " + request.RawUrl + " " + request.ServerVariables["SERVER_PROTOCOL"]);
        for (int i = 0; i < request.Headers.Count; i++)
        {
            sb.AppendLine(request.Headers.Keys[i] + ":" + request.Headers[i]);
        }
        sb.AppendLine();
        if (request.InputStream != null)
        {
            using (StreamReader sr = new StreamReader(request.InputStream))
            {
                sb.Append(sr.ReadToEnd());
            }
        }

        // Retrieve storage account from connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse("connection string of your azure storage");

        // Create the queue client.
        CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();

        // Retrieve a reference to a queue.
        CloudQueue queue = queueClient.GetQueueReference("queue name which is used to store the request message");

        // Create the queue if it doesn't already exist.
        queue.CreateIfNotExists();

        // Create a message and add it to the queue.
        CloudQueueMessage message = new CloudQueueMessage(sb.ToString());
        queue.AddMessage(message);
    }
}
Step 2. Register the module above in the system.webServer node of web.config. Please modify the namespace name to match where your module is placed.
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true">
    <add name="RequestToQueueModule" type="[your namespace name].RequestToQueueModule" />
  </modules>
</system.webServer>
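A separate worker (not part of the steps above) can then drain the queue. A minimal sketch using the same storage SDK; the processing method is hypothetical:

// Hypothetical worker that drains the queue written by the module above.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("connection string of your azure storage");
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("queue name which is used to store the request message");

CloudQueueMessage message = queue.GetMessage();
if (message != null)
{
    // The message body is the raw HTTP request text recorded by the module
    ProcessRequestText(message.AsString); // hypothetical processing method
    queue.DeleteMessage(message);         // remove it once processed
}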
I want to process the request without burdening IIS. I need to make it possible to process a request in a queue before it reaches IIS.
If you want to process the request in a queue before it reaches IIS, you need to add a proxy in front of the Azure Web App. Azure Application Gateway works as a proxy and can be put in front of a Web App. If you only want to log the main information of HTTP requests, you could use Azure Application Gateway and turn on its access log. For more information, the link below is for your reference.
Diagnostic logs of Application Gateway
If you want to save the entire request message, I'm afraid you need to build a custom proxy and log the request yourself.
I have a web service that ingests objects, sends a notification over AMQP, and returns a JSON response to the requester. Each request is handled on a single thread. I am trying to implement publisher confirms and am struggling with how to set it up; I have it working, but I don't like the way I am doing it.
The way I am doing it is:
Put some headers on the message
Have a publish-subscribe-channel with 2 subscribers
Subscriber 1 creates a blocking queue (so it is ready) and sends the message over AMQP
Subscriber 2 polls that queue for up to 5 seconds until it gets its confirm
The amqp:outbound-channel-adapter sends its publisher confirms to a service activator
The publisherConfirmReceiver receives the confirm and puts it in the blocking queue, causing subscriber 2's poll to complete and return the result of the confirm
This technique works properly, but I don't like relying on the assumption that the chain will receive the message from the publish-subscribe channel before the waitForPublisherConfirm service activator does; in this case, the order in which the components receive the message matters.
If the waitForPublisherConfirms service activator receives the message first, it just blocks the thread for 5 seconds and only then allows the chain to send the message via the amqp:outbound-channel-adapter.
I tried putting waitForPublisherConfirms after the amqp:outbound-channel-adapter, but since the outbound-channel-adapter doesn't "return" anything, the service activator after it in the chain never gets called.
I feel like there should be a better way of doing this. My goal is to wait for publisher confirms (or a timeout, which I cannot find support for in Spring's publisher confirms) before sending a response to the requester.
Could you help me shape the solution a little better, or let me know if it is OK to rely on the fact that the first subscriber to the publish-subscribe-channel will always receive a message first?
Sorry this one is so long.
Some configuration
<int:header-enricher input-channel="addHeaders" output-channel="metadataIngestNotifications">
    <int:header name="routingKey" ref="routingKeyResolver" method="resolveReoutingKey"/>
    <int:header name="notificationId" expression="payload.id" />
</int:header-enricher>

<int:chain input-channel="metadataIngestNotifications" output-channel="nullChannel">
    <int:service-activator id="addPublisherConfirmQueue"
                           requires-reply="false"
                           ref="publisherConfirmService"
                           method="addPublisherConfirmQueue" />
    <int:object-to-json-transformer id="transformObjectToJson" />
    <int-amqp:outbound-channel-adapter id="amqpOutboundChannelAdapter"
                                       amqp-template="rabbitTemplate"
                                       exchange-name="${productNotificationExchange}"
                                       confirm-ack-channel="publisherConfirms"
                                       confirm-nack-channel="publisherConfirms"
                                       mapped-request-headers="*"
                                       routing-key-expression="headers.routingKey"
                                       confirm-correlation-expression="headers.notificationId" />
</int:chain>

<int:service-activator id="waitForPublisherConfirm"
                       input-channel="metadataIngestNotifications"
                       output-channel="publisherConfirmed"
                       requires-reply="true"
                       ref="publisherConfirmService"
                       method="waitForPublisherConfirm" />

<int:service-activator id="publisherConfirmReceiver"
                       ref="publisherConfirmService"
                       method="receivePublisherConfirm"
                       input-channel="publisherConfirms"
                       output-channel="nullChannel" />
Class
public class PublisherConfirmService {

    private final Map<String, BlockingQueue<Boolean>> suspenders = new HashMap<>();

    public Message addPublisherConfirmQueue(@Header("notificationId") String id, Message m) {
        LogManager.getLogger(this.getClass()).info("Adding publisher confirm queue.");
        BlockingQueue<Boolean> bq = new LinkedBlockingQueue<>();
        suspenders.put(id, bq);
        return m;
    }

    public boolean waitForPublisherConfirm(@Header("notificationId") String id) {
        LogManager.getLogger(this.getClass()).info("Waiting for publisher confirms for Notification: " + id);
        BlockingQueue<Boolean> bq = suspenders.get(id);
        try {
            Boolean result = bq.poll(5, TimeUnit.SECONDS);
            if (result == null) {
                LogManager.getLogger(this.getClass()).error("The broker took too long to return a publisher confirm. NotificationId: " + id);
                return false;
            } else if (!result) {
                LogManager.getLogger(this.getClass()).error("The publisher confirm indicated that the message was not confirmed. NotificationId: " + id);
                return false;
            }
        } catch (InterruptedException ex) {
            LogManager.getLogger(this.getClass()).error("Something went wrong polling for the publisher confirm for notificationId: " + id, ex);
            return false;
        } finally {
            suspenders.remove(id);
        }
        return true;
    }

    public void receivePublisherConfirm(String id, @Header(AmqpHeaders.PUBLISH_CONFIRM) boolean confirmed) {
        LogManager.getLogger(this.getClass()).info("Received publisher confirm for Notification: " + id);
        if (suspenders.containsKey(id)) {
            BlockingQueue<Boolean> bq = suspenders.get(id);
            bq.add(confirmed);
        }
    }
}
How about taking a look at the Aggregator solution for the same purpose?
Use a <recipient-list-router> to send the message to the aggregator's input-channel and to a second channel for the <int-amqp:outbound-channel-adapter>.
The confirm-ack-channel must be something that brings the message to the same aggregator after some transformation, e.g. a proper extraction of the correlationKey, and so on.
We have an inbound channel adapter that receives notifications of an event. The complexity of the consumers' criteria restricts our ability to use a simple routing key to distribute the messages, so the application uses a splitter to send the message to interested subscribers' queues via a direct exchange.
We want to use publisher confirms on our outbound channel adapter to ensure delivery to the client queues. We want to wait for the publisher confirm before acking the original message, and if a publisher confirm fails to arrive or if ack==false, we want to nack the original message that came from the inbound channel adapter.
I assume this will be done in the confirm-callback of the RabbitTemplate, but I am not sure how to accomplish this (or if it is even possible).
<rabbit:connection-factory id="rabbitConnectionFactory"
                           host="${amqpHost}"
                           username="${amqpUsername}"
                           password="${amqpPassword}"
                           virtual-host="${amqpVirtualHost}"
                           publisher-confirms="true" />

<rabbit:template id="rabbitTemplate"
                 connection-factory="rabbitConnectionFactory"
                 confirm-callback="PublisherConfirms" />

<int-amqp:inbound-channel-adapter channel="notificationsFromRabbit"
                                  queue-names="#{'${productNotificationQueue}' + '${queueSuffix}'}"
                                  connection-factory="rabbitConnectionFactory"
                                  mapped-request-headers="*"
                                  message-converter="productNotificationMessageConverter" />

<int:chain input-channel="notificationsFromRabbit" output-channel="notificationsToClients">
    <int:service-activator ref="NotificationRouter"
                           method="addRecipientsHeaders" />
    <int:splitter ref="NotificationRouter"
                  method="groupMessages" />
    <int:object-to-json-transformer />
</int:chain>

<int-amqp:outbound-channel-adapter channel="notificationsToClients"
                                   amqp-template="rabbitTemplate"
                                   exchange-name="${servicesClientsExchange}"
                                   routing-key=""
                                   mapped-request-headers="*" />
At the moment we are acking the messages in the groupMessages method by passing the Channel and delivery tag as parameters. But if the broker never sends a return, or returns with ack=false, then it is too late to nack the message from the inbound channel adapter.
Am I going to need a bean that keeps a Map<Channel, Long> of the channels and delivery tags to access in the confirm-callback, or is there some other way?
Is the channel from the inbound channel adapter going to be closed by the time I receive a publisher confirm?
As long as you suspend the consumer thread until all the acks/nacks have been received, you can do what you want.
If you make notificationsFromRabbit a publish-subscribe channel you can add another subscriber (service-activator) where you suspend the thread; wait for all the acks/nacks and take the action you desire.
EDIT:
You can also use Spring Integration to manage the acks for you and it will emit them as messages from the outbound adapter (rather than using a callback yourself).
EDIT2:
You could then use the splitter's sequence size/sequence number headers in your correlation data, enabling the release of the consumer when all the acks are received.
EDIT3:
Something like this should work...
On the outbound adapter, set confirm-correlation-expression="#this" (the whole outbound message).
Class with two methods
private final Map<String, BlockingQueue<Boolean>> suspenders = new ConcurrentHashMap<>();

public void suspend(Message<?> original) {
    BlockingQueue<Boolean> bq = new LinkedBlockingQueue<>();
    String key = someKeyFromOriginal(original);
    suspenders.put(key, bq);
    try {
        Boolean result = bq.poll(5, TimeUnit.SECONDS); // some timeout
        if (result == null) {
            // timed out
        }
        else if (!result) {
            // throw some exception to nack the message
        }
    }
    catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    finally {
        suspenders.remove(key);
    }
}

public void ackNack(Message<Message<?>> ackNak) {
    Message<?> theOutbound = ackNak.getPayload();
    BlockingQueue<Boolean> bq = suspenders.get(someKeyFromOriginal(theOutbound));
    if (bq == null) {
        // late ack/nack; ignore
    }
    else {
        // check the ack/nack header
        // if nack, bq.put(false)
        // else, use another map field to keep track of the ack count vs. the
        // sequenceSize header in theOutbound; when all acks are received, bq.put(true)
    }
}
Suspend the consumer thread in the first method; route the acks/nacks from the outbound adapter to the second method.
Caveat: This is not tested, just off the top of my head; but it should be pretty close.
I have four dependent Windows services working in one solution, using cassandra-sharp 3.1.4 and Cassandra 2.0.6.
In the first one, I initialize the ClusterManager with the line below. (This code only works in the first service; when I try to configure the ClusterManager in each of the services, those services don't start.)
CassandraSharp.Config.XmlConfigurator.Configure();
and this is my app.config;
<configSections>
  <section name="CassandraSharp" type="CassandraSharp.SectionHandler, CassandraSharp.Interfaces" />
</configSections>
<CassandraSharp>
  <Cluster name="main">
    <Endpoints>
      <Server>kml-vm-cas-001.cloudapp.net</Server>
    </Endpoints>
  </Cluster>
</CassandraSharp>
and in OnStop of each of the services:
ClusterManager.Shutdown();
The process is simple: each of these services reads strings from a different live stream, deserializes them, and pushes them to Cassandra.
string query = null;
ICqlCommand pocoCommand = null;
Task task = null;

using (ICluster iCluster = ClusterManager.GetCluster("main"))
{
    query = string.Format("insert into Tvr.Zools (Part, Name, Ticks) values ('zools', '{0}', {1}) using ttl 86400;",
        this.zools[i].Name,
        dateTime.Ticks);
    pocoCommand = iCluster.CreatePocoCommand();
    task = pocoCommand.Execute(query).AsFuture();
    task.Wait();

    query = string.Format("insert into Tvr.Temps (Part, Name, Ticks) values ('zools', '{0}', {1}) using ttl 10800;",
        this.zools[i].Name,
        dateTime.Ticks);
    pocoCommand = iCluster.CreatePocoCommand();
    task = pocoCommand.Execute(query).AsFuture();
    task.Wait();
}
This works well for small streams, but when the services start to catch a huge load of streams I get these exceptions:
System.ArgumentException: Can't find any valid endpoint
and for some of the services;
System.InvalidOperationException: ClusterManager is not initialized
I tried this in each of the services, but it didn't work:
CassandraSharp.Config.XmlConfigurator.Configure();
//push process..
ClusterManager.Shutdown();
Sorry for any missing information; I'll edit if anything is missing. Thanks in advance.
CassandraSharp handles all of the connection management and pooling for you. All you have to do is run CassandraSharp.Config.XmlConfigurator.Configure(); once per application; CassandraSharp will then open connections for you as you need them. You should only call ClusterManager.Shutdown(); when the app is shutting down. Here is a snippet from the author of CassandraSharp:
Configure() must be called once and never more. If you want to shut
down all your clusters connections (let say your process is exiting),
then call Shutdown().
Shutdown() is shutting down all your connections and services for the
whole process - it might have bad impact on the rest of your
application if you are connecting to several clusters in the same
process.
Even if you are connecting to multiple clusters, you should define them all in the XML config and still call Configure() only once. So here is an example setup.
Example XML Config
<CassandraSharp>
  <Cluster name="Dev">
    <Transport cl="ONE" cqlver="3.1.1" />
    <Endpoints>
      <Server>server1.test.com</Server>
    </Endpoints>
  </Cluster>
  <Cluster name="Prod">
    <Transport cl="ONE" cqlver="3.1.1" />
    <Endpoints>
      <Server>server1.test.com</Server>
      <Server>server2.test.com</Server>
    </Endpoints>
  </Cluster>
</CassandraSharp>
Sample Code
Notice the static cluster. It will only exist once in the application, so you won't run into issues creating multiple instances.
public interface IClusterFactory
{
    ICluster GetCluster();
}

public class ClusterFactory : IClusterFactory
{
    private static ICluster cluster;

    public ICluster GetCluster()
    {
        if (cluster != null)
        {
            return cluster;
        }
        else
        {
            XmlConfigurator.Configure();
            cluster = ClusterManager.GetCluster("Prod");
            return cluster;
        }
    }
}
Then whenever you want to get a connection to Cassandra:
IClusterFactory factory = new ClusterFactory();
ICluster cluster = factory.GetCluster();
IPropertyBagCommand cmd = cluster.CreatePropertyBagCommand();
Don't worry about disposing of or closing the cluster, as that gets handled by CassandraSharp. Just call ClusterManager.Shutdown() when the application shuts down.
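In a Windows service (as in the question), that shutdown call would typically live in OnStop; a minimal sketch:

// Minimal sketch: shut CassandraSharp down once, when the service process exits.
protected override void OnStop()
{
    ClusterManager.Shutdown(); // closes all cluster connections for the whole process
}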