Cassandra cluster manager and endpoint exception - cassandra

I have four dependent Windows services in one solution, using cassandra-sharp 3.1.4 and Cassandra 2.0.6.
In the first one, I initialize the ClusterManager with the line below. (This code only works in the first service; when I try to configure the ClusterManager in each of the services, those services don't start.)
CassandraSharp.Config.XmlConfigurator.Configure();
and this is my app.config:
<configSections>
  <section name="CassandraSharp" type="CassandraSharp.SectionHandler, CassandraSharp.Interfaces" />
</configSections>

<CassandraSharp>
  <Cluster name="main">
    <Endpoints>
      <Server>kml-vm-cas-001.cloudapp.net</Server>
    </Endpoints>
  </Cluster>
</CassandraSharp>
and in the OnStop of each service:
ClusterManager.Shutdown();
The process is simple: each service reads strings from a different live stream, deserializes them, and pushes them to Cassandra.
string query = null;
ICqlCommand pocoCommand = null;
Task task = null;

using (ICluster iCluster = ClusterManager.GetCluster("main"))
{
    query = string.Format("insert into Tvr.Zools (Part, Name, Ticks) values ('zools', '{0}', {1}) using ttl 86400;",
        this.zools[i].Name,
        dateTime.Ticks);
    pocoCommand = iCluster.CreatePocoCommand();
    task = pocoCommand.Execute(query).AsFuture();
    task.Wait();

    query = string.Format("insert into Tvr.Temps (Part, Name, Ticks) values ('zools', '{0}', {1}) using ttl 10800;", this.zools[i].Name, dateTime.Ticks);
    pocoCommand = iCluster.CreatePocoCommand();
    task = pocoCommand.Execute(query).AsFuture();
    task.Wait();
}
This works well for small streams, but when the services start to catch a huge load of streams, I get these exceptions:
System.ArgumentException: Can't find any valid endpoint
and for some of the services:
System.InvalidOperationException: ClusterManager is not initialized
I tried this in each of the services, but it didn't work:
CassandraSharp.Config.XmlConfigurator.Configure();
//push process..
ClusterManager.Shutdown();
Sorry for any missing information; I'll edit the question if anything is missing.
Thanks in advance.

CassandraSharp handles all of the connection management and pooling for you. All you have to do is run CassandraSharp.Config.XmlConfigurator.Configure(); one time per application. CassandraSharp will then open a connection for you as you need it. You should only call ClusterManager.Shutdown(); when the app is shutting down. Here is a snippet from the author of CassandraSharp:
Configure() must be called once and never more. If you want to shut
down all your cluster connections (let's say your process is exiting),
then call Shutdown().
Shutdown() shuts down all your connections and services for the
whole process; it might have a bad impact on the rest of your
application if you are connecting to several clusters in the same
process.
Even if you are connecting to multiple clusters, you should define them all in the XML config and still call Configure() only one time. So here is an example setup.
Example XML Config
<CassandraSharp>
  <Cluster name="Dev">
    <Transport cl="ONE" cqlver="3.1.1" />
    <Endpoints>
      <Server>server1.test.com</Server>
    </Endpoints>
  </Cluster>
  <Cluster name="Prod">
    <Transport cl="ONE" cqlver="3.1.1" />
    <Endpoints>
      <Server>server1.test.com</Server>
      <Server>server2.test.com</Server>
    </Endpoints>
  </Cluster>
</CassandraSharp>
Sample Code
Notice the static cluster. It will only exist one time in the application, so you won't run into issues creating multiple instances.
public interface IClusterFactory
{
    ICluster GetCluster();
}

public class ClusterFactory : IClusterFactory
{
    private static readonly object syncRoot = new object();
    private static ICluster cluster;

    public ICluster GetCluster()
    {
        // Double-checked locking so Configure() runs exactly once,
        // even if several threads ask for the cluster at the same time.
        if (cluster == null)
        {
            lock (syncRoot)
            {
                if (cluster == null)
                {
                    XmlConfigurator.Configure();
                    cluster = ClusterManager.GetCluster("Prod");
                }
            }
        }
        return cluster;
    }
}
Then whenever you want to get a connection to Cassandra:
IClusterFactory factory = new ClusterFactory();
ICluster cluster = factory.GetCluster();
IPropertyBagCommand cmd = cluster.CreatePropertyBagCommand();
Don't worry about disposing or closing the cluster, as that gets handled in CassandraSharp. Just call ClusterManager.Shutdown() when the application shuts down.
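Applied to the question's setup, a minimal sketch of the Windows-service lifecycle (the service name is illustrative, not from the original code):

public partial class StreamPushService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Call Configure() exactly once per process, at startup.
        CassandraSharp.Config.XmlConfigurator.Configure();
    }

    protected override void OnStop()
    {
        // Shut down all connections only when the process is exiting.
        ClusterManager.Shutdown();
    }
}

Each of the four services is a separate process, so each one gets its own single Configure() call at startup. And per the note above about disposing, don't wrap GetCluster in a using block inside the push loop; hold on to the cluster instance instead.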

Related

How to connect to an Elasticsearch cluster with multiple nodes installed on Azure? How do I get the Elasticsearch endpoint?

I have installed 3 nodes of Elasticsearch from the Azure marketplace, and any node can act as the master node. Now how do I connect to the cluster? If I had one node, I could simply use its IP with port 9200, but here I have 3 nodes, so how do I get the cluster endpoint? Thanks
This is how I did it, and it worked well for me:
public class ElasticsearchConfig {
    private Vector<String> hosts;

    public void setHosts(String hostString) {
        if (hostString == null || hostString.trim().isEmpty()) {
            return;
        }
        String[] hostParts = hostString.split(",");
        this.hosts = new Vector<>();
        Collections.addAll(this.hosts, hostParts);
    }

    // Getter used by ElasticClient below; it was missing from the original snippet.
    public Vector<String> getHosts() {
        return hosts;
    }
}
public class ElasticClient {
    private final ElasticsearchConfig config;
    private RestHighLevelClient client;

    public ElasticClient(ElasticsearchConfig config) {
        this.config = config;
    }

    public void start() throws Exception {
        HttpHost[] httpHosts = new HttpHost[config.getHosts().size()];
        config.getHosts()
              .stream()
              .map(host -> new HttpHost(host.split(":")[0], Integer.valueOf(host.split(":")[1])))
              .collect(Collectors.toList())
              .toArray(httpHosts);
        client = new RestHighLevelClient(RestClient.builder(httpHosts));
        System.out.println("Started ElasticSearch Client");
    }

    public void stop() throws Exception {
        if (client != null) {
            client.close();
        }
        client = null;
    }
}
Set the ElasticsearchConfig as below:
ElasticsearchConfig config = new ElasticsearchConfig();
config.setHosts("ip1:port,ip2:port,ip3:port");
If all three nodes are part of the same cluster, then there is no need to specify all of them; even a single node's IP is enough to connect to the cluster.
But the above approach has a disadvantage. In a small cluster with a light workload it is fine, but otherwise the one node you configure in your Elasticsearch client acts as the coordinating node and can become a hot spot in your cluster. It is better to configure all the nodes in your client so that any node can act as the coordinating node for a given request. If you have a huge workload, you might also consider dedicated coordinating nodes for even better performance, as sketched below.
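For reference, a dedicated coordinating-only node is typically created by disabling the other roles in elasticsearch.yml (the exact settings vary by Elasticsearch version; the values below follow the 6.x convention):

node.master: false
node.data: false
node.ingest: false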
Hope this answers your question. I didn't provide a code snippet because I don't know which language and client you are using, and I felt the code was not the issue in your question; rather, you want to understand the concept in detail.
I appreciate everyone's time and all those who replied. It turns out that, by default, the Azure marketplace copy of self-managed ES sets up only an "Internal Load Balancer". I was able to get the cluster endpoint as soon as I configured an "External Load Balancer". All set now.

Handling Acumatica timeout on API Invoke action

I have code in a standalone application that invokes an Acumatica action to generate reports; I am running into timeouts on large documents while the action completes.
What is the best method to handle these timeouts? I need to wait for the action to complete in order to retrieve the files I've generated.
Standalone application code:
public SalesOrder GenerateAcumaticaLabels(string orderNbr, string reportType)
{
    SalesOrder salesOrder = null;
    using (ISoapClientProvider clientProvider = soapClientFactory.Create())
    {
        try
        {
            SalesOrder salesOrderToFind = new SalesOrder
            {
                OrderType = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).First() },
                OrderNbr = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).Last() },
                ReturnBehavior = ReturnBehavior.OnlySpecified,
            };
            salesOrder = clientProvider.Client.Get(salesOrderToFind) as SalesOrder;

            InvokeResult invokeResult = clientProvider.Client.Invoke(salesOrder, new exportSFPReport());
            ProcessResult processResult = clientProvider.Client.GetProcessStatus(invokeResult);

            //Wait for the update to complete before we attempt to retrieve the files
            while (processResult.Status == ProcessStatus.InProcess)
            {
                Thread.Sleep(1000); //pause for 1 second
                processResult = clientProvider.Client.GetProcessStatus(invokeResult);
            }
        }
And the action in Acumatica:
public PXAction<SOOrder> ExportSFPReport;

[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    //Report Parameters
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;
    IEnumerable reportFileInfo = ExportReport(adapter, "IN619217", parameters);
    exportTrayLabelReport(adapter, "SFP");
    return reportFileInfo;
}
The problem here is that your action is synchronous, so it is trying to complete within the Invoke call (which is not a good thing for long processes). You have to explicitly make your operation long-running by using PXLongOperation.StartOperation inside your handler, and then your client code should work properly, as it already handles the waiting and checking.
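For illustration, here is a sketch of the action above rewritten with PXLongOperation.StartOperation (note the caveat discussed further down: the handler now returns before the files exist, so the client relies on polling GetProcessStatus for completion rather than on the Invoke result itself):

protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    // Gather the report parameters before starting the long operation.
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;

    // The slow work now runs as a long operation; Invoke returns quickly,
    // and the client polls GetProcessStatus until the operation completes.
    PXLongOperation.StartOperation(Base, delegate
    {
        ExportReport(adapter, "IN619217", parameters);
        exportTrayLabelReport(adapter, "SFP");
    });

    return adapter.Get();
}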
I believe the reason you encounter the time-out is that there is no TCP communication between the time you send the request and the time you receive the response. With the TCP KeepAlive flag set to true, the client will periodically ping the server to reset the time-out period.
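(As an aside, in a plain .NET client process the keep-alive flag can be set process-wide through ServicePointManager; whether it takes effect through Acumatica's SOAP client layers is exactly the doubt raised next. A minimal client-side sketch:)

// Call once at client startup (e.g., at the top of Main).
// First probe after 30 seconds of inactivity, then every 5 seconds.
System.Net.ServicePointManager.SetTcpKeepAlive(true, 30000, 5000);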
That would be the best way. However, Acumatica connections are rather high level, so I don't think you'll be able to easily access that flag. What I would try first, in a scenario that doesn't involve an external application, is to wrap your action event-handler code in a PXLongOperation block, which has to do something similar to keep the connection alive under the hood:
PXLongOperation.StartOperation(this /* or Base */, delegate
{
    // your code here
});
When I do encounter time-outs in Acumatica that can't be solved with PXLongOperation, I go for the simplest method, which is increasing the IIS timeout in the Web.Config file. I'm not sure if your use case with an external application will go well with an async PXLongOperation: the handler would return prematurely, and the client might not be able to retrieve the async payload.
So you might have to increase the time-out instead. As far as I know, there's no real practical drawback to doing this unless your website is under threat of DoS attacks.
You can locate and edit the Web.Config file of your Acumatica instance using the inetmgr program if you are self-hosting Acumatica. Otherwise, talk to your SAAS contact to see if that's an option.
I'm pretty sure you are hitting the IIS time-out. A tell-tale sign would be a lost connection after exactly 5 minutes, which is the default value of 300 seconds. You can edit the Web.Config file to increase the executionTimeout value. It's not a bad idea to increase maxRequestLength too if you are requesting large amounts of data from the Acumatica API, as this is also a common cause of failure that you miss in testing but that occurs in real-life scenarios:
<httpRuntime executionTimeout="300" requestValidationMode="2.0" maxRequestLength="1048576" />

Subscribing to Service Fabric cluster level events

I am trying to create a service that will update an external list of Service Endpoints for applications running in my Service Fabric cluster. (Basically, I need to replicate the Azure Load Balancer in my on-premises F5 load balancer.)
During last month's Service Fabric Q&A, the team pointed me at RegisterServiceNotificationFilterAsync.
I made a stateless service using this method, and deployed it to my development cluster. I then made a new service by running the ASP.NET Core Stateless service template.
I expected that when I deployed the second service, the breakpoint would be hit in my first service, indicating that a service had been added. But no breakpoint was hit.
I have found very little in the way of examples for this kind of thing on the internet, so I am asking here hoping that someone else has done this and can tell me where I went wrong.
Here is the code for my service that is trying to catch the application changes:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var fabricClient = new FabricClient();
    long? filterId = null;
    try
    {
        var filterDescription = new ServiceNotificationFilterDescription
        {
            Name = new Uri("fabric:")
        };
        fabricClient.ServiceManager.ServiceNotificationFilterMatched += ServiceManager_ServiceNotificationFilterMatched;
        filterId = await fabricClient.ServiceManager.RegisterServiceNotificationFilterAsync(filterDescription);

        long iterations = 0;
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();
            ServiceEventSource.Current.ServiceMessage(this.Context, "Working-{0}", ++iterations);
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
    finally
    {
        if (filterId != null)
            await fabricClient.ServiceManager.UnregisterServiceNotificationFilterAsync(filterId.Value);
    }
}

private void ServiceManager_ServiceNotificationFilterMatched(object sender, EventArgs e)
{
    Debug.WriteLine("Change Occurred");
}
If you have any tips on how to get this going, I would love to see them.
You need to set the MatchNamePrefix to true, like this:
var filterDescription = new ServiceNotificationFilterDescription
{
    Name = new Uri("fabric:"),
    MatchNamePrefix = true
};
otherwise it will only match specific services. In my application I can catch cluster-wide events when this parameter is set to true.
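To act on a notification, for example to push updated endpoints to the F5, the handler's EventArgs can be cast to the notification type; a sketch, assuming the System.Fabric types (ServiceNotificationEventArgs and its Notification property):

private void ServiceManager_ServiceNotificationFilterMatched(object sender, EventArgs e)
{
    // The event delivers plain EventArgs; cast to reach the payload.
    var args = (FabricClient.ServiceManagementClient.ServiceNotificationEventArgs)e;
    var notification = args.Notification;

    Debug.WriteLine("Service changed: {0}", notification.ServiceName);
    foreach (var endpoint in notification.Endpoints)
    {
        Debug.WriteLine("  endpoint: {0}", endpoint.Address);
    }
}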

Azure Autoscale Restarts Running Instances

I've been using Autoscale to shift between 2 and 1 instances of a cloud service in a bid to reduce costs. This mostly works, except that from time to time (I'm not sure what the pattern is here) the act of scaling up (1->2) causes both instances to recycle, generating a service outage for users.
Assuming nothing fancy is going on in RoleEntry in response to topology changes, why would scaling from 1->2 restart the already running instance?
Additional notes:
It's clear both instances are recycling by looking at the Instances tab in the Management Portal. The outage can also be confirmed by hitting the public site.
It doesn't happen consistently but I'm not sure what the pattern is. It feels like when the 1-instance configuration has been running for multiple days, attempts to scale up recycle both. But if the 1-instance configuration has only been running for a few hours, you can scale up and down without outages.
The first instance always comes back much faster than the 2nd instance being introduced.
This has always been this way. When you have 1 server running and you go to 2+, the initial server is restarted. In order to have a full SLA, you need to have 2+ servers at all times.
Nariman, see my comment on Brent's post for some information about what is happening. You should be able to resolve this with the following code:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        // Find this instance's IPv4 address...
        IPHostEntry ipEntry = Dns.GetHostEntry(Dns.GetHostName());
        string ip = null;
        foreach (IPAddress ipaddress in ipEntry.AddressList)
        {
            if (ipaddress.AddressFamily.ToString() == "InterNetwork")
            {
                ip = ipaddress.ToString();
            }
        }

        // ...and warm the instance up by requesting its own site
        // before OnStart returns and the instance is put into rotation.
        string urlToPing = "http://" + ip;
        HttpWebRequest req = WebRequest.Create(urlToPing) as HttpWebRequest;
        using (WebResponse resp = req.GetResponse())
        {
        }

        return base.OnStart();
    }
}
You should be able to control this behavior. In the RoleEntryPoint, there's an event you can trap: RoleEnvironmentChanging.
A shell of some code to put into your solution will look like...
RoleEnvironment.Changing += RoleEnvironmentChanging;

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
}

RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
}
Then, inside the RoleEnvironmentChanging method (that is where the Cancel property is available), we can detect what the change is and tell Azure whether or not we want to restart:
if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
{
    e.Cancel = true; // true forces the instance to recycle to apply the change; leave it false to apply the change without a restart
}
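Putting the shell and the check together, a minimal sketch (per the MSDN description of the event, Cancel defaults to false, which is the setting that avoids a recycle):

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        RoleEnvironment.Changing += RoleEnvironmentChanging;
        return base.OnStart();
    }

    private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            // Leaving e.Cancel false lets the instance apply the
            // configuration change at runtime instead of recycling.
            e.Cancel = false;
        }
    }
}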

On-premises NServiceBus application receiving messages from an Azure ServiceBus queue

I am currently struggling to get something up and running on an NServiceBus-hosted application. I have an Azure ServiceBus queue that a 3rd party is posting messages to, and I want my application (which is hosted locally at the moment) to receive these messages.
I have googled for answers on how to configure the endpoint, but I have had no luck finding a valid config. Has anyone ever done this? I can find examples of how to connect to Azure storage queues but NOT ServiceBus queues. (I need Azure ServiceBus queues for other reasons.)
The config I have is below:
public void Init()
{
    Configure.With()
        .DefaultBuilder()
        .XmlSerializer()
        .UnicastBus()
        .AzureServiceBusMessageQueue()
        .IsTransactional(true)
        .MessageForwardingInCaseOfFault()
        .UseInMemoryTimeoutPersister()
        .InMemorySubscriptionStorage();
}
The error I get is:
Message=Exception when starting endpoint, error has been logged. Reason: Input queue [mytimeoutmanager#sb://[*].servicebus.windows.net/] must be on the same machine as this Source=NServiceBus.Host
And here is my app.config:
<configuration>
  <configSections>
    <section name="MessageForwardingInCaseOfFaultConfig" type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
    <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
    <section name="AzureServiceBusQueueConfig" type="NServiceBus.Config.AzureServiceBusQueueConfig, NServiceBus.Azure" />
    <section name="AzureTimeoutPersisterConfig" type="NServiceBus.Timeout.Hosting.Azure.AzureTimeoutPersisterConfig, NServiceBus.Timeout.Hosting.Azure" />
  </configSections>
  <AzureServiceBusQueueConfig IssuerName="owner" QueueName="testqueue" IssuerKey="[KEY]" ServiceNamespace="[NS]" />
  <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
  <!-- Use the following line to explicitly set the Timeout manager address -->
  <UnicastBusConfig TimeoutManagerAddress="MyTimeoutManager" />
  <!-- Use the following line to explicitly set the Timeout persister's connection string -->
  <AzureTimeoutPersisterConfig ConnectionString="UseDevelopmentStorage=true" />
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <requiredRuntime version="v4.0.20506" />
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
</configuration>
Try moving UnicastBus() to the end of your call, like this:
Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .AzureServiceBusMessageQueue()
    .IsTransactional(true)
    .MessageForwardingInCaseOfFault()
    .UseInMemoryTimeoutPersister()
    .InMemorySubscriptionStorage()
    .UnicastBus(); // <- Here
And about those third parties posting messages to the queue: keep in mind that they need to respect how NServiceBus handles serialization/deserialization. Here is how this is done in NServiceBus (the most important part is that the BrokeredMessage is initialized with a raw message, the result of serialization using the BinaryFormatter):
private void Send(Byte[] rawMessage, QueueClient sender)
{
    var numRetries = 0;
    var sent = false;
    while (!sent)
    {
        try
        {
            var brokeredMessage = new BrokeredMessage(rawMessage);
            sender.Send(brokeredMessage);
            sent = true;
        }
        // back off when we're being throttled
        catch (ServerBusyException)
        {
            numRetries++;
            if (numRetries >= MaxDeliveryCount) throw;
            Thread.Sleep(TimeSpan.FromSeconds(numRetries * DefaultBackoffTimeInSeconds));
        }
    }
}

private static byte[] SerializeMessage(TransportMessage message)
{
    if (message.Headers == null)
        message.Headers = new Dictionary<string, string>();

    if (!message.Headers.ContainsKey(Idforcorrelation))
        message.Headers.Add(Idforcorrelation, null);

    if (String.IsNullOrEmpty(message.Headers[Idforcorrelation]))
        message.Headers[Idforcorrelation] = message.IdForCorrelation;

    using (var stream = new MemoryStream())
    {
        var formatter = new BinaryFormatter();
        formatter.Serialize(stream, message);
        return stream.ToArray();
    }
}
If you want NServiceBus to correctly deserialize the message, make sure your third parties serialize it correctly.
I just had exactly the same problem and spent several hours figuring out how to solve it. Basically, the Azure timeout persister is only supported for Azure-hosted endpoints that use NServiceBus.Hosting.Azure. If you use the NServiceBus.Host process to host your endpoints, it uses the NServiceBus.Timeout.Hosting.Windows namespace classes. It initializes a TransactionalTransport with MSMQ, and that is where you get this message.
I used two methods to avoid it:
If you must use the As_Server endpoint configuration, you can use .DisableTimeoutManager() in your initialization (see the sketch after this list); it will skip the TimeoutDispatcher initialization completely.
Use the As_Client endpoint configuration; it doesn't use transactional mode for the transport, and the timeout dispatcher is not initialized.
There could be a way to inject the Azure timeout manager somehow, but I have not found it yet, and I actually need the As_Client behavior anyway, so it works fine for me.
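For the first option, a sketch of where .DisableTimeoutManager() would sit in the fluent configuration from this question (assuming the same v3-era API as the code above):

Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .AzureServiceBusMessageQueue()
    .IsTransactional(true)
    .MessageForwardingInCaseOfFault()
    .DisableTimeoutManager() // skip the TimeoutDispatcher initialization entirely
    .InMemorySubscriptionStorage()
    .UnicastBus();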
