How do I create an XA Compliant topic connection factory in WAS7? - websphere-7

I created a topic connection factory using the WebSphere MQ messaging provider in WAS7. When I lookup the JNDI name, I am given a factory object of type com.ibm.ejs.jms.JMSTopicConnectionFactoryHandle. This implements TopicConnectionFactory, but doesn't implement XATopicConnectionFactory.
In my topic connection factory, I have checked the box that says "Support distributed two phase commit protocol".
How do I create a topic connection factory that implements the XA interface? Do I need to? Does this JMSTopicConnectionFactoryHandle handle XA messages anyway?

XATopicConnectionFactory defines an API used between the JMS provider and the application server (more precisely, its transaction manager). The topic connection factory returned to the application, on the other hand, is a wrapper that deliberately does not implement that interface: applications are not expected to interact with XA features directly, because enlisting and coordinating the individual resources in a distributed transaction is the exclusive responsibility of the transaction manager. So you neither need to nor can obtain an XA factory yourself. With "Support distributed two phase commit protocol" checked, the container uses the XA-capable factory behind the wrapper and enlists the MQ connection in the global transaction automatically.

Related

Azure Service Bus alternative to transactional outbox pattern

Let's say I have this generic saga scenario (given three distinct microservices A, B, and C, communicating with messaging):
1. Service A
a. Performs operation A successfully
b. Communicates update with message A
2. Service B (after receiving message A)
a. Performs operation B successfully
b. Communicates update with message B
3. Service C (after receiving message B)
a. Fails to perform operation C
b. Communicates failure
4. Services A and B perform compensating actions
It is my understanding that while the entire workflow is only eventually consistent, you want the local steps (each a and b) to be transactionally consistent, so that you neither lose messages nor send messages for operation changes that were never persisted.
This is the problem that the transactional outbox pattern aims to solve, if I'm not mistaken.
In the context of .NET on Azure, using
EF Core
Azure Service Bus
Is there a way to get the same level of transactional security without saving the message to the database (i.e. not using a transactional outbox)?
I've seen a lot of System.Transactions mentions, but it's either being used for multiple database operations or multiple service bus operations, not database and service bus operations together.
Could something like this achieve the desired transactional consistency?
using (var ts = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // _dbContext.Database.EnlistTransaction(ts); <-- ?
    _dbContext.Blogs.Add(new Blog { Url = "http://blogs.msdn.com/dotnet" });
    _dbContext.SaveChanges();
    await _serviceBusSender.SendMessageAsync(new ServiceBusMessage());
    ts.Complete();
}
No, you can't achieve that: the database and Service Bus cannot enlist in a single transaction, because that would require a distributed transaction, which is undesirable (and generally unavailable) in a cloud environment.
To ensure your data and messaging operations share the same transactional guarantees, you need some form of persistence, such as the outbox pattern.
Frameworks such as NServiceBus provide outbox support with Azure Service Bus as the transport and SQL Server or a document database as the data store.
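To make the outbox idea concrete, here is a minimal, self-contained sketch of its two moving parts. Everything in it (Db, OutboxRelay, the method names) is hypothetical in-memory scaffolding: in a real system the atomic write would be a single EF Core SaveChanges covering both the entity table and the outbox table inside one local database transaction, and the relay would publish via ServiceBusSender.SendMessageAsync.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for the database. Both writes "commit" together,
// modeling one local DB transaction that covers the entity and outbox tables.
class Db
{
    public List<string> Blogs = new List<string>();
    public List<string> Outbox = new List<string>();

    public void SaveAtomically(string blog, string message)
    {
        Blogs.Add(blog);
        Outbox.Add(message);
    }
}

// Hypothetical background dispatcher: reads pending outbox rows, publishes
// them, and marks them dispatched only after the publish succeeds.
class OutboxRelay
{
    public List<string> Published = new List<string>();

    public void Dispatch(Db db)
    {
        foreach (var msg in db.Outbox.ToList())
        {
            Published.Add(msg);    // would be SendMessageAsync in real code
            db.Outbox.Remove(msg); // delete/flag the row after publish succeeds
        }
    }
}
```

Because dispatch happens after the local commit and may repeat on failure, delivery is at-least-once, so consumers must be idempotent.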

Difference between BrokeredMessage class in Microsoft.ServiceBus and Message class in Microsoft.Azure.ServiceBus

I'm getting started with Azure Service Bus. From the references I've found online, people seem to use the BrokeredMessage class in Microsoft.ServiceBus.Messaging rather than the Message class in Microsoft.Azure.ServiceBus.
BrokeredMessage class: https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.brokeredmessage?view=azure-dotnet
Message class: https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.message?view=azure-dotnet
I can send both message 'types' to Azure Service Bus and work with them there, and both can be used in asynchronous operations. What are the main differences between the two types?
[Update] This article gives best practices for Azure Service Bus when exchanging brokered messages (https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-performance-improvements). I'm unsure whether it also applies to Message in Microsoft.Azure.ServiceBus.
For a new project using Azure Service Bus, I would recommend the following:
Prefer the new .NET Standard client (Microsoft.Azure.ServiceBus) with Message.
Be careful with documentation and other resources: most of it still targets the old client (hopefully Microsoft's docs will catch up soon).
If you need transport transactions spanning different entities, the new client can't provide that yet.
If you need management operations, the new client won't provide them, ever. Instead you'd have to use the Management library or wait until a replacement package for NamespaceManager is out.
If you have old systems emitting messages whose bodies are serialized objects rather than a Stream, implementations using the new client have to know about it and use the interop extension method the client provides to read those messages; the new client itself only deals with stream/byte-based bodies.
As Gaurav Mantiri mentioned, Microsoft.Azure.ServiceBus is the newer version of the library, built using .NET Standard.
You can find more detail on GitHub:
This is the next generation Service Bus .NET client library that focuses on queues & topics. This library is built using .NET Standard 1.3.
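One concrete difference worth sketching: new BrokeredMessage(someObject) serialized the body with a DataContractSerializer over a binary XML writer, whereas the new Message just carries raw bytes, and the interop extension method exists to undo that old encoding. The helper below is my own illustration of that encoding using only the BCL serializer; it mirrors what the clients do, but it is not the library code itself.

```csharp
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

static class BodyInterop
{
    // Roughly what the old client did for new BrokeredMessage("hello"):
    // DataContractSerializer over a binary XML writer.
    public static byte[] SerializeLikeBrokeredMessage<T>(T body)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        using (var writer = XmlDictionaryWriter.CreateBinaryWriter(stream))
        {
            serializer.WriteObject(writer, body);
            writer.Flush();
            return stream.ToArray();
        }
    }

    // Roughly what the interop GetBody<T>() extension has to undo when the
    // new client hands you the raw bytes of such a message.
    public static T DeserializeInteropBody<T>(byte[] raw)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream(raw))
        using (var reader = XmlDictionaryReader.CreateBinaryReader(stream, XmlDictionaryReaderQuotas.Max))
        {
            return (T)serializer.ReadObject(reader);
        }
    }
}
```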

Is it a good practice to re-create a Topic Client before sending a message to Azure Topic

I am using Microsoft.Azure.ServiceBus, Version=2.0.0.0 assembly to connect to Azure Topics. The code is below
public async Task SendMessageAsync(Message brokeredMessage)
{
    var topicClient = new TopicClient(_configuration.ConnectionString, topicName, _defaultRetryPolicy);
    await topicClient.SendAsync(brokeredMessage);
    await topicClient.CloseAsync();
}
I was wondering whether it's a good practice to create the Topic Client every time I need to send a message to the topic, or should I create the Topic Client on application startup and keep using the same client every time I need to send a message?
Are there any performance or scalability issues that I need to consider?
From Azure Service Bus Best Practices post:
Reusing factories and clients
Service Bus client objects, such as QueueClient or MessageSender, are created through a MessagingFactory object, which also provides internal management of connections. You should not close messaging factories or queue, topic, and subscription clients after you send a message, and then re-create them when you send the next message. Closing a messaging factory deletes the connection to the Service Bus service, and a new connection is established when recreating the factory. Establishing a connection is an expensive operation that you can avoid by re-using the same factory and client objects for multiple operations. You can safely use the QueueClient object for sending messages from concurrent asynchronous operations and multiple threads.
Based on this, you should be reusing the Topic Client object.
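A common way to apply this in application code is a lazily created, shared client. The ITopicClient/FakeTopicClient names below are hypothetical placeholders so the sketch stays self-contained; in real code you would hold one TopicClient for the process lifetime instead of constructing one per send.

```csharp
using System;
using System.Threading;

// Hypothetical abstraction standing in for the real TopicClient.
interface ITopicClient { void Send(string message); }

class FakeTopicClient : ITopicClient
{
    // Counts expensive "connection established" events for the demo.
    public static int ConnectionsOpened;
    public FakeTopicClient() { Interlocked.Increment(ref ConnectionsOpened); }
    public void Send(string message) { /* would go over the shared connection */ }
}

class MessagePublisher
{
    // Lazy<T> gives thread-safe, one-time creation; every send reuses the
    // same client instead of recreating (and reconnecting) per message.
    private static readonly Lazy<ITopicClient> Client =
        new Lazy<ITopicClient>(() => new FakeTopicClient());

    public static void SendMessage(string message) => Client.Value.Send(message);
}
```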

Sharing Azure Service Fabric Reliable Queues Between Services

I'm diving into Service Fabric (from the Cloud Services world) and am hitting a few speed bumps with how ReliableQueues work.
Let's say I have 2 stateful services StatefulService1 and StatefulService2.
If I need to have StatefulService1 send a message in a queue that StatefulService2 will pick up and read am I able to use ReliableQueues or are ReliableQueues isolated within the service they are created in?
If that is the case then what is the purpose of having ReliableQueues? The usual pattern behind them is for another process to act on the messages. I understand why isolating a Dictionary to a service would make sense, but not a queue...
Is my best option to rely on a traditional approach to send this message such as a Storage Queue or does ServiceFabric offer a solution for passing message queues between services?
UPDATE
Just want to clarify that I did attempt to dequeue, from within StatefulService2, a message created in StatefulService1, and it came up empty. Dequeuing from within StatefulService1 worked fine, as expected.
Reliable Collections are in-memory data structures that are not intended for inter-service communication. If you would like to establish a communication channel between StatefulService1 and StatefulService2, you have the following options:
Use Communication Listeners. You can have custom listeners for the protocol of your choice, including HTTP, WCF or your custom protocol. You can read more about it in this section. For example, StatefulService2 can open up an HTTP endpoint that StatefulService1 can POST/GET to.
Use an external queuing system, like Service Bus, Event Hubs or Kafka, where StatefulService1 can post events. StatefulService2 can be a consumer service that consumes events from the queue and processes them.
Reliable Collections (queue and dictionary) are not intended for communication. Queue access is coordinated with two-phase commit, so only one transaction can work with the queue at any point in time. Note also that with partitioned stateful services, data lives within a single partition: to see the same data, callers would have to target the same partition, and different partitions cannot access each other's data.
Relying on either traditional methods or implementing your own communication listener is the way to go. With the traditional way - keep in mind that you'll need to decide if you want to partition your queues just like your services are or not.
I don't see why a service can't host a reliable collection/queue, and other services can access it via one of three transports: Remoting, WCF and HTTP.
Obviously, the reliable service will have to expose the collection/queue via an API or implement an IService interface
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-connect-and-communicate-with-services
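To illustrate that layout, here is a tiny, self-contained sketch: one service owns the queue and exposes enqueue/dequeue operations that other services would reach over remoting or HTTP. The Queue<string> and plain method calls are hypothetical stand-ins for IReliableQueue<string> and a remoting proxy; none of this is Service Fabric API.

```csharp
using System.Collections.Generic;

// Stand-in for the stateful service that owns the reliable queue.
class QueueOwnerService
{
    // Would be an IReliableQueue<string> obtained from the StateManager.
    private readonly Queue<string> _queue = new Queue<string>();

    // These two operations are what you'd expose via an IService remoting
    // interface (or an HTTP endpoint) so other services can reach the queue.
    public void Enqueue(string item) => _queue.Enqueue(item);

    public bool TryDequeue(out string item)
    {
        if (_queue.Count > 0) { item = _queue.Dequeue(); return true; }
        item = null;
        return false;
    }
}
```

The design point is that only the owning service ever touches the queue directly; everyone else goes through its API, which sidesteps the isolation the original poster ran into.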
You have to add a fault-handling retry pattern to your calling code (see https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-communication); with that in place, you don't need a queue to hold data between service calls.

Do I need a retry policy to handle transient faults for Service Fabric internal communication?

We use Azure Service Fabric with Reliable Services and Actors, IoT Hub, and Web APIs, and are currently integrating "Transient Fault Handling" (TFH) to deal with errors during (remote) communication between services.
For Azure Storage and SQL it is already implemented: we use the built-in retry policies, and they work fine.
But what about Service Fabric internal communication? There are also services communicating via the remoting mechanism.
Here are my questions:
Do we need to handle transient faults for the communication between Reliable Services and Reliable Actors in Service Fabric?
If so - how could this be done? Is the Transient Fault Handling Application Block the only way to implement retry policies for the internal communication?
If not - how does Service Fabric handle transient faults?
Additional information I already gathered:
This article about communication between services describes a typical fault-handling retry pattern for inter-service communication. But instead of ICommunicationClientFactory and ICommunicationClient, we use Service Remoting, and I could not figure out how to apply this fault handling with Service Remoting.
Late answer, but maybe people are still looking for answers... Anyhow, Service Fabric remoting has default transient fault handling built in (and handling for non-transient faults as well). Via OperationRetrySettings you can customize the retry behaviour, and via FabricTransportSettings other transport properties. Here is an example of how you can customize these settings:
// Transport-level settings, e.g. how long a single remoting call may take.
FabricTransportSettings transportSettings = new FabricTransportSettings
{
    OperationTimeout = TimeSpan.FromSeconds(30)
};

// Max retry back-off for transient errors, back-off for non-transient
// errors, and the maximum number of retry attempts.
var retrySettings = new OperationRetrySettings(TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(1), 5);

var clientFactory = new Microsoft.ServiceFabric.Services.Remoting.FabricTransport.Client.FabricTransportServiceRemotingClientFactory(transportSettings);

// The proxy factory applies the retry settings to every proxy it creates.
var serviceProxyFactory = new Microsoft.ServiceFabric.Services.Remoting.Client.ServiceProxyFactory((c) => clientFactory, retrySettings);

var client = serviceProxyFactory.CreateServiceProxy<IXyzService>(new Uri("fabric:/Xyz/Service"));
return client;
hth //Peter
