Pub/Sub using RabbitMQ - ServiceStack

I'm trying to figure out how to implement pub/sub using the ServiceStack MQ abstraction.
Let's say I have a publisher app publishing a Hello request that will have n subscribers (different apps):
// Publisher
namespace Publisher
{
public class RabbitPublisherAppHost : AppHostHttpListenerBase
{
public RabbitPublisherAppHost() : base("Rabbit Publisher Server", typeof(MainClass).Assembly) { }
public override void Configure(Container container)
{
Routes
.Add<Publish>("/publish/{Text}");
container.Register<IMessageService>(c => new RabbitMqServer());
var mqServer = container.Resolve<IMessageService>();
mqServer.Start();
}
}
}
namespace Publisher
{
public class PublishService : Service
{
public IMessageService MessageService { get; set; }
public void Any(Publish request)
{
PublishMessage(new Hello{Id=request.Text});
}
}
}
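For completeness, the request DTOs used above would look something like this (a sketch; the property types and the IReturnVoid marker are inferred from the route and the handler):
// Hypothetical DTOs implied by the sample above.
public class Publish : IReturnVoid
{
    public string Text { get; set; }
}

public class Hello
{
    public string Id { get; set; }
}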
Then I create the first subscriber:
// Sub1
public class RabbitSubscriberAppHost : AppHostHttpListenerBase
{
public RabbitSubscriberAppHost() : base("Rabbit Subscriber 1", typeof(MainClass).Assembly) { }
public override void Configure(Container container)
{
container.Register<RabbitMqServer>(c => new RabbitMqServer());
var mqServer = container.Resolve<RabbitMqServer>();
mqServer.RegisterHandler<Hello>(ServiceController.ExecuteMessage, noOfThreads: 3);
mqServer.Start();
}
}
namespace Subscriber1
{
public class HelloService : Service
{
public object Any(Hello req)
{
//..
}
}
}
Now if I create a similar app acting as the second subscriber, the two subscribers share the same queue, so instead of pub/sub a race condition happens: each message goes to only one of them.
In other words, what has to be done to implement a registration for each subscriber? I'd like all subscribers to receive the published Hello request, not just whichever one wins the race.

ServiceStack's Messaging API follows the Services Request / Reply pattern (i.e. essentially using MQ instead of HTTP transport) and doesn't support Pub/Sub itself.
For RabbitMQ Pub/Sub you would have to implement it outside of ServiceStack's Messaging API abstraction, e.g. by publishing to a RabbitMQ topic/exchange from within your Service implementation.
Relatedly, ServiceStack does include a Pub/Sub library using Redis. Also, depending on your use case, you may be able to notify multiple subscribers with Server Events.
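For example, here's a minimal sketch of publishing to a RabbitMQ exchange from inside the Service using the RabbitMQ.Client library directly. The hello.pubsub exchange name is an assumption, as is serializing the DTO as JSON; a fanout exchange is used for simplicity, and a topic exchange with routing keys works the same way:
using RabbitMQ.Client;
using ServiceStack;
using ServiceStack.Text;

public class PublishService : Service
{
    public void Any(Publish request)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // A fanout exchange copies each message to every queue bound to it.
            channel.ExchangeDeclare("hello.pubsub", ExchangeType.Fanout, durable: true);
            var body = new Hello { Id = request.Text }.ToJson().ToUtf8Bytes();
            channel.BasicPublish(exchange: "hello.pubsub", routingKey: "", basicProperties: null, body: body);
        }
    }
}
Each subscriber then declares its own queue, binds it to the exchange, and consumes from that queue, so every node receives its own copy of the Hello message instead of competing on a shared queue.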

Related

Azure Service Bus - Subscribe multiple topics inside the same worker/hosted service

We have a scenario where we must integrate requests with the same destination system, which exposes its operations as REST APIs (provided by a third party, most likely not Azure). So this is a scenario where n messages map to n actions on the same destination system. There is no multicast or broadcast.
We are considering Service Bus to achieve this, based on previous experience with other use cases, and to take advantage of the dead-letter mechanism among other things.
We need to integrate 6 or 7 different actions with the third party. On Service Bus we can achieve this by creating one topic per action, which matters because the data that travels on the message differs from action to action.
But we are facing a question when consuming topics. We are able to have a hosted service in Azure (App Service) that listens on a specific topic and does its work.
Since we want to listen on several topics, we would like to avoid writing and deploying multiple App Services. If possible, we would like a single App Service where we 'trigger' each ServiceBusProcessor (one per topic); even though they all share the limits of the App Service itself, each processor would listen on its own topic and process independently, in parallel.
I'll share a code sample of our hosted service below, but we found two options and would like opinions on them:
Option 1: send all messages to the same topic, then use filters to determine the appropriate action (a rough sketch of such a filter is included after the code at the end of this question). This keeps the code simple, but it puts all messages in the same 'line' and turns the topic into an all-purpose topic, which seems wrong.
Option 2: based on our sample below, which is a single hosted service listening on a single topic, break it up and inject a list of listeners implementing the same interface, each working independently on its own topic and message type. We are not sure whether this is feasible and works properly, because the App Service would have to handle multiple ServiceBusProcessors side by side.
We'd like to know if we are missing an option, or if there is a better way to achieve this. Hope I've explained it well.
I include a sample of our hosted service below. Thanks a lot.
public class MyService : IHostedService, IMyService
{
private ILogger<MyService> _logger;
public MyService(ILogger<MyService> logger)
{
_logger = logger;
}
public Task StartAsync(CancellationToken cancellationToken)
{
ServiceBusClient client = new ServiceBusClient("connectionString");
ServiceBusProcessor processor = client.CreateProcessor("topicName", "subscriptionName");
processor.ProcessMessageAsync += ProcessMessageAsync;
processor.ProcessErrorAsync += ProcessErrorAsync;
_logger.LogInformation("Listener initialized");
// Start pumping messages; the processor does not begin listening until this is called.
return processor.StartProcessingAsync(cancellationToken);
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
public async Task ProcessMessageAsync(ProcessMessageEventArgs args)
{
var body = args.Message.Body;
// Do stuff with this body...
await args.CompleteMessageAsync(args.Message);
}
public Task ProcessErrorAsync(ProcessErrorEventArgs args)
{
_logger.LogError($"Error occurred: {args.Exception} with message: {args.Exception.Message}");
return Task.CompletedTask;
}
}
Then at ConfigureServices:
services.AddHostedService<MyService>();
So, following Option 2, the sample above would be transformed into the following, considering two listeners:
public interface IMyService
{
}
public interface IMyListener
{
Task Initialize();
Task ProcessMessageAsync(ProcessMessageEventArgs args);
Task ProcessErrorAsync(ProcessErrorEventArgs args);
}
public class BaseListener
{
private string _connectionString;
private string _topicName;
private string _subscriptionName;
private ILogger<BaseListener> _logger;
public BaseListener(ILogger<BaseListener> logger, string connectionString, string topicName, string subscriptionName)
{
this._connectionString = connectionString;
this._topicName = topicName;
this._subscriptionName = subscriptionName;
this._logger = logger;
}
public Task Initialize()
{
ServiceBusClient client = new ServiceBusClient(this._connectionString);
ServiceBusProcessor processor = client.CreateProcessor(this._topicName, this._subscriptionName);
processor.ProcessMessageAsync += ProcessMessageAsync;
processor.ProcessErrorAsync += ProcessErrorAsync;
_logger.LogInformation("Listener initialized");
// Start pumping messages; the processor does not begin listening until this is called.
return processor.StartProcessingAsync();
}
public async Task ProcessMessageAsync(ProcessMessageEventArgs args)
{
var body = args.Message.Body;
// Do stuff with this body...
await args.CompleteMessageAsync(args.Message);
}
public Task ProcessErrorAsync(ProcessErrorEventArgs args)
{
return Task.CompletedTask;
}
}
public class MyListener1: BaseListener, IMyListener
{
public MyListener1(ILogger<MyListener1> logger) : base(logger, "connectionString", "topic1", "subscription")
{
}
}
public class MyListener2 : BaseListener, IMyListener
{
public MyListener2(ILogger<MyListener2> logger) : base(logger, "connectionString", "topic2", "subscription")
{
}
}
public class MyService : IHostedService, IMyService
{
private ILogger<MyService> _logger;
private IEnumerable<IMyListener> _listeners;
public MyService(ILogger<MyService> logger, IEnumerable<IMyListener> listeners)
{
_logger = logger;
_listeners = listeners;
}
public async Task StartAsync(CancellationToken cancellationToken)
{
foreach (var listener in this._listeners)
{
// Await each listener so startup failures are not silently dropped.
await listener.Initialize();
}
_logger.LogInformation("Listeners initialized");
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
}
And on ConfigureServices:
services.AddHostedService<MyService>();
services.AddSingleton<IMyListener, MyListener1>();
services.AddSingleton<IMyListener, MyListener2>();
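For reference, the filter-based routing sketched in Option 1 would look roughly like this; the actions topic name, the subscription names, and the action application property are illustrative assumptions only:
// Requires the Azure.Messaging.ServiceBus package.
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("connectionString");

// One subscription per action on a single topic, each with a SQL filter
// on an application property that the publisher sets.
await admin.CreateSubscriptionAsync(
    new CreateSubscriptionOptions("actions", "action-1"),
    new CreateRuleOptions("action-1-rule", new SqlRuleFilter("action = 'action-1'")));

// Publishers would then set the property before sending:
// message.ApplicationProperties["action"] = "action-1";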

No publisher available to publish TcpConnectionOpenEvent / TcpConnectionCloseEvent

I configured a TCP Client with the Java DSL of Spring Integration. It looks like this
@Bean
public TcpSendingMessageHandler tcpClient()
{
return Tcp
.outboundAdapter(
Tcp.nioClient("localhost", 9060)
.deserializer(new ByteArrayLfSerializer())
.soKeepAlive(false)
.leaveOpen(false)
.taskExecutor(Executors.newSingleThreadExecutor())
.get()
)
.clientMode(false)
.get();
}
And I am using it in a Service to send messages to the TCP socket the client is connected to:
@Slf4j
@Service
public class TcpClientConnectionService
{
private final TcpSendingMessageHandler messageHandler;
@Autowired
public TcpClientConnectionService(final TcpSendingMessageHandler messageHandler)
{
this.messageHandler = messageHandler;
this.messageHandler.start();
}
public void sendMessage(final String message)
{
messageHandler.handleMessage(new GenericMessage<>(message));
log.debug("Message: " + message + " sent");
}
}
But in production I am getting the following warning rather regularly, and I do not know what the issue is or how to fix it:
o.s.i.i.tcp.connection.TcpNioConnection : No publisher available to publish TcpConnectionOpenEvent
o.s.i.i.tcp.connection.TcpNioConnection : No publisher available to publish TcpConnectionCloseEvent
It would be great if somebody could help me out since I was not able to find anything by googling.
The nested factory is not initialized properly because you are incorrectly calling .get() on the spec, which subverts Spring initialization.
Remove the inner .get() so Spring can initialize the nested connection factory properly:
@Bean
public TcpSendingMessageHandler tcpClient()
{
return Tcp
.outboundAdapter(
Tcp.nioClient("localhost", 9060)
.deserializer(new ByteArrayLfSerializer())
.soKeepAlive(false)
.leaveOpen(false)
.taskExecutor(Executors.newSingleThreadExecutor()))
.clientMode(false)
.get();
}
Or move the factory definition to a top-level @Bean, for example:
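A rough sketch of that alternative, using plain factory configuration rather than the DSL spec and assuming the same host, port, and serializer as above:
@Bean
public TcpNioClientConnectionFactory clientConnectionFactory() {
    TcpNioClientConnectionFactory factory = new TcpNioClientConnectionFactory("localhost", 9060);
    factory.setDeserializer(new ByteArrayLfSerializer());
    factory.setSoKeepAlive(false);
    factory.setLeaveOpen(false);
    factory.setTaskExecutor(Executors.newSingleThreadExecutor());
    return factory;
}

@Bean
public TcpSendingMessageHandler tcpClient(TcpNioClientConnectionFactory clientConnectionFactory) {
    // The handler now references a connection factory that Spring has fully initialized.
    TcpSendingMessageHandler handler = new TcpSendingMessageHandler();
    handler.setConnectionFactory(clientConnectionFactory);
    handler.setClientMode(false);
    return handler;
}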

Abstracting Spring Cloud Stream Producer and Consumer code

I have a Service that is producing and consuming messages on different Spring Cloud Stream channels (bound to Event Hub/Kafka topics). There are several such Services which are set up similarly.
The configuration looks like the following:
public interface MessageStreams {
String WORKSPACE = "workspace";
String UPLOADNOTIFICATION = "uploadnotification";
String BLOBNOTIFICATION = "blobnotification";
String INGESTIONSTATUS = "ingestionstatusproducer";
@Input(WORKSPACE)
SubscribableChannel workspaceChannel();
@Output(UPLOADNOTIFICATION)
MessageChannel uploadNotificationChannel();
@Input(BLOBNOTIFICATION)
SubscribableChannel blobNotificationChannel();
@Output(INGESTIONSTATUS)
MessageChannel ingestionStatusChannel();
}
@EnableBinding(MessageStreams.class)
public class EventHubStreamsConfiguration {
}
The Producer/Publisher code looks like the following:
@Service
@Slf4j
public class IngestionStatusEventPublisher {
private final MessageStreams messageStreams;
public IngestionStatusEventPublisher(MessageStreams messageStreams) {
this.messageStreams = messageStreams;
}
public void sendIngestionStatusEvent() {
log.info("Sending ingestion status event");
System.out.println("Sending ingestion status event");
MessageChannel messageChannel = messageStreams.ingestionStatusChannel();
boolean messageSent = messageChannel.send(MessageBuilder
.withPayload(IngestionStatusMessage.builder()
.correlationId("some-correlation-id")
.status("done")
.source("some-source")
.eventTime(OffsetDateTime.now())
.build())
.setHeader("tenant-id", "some-tenant")
.build());
log.info("Ingestion status event sent successfully {}", messageSent);
}
}
Similarly, I have multiple other publishers which publish to different Event Hubs/topics. Notice that a tenant-id header is set for each published message; this is specific to my multi-tenant application and is used to track the tenant context. Also notice that I look up the channel to publish to while sending the message.
My Consumer code looks like the following:
@Component
@Slf4j
public class IngestionStatusEventHandler {
private AtomicInteger eventCount = new AtomicInteger();
@StreamListener(MessageStreams.INGESTIONSTATUS)
public void handleEvent(@Payload IngestionStatusMessage message, @Header(name = "tenant-id") String tenantId) throws Exception {
log.info("New ingestion status event received: {} in Consumer: {}", message, Thread.currentThread().getName());
// set the tenant context as thread local from the header.
}
}
Again, I have several such consumers, and in each consumer a tenant context is set based on the incoming tenant-id header sent by the publisher.
My questions are:
How do I get rid of the boilerplate of setting the tenant-id header in the publisher and setting the tenant context in the consumer, by abstracting it into a library that could be included in all the different services I have?
Also, is there a way to dynamically identify the channel based on the type of the message being published, e.g. IngestionStatusMessage.class in the given scenario?
To set the tenant-id header in common code and avoid copy/pasting it in every microservice, you can use a ChannelInterceptor and make it global with @GlobalChannelInterceptor and its patterns option.
See more info in Spring Integration: https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/core.html#channel-interceptors
https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/overview.html#configuration-enable-integration
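A minimal sketch of such an interceptor (the TenantContext holder is a hypothetical helper from your shared library, not a Spring class):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.GlobalChannelInterceptor;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.MessageBuilder;

@Configuration
public class TenantHeaderConfiguration {

    @Bean
    @GlobalChannelInterceptor(patterns = "*") // apply to every message channel
    public ChannelInterceptor tenantIdInterceptor() {
        return new ChannelInterceptor() {
            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                // Add the tenant-id header once, in shared code, instead of in every publisher.
                return MessageBuilder.fromMessage(message)
                        .setHeaderIfAbsent("tenant-id", TenantContext.getCurrentTenant())
                        .build();
            }
        };
    }
}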
You can't make a channel selection by the payload type because the payload type is really determined from the @StreamListener method signature.
You can try to have a general @Router with a Message<?> expectation and then return a particular channel name to route according to that request message context.
See https://docs.spring.io/spring-integration/docs/5.3.0.BUILD-SNAPSHOT/reference/html/message-routing.html#messaging-routing-chapter
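And a minimal sketch of such a router, routing on the payload type to the binding names from the question (the outboundEvents input channel is a hypothetical upstream channel):
import org.springframework.integration.annotation.Router;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class EventChannelRouter {

    // The returned string is the name of the channel the message is routed to.
    @Router(inputChannel = "outboundEvents")
    public String route(Message<?> message) {
        if (message.getPayload() instanceof IngestionStatusMessage) {
            return MessageStreams.INGESTIONSTATUS;
        }
        return MessageStreams.UPLOADNOTIFICATION;
    }
}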

How can a microservice talk to another microservice in JHipster

I am planning to create a microservice application with a dedicated service for dealing with data (mostly a MongoDB-based service). I am wondering if there is a way for my other microservices to communicate with this service to make use of the shared data. Is this possible with the JHipster API Gateway?
If not, how can I achieve this? I don't want to keep multiple copies of the same data within each microservice.
You can also use Feign clients with JHipster.
Annotate your @SpringBootApplication class with @EnableFeignClients:
...
import org.springframework.cloud.openfeign.EnableFeignClients;
...
@SpringBootApplication
@EnableConfigurationProperties({LiquibaseProperties.class, ApplicationProperties.class})
@EnableDiscoveryClient
@EnableFeignClients
public class MyApp {
...
}
Create a Feign client in your microservice
...
import org.springframework.cloud.openfeign.FeignClient;
...
@FeignClient("another-service")
public interface AnotherClient {
@RequestMapping(method = RequestMethod.GET, value = "/api/another")
List<AnotherDTO> getAll();
}
Inject the Feign client with @Autowired and call it. It should be ready to use.
@RestController
@RequestMapping("/api")
public class MyResource {
...
@Autowired
private AnotherClient anotherClient;
...
@GetMapping("/another")
@Timed
public List<AnotherDTO> getAll() {
log.debug("REST request to get all");
return anotherClient.getAll();
}
}
For us, it worked without implementing a ClientHttpRequestInterceptor and setting a JWT token.
You can register your microservices to the same registry and then they can call each other.
UPDATE: Here is how I made it work.
In the microservice consuming the data, use a RestTemplate with the current user's JWT token in the Authorization header for the API calls:
@Component
public class AuthenticateClientHttpRequestInterceptor implements ClientHttpRequestInterceptor {
@Override
public ClientHttpResponse intercept(HttpRequest httpRequest, byte[] bytes, ClientHttpRequestExecution clientHttpRequestExecution) throws IOException {
String token = SecurityUtils.getCurrentUserJWT();
httpRequest.getHeaders().add("Authorization","Bearer "+token);
return clientHttpRequestExecution.execute( httpRequest, bytes );
}
}
My custom RestTemplate uses the ClientHttpRequestInterceptor to add the token to the header:
@Configuration
public class CustomBean {
@Autowired
AuthenticateClientHttpRequestInterceptor interceptor;
@Bean
@LoadBalanced
public RestTemplate restTemplate() {
RestTemplate restTemplate = new RestTemplate();
restTemplate.setInterceptors(Collections.singletonList(interceptor));
return restTemplate;
}
}
And in the resource controller where you are making the call for data:
@RestController
@RequestMapping("/api")
public class DataResource {
@Autowired
RestTemplate restTemplate;
@PostMapping("/hello")
@Timed
public ResponseEntity<List<Data>> createHello(@RequestBody Hello hello) throws URISyntaxException {
// The name the data microservice registered with the JHipster Registry
String dataServiceName = "data_micro_service";
URI uri = UriComponentsBuilder.fromUriString("//" + dataServiceName + "/api/datas")
.build()
.toUri();
// Call the data microservice APIs; the load-balanced RestTemplate resolves the service name.
List<Data> result = Arrays.asList(restTemplate.getForObject(uri, Data[].class));
return ResponseEntity.ok(result);
}
}
Typically microservices talk to each other. That's the whole point. With Eureka discovery in place you simply call the microservice by name instead of the FQDN you would normally use without microservices.
For example, your book-service will call the author-service like this:
http://author-service/authors
full example here https://spring.io/blog/2015/01/20/microservice-registration-and-discovery-with-spring-cloud-and-netflix-s-eureka
Please don't forget that JHipster is an opinionated framework based on Spring Cloud, so you can find most of this stuff by searching the Spring docs.
You can use the solution below:
Say there are microservice A (i.e. UAA-SERVICE) and microservice B.
Microservice B wants to connect to microservice A and call its services with a Feign client.
1) This is the code for microservice B.
The client proxy, annotated with @AuthorizedFeignClient(name = "UAA-SERVICE"):
@AuthorizedFeignClient(name = "UAA-SERVICE")
public interface UaaServiceClient {
@RequestMapping(method = RequestMethod.GET, path = "api/users")
public List<UserDTO> getUserList();
@RequestMapping(method = RequestMethod.PUT, path = "api/user-info")
public String updateUserInfo(@RequestBody UserDTO userDTO);
}
UAA-SERVICE: you can find this name among the running application instances in the registry.
2) In microservice B (application.yml):
Increase the Feign client connection timeout:
feign:
  client:
    config:
      default:
        connectTimeout: 10000
        readTimeout: 50000
Increase the Hystrix thread timeout:
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 60000
  shareSecurityContext: true
3) Add @EnableFeignClients to your main @SpringBootApplication class.
This solution is working fine for me.

ServiceStack Message Filtering

I have been using the ServiceStack MQ Server/Client to empower a message based architecture in my platform and it has been working flawlessly. I am now trying to do something that I do not believe is supported by the SS Message Producer/Consumer.
Essentially I am firing off messages (events) at a centralized data center, and I have ~2000 decentralized nodes all over the US, over an unreliable network, that potentially need to know about this event, BUT each event needs to be targeted to only one of the ~2000 nodes. I need the flexibility of arbitrarily named channels from Pub/Sub but the durability of the MQ. I started off with Pub/Sub, but the network is too unreliable, so I have moved the solution to use the RedisMQServer. I have it working but wanted to make sure I am not missing something in the interface. I am curious whether the creators of SS have thought through this use case, and if so, what the outcome of that discussion was. This does fight against the concept of using POCOs to drive the outcomes/actions of message consumption. Maybe that is the reason?
Here is my producer
public ExpressLightServiceResponse Get(ExpressLightServiceRequest query)
{
var result = new ExpressLightServiceResponse();
var assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(new AssemblyName("ArbitaryNamespace"), AssemblyBuilderAccess.Run);
var moduleBuilder = assemblyBuilder.DefineDynamicModule("ModuleName");
var typeBuilder = moduleBuilder.DefineType(string.Format("EventA{0}", query.Store), TypeAttributes.Public);
typeBuilder.DefineDefaultConstructor(MethodAttributes.Public);
var newType = typeBuilder.CreateType();
using (var messageProducer = _messageService.CreateMessageProducer())
{
var message = MessageFactory.Create(newType.CreateInstance());
messageProducer.Publish(message);
}
return result;
}
Here is my consumer
public class ServerAppHost : AppHostHttpListenerBase
{
private readonly string _store;
public string StoreQueue => $"EventA{_store}";
public ServerAppHost(string store) : base("Express Light Server", typeof(PubSubServiceStatsService).Assembly)
{
_store = store;
}
public override void Configure(Container container)
{
container.Register<IRedisClientsManager>(new PooledRedisClientManager(ConfigurationManager.ConnectionStrings["Redis"].ConnectionString));
var assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(new AssemblyName("ArbitaryNamespace"), AssemblyBuilderAccess.Run);
var moduleBuilder = assemblyBuilder.DefineDynamicModule("ModuleName");
var typeBuilder = moduleBuilder.DefineType(StoreQueue, TypeAttributes.Public);
typeBuilder.DefineDefaultConstructor(MethodAttributes.Public);
var newType = typeBuilder.CreateType();
var mi = typeof(Temp).GetMethod("Foo");
var fooRef = mi.MakeGenericMethod(newType);
fooRef.Invoke(new Temp(container.Resolve<IRedisClientsManager>()), null);
}
}
public class Temp
{
private readonly IRedisClientsManager _redisClientsManager;
public Temp(IRedisClientsManager redisClientsManager)
{
_redisClientsManager = redisClientsManager;
}
public void Foo<T>()
{
var mqService = new RedisMqServer(_redisClientsManager);
mqService.RegisterHandler<T>(DoWork);
mqService.Start();
}
private object DoWork<T>(IMessage<T> arg)
{
//Do work
return null;
}
}
What this gives me is the flexibility of Pub/Sub with the durability of a Queue. Does anyone see/know of a more "native" way to achieve this?
There should only be 1 MQ Host registered in your AppHost, so I'd first remove it from your wrapper class and have the wrapper just register the handler, e.g:
public override void Configure(Container container)
{
//...
container.Register<IMessageService>(
c => new RedisMqServer(c.Resolve<IRedisClientsManager>()));
var mqServer = container.Resolve<IMessageService>();
fooRef.Invoke(new Temp(mqServer), null);
mqServer.Start();
}
public class Temp
{
private readonly IMessageService mqServer;
public Temp(IMessageService mqServer)
{
this.mqServer = mqServer;
}
public void Foo<T>() => mqServer.RegisterHandler<T>(DoWork);
}
But this approach isn't a good fit for ServiceStack, which encourages the use of code-first messages that define the Service Contract clients and servers use to process the messages that are sent and received. So if you want to use ServiceStack for sending custom messages, I'd recommend either having a separate class per message, or having a generic type like SendEvent where the message or event type is a property on the class.
Otherwise, if you want to continue with custom messages, don't use RedisMqServer; you can just use a dedicated MQ like RabbitMQ, or if you prefer, use a Redis List directly, which is the data structure that all Redis MQs use underneath.
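A minimal sketch of those two suggestions together; the SendEvent shape, the redisManager and storeId variables, and the per-store list key are all assumptions for illustration:
// A single, well-known contract instead of a dynamically emitted type per store.
public class SendEvent
{
    public string EventType { get; set; }   // e.g. "EventA"
    public string Store { get; set; }       // target node identifier
    public string Payload { get; set; }     // serialized event body
}

// Producer: push the event onto the target store's own Redis list (durable and targeted).
using (var redis = redisManager.GetClient())
{
    redis.PushItemToList($"events:store:{storeId}",
        new SendEvent { EventType = "EventA", Store = storeId }.ToJson());
}

// Consumer on the node: block-pop from its own list.
// var json = redis.BlockingPopItemFromList($"events:store:{storeId}", TimeSpan.FromSeconds(30));
// var evt = json?.FromJson<SendEvent>();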
