What is the difference between Gateway and Service Activator? - spring-integration

What is the difference between Gateway and Service Activator as Message Endpoints (in terms of Enterprise Integration Patterns)?

http://eaipatterns.com/
Typically, a service activator is used to invoke a local service, in such a manner that the service doesn't know it's being invoked from a messaging system.
A gateway is typically an entry or exit point for the messaging system.

The service activator calls a method on an object where the application developer provides the implementation. Spring Integration takes care of calling the method with messages from the input channel and shunting the results off to some output channel. The application-provided code can do some arbitrary work.
For the gateway, the application developer provides only an interface; its implementation is provided by Spring.
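For example, here is a minimal sketch using Spring Integration annotations (the channel names, the Confirmation type and the OrderHandler bean are illustrative and not part of the Cafe sample discussed below):

@MessagingGateway(defaultRequestChannel = "orders")
public interface OrderGateway {
    // The developer writes only this interface; Spring generates the proxy
    // that turns each call into a Message sent to the "orders" channel.
    void placeOrder(Order order);
}

@Component
public class OrderHandler {
    // A service activator is plain application code; the framework delivers
    // messages from "orders" to this method and sends the return value
    // on to the "confirmations" channel.
    @ServiceActivator(inputChannel = "orders", outputChannel = "confirmations")
    public Confirmation handle(Order order) {
        return new Confirmation(order.getNumber());
    }
}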
An appendix to the Spring Integration documentation includes a Cafe example where Barista is a service called through a service activator, and Cafe is a gateway.
The application's main method looks up a Cafe object from the Spring application context and calls placeOrder on it, passing an Order as an argument:
public static void main(String[] args) {
    AbstractApplicationContext context = null;
    if (args.length > 0) {
        context = new FileSystemXmlApplicationContext(args);
    }
    else {
        context = new ClassPathXmlApplicationContext("cafeDemo.xml", CafeDemo.class);
    }
    Cafe cafe = (Cafe) context.getBean("cafe");
    for (int i = 1; i <= 100; i++) {
        Order order = new Order(i);
        order.addItem(DrinkType.LATTE, 2, false);
        order.addItem(DrinkType.MOCHA, 3, true);
        cafe.placeOrder(order);
    }
}
The Cafe is an interface that the application does not provide an implementation for. Spring generates an implementation that sends the Orders passed into it down the input channel called "orders".
Further down the pipeline, there are two service-activators that have a reference to the Barista. The Barista is a POJO that has code for creating a Drink like this:
public Drink prepareHotDrink(OrderItem orderItem) {
    try {
        Thread.sleep(this.hotDrinkDelay);
        System.out.println(Thread.currentThread().getName()
                + " prepared hot drink #" + hotDrinkCounter.incrementAndGet()
                + " for order #" + orderItem.getOrder().getNumber()
                + ": " + orderItem);
        return new Drink(orderItem.getOrder().getNumber(),
                orderItem.getDrinkType(),
                orderItem.isIced(), orderItem.getShots());
    }
    catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return null;
    }
}
The Barista receives drink orders from the service-activator's input channel and has a method called on it that returns a Drink, which gets sent down the service-activator's output channel, "preparedDrinks".

For me, the gateway is used to create an abstraction and provide a normalised API for one or more back-end services.
E.g. you have 5 providers that use different ways to interface with you (SOAP, REST, XML/HTTP, whatever), but your client wants only one way to get the data (let's say JSON/REST).
The gateway accepts the JSON request from your client, translates it to the right back end using that back end's own protocol, and then converts the back-end response to JSON before returning it to your client.
The service activator acts more as a trigger on an incoming message. Let's say your activator polls a database for incoming messages; when the activation condition is met, it calls the underlying service, as in the sketch below.
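A minimal annotation-based sketch of that trigger idea, assuming a DataSource and some MyService bean (the channel name, the table, the polling interval and the myService.process call are made up for the example):

@Bean
@InboundChannelAdapter(channel = "newRecords", poller = @Poller(fixedDelay = "5000"))
public MessageSource<Object> newRecordsSource(DataSource dataSource) {
    // Poll the table every 5 seconds; each result set becomes a message payload.
    return new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM inbox WHERE processed = 0");
}

@ServiceActivator(inputChannel = "newRecords")
public void onNewRecords(Message<?> message) {
    // The "activation": hand the polled rows to the underlying service.
    myService.process(message.getPayload());
}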

Related

What is it for redisQueueInboundGateway.setReplyChannelName

I just want to ask what redisQueueInboundGateway.setReplyChannelName is for.
I got a B log and then a log.
1. My question is: in what situation will the C log be printed when I set it on the RedisQueueInboundGateway?
2. The doc at "https://docs.spring.io/spring-integration/reference/html/redis.html#redis-queue-inbound-gateway" seems incorrect in its class names and explanations, for example:
2.1 the 'RedisOutboundChannelAdapter' is named 'RedisPublishingMessageHandler'.
2.2 the 'RedisQueueOutboundChannelAdapter' is named 'RedisQueueMessageDrivenEndpoint'.
2.3 the explanation of the Redis Queue Outbound Gateway is an exact copy of the Redis Queue Inbound Gateway's.
@GetMapping("test")
public void test() {
    this.teller.test("testing 1");
}

@Gateway(requestChannel = "inputA")
void test(String transaction);

@Bean("A")
PublishSubscribeChannel getA() {
    return new PublishSubscribeChannel();
}

@Bean("B")
PublishSubscribeChannel getB() {
    return new PublishSubscribeChannel();
}

@Bean("C")
PublishSubscribeChannel getC() {
    return new PublishSubscribeChannel();
}

@ServiceActivator(inputChannel = "A")
void aTesting(Message message) {
    System.out.println("A");
    System.out.println(message);
}

@ServiceActivator(inputChannel = "B")
String bTesting(Message message) {
    System.out.println("B");
    System.out.println(message);
    return message.getPayload() + "Basdfasdfasdfadsfasdf";
}

@ServiceActivator(inputChannel = "C")
void cTesting(Message message) {
    System.out.println("C");
    System.out.println(message);
}

@ServiceActivator(inputChannel = "inputA")
@Bean
RedisQueueOutboundGateway getRedisQueueOutboundGateway(RedisConnectionFactory connectionFactory) {
    val redisQueueOutboundGateway = new RedisQueueOutboundGateway(Teller.CHANNEL_CREATE_INVOICE, connectionFactory);
    redisQueueOutboundGateway.setReceiveTimeout(5);
    redisQueueOutboundGateway.setOutputChannelName("A");
    redisQueueOutboundGateway.setSerializer(new GenericJackson2JsonRedisSerializer(new ObjectMapper()));
    return redisQueueOutboundGateway;
}

@Bean
RedisQueueInboundGateway getRedisQueueInboundGateway(RedisConnectionFactory connectionFactory) {
    val redisQueueInboundGateway = new RedisQueueInboundGateway(Teller.CHANNEL_CREATE_INVOICE, connectionFactory);
    redisQueueInboundGateway.setReceiveTimeout(5);
    redisQueueInboundGateway.setRequestChannelName("B");
    redisQueueInboundGateway.setReplyChannelName("C");
    redisQueueInboundGateway.setSerializer(new GenericJackson2JsonRedisSerializer(new ObjectMapper()));
    return redisQueueInboundGateway;
}
Your concern is not clear.
2.1
There is a component (the pattern name) and there is a class in the background covering the logic.
Sometimes they are not the same.
So, the Redis Outbound Channel Adapter is covered by the RedisPublishingMessageHandler, just because there is a ConsumerEndpointFactoryBean to consume messages from the input channel and a RedisPublishingMessageHandler to handle them. In other words, the framework creates two beans to make such a Redis interaction work. In fact, all the outbound channel adapters (and gateways) are handled the same way: an endpoint plus a handler. Together they are called an adapter or a gateway, depending on the type of the interaction.
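As an illustration of that endpoint-plus-handler pairing, a minimal sketch (the "toRedis" channel and the topic name are made up for the example):

// Declaring the handler bean with @ServiceActivator is what makes the framework
// create the ConsumerEndpointFactoryBean + RedisPublishingMessageHandler pair
// that together act as the "Redis Outbound Channel Adapter".
@Bean
@ServiceActivator(inputChannel = "toRedis")
public RedisPublishingMessageHandler redisOutboundAdapter(RedisConnectionFactory connectionFactory) {
    RedisPublishingMessageHandler handler = new RedisPublishingMessageHandler(connectionFactory);
    handler.setTopic("some-topic");
    return handler;
}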
2.2
I don't see anything misleading like that in the docs.
2.3
That's not true.
See the difference:
Spring Integration introduced the Redis queue outbound gateway to perform request and reply scenarios. It pushes a conversation UUID to the provided queue,
Spring Integration 4.1 introduced the Redis queue inbound gateway to perform request and reply scenarios. It pops a conversation UUID from the provided queue
All the inbound gateways are supplied with an optional replyChannel to track the replies. It is not where this type of gateway is going to send something. It is quite the opposite: it is the place from which this inbound gateway takes the reply message that it sends back into Redis. The inbound gateway is initiated externally, in our case by a request message in the configured Redis list. When your integration flow does its work, it sends a reply message to this gateway. In most cases that is done automatically, using the replyChannel header from the message. But if you would like to track that reply, you add a PublishSubscribeChannel as that replyChannel option on the inbound gateway, and both your service activator and the gateway get the same message.
The behavior behind that replyChannel option is explained in the Messaging Gateway chapter: https://docs.spring.io/spring-integration/reference/html/messaging-endpoints.html#gateway-default-reply-channel
You are probably right about this section in the docs, https://docs.spring.io/spring-integration/reference/html/redis.html#redis-queue-inbound-gateway: those requestChannel and replyChannel descriptions are really a copy of the text from the Outbound Gateway section. That has to be fixed. Feel free to raise a GitHub issue so we won't forget to address it.
The C logs are going to be printed when you send a message into that C channel, but again: if you want reply correlation to work for the Redis Inbound Gateway, it has to be a PublishSubscribeChannel. Otherwise just omit it, and your String bTesting(Message message) method will send its result to the replyChannel header.

Application Insights telemetry correlation using log4net

I'm looking to have proper correlation in our distributed event logging. For our web applications this seems to be automatic. Example of correlated logs from one of our App Services APIs:
However, for our other (non-ASP, non-WebApp) services, where we use Log4Net and the App Insights appender, our logs are not correlated. I tried following the instructions here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/correlation
Even after adding unique operation_Id attributes to each operation, we're not seeing log correlation (I also tried "Operation Id"). Example of a non-correlated log entry:
Any help on how to achieve this using log4net would be appreciated.
Cheers!
Across services, correlation IDs are mostly propagated through headers. When AI is enabled for a web application, it reads the IDs/context from the incoming headers and then updates the outgoing headers with the appropriate IDs/context. Within a service, the operation is tracked with an Activity object, and every telemetry item emitted will be associated with this Activity, thus sharing the necessary correlation IDs.
In the case of Service Bus / Event Hub communication, the propagation is also supported in recent versions (IDs/context propagate as metadata).
If a service is not web-based and AI's automated correlation propagation is not working, you may need to manually get the incoming ID information from whatever metadata exists, restore/initiate an Activity, and start an AI operation with this Activity. When a telemetry item is generated in the scope of that Activity, it will get the proper IDs and will be part of the overarching trace. So if telemetry is generated from a Log4net trace that was executed in the scope of the AI operation context, then that telemetry should get the right IDs.
Code sample to access correlation from headers:
public class ApplicationInsightsMiddleware : OwinMiddleware
{
    // You may create a new TelemetryConfiguration instance, reuse one you already have,
    // or fetch the instance created by the Application Insights SDK.
    private readonly TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault();
    private readonly TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);

    public ApplicationInsightsMiddleware(OwinMiddleware next) : base(next) {}

    public override async Task Invoke(IOwinContext context)
    {
        // Let's create and start RequestTelemetry.
        var requestTelemetry = new RequestTelemetry
        {
            Name = $"{context.Request.Method} {context.Request.Uri.GetLeftPart(UriPartial.Path)}"
        };

        // If there is a Request-Id received from the upstream service, set the telemetry context accordingly.
        if (context.Request.Headers.ContainsKey("Request-Id"))
        {
            var requestId = context.Request.Headers.Get("Request-Id");
            // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
            requestTelemetry.Context.Operation.Id = GetOperationId(requestId);
            requestTelemetry.Context.Operation.ParentId = requestId;
        }

        // StartOperation is a helper method that allows correlation of
        // current operations with nested operations/telemetry
        // and initializes start time and duration on telemetry items.
        var operation = telemetryClient.StartOperation(requestTelemetry);

        // Process the request.
        try
        {
            await Next.Invoke(context);
        }
        catch (Exception e)
        {
            requestTelemetry.Success = false;
            telemetryClient.TrackException(e);
            throw;
        }
        finally
        {
            // Update status code and success as appropriate.
            if (context.Response != null)
            {
                requestTelemetry.ResponseCode = context.Response.StatusCode.ToString();
                requestTelemetry.Success = context.Response.StatusCode >= 200 && context.Response.StatusCode <= 299;
            }
            else
            {
                requestTelemetry.Success = false;
            }

            // Now it's time to stop the operation (and track telemetry).
            telemetryClient.StopOperation(operation);
        }
    }

    public static string GetOperationId(string id)
    {
        // Returns the root ID from the '|' to the first '.' if any.
        int rootEnd = id.IndexOf('.');
        if (rootEnd < 0)
            rootEnd = id.Length;

        int rootStart = id[0] == '|' ? 1 : 0;
        return id.Substring(rootStart, rootEnd - rootStart);
    }
}
Code sample for manual correlated operation tracking in isolation:
async Task BackgroundTask()
{
    var operation = telemetryClient.StartOperation<DependencyTelemetry>(taskName);
    operation.Telemetry.Type = "Background";
    try
    {
        int progress = 0;
        while (progress < 100)
        {
            // Process the task.
            telemetryClient.TrackTrace($"done {progress++}%");
        }
        // Update status code and success as appropriate.
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        // Update status code and success as appropriate.
        throw;
    }
    finally
    {
        telemetryClient.StopOperation(operation);
    }
}
Please note that the most recent versions of the Application Insights SDK are switching to the W3C correlation standard, so header names and the expected format will be different, as per the W3C specification.
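For reference, under W3C Trace Context the incoming header is traceparent rather than Request-Id, with the format version-traceid-parentid-traceflags (the value below is an illustrative example, not from the question's telemetry):

traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01

The 32-hex-digit trace-id plays the role of the operation ID, and the 16-hex-digit parent-id identifies the calling span.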

Send a email when any Errors got in my errorChannel using Spring integration DSL

I am developing an API in Spring Integration using the Java DSL. This is how it works:
A JDBC polling adapter initiates the flow, gets some data from tables, and sends it to the DefaultRequestChannel; from there the message is handled by / flows through various channels.
Now I am trying to:
1. Send an email if any error (e.g. a connectivity issue, or a bad record found while polling the data) occurs / is detected in my error channel.
2. After sending the email to my support group, suspend my flow for 15 minutes and then resume automatically.
I tried creating a sendEmailChannel (a recipient of my errorChannel), but it doesn't work for me, so I just created a transformer method like the one below.
This code is running fine, but is it good practice?
@Transformer(inputChannel = "errorChannel", outputChannel = "suspendChannel")
public Message<?> errorChannelHandler(ErrorMessage errorMessage) throws RuntimeException, MessagingException, InterruptedException {
    Exception exception = (Exception) errorMessage.getPayload();
    String errorMsg = errorMessage.toString();
    String subject = "API issue";
    if (exception instanceof RuntimeException) {
        errorMsg = "Run time exception";
        subject = "Critical Alert";
    }
    if (exception instanceof JsonParseException) {
        errorMsg = ....;
        subject = .....;
    }
    MimeMessage message = sender.createMimeMessage();
    MimeMessageHelper helper = new MimeMessageHelper(message);
    helper.setFrom(senderEmail);
    helper.setTo(receiverEmail);
    helper.setText(errorMsg);
    helper.setSubject(subject);
    sender.send(message);
    kafkaProducerSwitch.isKafkaDown());
    return MessageBuilder.withPayload(exception.getMessage())
            .build();
}
I am looking for a better way of handling the above logic.
Also, any suggestions for suspending my flow for a few minutes?
You can definitely use the mail-sending channel adapter from the Spring Integration box to send those messages from the error channel: https://docs.spring.io/spring-integration/docs/5.1.5.RELEASE/reference/html/#mail-outbound. The Java DSL variant is like this:
.handle(Mail.outboundAdapter("gmail")
        .port(smtpServer.getPort())
        .credentials("user", "pw")
        .protocol("smtp"))
The suspension can be done via a CompoundTriggerAdvice extension, where you check some AtomicBoolean bean for the state and activate one or the other trigger in the beforeReceive() implementation. Such an AtomicBoolean can change its state in one more subscriber to that errorChannel, because that channel is a PublishSubscribeChannel by default. Don't forget to bring the state back to normal after you return false from beforeReceive(); that is enough to mark your system as normal at this moment, since it is going to work again only after 15 minutes.

Messages not coming thru to Azure SignalR Service

I'm implementing Azure SignalR service in my ASP.NET Core 2.2 app with React front-end. When I send a message, I'm NOT getting any errors but my messages are not reaching the Azure SignalR service.
To be specific, this is a private chat application so when a message reaches the hub, I only need to send it to participants in that particular chat and NOT to all connections.
When I send a message, it hits my hub but I see no indication that the message is making it to the Azure Service.
For security, I use Auth0 JWT token authentication. In my hub, I correctly see the authorized user claims, so I don't think there are any issues with security. As I mentioned, the fact that I'm able to hit the hub tells me that the front end and security are working fine.
In the Azure portal, however, I see no indication of any messages, but if I'm reading the data correctly, I do see 2 client connections, which matches my test setup, i.e. the two open browsers I'm using for testing. Here's a screenshot:
Here's my Startup.cs code:
public void ConfigureServices(IServiceCollection services)
{
    // Omitted for brevity
    services.AddAuthentication(options => {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(jwtOptions => {
        jwtOptions.Authority = authority;
        jwtOptions.Audience = audience;
        jwtOptions.Events = new JwtBearerEvents
        {
            OnMessageReceived = context =>
            {
                var accessToken = context.Request.Query["access_token"];

                // Check to see if the message is coming into chat
                var path = context.HttpContext.Request.Path;
                if (!string.IsNullOrEmpty(accessToken) &&
                    (path.StartsWithSegments("/im")))
                {
                    context.Token = accessToken;
                }
                return System.Threading.Tasks.Task.CompletedTask;
            }
        };
    });

    // Add SignalR
    services.AddSignalR(hubOptions => {
        hubOptions.KeepAliveInterval = TimeSpan.FromSeconds(10);
    }).AddAzureSignalR(Configuration["AzureSignalR:ConnectionString"]);
}
And here's the Configure() method:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Omitted for brevity
    app.UseSignalRQueryStringAuth();
    app.UseAzureSignalR(routes =>
    {
        routes.MapHub<Hubs.IngridMessaging>("/im");
    });
}
Here's the method I use to map a user's connectionId to the userName:
public override async Task OnConnectedAsync()
{
    // Get connectionId
    var connectionId = Context.ConnectionId;

    // Get current userId
    var userId = Utils.GetUserId(Context.User);

    // Add connection
    var connections = await _myServices.AddHubConnection(userId, connectionId);

    await Groups.AddToGroupAsync(connectionId, "Online Users");
    await base.OnConnectedAsync();
}
Here's one of my hub methods. Please note that I'm aware a user may have multiple connections simultaneously. I just simplified the code here to make it easier to digest. My actual code accounts for users having multiple connections:
[Authorize]
public async Task CreateConversation(Conversation conversation)
{
    // Get sender
    var user = Context.User;
    var connectionId = Context.ConnectionId;

    // Send message to all participants of this chat
    foreach (var person in conversation.Participants)
    {
        var userConnectionId = Utils.GetUserConnectionId(user.Id);
        await Clients.User(userConnectionId.ToString()).SendAsync("new_conversation", conversation.Message);
    }
}
Any idea what I'm doing wrong that prevents messages from reaching the Azure SignalR service?
It might be caused by a misspelled method, an incorrect method signature, an incorrect hub name, a duplicate method name on the client, or a missing JSON parser on the client; such calls can fail silently on the server.
Taken from Calling methods between the client and server silently fails:
Misspelled method, incorrect method signature, or incorrect hub name
If the name or signature of a called method does not exactly match an appropriate method on the client, the call will fail. Verify that the method name called by the server matches the name of the method on the client. Also, SignalR creates the hub proxy using camel-cased methods, as is appropriate in JavaScript, so a method called SendMessage on the server would be called sendMessage in the client proxy. If you use the HubName attribute in your server-side code, verify that the name used matches the name used to create the hub on the client. If you do not use the HubName attribute, verify that the name of the hub in a JavaScript client is camel-cased, such as chatHub instead of ChatHub.
Duplicate method name on client
Verify that you do not have a duplicate method on the client that differs only by case. If your client application has a method called sendMessage, verify that there isn't also a method called SendMessage as well.
Missing JSON parser on the client
SignalR requires a JSON parser to be present to serialize calls between the server and the client. If your client doesn't have a built-in JSON parser (such as Internet Explorer 7), you'll need to include one in your application.
Update
In response to your comments, I would suggest you try one of the Azure SignalR samples, such as Get Started with SignalR: a Chat Room Example, to see if you get the same behavior.
Hope it helps!

Multiple REST calls timing out in Spring Boot web application

I created a Spring Boot (1.4.2) REST application. One of the @RestController methods needs to invoke a 3rd-party REST API operation (RestOp1), which returns, say, between 100 and 250 records. For each of the records returned by RestOp1, within the same method, another REST operation of the same 3rd-party API (RestOp2) must be invoked. My first attempt involved using a controller-class-level ExecutorService based on a fixed thread pool of size 100, and a Callable returning a record corresponding to the response of RestOp2:
// Executor thread pool - declared and initialized at class level
ExecutorService executorService = Executors.newFixedThreadPool(100);

// Get records from RestOp1
ResponseEntity<RestOp1ResObj[]> restOp1ResObjList =
        this.restTemplate.exchange(url1, HttpMethod.GET, httpEntity, RestOp1ResObj[].class);
RestOp1ResObj[] records = restOp1ResObjList.getBody();

// Instantiate a list of futures (to call RestOp2 for each record)
List<Future<RestOp2ResObj>> futureList = new ArrayList<>();

// Iterate through the array of records and call RestOp2 concurrently, using Callables.
for (int count = 0; count < records.length; count++) {
    Future<RestOp2ResObj> future = this.executorService.submit(new Callable<RestOp2ResObj>() {
        @Override
        public RestOp2ResObj call() throws Exception {
            return restTemplate.exchange(url2, HttpMethod.GET, httpEntity, RestOp2ResObj.class).getBody();
        }
    });
    futureList.add(future);
}

// Iterate the list of futures and fetch the RestOp2 response for each record.
// Build a final response and send it back to the client.
for (int count = 0; count < futureList.size(); count++) {
    RestOp2ResObj response = futureList.get(count).get();
    // use the above response to build a final response for all the records.
}
The performance of the above code is abysmal to say the least. The response time for a RestOp1 call (invoked only once) is around 2.5 seconds and that for a RestOp2 call (invoked for each record) is about 1.5 seconds. But the code execution time is between 20-30 seconds, as opposed to an expected range of 5-6 seconds! Am I missing something fundamental here?
Is the service you are calling fast enough to handle that many requests per second?
There is an async version of RestTemplate available, called AsyncRestTemplate. Why are you not using that?
I would probably go like this:
AsyncRestTemplate asyncRestTemplate =
        new AsyncRestTemplate(new ConcurrentTaskExecutor(Executors.newFixedThreadPool(100)));

asyncRestTemplate.exchange("http://www.example.com/myurl", HttpMethod.GET, new HttpEntity<>("message"), String.class)
        .addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
            @Override
            public void onSuccess(ResponseEntity<String> result) {
                //TODO: Add real response handling
                System.out.println(result);
            }

            @Override
            public void onFailure(Throwable ex) {
                //TODO: Add real logging solution
                ex.printStackTrace();
            }
        });
Your question involves two parts:

multiple API calls performed asynchronously
handling timeouts (fallback)

Both parts are related, as you have to handle the timeout of each call.
You may consider using Spring Cloud (based on Spring Boot) and some out-of-the-box solutions based on the Netflix OSS stack.
The first (timeouts) should be a Hystrix-based circuit breaker on a Feign client.
The second (multiple requests) is an architecture issue: using native Executors isn't a good idea, as it will not scale and has a huge maintenance cost. You may rely on Spring's asynchronous methods instead; you'll get better results and stay fully Spring-compliant, as in the sketch below.
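A minimal sketch of that approach, assuming @EnableAsync is on a configuration class (the RestOp2Client class name is made up; RestOp2ResObj and url2 come from the question):

@Service
public class RestOp2Client {

    private final RestTemplate restTemplate = new RestTemplate();

    // Runs on the async executor; the caller gets a CompletableFuture immediately.
    @Async
    public CompletableFuture<RestOp2ResObj> fetch(String url2) {
        RestOp2ResObj result = restTemplate.getForObject(url2, RestOp2ResObj.class);
        return CompletableFuture.completedFuture(result);
    }
}

// In the controller: fan out one call per record, then join all the futures.
List<CompletableFuture<RestOp2ResObj>> futures = Arrays.stream(records)
        .map(r -> restOp2Client.fetch(url2))
        .collect(Collectors.toList());
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();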
Hope this will help.
