Hexagonal Architecture for a real-time stock watcher - domain-driven-design

I'm designing a stock market watcher system.
It accepts pattern registrations from subscribers. Meanwhile, it polls the latest market info every few seconds. It supports multiple markets, so the polling interval and working hours depend on each market's configuration. It may also dynamically adapt polling rates based on market information or subscriptions.
If the market info matches some pattern, it logs the information and sends alerts to subscribers.
The subscribers, patterns, and logs are undoubtedly domain models.
The market info source and alert fanout are outbound (driven) adapters.
Then where should the polling engine be?
Some approaches came to me:
Make it a Domain Service: the domain service manages the threads that poll the market and match patterns.
Make it an Application Service: threads are implementation details, so the application manages the threads for polling. Here there are also two approaches:
2a. The application does most of the logic: it queries the market info, invokes pattern.match(), creates logs, and sends alerts.
2b. The application just invokes a method like GetInfoAndMatch() in the domain, and the domain handles the details that 2a did.
I'm struggling to decide which one makes more sense. What's your opinion?

The polling engine triggers a controller. Just like a user would trigger an update manually. The controller then invokes the use case (or primary port in hexagonal architecture) and passes the result to presenter. The presenter updates the ui models and the views show the new values.
In a rich client application this is no big deal, since the controller can directly access the UI models.
In a web application the UI controller is on the client side and the backend controller is on the server side (see this answer for details).
Here the backend controller gets either triggered by the "polling engine" or by the client.
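The arrangement above (and option 2b from the question) can be sketched in TypeScript. This is a minimal illustration with invented names (MarketInfoSource, AlertSender, CheckMarketUseCase, etc., are not from the original post): the polling engine is just a driving adapter that ticks a primary port, while the matching logic stays in the domain.

```typescript
interface MarketInfo { symbol: string; price: number; }

// Domain: a pattern decides whether some market info matches (pure logic).
interface Pattern { matches(info: MarketInfo): boolean; }

class PriceAbovePattern implements Pattern {
  constructor(private symbol: string, private threshold: number) {}
  matches(info: MarketInfo): boolean {
    return info.symbol === this.symbol && info.price > this.threshold;
  }
}

// Driven ports: the market info source and alert fanout adapters implement these.
interface MarketInfoSource { fetchLatest(): MarketInfo[]; }
interface AlertSender { send(info: MarketInfo): void; }

// Application service: the primary port that the polling engine (or a user) drives.
class CheckMarketUseCase {
  constructor(
    private source: MarketInfoSource,
    private patterns: Pattern[],
    private alerts: AlertSender,
  ) {}

  // One poll cycle: fetch, match, alert. Returns the number of matches.
  tick(): number {
    let hits = 0;
    for (const info of this.source.fetchLatest()) {
      if (this.patterns.some(p => p.matches(info))) {
        this.alerts.send(info); // logging the match would also happen here
        hits++;
      }
    }
    return hits;
  }
}

// The polling engine itself lives in infrastructure; it only schedules ticks,
// e.g.: setInterval(() => useCase.tick(), marketConfig.pollIntervalMs);
```

Note that per-market intervals and working hours stay in the scheduling code; the use case has no idea whether it was triggered by a timer or a manual refresh.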

Related

Using infrastructure in use cases

I have been reading the book Patterns, Principles and Practices of Domain-Driven Design, specifically the chapter dedicated to repositories, and in one of the code examples it uses infrastructure interfaces in the use cases. Is it correct that the application layer has knowledge of infrastructure? I thought that use cases should only have knowledge of the domain...
Using interfaces to separate contract from implementation is the right way, so the use case layer knows the interfaces, not the infrastructure details.
It is the Application Layer's responsibility to invoke the (injected) infrastructure services, call the domain layer methods, and persist/load necessary data for the business logic to be executed. The domain layer is unconcerned about how data is persisted or loaded, yes, but the application layer makes it possible to use the business logic defined in the domain layer.
You would probably have three layers that operate on any request: a Controller that accepts the request and knows which application layer method to invoke, an Application Service that knows what data to load and which domain layer method to invoke, and the Domain Entity (usually an Aggregate) that encloses the business logic (a.k.a. invariants).
The Controller's responsibility is only to gather the request params (gather user input in your case), ensure authentication (if needed), and then make the call to the Application Service method.
Application Services are direct clients of the domain model and act as intermediaries to coordinate between the external world and the domain layer. They are responsible for handling infrastructure concerns like ID Generation, Transaction Management, Encryption, etc.
Let's take the example of an imaginary MessageSender Application Service. Here is an example control flow:
API sends the request with conversation_id, user_id (author), and message.
Application Service loads Conversation from the database. If the Conversation ID is valid, and the author can participate in this conversation (these are invariants), you invoke a send method on the Conversation object.
The Conversation object adds the message to its own data, runs its business logic, and decides which users to send it to.
The Conversation object raises events to be dispatched into a message interface (collected in a temporary variable valid for that session) and returns. These events contain the entire data to reconstruct details of the message (timestamps, audit log, etc.) and don't just cater to what is pushed out to the receiver later.
The Application Service persists the updated Conversation object and dispatches all events raised during the recent processing.
A subscriber listening for the event gathers it, constructs the message in the right format (picking only the data it needs from the event), and performs the actual push to the receiver.
So you see the interplay between Application Services and Domain Objects is what makes it possible to use the Domain in the first place. With this structure, you also have a good implementation of the Open-Closed Principle.
Your Conversation object changes only if you are changing business logic (like who should receive the message).
Your Application service will seldom change because it simply loads and persists Conversation objects and publishes any raised events to the message broker.
Your Subscriber logic changes only if you are pushing additional data to the receiver.
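The control flow above can be sketched in TypeScript. All names here are invented for illustration (the answer's MessageSender example is imaginary to begin with): the Conversation aggregate enforces invariants and raises events, while the application service only loads, persists, and dispatches.

```typescript
interface DomainEvent { type: string; payload: Record<string, unknown>; }

// Domain aggregate: owns the invariants and raises events for subscribers.
class Conversation {
  private events: DomainEvent[] = [];
  constructor(public id: string, private participants: string[],
              private messages: string[] = []) {}

  send(author: string, text: string): void {
    if (!this.participants.includes(author)) {
      throw new Error("author cannot participate in this conversation"); // invariant
    }
    this.messages.push(text);
    this.events.push({
      type: "MessageSent",
      payload: { conversationId: this.id, author, text, at: Date.now() },
    });
  }

  // Events collected during this session, handed out for dispatch.
  pullEvents(): DomainEvent[] {
    const out = this.events;
    this.events = [];
    return out;
  }
}

// Driven ports: the use case layer knows interfaces, not infrastructure detail.
interface ConversationRepo {
  load(id: string): Conversation;
  save(c: Conversation): void;
}
interface EventBus { publish(e: DomainEvent): void; }

// Application service: coordinates, but contains no business rules.
class MessageSenderService {
  constructor(private repo: ConversationRepo, private bus: EventBus) {}

  sendMessage(conversationId: string, author: string, text: string): void {
    const conv = this.repo.load(conversationId);          // load
    conv.send(author, text);                              // domain logic + invariants
    this.repo.save(conv);                                 // persist
    conv.pullEvents().forEach(e => this.bus.publish(e));  // dispatch
  }
}
```

A subscriber on the bus would then pick only the event fields it needs and perform the actual push, exactly as in step 6 above.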

Duplicate Requests on Service Fabric When Remoting

I have a Stateless service on Service Fabric (ASP.NET Core) which will call an Actor and the Actor may internally also call other Actors and/or Stateful Services depending on the scenarios.
My question is, do we need to account for duplicate requests due to the remoting aspect of the system?
In our earlier Akka.Net implementations there was a chance that the Actor received duplicate requests due to TCP/IP network congestion etc, and we handled that by giving each message a unique Correlation Id. We would store the request and its outcome in state on the actors and if the same correlation id came back again, we would just assume it was a duplicate and sent the earlier outcome instead of re-processing the request.
I had seen a similar approach used in one of the sample projects Microsoft had but I can't seem to find that anymore (dead link on Github).
Does anyone know if this needs to be handled in Actors and/or Stateful services?
You could add custom headers in your remoting calls by creating custom implementations of IServiceRemotingClientFactory and IServiceRemotingClient.
Add custom headers inside the operations RequestResponseAsync and SendOneWay.
Another example, by Peter Bons:
var header = requestMessage.GetHeader();
var customHeaders = customHeadersProvider.Invoke() ?? new CustomHeaders();
header.AddHeader(CustomHeaders.CustomHeader, customHeaders.Serialize());
On the receiving side, you can get the custom header from IServiceRemotingRequestMessageHeader in a custom ActorServiceRemotingDispatcher.
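The correlation-id deduplication scheme described in the question can be sketched generically (this is not Service Fabric specific; the names are invented): the handler caches each outcome by correlation id and replays the stored outcome when a duplicate arrives, instead of re-processing.

```typescript
interface Request { correlationId: string; payload: number; }

class IdempotentHandler {
  // Outcome store keyed by correlation id; in an Actor this would live in
  // actor state so it survives reactivation.
  private outcomes = new Map<string, number>();
  processed = 0; // how many requests were actually processed (not replayed)

  handle(req: Request): number {
    const cached = this.outcomes.get(req.correlationId);
    if (cached !== undefined) {
      return cached; // duplicate: return the earlier outcome unchanged
    }
    const outcome = req.payload * 2; // stand-in for the real processing
    this.outcomes.set(req.correlationId, outcome);
    this.processed++;
    return outcome;
  }
}
```

In practice the store also needs an eviction policy (e.g. expire entries after the retry window), or it grows without bound.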

Application Insight correlating requests across services and queues

I understand that I could use the clienttrackid and set it in a header, but I'm unsure of what is handled by Application Insights / Azure and what I need to do manually. This is the case (I would like to see logs from ServiceA, FunctionA, and ServiceB as related events):
Clientapp calls ServiceA
ServiceA adds a message to a queue
FunctionA is triggered by the queue, and calls ServiceB
Do I need to add the tracking id to the message I add to the queue? Or is everything handled automagically?
Thanks
Larsi
There is an Application Insights pattern for correlation - see this link
However, a business transaction often spans many services and technologies, and it is useful to be able to correlate across these. Define correlation IDs at the business-transaction level and then flow that correlation ID across the entire solution; parts of the solution may include Application Insights, data stores, and other logging and diagnostics. Unfortunately this is a manual process and takes some thinking through, but the benefits in tracking and debugging quickly outweigh the additional time spent on this "plumbing".
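Flowing the correlation ID manually can be sketched as follows (hypothetical shapes; the queue, log, and service functions are stand-ins for the real Azure pieces): ServiceA stamps the id onto the queue message, and FunctionA reads it back and reuses it when calling ServiceB, so every log line in the chain carries the same id.

```typescript
interface QueueMessage { correlationId: string; body: string; }

const log: string[] = [];
const queue: QueueMessage[] = [];

function serviceA(correlationId: string, body: string): void {
  log.push(`${correlationId} ServiceA accepted request`);
  queue.push({ correlationId, body }); // the id travels inside the message
}

function functionA(callServiceB: (id: string, body: string) => void): void {
  const msg = queue.shift();
  if (!msg) return;
  log.push(`${msg.correlationId} FunctionA dequeued message`);
  callServiceB(msg.correlationId, msg.body); // propagate; never mint a new id here
}

function serviceB(correlationId: string, body: string): void {
  log.push(`${correlationId} ServiceB handled ${body}`);
}
```

So yes, for the queue hop the tracking id has to ride in the message (body or message properties); only the direct HTTP hops get correlated for you.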

Service Fabric actors that receive events from other actors

I'm trying to model a news post that contains information about the user that posted it. I believe the best way is to send user summary information along with the message to create a news post, but I'm a little confused how to update that summary information if the underlying user information changes. Right now I have the following NewsPostActor and UserActor
public interface INewsPostActor : IActor
{
Task SetInfoAndCommitAsync(NewsPostSummary summary, UserSummary postedBy);
Task AddCommentAsync(string content, UserSummary postedBy);
}
public interface IUserActor : IActor, IActorEventPublisher<IUserActorEvents>
{
Task UpdateAsync(UserSummary summary);
}
public interface IUserActorEvents : IActorEvents
{
void UserInfoChanged();
}
Where I'm getting stuck is how to have the INewsPostActor implementation subscribe to events published by IUserActor. I've seen the SubscribeAsync method in the sample code at https://github.com/Azure/servicefabric-samples/blob/master/samples/Actors/VS2015/VoiceMailBoxAdvanced/VoicemailBoxAdvanced.Client/Program.cs#L45 but is it appropriate to use this inside the NewsPostActor implementation? Will that keep an actor alive for any reason?
Additionally, I have the ability to add comments to news posts, so should the NewsPostActor also keep a subscription to each IUserActor for each unique user who comments?
Events may not be what you want to be using for this. From the documentation on events (https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-reliable-actors-events/)
Actor events provide a way to send best effort notifications from the Actor to the clients. Actor events are designed for Actor-Client communication and should NOT be used for Actor-to-Actor communication.
It's worth considering notifying the relevant actors directly, or having an actor/service that manages this communication.
Service Fabric Actors do not yet support a Publish/Subscribe architecture. (see Azure Feedback topic for current status.)
As already answered by charisk, Actor-Events are also not the way to go because they do not have any delivery guarantees.
This means, the UserActor has to initiate a request when a name changes. I can think of multiple options:
From within IUserAccount.ChangeNameAsync() you can send requests directly to all NewsPostActors (assuming the UserAccount holds a list of his posts). However, this would introduce additional latency since the client has to wait until all posts have been updated.
You can send the requests asynchronously. An easy way to do this would be to set a "NameChanged"-property on your Actor state to true within ChangeNameAsync() and have a Timer that regularly checks this property. If it is true, it sends requests to all NewsPostActors and sets the property to false afterwards. This would be an improvement to the previous version, however it still implies a very strong connection between UserAccounts and NewsPosts.
A more scalable solution would be to introduce the "Message Router"-pattern. You can read more about this pattern in Vaughn Vernon's excellent book "Reactive Messaging Patterns with the Actor Model". This way you can basically setup your own Pub/Sub model by sending a "NameChanged"-Message to your Router. NewsPostActors can - depending on your scalability needs - subscribe to that message either directly or through some indirection (maybe a NewsPostCoordinator). And also depending on your scalability needs, the router can forward the messages either directly or asynchronously (by storing it in a queue first).
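Option 2 above can be sketched in plain TypeScript (Service Fabric actors are C#, and UserActor/NewsPostActor internals here are invented): ChangeNameAsync only flips a persisted flag, and a timer callback later fans the change out to the posts.

```typescript
class UserActor {
  nameChanged = false; // persisted flag: fan-out still owed
  constructor(public name: string, public postIds: string[]) {}

  changeName(newName: string): void {
    this.name = newName;
    this.nameChanged = true; // defer the fan-out; the caller returns immediately
  }
}

class NewsPostActor {
  constructor(public id: string, public authorName: string) {}
}

// Stand-in for the actor timer callback that fires every few seconds.
function onTimer(user: UserActor, posts: Map<string, NewsPostActor>): void {
  if (!user.nameChanged) return;
  for (const id of user.postIds) {
    const post = posts.get(id);
    if (post) post.authorName = user.name; // a request to each NewsPostActor
  }
  user.nameChanged = false; // clear only after the fan-out succeeded
}
```

Because the flag is cleared only after all posts were updated, a crash mid-fan-out just means the next timer tick retries; the post updates must therefore be idempotent.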

What happens if my Node.js server crashes while waiting for web services callback?

I'm just starting to look into Node.js to create a web application that asynchronously calls multiple web services to complete a single client request. I think in SOA speak this is known as a composite service / transaction.
My Node.js application will be responsible for completing any compensating actions should any web service calls fail within the composite service. For example, if service A and B return 'success', but service C returns 'fail', Node.js may need to apply a compensating action (undo effectively) on service A and B.
My question is, what if my Node.js server crashes? I could be in the middle of a composite transaction. Multiple calls to web services have been made, and I am waiting for the callbacks. If my Node server crashes, responses meant for the callbacks will go unheard. It could then be possible that one of the web services was not successful, and that some compensating actions on other services would be needed.
I'm not sure how I would be able to address this once my Node server is back online. This could potentially put the system in an inconsistent state if services A and B succeeded, but C didn't.
Distributed transactions are bad for SOA - they introduce dependency, rigidity, security, and performance problems. You can implement a Saga instead, which means that each of your services will need to be aware of the ongoing operation and take compensating actions if they find out there was a problem. You'd want to save state for each of the services so that on recovery they'd know how to get to a consistent internal state.
If you find you must have distributed transactions, then you should probably rethink the boundaries between your services.
(updates from the comments)
Even if you use a Saga, you may find that you want some coordinator to control the compensation. But if your services are autonomous they won't need that central coordinator; they'd perform the compensating action themselves, for example if they use the reservation pattern (infoq.com/news/2009/09/reservations) they can perform compensation on expiration of the reservation. Otherwise, you can persist the state somewhere (redis/db/zookeeper etc.) and then check that on recovery of the coordinator.
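The coordinator-with-persisted-state variant can be sketched like this (a simplified illustration with invented names, not a production saga framework): each step's outcome is recorded in a log before the next call, so a restarted coordinator knows exactly which compensations are still owed.

```typescript
type StepState = "done" | "compensated";

interface SagaStep {
  name: string;
  call: () => boolean; // the forward action; false means the service failed
  undo: () => void;    // the compensating action
}

class SagaCoordinator {
  // In production this log would live in redis/db/zookeeper, as noted above,
  // so it survives a crash of the coordinator process.
  constructor(public log: Map<string, StepState> = new Map()) {}

  run(steps: SagaStep[]): boolean {
    for (const step of steps) {
      if (step.call()) {
        this.log.set(step.name, "done"); // persist before moving on
      } else {
        this.recover(steps); // a later step failed: compensate earlier ones
        return false;
      }
    }
    return true;
  }

  // On failure, or on restart after a crash, undo every step the log says
  // completed but was never compensated.
  recover(steps: SagaStep[]): void {
    for (const step of steps) {
      if (this.log.get(step.name) === "done") {
        step.undo();
        this.log.set(step.name, "compensated");
      }
    }
  }
}
```

The key point for the crash scenario in the question: because the log is written before each next call, rerunning recover() on startup with the persisted log is enough to finish the owed compensations, provided the undo actions are idempotent.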
