How do I cleanly shut down a distributed ActivePivot setup?

We have an ActivePivot cube that is a polymorphic cube (2 nodes), where one node is itself a horizontally distributed cube (8 nodes), running in Tomcat and using JGroups TCP for distribution. It is restarted on a daily basis, but every time it is shut down (node services are stopped in sequence), various errors show up in the logs. This is harmless, but annoying from a monitoring perspective.
Example from one day (all same node):
19:04:43.100 ERROR [Pool-LongPollin][streaming] A listener dropped (5f587379-ac67-4645-8554-2e02ed739924). The number of listeners is now 1
19:04:45.767 ERROR [Pool-LongPollin][streaming] Publishing global failure
19:05:16.313 ERROR [localhost-start][core] Failed to stop feed type MDXFEED with id A1C1D8D92CF7D867F09DCB7E65077B18.0.PT0
Example from another day (same error from multiple different nodes):
19:00:17.353 ERROR [pivot-remote-0-][distribution] A safe broadcasting task could not be performed
com.quartetfs.fwk.QuartetRuntimeException: [<node name>] Cannot run a broadcasting task with a STOPPED messenger
Does anyone know of a clean way to shut down a setup like this?

Those errors appear because, on application shutdown, the ActivePivotManager aggressively stops the distribution without waiting for each distributed ActivePivot to be notified that the other cubes have been stopped.
To smoothly stop distribution you can use the methods from the DistributionUtil class. For instance:
public class DistributionStopper {

    protected final IActivePivotManager manager;

    public DistributionStopper(IActivePivotManager manager) {
        this.manager = manager;
    }

    public void stop() {
        // Get all the schemas from the manager
        final Collection<IActivePivotSchema> schemas = manager.getSchemas().values();

        // To store all the available messengers
        final List<IDistributedMessenger<?>> availableMessengers = new LinkedList<>();

        // Find all the messengers
        for (IActivePivotSchema schema : schemas) {
            for (String pivotId : schema.getPivotIds()) {
                // Retrieve the ActivePivot matching this id
                final IMultiVersionActivePivot pivot = schema.retrieveActivePivot(pivotId);
                if (pivot instanceof IMultiVersionDistributedActivePivot) {
                    IDistributedMessenger<IActivePivotSession> messenger =
                            ((IMultiVersionDistributedActivePivot) pivot).getMessenger();
                    if (messenger != null) {
                        availableMessengers.add(messenger);
                    }
                }
            }
        }

        // Smoothly stop the messengers
        DistributionUtil.stopMessengers(availableMessengers);
    }
}
Then register this custom class as a Spring bean that depends on the activePivotManager singleton bean, so that its destroy method is called before the manager's:
@Bean(destroyMethod = "stop")
@DependsOn("activePivotManager")
public DistributionStopper distributionStopper(IActivePivotManager manager) {
    return new DistributionStopper(manager);
}

Related

Hazelcast Add Local Entry Listener Only Once

I have a 3-node application that includes an embedded Hazelcast instance (so, 3 instances). These are used in the application for session sharing across the nodes. In the configuration, I added a local entry listener to write an audit record to the database.
Unfortunately, this caused something of a race condition, because the listener ended up being added to the internal map of listeners 3 times, and I'd end up getting a "connection has already been closed" error. I switched to an interceptor, which did NOT seem to get added multiple times.
I don't consider the interceptor method a great solution. However, addLocalEntryListener() returns a random UUID, so there is no stable key I can check to determine whether the listener is already registered.
Ideally, I'd like something like this:
@Bean
public HazelcastInstance instance() {
    HazelcastInstance instance = Hazelcast.getInstance();
    IMap<String, ?> sessionsMap = instance.getMap("spring:session:sessions");
    if (/* something something to ensure the listener wasn't already there */) {
        sessionsMap.addLocalEntryListener(new MyLocalListener());
    }
    return instance;
}
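One possible way to get that guard (a minimal sketch, not an answer from the thread; it assumes Hazelcast 4.x package names and the MyLocalListener from the question) is to make the registration idempotent per JVM with a process-wide flag:

import java.util.concurrent.atomic.AtomicBoolean;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastConfig {

    // Process-wide guard: even if this bean method runs more than once,
    // the listener is registered at most once per JVM.
    private static final AtomicBoolean LISTENER_REGISTERED = new AtomicBoolean(false);

    @Bean
    public HazelcastInstance instance() {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap<String, Object> sessionsMap = instance.getMap("spring:session:sessions");
        if (LISTENER_REGISTERED.compareAndSet(false, true)) {
            sessionsMap.addLocalEntryListener(new MyLocalListener());
        }
        return instance;
    }
}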

EventSourced Saga Implementation

I have written an Event Sourced Aggregate and have now implemented an Event Sourced Saga... I have noticed the two are similar, and created an event sourced object as a base class from which both derive.
I have seen one demo here http://blog.jonathanoliver.com/cqrs-sagas-with-event-sourcing-part-ii-of-ii/ but feel there may be an issue: commands could be lost in the event of a process crash, since the sending of commands happens outside the write transaction.
public void Save(ISaga saga)
{
    var events = saga.GetUncommittedEvents();
    eventStore.Write(new UncommittedEventStream
    {
        Id = saga.Id,
        Type = saga.GetType(),
        Events = events,
        ExpectedVersion = saga.Version - events.Count
    });
    foreach (var message in saga.GetUndispatchedMessages())
        bus.Send(message); // can be done in different ways
    saga.ClearUncommittedEvents();
    saga.ClearUndispatchedMessages();
}
Instead I am using Greg Young's EventStore and when I save an EventSourcedObject (either an aggregate or a saga) the sequence is as follows:
Repository gets list of new MutatingEvents.
Writes them to stream.
The EventStore publishes new events once they are written and committed to the stream.
We listen for the events from the EventStore and handle them in EventHandlers.
I am implementing the two aspects of a saga:
To take in events, which may transition state, which in turn may emit commands.
To have an alarm whereby, at some point in the future (via an external timer service), we can be called back.
Questions
As I understand it, event handlers should not emit commands (what happens if the command fails?). But am I OK with the above, since the Saga is the actual thing controlling the creation of commands (in reaction to events) via this event proxy, and any failure of command sending can be handled externally (in the external EventHandler that deals with CommandEmittedFromSaga and resends if the command fails)?
Or should I forget wrapping commands in events and instead store native Commands and Events in the same stream (intermixed via a base class Message; the Saga would consume both Commands and Events, while an Aggregate would consume only Events)?
Any other reference material on the net for implementation of event sourced Sagas? Anything I can sanity check my ideas against?
Some background code is below.
Saga issues a command to Run (wrapped in a CommandEmittedFromSaga event)
The command below is wrapped inside the event:
public class CommandEmittedFromSaga : Event
{
    public readonly Command Command;
    public readonly Identity SagaIdentity;
    public readonly Type SagaType;

    public CommandEmittedFromSaga(Identity sagaIdentity, Type sagaType, Command command)
    {
        Command = command;
        SagaType = sagaType;
        SagaIdentity = sagaIdentity;
    }
}
Saga requests a callback at some point in the future (AlarmRequestedBySaga event)
The alarm callback request is wrapped inside an event, and will fire an event back to the Saga on or after the requested time:
public class AlarmRequestedBySaga : Event
{
    public readonly Event Event;
    public readonly DateTime FireOn;
    public readonly Identity Identity;
    public readonly Type SagaType;

    public AlarmRequestedBySaga(Identity identity, Type sagaType, Event @event, DateTime fireOn)
    {
        Identity = identity;
        SagaType = sagaType;
        Event = @event;
        FireOn = fireOn;
    }
}
Alternatively I can store both Commands and Events in the same stream of base type Message
public abstract class EventSourcedSaga
{
    protected EventSourcedSaga() { }

    protected EventSourcedSaga(Identity id, IEnumerable<Message> messages)
    {
        Identity = id;
        if (messages == null) throw new ArgumentNullException(nameof(messages));
        var count = 0;
        foreach (var message in messages)
        {
            var ev = message as Event;
            var command = message as Command;
            if (ev != null) Transition(ev);
            else if (command != null) _messages.Add(command);
            else throw new Exception($"Unsupported message type {message.GetType()}");
            count++;
        }
        if (count == 0)
            throw new ArgumentException("No messages provided");
        // All we need to know is the original number of events this
        // entity has had applied at time of construction.
        _unmutatedVersion = count;
        _constructing = false;
    }

    readonly IEventDispatchStrategy _dispatcher = new EventDispatchByReflectionStrategy("When");
    readonly List<Message> _messages = new List<Message>();
    readonly int _unmutatedVersion;
    private readonly bool _constructing = true;
    public readonly Identity Identity;

    public IList<Message> GetMessages()
    {
        return _messages.ToArray();
    }

    public void Transition(Event e)
    {
        _messages.Add(e);
        _dispatcher.Dispatch(this, e);
    }

    protected void SendCommand(Command c)
    {
        // Don't add a command whilst we are in the constructor. Message
        // state transition during construction must not generate new
        // commands, as those commands will already be in the message list.
        if (_constructing) return;
        _messages.Add(c);
    }

    public int UnmutatedVersion() => _unmutatedVersion;
}
I believe the first two questions are the result of a wrong understanding of Process Managers (aka Sagas, see note on terminology at bottom).
Shift your thinking
It seems like you are trying to model it (as I once did) as an inverse aggregate. The problem with that: the "social contract" of an aggregate is that its inputs (commands) can change over time (because systems must be able to change over time), but its outputs (events) cannot. Once written, events are a matter of history and the system must always be able to handle them. With that condition in place, an aggregate can be reliably loaded from an immutable event stream.
If you try to just reverse the inputs and outputs as a process manager implementation, its output cannot be a matter of record, because commands can be deprecated and removed from the system over time. When you try to load a stream containing a removed command, it will crash. Therefore a process manager modeled as an inverse aggregate cannot be reliably reloaded from an immutable message stream. (Well, I'm sure you could devise a way... but is it wise?)
So let's think about implementing a Process Manager by looking at what it replaces. Take for example an employee who manages a process like order fulfillment. The first thing you do for this user is set up a view in the UI for them to look at. The second thing you do is make buttons in the UI for the user to perform actions in response to what they see on the view. For example: "This row has PaymentFailed, so I click CancelOrder. This row has PaymentSucceeded and OrderItemOutOfStock, so I click ChangeToBackOrder. This order is Pending and 1 day old, so I click FlagOrderForReview"... and so forth. Once the decision process is well-defined and starts requiring too much of the user's time, you are tasked with automating this process. To automate it, everything else can stay the same (the view, even some of the UI so you can check on it), but the user has changed to be a piece of code.
"Go away or I will replace you with a very small shell script."
The process manager code now periodically reads the view and may issue commands if certain data conditions are present. Essentially, the simplest version of a Process Manager is some code that runs on a timer (e.g. every hour) and depends on particular view(s). That's the place where I would start... with stuff you already have (views/view updaters) and minimal additions (code that runs periodically). Even if you decide later that you need different capability for certain use cases, "Future You" will have a better idea of the specific shortcomings that need addressing.
And this is a great place to remind you of Gall's law and probably also YAGNI.
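To make the "code on a timer" idea concrete, here is a minimal sketch (in Java, with entirely hypothetical view and command names; the original answer prescribes no specific code) of a process manager as a scheduled task that reads a view and issues commands:

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// All names below are illustrative assumptions, not from the original answer.
public class OrderFulfillmentProcessManager {

    // A row of the same read-side view a human operator would look at.
    public interface OrderRow {
        String orderId();
        boolean paymentFailed();
        boolean paymentSucceededButOutOfStock();
        boolean pendingForMoreThanOneDay();
    }

    public interface OrderView { List<OrderRow> pendingOrders(); }
    public interface CommandBus { void send(Object command); }

    private final OrderView view;
    private final CommandBus bus;

    public OrderFulfillmentProcessManager(OrderView view, CommandBus bus) {
        this.view = view;
        this.bus = bus;
    }

    // Run the decision loop on a timer, e.g. hourly, as the answer describes.
    public void start() {
        Executors.newSingleThreadScheduledExecutor()
                 .scheduleAtFixedRate(this::pollOnce, 0, 1, TimeUnit.HOURS);
    }

    // The "very small shell script": read the view, react with commands.
    // Command objects are elided to strings here for brevity.
    void pollOnce() {
        for (OrderRow row : view.pendingOrders()) {
            if (row.paymentFailed()) {
                bus.send("CancelOrder:" + row.orderId());
            } else if (row.paymentSucceededButOutOfStock()) {
                bus.send("ChangeToBackOrder:" + row.orderId());
            } else if (row.pendingForMoreThanOneDay()) {
                bus.send("FlagOrderForReview:" + row.orderId());
            }
        }
    }
}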
Any other reference material on the net for implementation of event sourced Sagas? Anything I can sanity check my ideas against?
Good material is hard to find as these concepts have very malleable implementations, and there are diverse examples, many of which are over-engineered for general purposes. However, here are some references that I have used in the answer.
DDD - Evolving Business Processes
DDD/CQRS Google Group (lots of reading material)
Note that the term Saga has a different implication than a Process Manager. A common saga implementation is basically a routing slip with each step and its corresponding failure compensation included on the slip. This depends on each receiver of the routing slip performing what is specified on the routing slip and successfully passing it on to the next hop or performing the failure compensation and routing backward. This may be a bit too optimistic when dealing with multiple systems managed by different groups, so process managers are often used instead. See this SO question for more information.
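To make the contrast concrete, here is a rough sketch (Java, with hypothetical names; not from the original answer) of what a routing slip carries:

import java.util.List;

// Hypothetical sketch of a routing slip: the slip itself travels with the
// message; each step names the work to do and the compensation to run if
// a later step fails and the slip is routed backward.
record RoutingStep(String destination, String action, String compensation) { }

record RoutingSlip(List<RoutingStep> steps, int currentStep) {
    // Receiver performed its step: forward the slip to the next hop.
    RoutingSlip advance() { return new RoutingSlip(steps, currentStep + 1); }

    // A step failed: route backward so completed steps can compensate.
    RoutingSlip routeBackward() { return new RoutingSlip(steps, currentStep - 1); }
}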

Spring MessageTemplate Issue

I'm facing a problem after the migration from Spring 2.5, Flex 3.5, BlazeDS 3 and Java 6 to Spring 3.1, Flex 4.5, BlazeDS 4 and Java 7. I've declared a ClientFeed in order to send a sort of "alarm" message to the Flex client. There are three ways those alarms are sent. The first is via SNMP traps: a thread is started and waits for traps; as one is received, an alarm is sent. The second is a polling mechanism: at the start of the web application a thread is started that polls the alarms at a fixed interval and sends them to the client. The third is an explicit poll command from the user, which calls a specific function on the dedicated service; that function then uses the same algorithm as the second method to perform a poll and should send those alarms to the client.
The problem is that after the migration the first two methods work without a problem, but the third one doesn't. I suspect there is a relation to the threads. Is there any known issue between MessageTemplate and threads in the new frameworks?
Here is a snapshot of the client feed in use:
@Component
public class ClientFeed {

    private MessageTemplate messageTemplate;

    @Autowired
    public void setTemplate(MessageTemplate messageTemplate) {
        this.messageTemplate = messageTemplate;
    }

    public void sendAlarmUpdate(final Alarm myAlarm) {
        if (messageTemplate != null) {
            System.out.println("Debug Thread: " + Thread.currentThread().getName());
            messageTemplate.send(new AsyncMessageCreator() {
                public AsyncMessage createMessage() {
                    AsyncMessage msg = messageTemplate.createMessageForDestination("flexClientFeed");
                    msg.setHeader("DSSubtopic", "Alarm");
                    msg.setBody(myAlarm);
                    return msg;
                }
            });
        }
    }
}
Via the three methods I reach this piece of code, and the displayed thread names are respectively: "Thread-14", "Thread-24" and "http-bio-80-exec-10".
I solved the problem by creating a local thread on the server to perform the job. That way the client feed is called from this newly created thread instead of the HTTP thread.
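For illustration, a minimal sketch of that workaround (the executor-based dispatcher is an assumption, not the original code; ClientFeed and Alarm are from the question above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Dispatches alarms from a dedicated server-side thread, so the
// MessageTemplate is never driven directly by an HTTP worker thread.
public class AlarmDispatcher {

    private final ClientFeed clientFeed;
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public AlarmDispatcher(ClientFeed clientFeed) {
        this.clientFeed = clientFeed;
    }

    public void dispatch(final Alarm alarm) {
        worker.submit(new Runnable() {
            public void run() {
                clientFeed.sendAlarmUpdate(alarm);
            }
        });
    }
}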

Ncqrs: How to raise an Event without having an Aggregate Root

Given I have two Bounded Contexts:
Fleet Mgt - simple CRUD-based supporting sub-domain
Sales - which is my CQRS-based Core Domain
When a CRUD operation occurs in the fleet management, an event reflecting the operation should be published:
AircraftCreated
AircraftUpdated
AircraftDeleted
etc.
These events are required a) to update various index tables that are needed in the Sales domain and b) to provide a unified audit log.
Question: Is there an easy way to store and publish these events (to the InProcessEventBus; I'm not using NSB here) without going through an AggregateRoot, which I wouldn't need in a simple CRUD context?
If you want to publish an event about something, that something probably is an aggregate root, because it is an externally identified object about a bundle of interest; otherwise, why would you want to keep track of it?
Keeping that in mind, you don't need index tables (I understand these are for querying) in the sales BC. You need the GUIDs of the Aircraft and only lookups/joins on the read side.
For auditing I would just add a generic audit event via reflection in the repositories/unit of work.
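As a rough sketch of that last idea (illustrative Java with hypothetical names; the answer itself shows no code), a generic audit event can be assembled reflectively from whatever entity the repository or unit of work is persisting:

import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Builds one generic audit payload from any entity by reflecting over
// its fields; hypothetical illustration, not part of Ncqrs.
public final class AuditEventFactory {

    public static Map<String, Object> auditEvent(String operation, Object entity) {
        Map<String, Object> audit = new HashMap<>();
        audit.put("operation", operation); // e.g. "Created", "Updated", "Deleted"
        audit.put("entityType", entity.getClass().getSimpleName());
        for (Field field : entity.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            try {
                audit.put(field.getName(), field.get(entity));
            } catch (IllegalAccessException ignored) {
                // Skip unreadable fields; acceptable for an audit sketch.
            }
        }
        return audit;
    }
}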
According to Pieter, the main contributor of Ncqrs, there is no way to do this out of the box.
In this scenario I don't want to go through the whole ceremony of creating and executing a command, then loading an aggregate root from the event store just to have it emit the event.
The behavior is simple CRUD, implemented using the simplest possible solution, which in this specific case is forms-over-data using Entity Framework. The only thing I need is an event being published once a transaction occurred.
My solution looks like this:
// Abstract base class that provides a Unit Of Work
public abstract class EventPublisherMappedByConvention
    : AggregateRootMappedByConvention
{
    public void Raise(ISourcedEvent e)
    {
        var context = NcqrsEnvironment.Get<IUnitOfWorkFactory>()
            .CreateUnitOfWork(e.EventIdentifier);
        ApplyEvent(e);
        context.Accept();
    }
}

// Concrete implementation for my specific domain
// Note: The events only reflect the CRUD that's happened.
// The methods themselves can stay empty; state has been persisted through
// other means anyway.
public class FleetManagementEventSource : EventPublisherMappedByConvention
{
    protected void OnAircraftTypeCreated(AircraftTypeCreated e) { }
    protected void OnAircraftTypeUpdated(AircraftTypeUpdated e) { }
    // ...
}

// This can be called from anywhere in my application, once the
// EF-based transaction has succeeded:
new FleetManagementEventSource().Raise(new AircraftTypeUpdated { ... });

Windows Service and multithreading

I'm working on a Windows Service in which I would like to have two threads. One thread should look for updates (in an RSS feed) and insert rows into a DB when updates are found.
When updates are found I would like to send notifications via another thread that accesses the DB, gets the messages and the recipients, and then sends the notifications.
Perhaps the best practice isn't to use two threads. Should I have DB connections in both threads?
Could anyone provide me with tips how to solve this?
The major reason to make an application or service multithreaded is to perform database or other background operations without blocking (i.e. hanging) a presentation element like a Windows form. If your service depends on very rapid polling or expects db inserts to take a very long time, it might make sense to use two threads. But I can't imagine that either would be the case in your scenario.
If you do decide to make your service multithreaded, the two major classes in C# that you want to look into are BackgroundWorker and ThreadPool. If you want to do multiple concurrent db inserts (for example, if you want to execute an insert for each of multiple RSS feeds polled at the same time), you should use a ThreadPool. Otherwise, use a BackgroundWorker.
Typically, you'd have a db access class with a method to insert a row. That method would create a BackgroundWorker, attach a DoWork handler pointing at a static method in that db access class, then call RunWorkerAsync. You should have db connection settings in only that one class, to make the code easier to maintain. For example:
public static class DbAccess
{
    public static void InsertRow(SomeObject entity)
    {
        BackgroundWorker bg = new BackgroundWorker();
        bg.DoWork += InsertRow_DoWork;
        bg.RunWorkerCompleted += InsertRow_RunWorkerCompleted;
        bg.RunWorkerAsync(entity);
    }

    private static void InsertRow_DoWork(object sender, DoWorkEventArgs e)
    {
        BackgroundWorker bg = sender as BackgroundWorker;
        SomeObject entity = e.Argument as SomeObject;
        // insert db access here
    }

    private static void InsertRow_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
    {
        // send notifications
        // alternatively, pass the InsertRow method a
        // delegate to a method in the calling class that will notify
    }
}
