Multithreading message broker in a transactional system - Best Practice

We have two systems that exchange tickets based on a transactional flow. The ticket statuses have a defined order, and if one status does not reach a system, the whole flow is stuck.
The problem is that we use a multi-threaded, load-balanced message broker between these systems, so an update1 status can sometimes be processed before the create, or an update2 before an update1.
I'm looking for a best practice for this kind of integration.

This sounds like you need to implement the Scatter-Gather EIP:
http://www.eaipatterns.com/BroadcastAggregate.html
In IBM Integration Bus or WebSphere Message Broker you can set aggregation timeouts to make sure you only proceed with the post-aggregation flow once you have all the components of the aggregation.
Any parts of the aggregation which don't turn up can be timed out and processed separately.
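For the original ordering problem, an alternative (or complement) to aggregation is to resequence in the consumer, which in EIP terms is closer to the Resequencer pattern. Below is a minimal, broker-agnostic Java sketch of that idea, not IBM Integration Bus code; the class name, status names, and message shape are all hypothetical, and a production version would also need a timeout for statuses that never arrive, analogous to the aggregation timeouts above.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch (not IIB/WMB code): buffer per-ticket status messages and
// release them only in the agreed order CREATE -> UPDATE1 -> UPDATE2.
// A real version would add a timeout and cleanup for finished tickets.
public class StatusResequencer {

    // The agreed processing order of ticket statuses.
    private static final List<String> ORDER = List.of("CREATE", "UPDATE1", "UPDATE2");

    // Next expected status index per ticket id.
    private final Map<String, Integer> nextIndex = new HashMap<>();
    // Out-of-order messages parked until their predecessors arrive.
    private final Map<String, Map<String, String>> parked = new HashMap<>();

    /** Called by the (possibly multi-threaded) broker consumer. */
    public synchronized void onMessage(String ticketId, String status, String payload) {
        int expected = nextIndex.getOrDefault(ticketId, 0);
        if (ORDER.indexOf(status) == expected) {
            process(ticketId, status, payload);
            nextIndex.put(ticketId, expected + 1);
            releaseParked(ticketId);                 // drain anything now in order
        } else {
            parked.computeIfAbsent(ticketId, k -> new HashMap<>())
                  .put(status, payload);             // hold back until its turn
        }
    }

    private void releaseParked(String ticketId) {
        Map<String, String> waiting = parked.getOrDefault(ticketId, Map.of());
        int expected = nextIndex.getOrDefault(ticketId, 0);
        while (expected < ORDER.size() && waiting.containsKey(ORDER.get(expected))) {
            String status = ORDER.get(expected);
            process(ticketId, status, waiting.remove(status));
            nextIndex.put(ticketId, ++expected);
        }
    }

    private void process(String ticketId, String status, String payload) {
        System.out.printf("ticket %s: applying %s%n", ticketId, status);
    }
}
```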

Related

CDC vs. Message Broker: key differences and which to use when

I have been struggling to find any key pros and cons of using one over the other for sharing data between two microservices, especially when it comes to scale.
My assumption, and my question, is this: if we use a CDC-to-queue plus queue-subscriber combination, we can more or less get rid of the need to publish to the message queue from our application layer (which might be more prone to human error).
I went down this line of thought when evaluating MongoDB "change streams" and have been curious ever since.
When using CDC in this way, you're basically turning your microservice's database into a message broker. That has the advantage of not requiring a separate message broker. It has the disadvantages of deeply coupling the consuming microservices to the producing microservice, especially since every new consuming microservice will effectively impose some extra load on the source microservice's database.
That said, CDC can be a reliable way to feed a pub/sub topic on a message broker, though it's probably best to recognize that CDC still couples the source microservice's internal data model to the data model for interservice communication, which tends to mean changes to one require changes to all. Since one of the primary (and arguably the only always-valid-in-general) reasons to adopt microservices is to allow changes with minimal coordination, it may be advisable to have the CDC feed a single service which is responsible for translating the CDC records into the wire model (e.g. domain events with an agreed-upon schema).
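As an illustration of that last point, here is a minimal sketch of such a translating service using MongoDB change streams (the CDC mechanism mentioned in the question) via the MongoDB Java driver. The connection string, database/collection names, the event shape, and the publish() call are all assumptions, not a prescribed design.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import org.bson.Document;

// Minimal sketch: one service owns the CDC feed and translates internal
// change records into a stable wire model before other services see them.
public class OrderEventTranslator {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // Change streams are MongoDB's CDC mechanism (replica set required).
            for (ChangeStreamDocument<Document> change : orders.watch()) {
                Document internal = change.getFullDocument();
                if (internal == null) continue;      // e.g. delete events

                // Translate the internal data model into an agreed-upon
                // domain event, so consumers never depend on the raw schema.
                Document domainEvent = new Document()
                        .append("type", "OrderChanged")
                        .append("orderId", internal.get("_id"))
                        .append("status", internal.get("status"));
                publish(domainEvent);                // hypothetical broker call
            }
        }
    }

    private static void publish(Document event) {
        System.out.println("would publish: " + event.toJson());
    }
}
```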

CQRS and Event Sourcing Guide

I want to create a CQRS and Event Sourcing architecture that is cheap, flexible, and uncomplicated.
I want to make sure that events never fail to at least reach the publisher/event store, ever, because that's where the business is.
Now, I have several options in mind:
Azure
With Azure, I am not sure what to use.
Azure Service Bus
Azure Functions
Azure WebJobs (I suppose these can be replaced with Azure Functions)
?? (something else I forgot or don't know?)
How reliable are these Azure serverless solutions?
Custom
For this I am thinking of using RabbitMQ; the problem is the cost of a virtual machine to run it.
All in all, I want:
Ability to replay the messages/events in case of failure.
Ability to easily add subscribers.
Ability to select the subscribers to which messages are replayed.
The event store should be able to store very large event messages (or how else shall I queue an image or file?).
The event store MUST NEVER EVER get choked, or sleep.
Speed of implementation/prototyping would be an added advantage.
What does your experience suggest?
What about other alternatives (e.g. Apache Kafka)?
Why not run Event Store? It was created by Greg Young himself, and you can host it wherever you need.
I am a Java user and have been using HornetQ (now Artemis, which I don't use) as an alternative to RabbitMQ for the longest time; the only problem is that it does not support replication, but it gets the job done when it comes to event sourcing. For your custom scenario, RabbitMQ is a good choice, but try running it on a DigitalOcean instance to keep costs low. If you are looking for simplicity and flexibility, you have only two choices: build your own, or forgo simplicity and pick up Apache Kafka with all its complexities, which will give you flexibility. You can also build an event store with MongoDB: https://www.mongodb.com/blog/post/event-sourcing-with-mongodb
Your requirements are too vague to make the optimal choice. You need to consider a lot of things; one of them, for instance, is the number of events per aggregate and the number of aggregates (note that these have to be statistical estimates). They matter primarily because if you allow tens of thousands of events per aggregate, you will need snapshotting, which adds complexity you might not need.
But for regular use cases you could just use a relational database like Postgres as your (linearizable) event store. It also has listen/notify functionality, so you would not really need any message bus either, and your application could be written in a reactive way.
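To make the Postgres suggestion concrete, here is a minimal sketch of an append-only event table combined with LISTEN/NOTIFY, using the PostgreSQL JDBC driver. The table layout, channel name, and credentials are assumptions; a real subscriber would loop on getNotifications and then read new rows from the table.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

// Minimal sketch of Postgres as an event store with listen/notify.
public class PgEventStore {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/app";
        try (Connection writer = DriverManager.getConnection(url, "app", "secret");
             Connection listener = DriverManager.getConnection(url, "app", "secret")) {

            // Append-only event log; the serial id gives a linear order.
            try (Statement s = writer.createStatement()) {
                s.execute("CREATE TABLE IF NOT EXISTS events (" +
                          "id BIGSERIAL PRIMARY KEY, aggregate_id TEXT, payload JSONB)");
            }

            // Subscriber side: LISTEN on a channel instead of a message bus.
            try (Statement s = listener.createStatement()) {
                s.execute("LISTEN new_event");
            }

            // Writer side: append an event and NOTIFY in one transaction,
            // so the notification only goes out if the event was stored.
            writer.setAutoCommit(false);
            try (PreparedStatement ps = writer.prepareStatement(
                    "INSERT INTO events (aggregate_id, payload) VALUES (?, ?::jsonb)")) {
                ps.setString(1, "order-42");
                ps.setString(2, "{\"type\":\"OrderPlaced\"}");
                ps.executeUpdate();
            }
            try (Statement s = writer.createStatement()) {
                s.execute("NOTIFY new_event");
            }
            writer.commit();

            // Poll for notifications (the driver surfaces them on the connection).
            PGNotification[] ns =
                    listener.unwrap(PGConnection.class).getNotifications(1000);
            if (ns != null) {
                for (PGNotification n : ns) {
                    System.out.println("woke up on channel: " + n.getName());
                }
            }
        }
    }
}
```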

Advanced Oracle AQ Dequeue Order

I have a Java application which uses an Oracle queue to store messages for later processing by multiple threads consuming queued messages. The messages in this queue can be related to each other and must therefore be processed in a specific order based on the business logic of my application. Basically, I want the dequeueing of one message A to be held back as long as another message B in the queue has not been completely processed. The only weapons Oracle AQ gives me here, as far as I can see, are the Delay and Priority parameters. These, however, cannot be used to achieve the scenario outlined above, since there are situations where two related messages can still be dequeued and processed at the same time. Are there any tools that can help establish an advanced processing order of messages?
I came to the conclusion that it is not a good idea to order these messages in the queue, because that would need a custom and very specialized dequeue strategy, which smells bad to me both in complexity and, most likely, in performance. It also tries to use the queue to fix communication-protocol issues which are application-specific and should therefore be handled in the application itself. Instead, the application / communication protocol should be tolerant enough to handle ordering issues.
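To illustrate what such a tolerant consumer might look like, here is a minimal JMS-flavored Java sketch (Oracle AQ can be consumed through Oracle's JMS interface): when a prerequisite has not been processed yet, the listener fails the delivery so the broker redelivers the message later. The DependencyStore helper and the message property names are hypothetical.

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Minimal sketch of the "tolerant application" approach: instead of forcing
// the queue to deliver in order, the consumer detects a missing prerequisite
// and lets the broker redeliver the message later.
public class TolerantListener implements MessageListener {

    private final DependencyStore store = new DependencyStore();

    @Override
    public void onMessage(Message message) {
        try {
            String ticketId = message.getStringProperty("ticketId");
            String status = message.getStringProperty("status");

            if (!store.prerequisiteDone(ticketId, status)) {
                // Out of order: with a transacted session or listener
                // container, throwing triggers rollback, so the message
                // is redelivered after the configured delay, not lost.
                throw new IllegalStateException("prerequisite not processed yet");
            }
            store.markDone(ticketId, status);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}

// Hypothetical helper; a real one would query the application's database.
class DependencyStore {
    boolean prerequisiteDone(String ticketId, String status) { return true; }
    void markDone(String ticketId, String status) { }
}
```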

failover support in spring-integration

I have a Spring Integration process that is used to integrate two systems with a near-real-time expectation.
I need to build a failover process for this that may run on the same or another machine.
Is there inbuilt support for this in Spring Integration?
If not, some ideas on how to implement it would be greatly helpful.
I am thinking of some sort of heartbeat messages on a message channel: if they don't arrive within a stipulated time frame, activate the workflow. But I don't know how this can be achieved in Spring Integration.
You need to provide more details (types of communication, etc.) but, generally, yes, it can be configured for failover. The default DirectChannel uses round-robin distribution between consumers, but you can configure it with a dispatcher that has load-balancer="none". Then it will always try the first consumer and fail over to the second on failure. You can also configure a circuit-breaker advice on the first consumer so it fails fast (for some period of time) and only retries the first consumer once in a while.
As I said, if you can provide more details of your actual requirements, we can help with more specific answers.
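For reference, here is a minimal sketch of such a channel using Spring Integration Java config rather than XML: passing a null LoadBalancingStrategy to DirectChannel disables round-robin, while the dispatcher's default failover behavior keeps trying subscribers in order, moving to the next only on failure. The bean name is arbitrary.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.messaging.MessageChannel;

@Configuration
public class FailoverChannelConfig {

    @Bean
    public MessageChannel failoverChannel() {
        // No load balancer: the first subscriber is always tried first;
        // the second subscriber acts as the passive backup on failure.
        return new DirectChannel(null);
    }
}
```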

Messaging bus + event storage + PubSub

I'm looking at building an application which has many data sources, each of which puts events into my system. Events have a well-defined data structure and could be encoded using JSON or XML.
I would like to be able to guarantee that events are saved persistently, and that the events are used as a part of a publish/subscribe bus with multiple subscribers possible per event.
For the database, availability is very important even as it scales to multiple nodes, and partition tolerance is important so that I can scale the number of places which can store my events. Eventual consistency is good enough for me.
I was thinking of using a JMS enterprise messaging bus (e.g. Mule) or an AMQP enterprise messaging bus (such as RabbitMQ or ZeroMQ).
But for my application, it seems that if I could set up a publish/subscribe system with CouchDB or something similar, it would solve my problem without having to integrate an enterprise messaging bus and a persistent storage system.
Which would work better: CouchDB + scaling + load balancing + some kind of pub/sub mechanism, or an explicit pub/sub messaging system with attached eventually consistent, available, partition-tolerant storage? Which one is easier to set up, administer, and operate? Which solution will have high throughput for a given cost? Why?
Also, are there any more questions I should ask before selecting my technologies? (BTW, Java is the server-side and client-side language).
I am using a CouchDB message queue in production. (It is not pub/sub, so I do not consider this answer complete.)
Currently (June 2011), CouchDB has huge potential as a messaging substrate:
Good data persistence
Well-poised for clustering (on a LAN, using BigCouch or Lounge)
Well-poised for distribution (between data centers, world-wide)
Good platform. Despite the shortcomings listed below, I love CouchDB because I can re-use my DB, and it works from Erlang, NodeJS, and every web browser.
The _changes query
Continuous feeds, instant delivery without polling
Network going down is no problem, just retry later from the previous position
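As a minimal sketch of what consuming the continuous _changes feed can look like from Java (the poster's platform), using only the JDK 11 HTTP client; the database name, host, and query parameters are assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.stream.Stream;

// Minimal sketch of consuming CouchDB's _changes feed as a message stream.
// "since" would normally be the last sequence you processed, persisted so
// you can resume from the previous position after a network failure.
public class ChangesFeedConsumer {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5984/events/_changes" +
                                "?feed=continuous&include_docs=true" +
                                "&since=0&heartbeat=30000"))
                .build();

        // The continuous feed emits one JSON line per change and stays open.
        HttpResponse<Stream<String>> response =
                http.send(request, HttpResponse.BodyHandlers.ofLines());
        response.body()
                .filter(line -> !line.isBlank())   // skip heartbeat keep-alives
                .forEach(line -> System.out.println("change: " + line));
    }
}
```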
Still, even a low-volume message system in CouchDB requires careful planning and maintenance. CouchDB is potentially a great messaging server. (It is inspired by Lotus Notes, which handles high email volume.)
However, these are the challenges with CouchDB:
Append-only database files grow fast
Be mindful about disk capacity
Be mindful about disk i/o. Compaction will read and re-write all live documents
Deleted documents are not really deleted. They are marked deleted=true and kept forever, even after compaction! This is in fact uniquely good about CouchDB, because the delete action will propagate through the cluster even if the network goes down for a time.
Propagating (replicating) deletes is great, but what about the buildup of deleted docs? Eventually it will outstrip everything else. The solution is to purge them, which actually removes them from disk. Unfortunately, if you do 2 or more purges before querying a map/reduce view, the view will completely rebuild itself. That may take too much time, depending on your needs.
As usual, we hear NoSQL databases shouting "free lunch!", "free lunch!" while CouchDB says "you are going to have to work for this."
Unfortunately, unless you have compelling pressure to re-use CouchDB, I would use a dedicated messaging platform. I had a good experience with ejabberd as a messaging platform and for communicating to/from Google App Engine.
I think that the best solution would be CouchDB + Jabber/XMPP server (ejabberd) + book: http://professionalxmpp.com
JSON is the natural storage format for CouchDB
Jabber/XMPP server includes pubsub support
The book is a must read
While you can use a database as an alternative to a message queueing system, no database is a message queuing system, not even CouchDB. A message queueing system like AMQP provides more than just persistence of messages, in fact with RabbitMQ, persistence is just an invisible service under the hood that takes care of all of the challenges that you have to deal with by yourself on CouchDB.
Take a good look at the RabbitMQ website where there is lots of information about AMQP and how to make use of it. They have done a great job of collecting together articles and blogs about message queueing.
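As a small illustration of that under-the-hood persistence, here is a minimal sketch using the RabbitMQ Java client: a durable queue plus a persistent message survives a broker restart. Host and queue name are assumptions, and a real setup would add publisher confirms for an end-to-end delivery guarantee.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class DurablePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // durable=true: the queue definition survives a broker restart.
            channel.queueDeclare("events", true, false, false, null);

            // PERSISTENT_TEXT_PLAIN marks the message itself as persistent,
            // so the broker writes it to disk rather than holding it only
            // in memory (pair with publisher confirms for stronger guarantees).
            channel.basicPublish("", "events",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "{\"type\":\"SensorEvent\"}".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```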
