I have a Spring Integration process that is used to integrate two systems with a near-real-time expectation.
I need to build a failover process for this that may run on the same or another machine.
Is there inbuilt support for this in Spring Integration?
If not, some ideas on how to implement it would be very helpful.
I am thinking of some sort of heartbeat messages on a message channel: if they don't arrive within a stipulated time frame, activate the workflow. But I don't know how this can be achieved in Spring Integration.
You need to provide more details (types of communication, etc.), but, generally, yes, it can be configured for failover. The default DirectChannel uses round-robin distribution between consumers, but you can configure it with a dispatcher that has load-balancer="none". Then it will always try the first consumer and fail over to the second on failure. You can also configure a circuit breaker advice on the first consumer so it fails fast (for some period of time) and the first consumer is only retried once in a while.
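Here is a minimal sketch of that setup using Java configuration; the bean names and the 3-failure threshold are illustrative, not prescriptive:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.handler.advice.RequestHandlerCircuitBreakerAdvice;

@Configuration
public class FailoverConfig {

    @Bean
    public DirectChannel failoverChannel() {
        // A null load-balancing strategy disables round-robin; the dispatcher
        // then always tries the first subscribed consumer and, since failover
        // defaults to true, falls back to the next one on failure.
        return new DirectChannel(null);
    }

    @Bean
    public RequestHandlerCircuitBreakerAdvice circuitBreaker() {
        // Apply this advice to the first consumer so it fails fast after
        // repeated failures and is only retried once in a while.
        RequestHandlerCircuitBreakerAdvice advice = new RequestHandlerCircuitBreakerAdvice();
        advice.setThreshold(3);          // open the breaker after 3 failures
        advice.setHalfOpenAfter(10_000); // retry the first consumer after 10s
        return advice;
    }
}
```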
As I said, if you can provide more details of your actual requirements, we can help with more specific answers.
I've been implementing an SQS (AWS) service for my project. My purpose for this implementation is that I have 2 projects (microservices) and I want to sync data from one project to another. So, I intend to use SQS, but I am also thinking about a webhook for solving my case. I know some basics of their pros and cons. So, my question is: should I use a webhook or SQS for my case?
Thanks for any help!
First of all, if you wish to sync 2 databases you would probably want something that does not depend on your service code. Try reading about change data capture: log-based scanners are a safe way to do that, and Debezium is a strong tool for it.
Second, if you wish to go with your own implementation, I would suggest the queueing approach. Its biggest advantage shows when the second service is down: with webhooks the information would be lost, whereas queues (SQS or any other) will keep the data until the service is up again.
SQS is your best bet here. A couple of reasons (see the sketch after this list):
- Reliability in case something is down.
- Ability to repopulate other microservices. For example, if you decide to create another microservice and you need to populate its data from the start, you will probably read everything from service 1 and put it in the queue for the new microservice.
- Scalability: queues make your architecture horizontally scalable. Just add machines that read from the queues and do the work in parallel.
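A minimal sketch of this queueing approach with the AWS SDK for Java v2; the queue URL and message body are assumptions, and error handling is omitted for brevity:

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class SyncViaSqs {

    // Hypothetical queue shared by the two services.
    static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/sync-queue";

    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            // Service 1: publish the change; SQS keeps it even if service 2 is down.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .messageBody("{\"entity\":\"user\",\"id\":42,\"op\":\"update\"}")
                    .build());

            // Service 2: poll and apply; delete only after successful processing,
            // so a crash mid-processing just makes the message visible again.
            sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .maxNumberOfMessages(10)
                    .waitTimeSeconds(20) // long polling
                    .build())
               .messages()
               .forEach(m -> {
                   apply(m.body());
                   sqs.deleteMessage(b -> b.queueUrl(QUEUE_URL)
                                           .receiptHandle(m.receiptHandle()));
               });
        }
    }

    private static void apply(String body) { /* upsert into the local store */ }
}
```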
I want to create a CQRS and Event Sourcing architecture that is very cheap, very flexible, and very uncomplicated.
I want to make sure that events never fail to at least reach the publisher/event store, ever, because that's where the business is.
Now, I have several options in mind:
Azure
With Azure, I am not sure what to use:
Azure Service Bus
Azure Functions
Azure WebJobs (I suppose this can be replaced with Azure Functions)
Something else I forgot or don't know?
How reliable are these Azure serverless solutions?
Custom
For this I am thinking of using RabbitMQ; the problem is the cost of a virtual machine to run it.
All in all, I want:
Ability to replay the messages/events in case of failure.
Ability to easily add subscribers.
Ability to select the subscribers upon which to replay the messages.
The event store should be able to store very large event messages (or how else shall I queue an image or file?).
The event store MUST NEVER EVER get choked, or sleep.
Speed of implementation/prototyping would be an added advantage.
What does your experience suggest?
What about other alternatives (e.g., Apache Kafka)?
Why not run Event Store? Created by Greg Young himself. Host it where you need.
I am a Java user; I have been using HornetQ (a.k.a. Artemis, which I don't use), an alternative to RabbitMQ, for the longest time; the only problem is that it does not support replication, but it gets the job done when it comes to event sourcing. For your custom scenario, RabbitMQ is a good choice, but try running it on a DigitalOcean instance for low costs. If you are looking for simplicity and flexibility you have only 2 choices: build your own, or forgo simplicity and pick up Apache Kafka with all its complexities, which will give you flexibility. You can also build an event store with MongoDB: https://www.mongodb.com/blog/post/event-sourcing-with-mongodb
Your requirements are too vague to make the optimal choice. You need to consider a lot of things; one of them, for instance, is the number of events per aggregate and the number of aggregates (note that these are statistical estimates). Those are important primarily because if you allow tens of thousands of events for each aggregate, then you would need snapshotting, which adds complexity you might not need.
But for regular use cases you could just use a relational database like Postgres as your (linearizable) event store. It also has LISTEN/NOTIFY functionality, so you would not really need any message bus either, and your application could be written in a reactive way.
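As a rough illustration, here is a minimal sketch of appending to a Postgres-backed event store over JDBC; the events table schema and the new_events notification channel are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PostgresEventStore {

    private final Connection conn;

    public PostgresEventStore(String url, String user, String pass) throws SQLException {
        this.conn = DriverManager.getConnection(url, user, pass);
    }

    public void append(String aggregateId, long expectedVersion,
                       String type, String jsonPayload) throws SQLException {
        // A unique constraint on (aggregate_id, version) gives optimistic
        // concurrency: a concurrent writer with the same expected version
        // violates the constraint and its insert fails.
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO events (aggregate_id, version, type, payload) " +
                "VALUES (?, ?, ?, ?::jsonb)")) {
            ps.setString(1, aggregateId);
            ps.setLong(2, expectedVersion + 1);
            ps.setString(3, type);
            ps.setString(4, jsonPayload);
            ps.executeUpdate();
        }
        // Wake up subscribers that issued LISTEN new_events, so no separate
        // message bus is needed for near-real-time projections.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT pg_notify('new_events', ?)")) {
            ps.setString(1, aggregateId);
            ps.execute();
        }
    }
}
```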
I have a Java application which uses an Oracle Queue to store messages for later processing by multiple threads consuming queued messages. The messages in this queue can be related to each other and must therefore be processed in a specific order based on the business logic of my application. Basically, I want to achieve that the dequeueing of one message A is held back as long as another message B in the queue has not been completely processed. The only weapons given by Oracle AQ that I see here are the Delay and Priority parameters. These, however, cannot be used to achieve the scenario outlined above, since there are situations where two related messages can still be dequeued and processed at the same time. Are there any tools that can help establish an advanced processing order of messages?
I came to the conclusion that it is not a good idea to order these messages using the queue, because it would need a custom and very specialized dequeue strategy, which has a very bad smell to me, both complexity-wise and, most likely, performance-wise. It also tries to fix communication-protocol issues in the queue that are application-specific and should therefore be treated in the application itself. Instead, the application / communication protocol should be tolerant enough to handle ordering issues.
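To make that concrete, here is a hedged sketch of one way an application can tolerate out-of-order delivery: defer a message whose prerequisite has not been processed yet instead of failing. The ProcessedLog and Requeuer helpers are hypothetical, as is the 5-second delay:

```java
// Hypothetical collaborators: a log of completed messages and a way to
// re-enqueue a message with a delay (e.g. via the AQ Delay parameter).
interface ProcessedLog {
    boolean isDone(String messageId);
    void markDone(String messageId);
}

interface Requeuer {
    void requeueWithDelay(BusinessMessage msg, long delayMillis);
}

record BusinessMessage(String id, String dependsOn, String payload) {}

public class OrderTolerantConsumer {

    private final ProcessedLog processedLog;
    private final Requeuer requeuer;

    public OrderTolerantConsumer(ProcessedLog processedLog, Requeuer requeuer) {
        this.processedLog = processedLog;
        this.requeuer = requeuer;
    }

    public void onMessage(BusinessMessage msg) {
        // B depends on A: only process B once A is known to be done.
        if (msg.dependsOn() != null && !processedLog.isDone(msg.dependsOn())) {
            requeuer.requeueWithDelay(msg, 5_000); // try again in 5 seconds
            return;
        }
        process(msg);
        processedLog.markDone(msg.id());
    }

    private void process(BusinessMessage msg) { /* business logic */ }
}
```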
We have 2 systems that exchange tickets based on a transactional flow. The ticket statuses have an order; if one status does not reach one system, the whole flow is stuck.
The problem is that we use a multi-threaded, load-balanced message broker between these systems, and we might have cases where an update1 status is processed faster than a create, or an update2 faster than an update1.
I'm looking for a best practice for this kind of integration.
This sounds like you need to implement the Scatter-Gather EIP:
http://www.eaipatterns.com/BroadcastAggregate.html
In IBM Integration Bus or WebSphere Message Broker you can set aggregation timeouts to make sure you only proceed with the post-aggregation flow if you have all the components of the aggregation.
Any parts of the aggregation which don't turn up can be timed out and processed separately.
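IIB specifics aside, the gather-with-timeout half of the pattern looks roughly like the following plain-Java sketch; the 30-second timeout and the String part type are assumptions:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class Gatherer {

    public void gather(List<CompletableFuture<String>> parts) throws Exception {
        CompletableFuture<Void> all =
                CompletableFuture.allOf(parts.toArray(new CompletableFuture[0]));
        try {
            // Proceed with the post-aggregation flow only if every
            // component of the aggregation arrived within the timeout.
            all.get(30, TimeUnit.SECONDS);
            parts.forEach(p -> process(p.join()));
        } catch (TimeoutException e) {
            // Parts that didn't turn up are timed out and can be
            // handled separately (retry, compensate, alert, ...).
            parts.stream()
                 .filter(p -> !p.isDone())
                 .forEach(p -> p.cancel(true));
        }
    }

    private void process(String part) { /* post-aggregation work */ }
}
```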
I have scoured the Internet, posted to the Spring forums, and read nearly the whole of the online documentation, but I cannot figure out whether Spring Integration can process more than one message within a single multi-resource (JTA) transaction. This is critical for my purposes, in order to get the throughput necessary. Does anyone know if this is possible? (And a little guidance on how to make it work would be appreciated.)
Once a transaction is started, as long as you don't cross a thread boundary, all work will remain in that transaction.
This means that, if your transaction manager supports multi-resource transactions and you avoid introducing concurrency within the transaction, you will be OK.
In other words: it depends, but it is possible.
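For instance, with a polled inbound endpoint, a single transactional poll can hand several messages to the downstream flow on one thread. A minimal sketch using the Java DSL, assuming a JTA-capable PlatformTransactionManager bean is available (the poll interval and batch size are illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.PollerSpec;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class BatchTxConfig {

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerSpec defaultPoller(PlatformTransactionManager jtaTransactionManager) {
        return Pollers.fixedDelay(500)
                // up to 10 messages are pulled and processed in one poll...
                .maxMessagesPerPoll(10)
                // ...and the whole poll runs in a single (multi-resource)
                // transaction on the poller's thread.
                .transactional(jtaTransactionManager);
    }
}
```

As long as none of the downstream endpoints introduce a thread handoff (an ExecutorChannel or QueueChannel, for example), all the messages from one poll commit or roll back together.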