Caliburn Micro 2 EventAggregator PublishOnBackgroundThread - multithreading

Can anyone explain why and when I should use PublishOnBackgroundThread instead of PublishOnUIThread?
I cannot find any use cases for PublishOnBackgroundThread and am not sure which method I should use.

It really depends on the type of the message you're publishing.
If you're using the EventAggregator to surface a message from a low-lying service back to the UI, then PublishOnUIThread makes the most sense, since you'll be updating the UI when handling the message. The same applies when you're using it to communicate between view models.
Conversely, it sometimes gets used by view models to publish events that an underlying service listens to (rather than the view model depending on that service).
That service may perform some expensive work which makes sense to happen on a background thread. Personally, I'd have the background service push that work onto a background thread itself, but different people want different options.
Ultimately the method was included for completeness.

Related

Is a Service provider really necessary in NestJS?

I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them, so that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is: in most traditional web applications, regardless of platform, a lot of the logic that would go into a NestJS service would otherwise just go right into the controller. Why did NestJS decide to move the provider into its own class/abstraction? What design advantages does the developer gain here?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java's Spring Boot. In these frameworks, the biggest concerns are Separation of Concerns (SoC) and the Single Responsibility Principle (SRP), which mean that each class deals with a specific function and, for the most part, can do so without really knowing much about other parts of the application (which leads to loosely coupled designs).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You'd need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so it's easier to test, but now you're also looking at needing to test the factory to make sure it produces requests correctly (so now you're testing your test code). If you break the controller and the service apart, the controller can be tested simply by checking that it returns whatever the service returns, and that's that. The service can then take a specific input (like from the @Body() decorator in NestJS) and have a much easier input to work with and test.
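To make that concrete, here is a minimal sketch of the split (the CatsController/CatsService names and the DTO are illustrative, not from the question):

```typescript
// The service holds the logic and takes plain input; the controller
// only maps HTTP onto the service call.
import { Body, Controller, Injectable, Post } from '@nestjs/common';

class CreateCatDto {
  name: string;
}

@Injectable()
export class CatsService {
  private readonly cats: CreateCatDto[] = [];

  // Plain input, no request object: trivial to unit-test in isolation.
  create(cat: CreateCatDto): CreateCatDto {
    this.cats.push(cat);
    return cat;
  }
}

@Controller('cats')
export class CatsController {
  constructor(private readonly catsService: CatsService) {}

  // The controller's test only needs to verify that it returns
  // whatever the service returns.
  @Post()
  create(@Body() dto: CreateCatDto): CreateCatDto {
    return this.catsService.create(dto);
  }
}
```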
By splitting the code up, the developer gains flexibility in maintenance and testing, plus some autonomy on a team: with interfaces set up, you know what kind of contract you'll be getting from an injected service without needing to know how the service works in the first place. If you still aren't convinced, you can also read up on Modular Programming, Coupling, and Inversion of Control.
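A hedged sketch of that interface point, using a custom provider token (the CATS_SERVICE token and all names here are hypothetical):

```typescript
// The controller depends on a contract (interface + token),
// not on a concrete class.
import { Controller, Get, Inject, Module } from '@nestjs/common';

export interface ICatsService {
  findAll(): string[];
}

export const CATS_SERVICE = 'CATS_SERVICE'; // injection token

class CatsService implements ICatsService {
  findAll(): string[] {
    return ['Tom', 'Felix'];
  }
}

@Controller('cats')
class CatsController {
  // Any implementation satisfying ICatsService can be swapped in,
  // e.g. a mock in tests, without touching this class.
  constructor(@Inject(CATS_SERVICE) private readonly cats: ICatsService) {}

  @Get()
  findAll(): string[] {
    return this.cats.findAll();
  }
}

@Module({
  controllers: [CatsController],
  providers: [{ provide: CATS_SERVICE, useClass: CatsService }],
})
export class CatsModule {}
```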

CQRS and Event Sourcing Guide

I want to create a CQRS and Event Sourcing architecture that is very cheap, very flexible, and very uncomplicated.
I want to make sure that events never fail to at least reach the publisher/event store, ever, because that's where the business is.
Now, I have several options in mind:
Azure
With Azure, I don't seem to know what to use.
Azure Service Bus
Azure Functions
Azure WebJobs (I suppose these can be replaced with Azure Functions)
?? (something else I forgot or don't know?)
How reliable are these Azure serverless solutions?
Custom
For this I am thinking of using RabbitMQ; the problem is the cost of a virtual machine to run it.
All in all, I want:
Ability to replay the messages/events in case of failure.
Ability to easily add subscribers.
Ability to select the subscribers for which to replay the messages.
The event store should be able to store very large event messages (or how else shall I queue an image or file?).
The event store MUST NEVER EVER get choked, or sleep.
Speed of implementation/prototyping would be an added advantage.
What does your experience suggest?
What about other alternatives (e.g. Apache Kafka)?
Why not run Event Store? It was created by Greg Young himself, and you can host it wherever you need.
I am a Java user; I have been using HornetQ (now Artemis, which I don't use), an alternative to RabbitMQ, for the longest time. The only problem is that it does not support replication, but it gets the job done when it comes to event sourcing. For your custom scenario, RabbitMQ is a good choice, but try running it on a DigitalOcean instance to keep costs low. If you are looking for simplicity and flexibility, you have only two choices: build your own, or forgo simplicity and pick up Apache Kafka with all its complexities, which will give you flexibility. You can also build an event store with MongoDB: https://www.mongodb.com/blog/post/event-sourcing-with-mongodb
Your requirements are too vague to make the optimal choice. You need to consider a lot of things; one of them, for instance, is the number of events per aggregate and the number of aggregates (note that these have to be statistical estimates). These matter primarily because if you allow tens of thousands of events per aggregate, you will need snapshotting, which adds complexity you might not need.
But for regular use cases you could just use a relational database like Postgres as your (linearizable) event store. It also has LISTEN/NOTIFY functionality, so you would not really need any message bus either, and your application could be written in a reactive way.
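As a rough illustration of that idea, here is a minimal sketch using the node-postgres (pg) package; the events table, its columns, and the channel name are assumptions for the example, not a standard schema:

```typescript
import { Client } from 'pg';

// Append an event to a stream, then notify listeners on a channel.
export async function appendEvent(
  client: Client,
  streamId: string,
  type: string,
  payload: unknown
): Promise<void> {
  await client.query(
    'INSERT INTO events (stream_id, type, payload) VALUES ($1, $2, $3)',
    [streamId, type, JSON.stringify(payload)]
  );
  // pg_notify lets subscribers react without a separate message bus.
  await client.query("SELECT pg_notify('events_channel', $1)", [streamId]);
}

// A subscriber: LISTEN on the channel and update projections reactively.
export async function subscribe(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  await client.query('LISTEN events_channel');
  client.on('notification', (msg) => {
    // msg.payload is the stream id; re-read that stream's new rows
    // from the events table and update projections here.
    console.log('new event in stream:', msg.payload);
  });
}
```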

EventSourcing gateways (synchronize with external systems)

Are there best practices for implementing event sourcing gateways? By gateway I mean infrastructure or a service that generates a set of events based on the state returned by some external service.
Even if an application is based on event sourcing, some external, uncontrollable entities can still be present. For example, you want to synchronize the user list from Azure AD, so you query the service, which returns the user list. Then you read the user list from a projection, compute the difference against the external state, and produce events to fill that difference.
Or your application is an online shop and you need to import current USD/EUR/Bitcoin exchange rates to show prices. The gateway can poll some currency provider and produce events. In the simple case this is very easy, but if the projection state is a more complex structure, a trivial import is not obvious.
Is there a common approach for this case?
Building integration adapters that use poll-emit is normal and I personally prefer this way of doing integrations in general.
However, this has little to do with event sourcing. What you actually need to solve your integration problem is to simulate the external system emitting events on its own, so that you can build a reactive system that consumes these events.
When these events come into your system from the adapter, you can do whatever you want with them. But essentially, event sourcing assumes that you store your own objects' state in event streams; when an event comes from some external system, it is not your state. You can derive your system's state from external events, but the events you store will be your own.
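For the Azure AD example, a minimal sketch of such a poll-emit adapter; fetchExternalUsers and publish are hypothetical hooks, and the diff is deliberately naive:

```typescript
// Poll an external source, diff it against the last known state,
// and emit events describing the delta.
type User = { id: string; name: string };
type GatewayEvent =
  | { type: 'ExternalUserAdded'; user: User }
  | { type: 'ExternalUserRemoved'; userId: string };

export class UserSyncGateway {
  private known = new Map<string, User>();

  constructor(
    private fetchExternalUsers: () => Promise<User[]>, // e.g. an Azure AD query
    private publish: (event: GatewayEvent) => Promise<void>
  ) {}

  async poll(): Promise<void> {
    const current = await this.fetchExternalUsers();
    const currentIds = new Set(current.map((u) => u.id));

    // Present externally but unknown locally -> "added" events.
    for (const user of current) {
      if (!this.known.has(user.id)) {
        await this.publish({ type: 'ExternalUserAdded', user });
        this.known.set(user.id, user);
      }
    }
    // Known locally but gone externally -> "removed" events.
    for (const id of [...this.known.keys()]) {
      if (!currentIds.has(id)) {
        await this.publish({ type: 'ExternalUserRemoved', userId: id });
        this.known.delete(id);
      }
    }
  }
}
```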

When do I need to use worker processes in Heroku

I have a Node.js app with a small set of users that is currently architected with a single web process. I'm thinking about adding an after save trigger that will get called when a record is added to one of my tables. When that after save trigger is executed, I want to perform a large number of IO operations to external APIs. The number of IO operations depends on the number of elements in an array column on the record. Thus, I could be performing a large number of asynchronous operations after each record is saved in this particular table.
I thought about moving this work to a background job as suggested in Worker Dynos, Background Jobs and Queueing. The article gives a rule of thumb that tasks taking longer than 500 ms should be moved to a background job. However, after working through the example using RabbitMQ (Asynchronous Web-Worker Model Using RabbitMQ in Node), I'm not convinced that it's worth the time to set everything up.
So, my questions are:
For an app with a limited number of concurrent users, is it OK to leave a long-running function in a web process?
If I eventually decide to send this work to a background job, it doesn't seem like it would be that hard to change my after save trigger. Am I missing something?
Is there a way to do this that is easier than implementing a message queue?
For an app with a limited number of concurrent users, is it OK to leave a long-running function in a web process?
This is more a question of preference than anything.
In general I say no, it's not OK... but that's based on my experience building RabbitMQ services that run in Heroku workers, and not seeing this as a difficult thing to do.
With a little practice, you may find that the worker is the simpler solution, as I have: it allows simpler and more robust code, since it splits the web side away from the background processor, allowing each to run without knowing about the other directly.
If I eventually decide to send this work to a background job, it doesn't seem like it would be that hard to change my after save trigger. Am I missing something?
Are you missing something? Not really.
As long as you write your current in-the-web-process code in a well structured and modular fashion, moving it to a background process is not usually a big deal.
Most of the panic people feel about having to move code into the background comes from having that code tightly coupled to the HTTP request/response process (I know from personal experience how painful that can be).
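For instance, a minimal sketch of that kind of structure; the record shape and the use of the global fetch (Node 18+) are illustrative assumptions:

```typescript
// The after-save work lives in a plain function with no knowledge of HTTP.
interface SavedRecord {
  id: string;
  endpoints: string[]; // the array column driving the external IO
}

// Called inline from the after-save trigger today; callable unchanged
// from a queue-consuming worker process later.
export async function processRecord(record: SavedRecord): Promise<void> {
  await Promise.all(
    record.endpoints.map(async (url) => {
      const res = await fetch(url, { method: 'POST', body: record.id });
      if (!res.ok) {
        throw new Error(`call to ${url} failed with status ${res.status}`);
      }
    })
  );
}
```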
Is there a way to do this that is easier than implementing a message queue?
There are many options for distributed computing and background processing. I personally like RabbitMQ and the messaging patterns it uses.
I would suggest giving it a try and seeing if it's something that can work well for you.
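A rough sketch of what that web/worker split looks like with the amqplib package; the queue name, payload shape, and callExternalApis are illustrative assumptions:

```typescript
import * as amqp from 'amqplib';

const QUEUE = 'record-saved';
const URL = process.env.RABBITMQ_URL ?? 'amqp://localhost';

// Web process: enqueue the work instead of doing the slow IO inline.
export async function publishRecordSaved(recordId: string): Promise<void> {
  const conn = await amqp.connect(URL);
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.sendToQueue(QUEUE, Buffer.from(JSON.stringify({ recordId })), {
    persistent: true,
  });
  await ch.close();
  await conn.close();
}

// Worker process: consume jobs and perform the external API calls.
export async function startWorker(): Promise<void> {
  const conn = await amqp.connect(URL);
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  await ch.consume(QUEUE, async (msg) => {
    if (!msg) return;
    const { recordId } = JSON.parse(msg.content.toString());
    await callExternalApis(recordId); // hypothetical: the slow work
    ch.ack(msg); // only acknowledge once the work succeeded
  });
}

async function callExternalApis(recordId: string): Promise<void> {
  // placeholder for the per-record external API calls
}
```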
Other options include Redis with pub/sub libraries on top of it, direct HTTP API calls to another web server, or just a timer in your background process that checks database tables at a given frequency and runs code based on the data it finds.
P.S. You may find my RabbitMQ For Developers course of interest if you want to dig deeper into RabbitMQ with Node: http://rabbitmq4devs.com

Making code in Liferay Model Listeners Asynchronous (using concurrency)

The Problem
Our Liferay system is the basis for synchronizing data with other web applications.
We use Model Listeners for that purpose.
The listeners make a lot of web-service calls and database updates, and as a result the corresponding action in Liferay is too slow.
For example:
When a User is added in Liferay, we need to fire a lot of web-service calls to add user details and update other systems with the user data, and also update some Liferay custom tables. So adding a User takes a lot of time, and in a few rare cases the request may time out!
Since the code in the UserListener depends only on the user details, and the User would still be added in Liferay even if an exception occurred in the UserListener, we have thought of the following solution.
We also have a scheduler in Liferay which fixes things if an exception occurred while executing listener code.
Proposed Solution
We thought of making the code in the UserListener asynchronous by using the Concurrency API.
So here are my questions:
Is it recommended to have concurrent code in Model Listeners?
If yes, will it have any adverse effects, such as transaction issues, if we also update Liferay custom tables through this code?
What are the other general pros and cons of this approach?
Is there any better way to get real-time updates to other systems without hampering the user experience?
Thank you for any help on this matter.
It makes sense that you want to use Concurrency to solve this issue.
Doing intensive work like invoking web services in the thread that modifies the model is not really a good idea, quite apart from the impact it has on user experience.
Firing off threads within the model listeners may also be somewhat complex and hard to maintain.
You could explore using Liferay's Message Bus paradigm where you can send a message to a disconnected message receiver which will then do all the intensive work outside of the model listener's calling thread.
Read more about the message bus here:
Message Bus Developer Guide
Message Bus Wiki
