When I create a reactive microservice, I need to choose a different database (like Cassandra), but I can't customize the blueprint for this specific question (I only see MongoDB right now).
How can I do this? Any suggestions?
The experimental Reactive option only supports MongoDB at this time (v5.3.1). You can track the progress in the related issue, "Reactive/Webflux support".
I come from an Express.js background and am pretty new to the LoopBack framework, especially LoopBack 4, which I am using for my current project. I have gone through the LoopBack 4 documentation a few times and made good progress setting up the project. Although the project is running as expected, I am not convinced by the project structure. Please help me solve the following problem.
As per the docs, database operations should live in repositories and routes in controllers. Now suppose my API contains a lot of business logic alongside the database operations, say thousands of lines. That makes the controller routes difficult to maintain, and the difficulty grows if an API demands a version upgrade.
Is there any way to organise the code in controllers in a more scalable and reusable manner? What if I add one more service layer between controllers and repositories and put the business logic there? How would I implement it the correct way? Is there an official way to do this that is recommended by the LoopBack community?
Thanks in advance!!
Is there any way to organise the code in controllers in a more scalable and reusable manner?
Yes, services can be used to abstract complex logic into its own separate class(es). Once defined, the service can be injected into the dependent controller(s) which can then call the respective service functions.
How the service is designed depends on the user's requirements, as LoopBack 4 does not enforce strict design requirements.
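For illustration, here is a minimal sketch of that pattern (the OrderService class, its method, and the route are hypothetical names, not something LoopBack scaffolds for you):

```typescript
import {BindingScope, injectable, service} from '@loopback/core';
import {get} from '@loopback/rest';

// The business logic lives in its own injectable class instead of the controller.
@injectable({scope: BindingScope.TRANSIENT})
export class OrderService {
  // In a real app this class would also receive repositories via injection.
  async calculateTotal(quantity: number, unitPrice: number): Promise<number> {
    // ...the "thousands of lines" of business logic belong here...
    return quantity * unitPrice;
  }
}

// The controller stays thin: it maps the route and delegates to the service.
export class OrderController {
  constructor(@service(OrderService) private orderService: OrderService) {}

  @get('/orders/total')
  async total(): Promise<number> {
    return this.orderService.calculateTotal(3, 9.99);
  }
}
```

Because the service is resolved through dependency injection, the same class can be reused from several controllers, which also helps when you need to keep multiple API versions side by side.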
I like MongoDB fine, but I was thinking about just using Postgres as the read model and querying it with GraphQL. Do I have to write an adapter to do that? If so, where should I look to get started?
As always, it depends 😉
Short answer: no, not out of the box.
Long answer: yes, theoretically changing the read model database is possible, as wolkenkit uses an adapter-based approach. Right now MongoDB is the only one implemented, but it would be possible to write an adapter for whatever datastore you want to use.
Basically, the place to start is the wolkenkit-broker, which is the public API server for wolkenkit and also handles read models. At its center is the so-called modelStore, which acts as an abstraction layer over the specific implementation, such as the modelStoreMongoDb adapter.
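To give a feel for the shape such an adapter takes, here is a deliberately simplified TypeScript sketch; the interface and all names in it are hypothetical and do not mirror wolkenkit's actual modelStore contract:

```typescript
// Hypothetical adapter interface (not wolkenkit's real API): every datastore
// implements the same operations, so the model store stays storage-agnostic.
interface ReadModelAdapter {
  // Apply a change produced by an event handler to a read model.
  updateItem(modelName: string, id: string, payload: Record<string, unknown>): Promise<void>;
  // Answer queries coming in through the public API.
  readItems(modelName: string, query: Record<string, unknown>): Promise<Record<string, unknown>[]>;
}

// A Postgres-backed implementation would translate those calls into SQL,
// e.g. storing each item as a JSONB row keyed by id.
class PostgresReadModelAdapter implements ReadModelAdapter {
  async updateItem(modelName: string, id: string, payload: Record<string, unknown>): Promise<void> {
    // INSERT ... ON CONFLICT (id) DO UPDATE SET payload = ... (sketch only)
  }

  async readItems(modelName: string, query: Record<string, unknown>): Promise<Record<string, unknown>[]> {
    // SELECT payload FROM ... WHERE ... (sketch only)
    return [];
  }
}
```

If the existing MongoDB adapter receives document-style queries, the main work in a Postgres adapter is mapping those onto SQL, which is why a JSONB column can be a pragmatic storage choice.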
GraphQL again is currently not supported out of the box. We use our own approach, implemented in the tailwind module. The place to start here is the HTTP server API.
Please note that I am one of the developers of wolkenkit, so please take my answer with a grain of salt.
I notice that JHipster microservices have their own auditing, viz. PersistentAuditEvent. It seems easier to use than, say, AuditEventRepository, which only has add and some limited find methods.
I want to save an event for a task being run with a role of SYSTEM and identify it by something like type:executedLongQuery.
Then, in the future, I want to check the last run of this query, decide whether we need to run it again for report generation, and log an event again if it is run. It seems to me that the PersistentAuditEvent offered by JHipster is the best way to go.
I don't see a PersistentAuditEventRepository or any suitable implementation within the microservice, so documentation with an example would be very helpful. Even a clue in the right direction would help me get started.
I found the repository interface and a custom implementation in the JHipster gateway, which are not present in the microservice. It was easy to simply copy them over to the microservice and use the repository. Of course, I am using a database in the microservice here, an empty one, which still adds the migrations as well as the audit tables.
As we all know, the mongooplog tool is going to be removed in an upcoming release. I need help with the following issue:
I was planning to create a listener using mongooplog that would read any kind of activity on MongoDB and, depending on the activity, generate a trigger that hits another server. Now that mongooplog is going away, can anyone suggest an alternative I can use in this case, and how to use it?
I got this warning when trying to use mongooplog. Please let me know if you have any further questions.
warning: mongooplog is deprecated, and will be removed completely in a future release
PS: I am using Node.js to implement the listener. I have not written any code yet, so I have no code to share.
The deprecation message you are quoting only refers to the mongooplog command-line tool, not the general approach of tailing the oplog. The mongooplog tool can be used for some types of data migrations, but it isn't the right approach for a general-purpose listener or for wrapping in your Node.js application.
You can instead create a tailable cursor to follow oplog activity. Tailable cursors are supported directly by the MongoDB drivers. For an example using Node.js, see: The MongoDB Oplog & Node.js.
You may also want to watch/upvote SERVER-13932: Change Notification Stream API in the MongoDB issue tracker, which is a feature suggestion for a formal API (rather than relying on the internal oplog format used by replication).
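For illustration, here is a minimal sketch of such a listener with the Node.js MongoDB driver, written in TypeScript (the connection string and replica set name are assumptions; the oplog only exists on replica set members):

```typescript
import {MongoClient, Timestamp} from 'mongodb';

async function tailOplog(): Promise<void> {
  // Assumed connection details; adjust for your deployment.
  const client = new MongoClient('mongodb://localhost:27017/?replicaSet=rs0');
  await client.connect();

  const oplog = client.db('local').collection('oplog.rs');

  // Start from "now" so only new activity is reported, not the whole history.
  const now = new Timestamp({t: Math.floor(Date.now() / 1000), i: 0});

  // A tailable, awaitData cursor stays open and blocks waiting for new
  // entries instead of closing when it reaches the end of the collection.
  const cursor = oplog.find(
    {ts: {$gt: now}},
    {tailable: true, awaitData: true},
  );

  for await (const entry of cursor) {
    // entry.op is the operation type (i = insert, u = update, d = delete)
    // and entry.ns the namespace; this is the point where you would fire
    // your trigger at the other server.
    console.log(entry.op, entry.ns);
  }
}

tailOplog().catch(console.error);
```

Keep in mind that the oplog format is an internal detail of replication, which is exactly why the SERVER-13932 ticket asks for a formal notification API.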
I am doing research to start a new project based on Liferay.
The project relies on a system that will require its own data model and a certain agility and flexibility in data management, as well as in its visualization.
These are my options:
Using Liferay Expando fields to define my own data models. I would have to build the entire view layer myself.
Using the Liferay ECMS, adding patches that create structures and hooks that allow me to define master-detail data models. This makes the view side much easier (Velocity templates), but it is perhaps the "dirtiest" way.
Generating the data layer and service access with Hibernate and Spring (using Service Factory, for example).
Using Liferay Service Builder, which would be similar to the option of building on Hibernate and Spring.
CRUD generation systems such as OpenXava or XMLPortletFactory.
And now my question: what is your advice? What advantages or disadvantages do you think each option would bring?
Thanks in advance.
I can't speak for the other CRUD generation systems but I can tell you about the Liferay approaches.
I would take a hybrid approach.
First, I would model the required data as best I can against the current requirements with Liferay Service Builder and maintain it there as much as possible. This requires rebuilding and redeploying your plugin every time the data model changes, but it performs far better than all the other Liferay approaches you've mentioned. Service Builder is, in that regard, much more rigid and cannot be changed via the GUI.
However, if for some reason you cannot use Service Builder to redefine your data models and you need certain aspects of them to be changeable via the GUI, you can also use Expandos to extend the models you've created with Service Builder. So it is the best of both worlds.
As for the other option, using the ECMS would be a specialized case, and I would only take that approach if there is a particular requirement it satisfies (like integration with the ECMS).
With that said, Liferay provides you many different ways to create your application. It ultimately depends on how you're going to use your application.