Send a PUT/GET/POST request to JHipster in one single transaction

I am quite new to JHipster and have trouble understanding some of its functionality, hence this question.
I have the following two microservices.
Microservice 1 (MS1) has the following data structures in Java:
class Lead {
    Customer customer;
    Deal deal;
}

class Customer {
    Integer phoneNumber;
    // etc.
}

class Deal {
    Integer value;
    // etc.
}
Microservice 2 (MS2) is a JHipster-generated database.
The DB has only the following SQL tables:
CUSTOMER
LEAD
When changes happen in Microservice 1, I send 2 separate PUT requests from MS1 to MS2.
first, a request to update CUSTOMER through the /customer API in MS2
then, if that update is OK, a request to update DEAL through the /deal API in MS2
For an update of a Lead to succeed, the PUT requests to Customer and Deal must both be OK; if updating one table fails, both should fail.
Hence, I would like to avoid sending 2 separate requests, to rule out the case where the CUSTOMER request succeeds and the DEAL request fails for whatever reason.
If possible, I would like to send one single transaction through an API such as /lead that updates the two tables.
What is the best way I can achieve this without creating an extra table for LEAD?
e.g., a layer/service that I should generate using JHipster.
If possible (but not necessary), I would like to avoid touching code that is frequently regenerated (e.g., Customer, Deal).
Please direct me to documentation too, if any already exists. The docs are quite hard to understand, so I am not sure whether any of them specifically addresses this problem. Thank you.

This is a common issue when directly exposing JPA entities from a CRUD REST API.
Your persistence model does not need to be your API model.
If 2 entities are related and should be updated within the same transaction, it means they should be updated with one atomic API request.
So, you could define a new resource with a DTO combining your 2 entities, exposed by a new API that you would code manually (so no need for an additional table).
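For illustration, here is a minimal sketch of that aggregate endpoint. It is written in TypeScript/Express only to keep the examples on this page in one language; in a JHipster application the equivalent would be a hand-written Spring @RestController delegating to a service annotated with @Transactional. LeadDTO, /api/leads, and the db helper are hypothetical names.

import express from 'express';

interface CustomerDTO { id: number; phoneNumber: number; }
interface DealDTO { id: number; value: number; }
// The aggregate DTO combining the two entities.
interface LeadDTO { customer: CustomerDTO; deal: DealDTO; }

interface Tx { update(table: string, row: object): Promise<void>; }

// Stand-in for a real transaction manager (hypothetical).
const db = {
  async transaction(work: (tx: Tx) => Promise<void>): Promise<void> {
    const tx: Tx = { update: async () => { /* persist a row */ } };
    await work(tx); // a real implementation would commit/roll back here
  },
};

const app = express();
app.use(express.json());

// One atomic request: both updates succeed or both fail.
app.put('/api/leads', async (req, res) => {
  const lead = req.body as LeadDTO;
  try {
    await db.transaction(async (tx) => {
      await tx.update('customer', lead.customer);
      await tx.update('deal', lead.deal);
    });
    res.sendStatus(204);
  } catch {
    res.sendStatus(500); // neither table was updated
  }
});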
As you are using a microservices architecture, you could face a similar situation between MS1 and MS2 as well; there you cannot use a transaction, so you would have to implement remediation (compensating actions) instead.

Related

Where should calculations be done in a MEAN stack app

I am building an ecommerce website for a project for my portfolio, and I wanted to know where the calculations should be done for the cart.
Normally I use React, and I create a model folder, a route folder, and a controller folder. But the way I was taught Angular, the services act like the routes, and the actual calls to the database are done in the Node server file, which I am sure I could separate into its own controller file. My question is: where should the calculations for the cart be done before I send the order to the database? I thought about doing it in the cart component before the order is placed, but should it be done in the services, or in the backend in the controller? I am just trying to figure out what the standard is.
When writing an Angular app, I think it is important to adhere to the following principles:
Components - should have a single responsibility for simple view logic only, shouldn't reach out to the server, and shouldn't do complex calculations and/or logic that is not related to the view.
Services - should have a single responsibility for (reusable/shared) complex logic, do the outbound communication with the server, and act as data stores (using BehaviorSubjects).
Therefore, if your calculations are needed to update the view of the cart, I would vote that these calculations be made in the component. If your calculations are needed to update the items or the request to be sent to the server, they should be made in the service.
Remember, the component "shouldn't know" how the data comes to it or how it is manipulated or sent to the server. The component should only know, given any data - how to present it in the view, and shouldn't "worry about" how that data came to it. Similarly, the component shouldn't know how the data is calculated before being sent to the server, and this would fall within the responsibility of the service that works with and processes the cart data and builds the request to the server.
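As a concrete illustration, here is a minimal sketch of a cart service following these principles; CartItem, price, and quantity are hypothetical names, and per the caveat in the next paragraph, the server should still validate the total.

import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

export interface CartItem {
  name: string;
  price: number;   // unit price
  quantity: number;
}

@Injectable({ providedIn: 'root' })
export class CartService {
  // Acts as the data store for the cart, per the guidelines above.
  private readonly items$ = new BehaviorSubject<CartItem[]>([]);
  readonly items = this.items$.asObservable();

  addItem(item: CartItem): void {
    this.items$.next([...this.items$.value, item]);
  }

  // The calculation lives in the service; components only display it.
  get total(): number {
    return this.items$.value.reduce(
      (sum, item) => sum + item.price * item.quantity, 0);
  }
}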
However, you always have to consider the security of your app, and whether a malicious data modification on the client side can affect your cart. If such calculations affect the app's security, they should at least be validated at the server, if not fully delegated to it.
I don't know the exact calculation you need, but since it is an e-commerce website I assume it is simple math, such as the total payment amount at checkout.
The main role of the server is communicating with the database. If a task does not involve interacting with stored data, you can do the calculations on the client side. Keeping them client-side gives you direct access to the inputs of your formula and reduces the communication time between client and server.

Best Practice Advice - Loopback API

I want to make a webservice, and it looks like LoopBack is a good starting point.
To explain my question, I will describe the situation.
I have 2 MySQL Tables:
Users
Companies
Every User has its Company; it's like the master User for its company.
I wish to create a Products table for each company, in the following way:
company1_Products,
company2_Products,
company3_Products
Each company has internal Users, like:
company1_Users,
company2_Users,
company3_Users
Internal users log in from the corresponding subdomain, like:
company1.myservice.com
company2.myservice.com
For the API, I want the datasource to get Products from the corresponding table. So the question is: how do I change the datasource dynamically?
And how do I handle Users? Storing them all in one table is not good, because internal company users could be in different companies...
Maybe there's a better way to design such models?
Disclaimer: I am a co-author and one of the current maintainers of LoopBack.
how to change datasource dynamically?
The following Stack Overflow answer describes how to attach a single model (e.g. Product) to multiple datasources: https://stackoverflow.com/a/28327323/69868 This solution would work if you were creating one MySQL database per company, instead of using the company name as a prefix of the Product table name.
To achieve what you described, you can use model subclassing. For each company, define a new company-specific Product model inheriting from the shared Product model and changing the table name.
// common/models/company1-product.json
{
  "name": "Company1_Product",
  "base": "Product",
  "mysql": {
    "tableName": "company1_Products"
  }
  // etc.
}
You can even create these models on the fly using app.registry.createModel() and app.model() APIs, and then run dataSource.autoupdate to create SQL tables for the new model(s).
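A minimal sketch of that on-the-fly variant, assuming LoopBack 3 with a MySQL datasource named db; the boot-script location and the companies list are hypothetical.

// server/boot/create-tenant-models.ts (hypothetical location)
module.exports = function (app: any) {
  const companies = ['company1', 'company2', 'company3'];

  for (const company of companies) {
    // Subclass the shared Product model for this tenant,
    // pointing it at the tenant-specific table.
    const model = app.registry.createModel({
      name: `${company}_Product`,
      base: 'Product',
      mysql: { tableName: `${company}_Products` },
    });
    app.model(model, { dataSource: 'db' });
  }

  // Create the backing SQL tables for any newly defined models.
  app.dataSources.db.autoupdate((err: Error | null) => {
    if (err) throw err;
  });
};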
And how to handle Users? Storing in one table is not good, because internal company users could be in different companies...
I suppose you can use the same approach as you do for Products and as you described in your question.
Maybe there's better way to do such models?
The problem you are facing is called multi-tenancy. I am afraid we haven't figured out an easy-to-use solution yet; there are many possible ways to implement multi-tenancy.
For example, you can create one LoopBack application for each Company (tenant) and then create a top-level LoopBack or Express application that routes incoming requests to the appropriate tenant-specific LB app instance. See the following repository for a proof-of-concept implementation: https://github.com/strongloop/loopback-multitenant-poc
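A sketch of such a top-level router, assuming each tenant-specific app is an Express-compatible request handler registered by subdomain (the tenantApps registry and port are hypothetical):

import express from 'express';

const gateway = express();

// Hypothetical registry: subdomain -> tenant-specific app instance.
// In reality, each entry would be a booted LoopBack application.
const tenantApps = new Map<string, express.Express>([
  ['company1', express()],
  ['company2', express()],
]);

gateway.use((req, res, next) => {
  const subdomain = req.hostname.split('.')[0]; // e.g. "company1"
  const tenantApp = tenantApps.get(subdomain);
  if (!tenantApp) {
    res.status(404).send('Unknown tenant');
    return;
  }
  tenantApp(req, res, next); // delegate to the tenant's own app
});

gateway.listen(3000);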

Application-side join ORM for Node?

To start: I've tried LoopBack. LoopBack is nice, but it does not allow for relations across multiple REST data services; rather, it makes a call to the initial data service and passes query parameters that ask it to perform the joined query.
Before I go reinventing the wheel and writing a massive wrapper around Loopback's loopback-rest-connector, I need to find out if there are any existing libraries or frameworks that already tackle this. My extensive Googling has turned up nothing so far.
In a true microservice environment, there is a service per database.
http://microservices.io/patterns/data/database-per-service.html
From this article:
Implementing queries that join data that is now in multiple databases is challenging. There are various solutions:
Application-side joins - the application performs the join rather than the database. For example, a service (or the API gateway) could retrieve a customer and their orders by first retrieving the customer from the customer service and then querying the order service to return the customer’s most recent orders.
Command Query Responsibility Segregation (CQRS) - maintain one or more materialized views that contain data from multiple services. The views are kept by services that subscribe to events that each service publishes when it updates its data. For example, the online store could implement a query that finds customers in a particular region and their recent orders by maintaining a view that joins customers and orders. The view is updated by a service that subscribes to customer and order events.
EXAMPLE:
I have 2 data microservices:
GET /pets - Returns an object like
{
  "name": "ugly",
  "type": "dog",
  "owner": "chris"
}
and on a completely different microservice....
GET /owners/{OWNER_NAME} - Returns the owner info
{
  "owner": "chris",
  "address": "under a bridge",
  "phone": "123-456-7890"
}
And I have an API-level microservice that is going to call these two data services. This is the microservice that I will be applying this at.
I'd like to be able to establish a model for Pet such that, when I query pets, upon a successful response from GET /pets, it will "join" with owners (send a GET /owners/{OWNER_NAME} for each response) and, to the user, simply return a list of pets that includes their owner's data.
So GET /pets (maybe something like Pets.find()) would return
{
  "name": "ugly",
  "type": "dog",
  "owner": "chris",
  "address": "under a bridge",
  "phone": "123-456-7890"
}
Applying any model/domain logic in your API gateway is a bad decision and is considered bad practice. An API gateway should only handle your system's CAS (relying on an Auth service which holds the logic), convert incoming external requests into internal system requests (different headers/requester payload data), proxy the formatted requests to services for any other work, receive the responses, take care of encapsulating errors, and present every response in the proper external form.
Another point: if the application's core flow (validation/scoping, etc.) requires a lot of joins between two models, then perhaps you should reconsider which Business Domain your models/services are bound to. If it's the same BD, perhaps they should live together. The principles of Domain-Driven Design helped me understand where the real boundaries between microservices are.
If you work with LoopBack (like we do, and you face the same problem we faced - that LoopBack has no proper join implementation), you can have a separate Report/Combined data service, which is the only one that can access all the service databases, and does so only for READ purposes, i.e. queries. Provide it with separately configured, read-only, wide access to the databases: instead of having a single datasource set up (one database), it should be able to read from all the databases in scope for this query-join DB user.
Such a service should be able to generate proper joins, with the expected output schema, from configuration JSON - like LoopBack models (that's what I did in the same situation). Once the abstraction is done, it's pretty simple to build/add any query with arbitrarily complex joins. It's clean, easy to reason about, and DBA-friendly. This approach has worked well for me so far.
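For reference, the "application-side join" option quoted from the article above can also be hand-rolled in the API-level service when a dedicated reporting service is overkill. A sketch assuming the /pets and /owners endpoints from the question, Node 18+ (global fetch), and hypothetical base URLs:

interface Pet { name: string; type: string; owner: string; }
interface Owner { owner: string; address: string; phone: string; }

async function findPetsWithOwners(): Promise<Array<Pet & Owner>> {
  const pets: Pet[] = await (await fetch('http://pets-service/pets')).json();

  // One owner lookup per distinct owner name, fetched in parallel.
  const names = Array.from(new Set(pets.map((p) => p.owner)));
  const owners = await Promise.all(
    names.map(async (n): Promise<Owner> =>
      (await fetch(`http://owners-service/owners/${n}`)).json()),
  );

  // Merge each pet with its owner's fields, as in the desired output.
  const byName = new Map(owners.map((o) => [o.owner, o]));
  return pets.map((p) => ({ ...p, ...byName.get(p.owner)! }));
}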

Fetching Initial Data from CloudKit

Here is a common scenario: an app is installed for the first time and needs some initial data. You could bundle it in the app and have it load from a plist or something, or a CSV file. Or you could go get it from a remote store.
I want to get it from CloudKit. Yes, I know that CloudKit is not to be treated as a remote database but rather a hub. I am fine with that. Frankly I think this use case is one of the only holes in that strategy.
Imagine I have an object graph I need to get, with one class at the base and then 3 or 4 related classes. I want the new user to install the app and then get the latest version of this graph. If I use CloudKit, I have to load each entity with a separate fetch and assemble the whole; it's ugly and not generic. Once I do that, I will go into change-tracking mode, listening for updates and syncing my local copy.
In some ways this is similar to the challenge that you have using Services on Android: suppose I have a service for the weather forecast. When I subscribe to it, I will not get the weather until tomorrow when it creates its next new forecast. To handle the deficiency of this, the Android Services SDK allows me to make 'sticky' services where I can get the last message that service produced upon subscribing.
I am thinking of doing something similar in a generic way: making it possible to hold a snapshot of some object graph, probably in JSON, with a version token, and then for initial loads, just being able to fetch those and turn them into CoreData object graphs locally.
The question is: does this strategy make sense, or should I hold my nose and write pyramid-of-doom code with nested queries? (Don't suggest using CoreData syncing, as that has been deprecated.)
Your question is a bit old, so you probably already moved on from this, but I figured I'd suggest an option.
You could create a record type called Data in the Public database in your CloudKit container. Within Data, you could have a field named structure that is a String (or a CKAsset if you wanted to attach a JSON file).
Then on every app launch, you query the public database and pull down the structure string that holds your class definitions, and use it however you like. Since it's in the public database, all your users would have access to it. Good luck!
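A sketch of that fetch, shown with Apple's CloudKit JS (to keep the examples on this page in one language); in the iOS app itself this would be a CKQuery against the public database. The container identifier and API token are placeholders.

declare const CloudKit: any; // loaded from Apple's cloudkit.js script

CloudKit.configure({
  containers: [{
    containerIdentifier: 'iCloud.com.example.MyApp', // placeholder
    apiTokenAuth: { apiToken: 'YOUR_API_TOKEN', persist: true },
    environment: 'production',
  }],
});

const publicDb = CloudKit.getDefaultContainer().publicCloudDatabase;

// Fetch the Data record and read its `structure` field.
publicDb.performQuery({ recordType: 'Data' }).then((response: any) => {
  if (response.hasErrors) throw response.errors[0];
  const structure = JSON.parse(response.records[0].fields.structure.value);
  // ...seed the local object graph from `structure`...
});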

Service Stack migrating RPC to REST issues

Trying to sell the management team on a move to ServiceStack from traditional ASP.NET/SOAP web services.
I am struggling with some RPC-ish issues. The requirement is that I support SOAP (even backhandedly) in the hope of selling my service consumers on REST.
Take for example a service called "ReplaceItem" which basically requires:
Close out item number
Replacement item number
Store Number
Bunch of other replacement item data
Should I create a ReplacementItem DTO? It seems that if I have a number of these types of functions, I am just going to have tons of DTOs instead of tons of RPC methods. Plus, what is the "id" in this case, and which REST method would I be using?
I get that REST/SS gives me basic CRUD functionality for domain-level structures like Items/Customers/etc., but how do I handle non-CRUD methods in SS?
I am also having issues with multiple parameters making up the primary key for a certain service. Almost all Inventory tables are structured by Item Number AND Store Number. I'd rather not dump the creation of some composite string on the service client. How do I handle this?
Thanks.
ServiceStack promotes a SOA-like message-based design that is optimal and provides many natural benefits for remote services.
My initial thoughts would look something like
POST {CloseItemNumber} /item/1/close
POST {ItemNumber} /item/1?replace=true
POST {ItemNumber} /item/1 - i.e. same DTO/service, different values.
Where ItemNumber and CloseItemNumber are separate Request DTOs and services.
Designing Service APIs
I prefer to structure my services around 'resources/nouns' and design my service APIs as actions that apply operations to them.
If the operation requires more information than storing the Resource DTO I would create a separate service with the additional metadata.
i.e., here's how I would convert Amazon's 'RPC' service to be more REST-ful:
https://ec2.amazonaws.com/?Action=AttachVolume
&VolumeId=vol-4d826724
&InstanceId=i-6058a509
&Device=/dev/sdh
&AUTHPARAMS
Into how I prefer to write it:
POST https://ec2.amazonaws.com/volumes/vol-4d826724/attach
FormData: InstanceId=i-6058a509&Device=/dev/sdh&AUTHPARAMS
Which would still use an explicit AttachVolume Request DTO.
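To make that concrete, the request DTO's shape might look like this (sketched as a plain TypeScript class for illustration; on the ServiceStack server it would be a C# Request DTO with a [Route("/volumes/{VolumeId}/attach")] attribute):

class AttachVolume {
  volumeId = '';   // bound from the resource URL: /volumes/{VolumeId}/attach
  instanceId = ''; // bound from the POSTed form data
  device = '';     // bound from the POSTed form data
}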
Another example I use to showcase the difference between WCF RPC and ServiceStack's coarse-grained, message-based approach is in: https://gist.github.com/1386381
Difference between an RPC-chatty and message-based API:
This is a typical API that WCF encourages:
public interface IService
{
    Customer GetCustomerById(int id);
    Customer[] GetCustomerByIds(int[] ids);
    Customer GetCustomerByUserName(string userName);
    Customer[] GetCustomerByUserNames(string[] userNames);
    Customer GetCustomerByEmail(string email);
    Customer[] GetCustomerByEmails(string[] emails);
}
This is an equivalent message-based API we encourage in ServiceStack:
public class Customers {
    public int[] Ids;
    public string[] UserNames;
    public string[] Emails;
}

public class CustomersResponse {
    public Customer[] Results;
}
Note: if you want the same services to support both SOAP and a REST-based API, you will need to structure your services slightly differently to overcome SOAP's limitation of tunnelling all operations through HTTP POST.
The problem I still have when deciding to switch from a chatty RPC API to a REST API is that instead of having several functions that are easy to maintain, I find myself with one of two solutions:
multiplying DTOs and services, which makes the internal code of the services chatty and complex,
or
putting all the code managing the service into a single route (OnGet), but then I have to parse the different parameters to 'discover' which parameters have been requested (to simplify: instead of having multiple simple functions with pre-defined parameters, I now have only one function that has to determine which parameters are meaningful... and that is VERY hard to maintain - the code is more complex to me now).
In the proposed solution replacing GetCustomerById, GetCustomerByEmails, etc., the point to me is that instead of having simple queries, we now have to dynamically construct the query based on which parameters are filled in - and that can make the code tricky and hard to maintain, having to manage all the possible combinations of multiple parameters, some combinations not even being valid (see the sketch below).
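To make the concern concrete, here is a sketch (in TypeScript, mirroring the C# Customers DTO above; buildQuery is a hypothetical helper) of the kind of dispatch a combined request forces:

interface Customers {
  ids?: number[];
  userNames?: string[];
  emails?: string[];
}

function buildQuery(req: Customers): string {
  const clauses: string[] = [];
  if (req.ids?.length) clauses.push(`Id IN (${req.ids.join(',')})`);
  if (req.userNames?.length)
    clauses.push(`UserName IN (${req.userNames.map((u) => `'${u}'`).join(',')})`);
  if (req.emails?.length)
    clauses.push(`Email IN (${req.emails.map((e) => `'${e}'`).join(',')})`);
  if (clauses.length === 0) throw new Error('no filter supplied');
  // Every new filter multiplies the combinations this one function has
  // to consider - the maintenance cost described above. (Illustration
  // only: real code would use parameterized queries, not string concat.)
  return `SELECT * FROM Customer WHERE ${clauses.join(' OR ')}`;
}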
I feel a little bit sad about that, as I REALLY don't like WCF at all.
Mixing WCF and REST seems the summit of complexity - the worst of both for me (the complexity of defining a REST API plus the complexity of WCF).
Are my feelings shared, or did I miss something?
