Play Scala and thread safety - multithreading

The project is written using the Play Framework and the Scala language.
I have implemented compile-time dependency injection, following this example from Play:
https://github.com/playframework/play-scala-compile-di-example
Looking at MyApplicationLoader.scala:
import play.api._
import play.api.routing.Router
class MyApplicationLoader extends ApplicationLoader {
  private var components: MyComponents = _

  def load(context: ApplicationLoader.Context): Application = {
    components = new MyComponents(context)
    components.application
  }
}

class MyComponents(context: ApplicationLoader.Context)
    extends BuiltInComponentsFromContext(context)
    with play.filters.HttpFiltersComponents
    with _root_.controllers.AssetsComponents {

  lazy val homeController = new _root_.controllers.HomeController(controllerComponents)
  lazy val router: Router = new _root_.router.Routes(httpErrorHandler, homeController, assets)
}
and the following line of code:
lazy val homeController = new _root_.controllers.HomeController(controllerComponents)
my understanding is that only one instance of HomeController is created, the first time HomeController is called, and that that instance lives as long as the application lives. Are these statements correct?
The HomeController in my application looks like this:
class HomeController {
  val request = // some code here
  val workflowExecutionResult = Workflow.execute(request)
}
So Workflow is an object, not a class. The Workflow looks like this:
object Workflow {
  def execute(request: Request) = {
    val retrieveCustomersResult = RetrieveCustomers.retrieve()
    // some code here
    val createRequestResult = CreateRequest.create(request)
    // some code here
    workflowExecutionResult
  }
}
So Workflow calls a few domain services, and each domain service is likewise an object, not a class.
All values inside the domain services are immutable; I am using vals everywhere.
Is this enough to ensure thread safety?
I am asking because I'm used to writing C# Web APIs, where a HomeController would look like this:
class HomeControllerInSeeSharpProject {
  // some code here
  var request = new Request(); // more code here
  var workflow = new WorkflowInSeeSharpProject();
  var workflowExecutionResult = workflow.execute(request);
}
and a Workflow would look like this:
public class WorkflowInSeeSharpProject {
  public WorkflowExecutionResult execute(Request request) {
    var retrieveCustomers = new RetrieveCustomers();
    var retrieveCustomersResult = retrieveCustomers.retrieve();
    // some code here
    var createRequest = new CreateRequest();
    var createRequestResult = createRequest.create(request);
    // some code here
    return workflowExecutionResult;
  }
}
So in a C# project, every time HomeControllerInSeeSharpProject is called, a new instance of WorkflowInSeeSharpProject is created, and all the domain services are also newed up, so I can be sure that state cannot be shared between separate threads. I am therefore afraid that, because my Scala Workflow and domain services are objects and not classes, there could be a situation where two requests are sent to the HomeController and state is shared between those two threads.
Can this be the case? Is my application not thread-safe?
I have read that objects in Scala are not thread-safe, since there is only a single instance of them. However, I have also read that although they are not thread-safe, using vals will make the application thread-safe...
Or maybe Play itself has a way to deal with that problem?

Because you are using compile-time dependency injection, you control the number of instances created, and in your case HomeController is created only once. (Your reading of lazy val is right: the instance is created on first access, and lazy val initialisation in Scala is itself synchronised, so even that first construction is thread-safe.) As requests come in, this single instance is shared between threads, so you do indeed have to make sure it is thread-safe. All the dependencies of HomeController then need to be thread-safe as well, so object Workflow has to be thread-safe. Currently, Workflow does not expose any shared state, so it is thread-safe. In general, val definitions inside an object are thread-safe.
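To make that concrete, here is a minimal sketch (the names are made up) contrasting an object that only holds vals with one that shares a var across calls:

object SafeWorkflow {
  // Immutable state: assigned once during object initialisation, so
  // concurrent readers can never observe a partial update.
  val retryLimit: Int = 3

  def execute(input: String): String = {
    // Locals live on each calling thread's stack, so concurrent calls are isolated.
    val trimmed = input.trim
    s"processed '$trimmed' (retry limit $retryLimit)"
  }
}

object UnsafeWorkflow {
  // Mutable shared state: += is a read-modify-write sequence, so two
  // threads can interleave and lose updates.
  private var requestCount = 0

  def execute(input: String): String = {
    requestCount += 1 // race condition without synchronisation
    s"processed '$input' (request #$requestCount)"
  }
}

As long as your Workflow and domain services stay in the SafeWorkflow shape, sharing the single HomeController instance between threads is fine.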
In effect, HomeController behaves like a singleton, and avoiding singletons can be safer. For example, by default Play uses Guice dependency injection, which creates a new controller instance per request as long as the controller is not annotated with @Singleton (see the sketch after the quote). One motivation is that there is less state to worry about protecting from concurrent access, as suggested by Nio's answer:
In general, it is probably best to not use @Singleton unless you have a fair understanding of immutability and thread-safety. If you think you have a use case for Singleton though, just make sure you are protecting any shared state.
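For comparison, here is roughly what the runtime-DI variant looks like; a minimal sketch assuming Play's default Guice loader:

import javax.inject._
import play.api.mvc._

// With @Singleton, one shared instance serves all requests and must be
// thread-safe; leave the annotation off and Guice will instantiate the
// controller anew instead, so instance fields are not shared between requests.
@Singleton
class HomeController @Inject() (cc: ControllerComponents) extends AbstractController(cc) {
  def index: Action[AnyContent] = Action {
    Ok("Hello")
  }
}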

Related

Mikro-orm inter-service transactions in NestJS

I am evaluating Mikro-Orm for a future project. There are several questions I either could not find an answer to in the docs or did not fully understand.
Let me describe a minimal example (NestJS): I have an order processing system with two entities, Orders and Invoices, as well as a counter table for sequential invoice numbers (a legal requirement). It's important to mention that the OrderService create method is not always called by a controller, but also via a cronjob/queue system. My question is about the use case of creating a new order:
class OrderService {
  async createNewOrder(orderDto) {
    const order = new Order();
    order.customer = orderDto.customer;
    order.items = orderDto.items;
    const invoice = await this.invoiceService.create(orderDto.items);
    order.invoice = invoice;
    await order.persistAndFlush();
    return order;
  }
}
class InvoiceService {
  async create(items): Promise<Invoice> {
    const invoice = new Invoice();
    invoice.number = await this.invoiceNumberService.getNextInSequence();
    // the next two calls hit external APIs; if they throw, the whole transaction should roll back
    const pdf = await this.pdfCreator.createPdf(invoice);
    const upload = await s3Api.upload(pdf);
    return invoice;
  }
}
class InvoiceNumberService {
  async getNextInSequence(): Promise<number> {
    return await db.collection("counter").findOneAndUpdate({ type: "INVOICE" }, { $inc: { value: 1 } });
  }
}
The whole use case of creating a new order, with all subsequent service calls, should happen in one Mikro-Orm transaction. So if anything throws in OrderService.createNewOrder() or in one of the subsequently called methods, the whole transaction should be rolled back.
Mikro-Orm does not allow the atomic update-increment shown in InvoiceNumberService. I can fall back to the native mongo driver, but how do I ensure that the call to collection.findOneAndUpdate() shares the same transaction as the entities managed by Mikro-Orm?
Mikro-Orm needs a unique request context. In the examples for NestJS, this unique context is created at the controller level. In the example above, the service methods are not necessarily called by a controller. So I would need a new context for each call to OrderService.createNewOrder(), with a lifetime scoped to the function call, correct? How can I achieve this?
How can I share the same request context between services? In the example above, InvoiceService and InvoiceNumberService would need the same context as OrderService for Mikro-Orm to work properly.
I will start with the bad news: mongodb transactions are not yet supported in MikroORM (although they will probably land within weeks; the PoC is already implemented). You can subscribe here for updates: https://github.com/mikro-orm/mikro-orm/issues/34
But let me answer the rest, as it will apply once transactions land:
You can use const collection = (em as EntityManager<MongoDriver>).getConnection().getCollection('counter'); to get the collection from the internal mongo connection instance. You can also use orm.em.getTransactionContext() to get the current transaction context (currently implemented only in the sql drivers, but in future this will probably return the session object in mongo).
Also note that in the mongo driver, implicit transactions won't be enabled by default (it will be configurable though), so you will need to use explicit transaction demarcation via em.transactional(...).
The RequestContext helper works automatically. You just register it as a middleware (done automatically in the nestjs orm adapter) and then your request handler (route/endpoint/controller method) is run inside a domain that shares the context. Thanks to this, all services in the DI can share singleton instances of repositories, but they will automatically pick the right context from the domain.
You basically have this automatic request context, and then you can create new (nested) contexts manually via em.transactional(...).
https://mikro-orm.io/docs/transactions/#approach-2-explicitly
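For illustration, explicit demarcation around the order use case might then look roughly like this sketch (the service wiring and method names are taken from the question above, not from the MikroORM docs, and the nested services must use the same context for the rollback guarantee to hold):

class OrderService {
  async createNewOrder(orderDto) {
    // Explicit transaction: if anything below throws (invoice number,
    // PDF creation, S3 upload, ...), everything done through em in this
    // callback is rolled back together.
    return this.em.transactional(async em => {
      const order = new Order();
      order.customer = orderDto.customer;
      order.items = orderDto.items;
      order.invoice = await this.invoiceService.create(orderDto.items);
      em.persist(order); // flushed and committed when the callback resolves
      return order;
    });
  }
}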

How can I unit test ODataQuery and ODataQueryBuilder?

How can I unit test this code?
private ODataQueryResult buildAndExecuteQuery(String path, String entity,
        String sapClient, String sapLanguage) {
    ODataQuery query = ODataQueryBuilder
            .withEntity(path, entity)
            .withHeader("sap-client", sapClient, true)
            .withHeader("sap-language", sapLanguage, true)
            .withoutMetadata()
            .build();
    return query.execute();
}
More precisely: How can I verify that my code calls all the right functions, for example does not forget to call withoutMetadata or set the required headers?
Unfortunately, ODataQueryBuilder and ODataQuery are classes, not interfaces, which makes mocking tricky. ODataQueryBuilder is even final, which rules out mocking entirely. Also, the chain starts with the static method withEntity, which cannot be mocked either.
Are there helpers that allow me to spy on behavior or mock data, similar to the MockUtil described in https://blogs.sap.com/2017/09/19/step-12-with-sap-s4hana-cloud-sdk-automated-testing/?
You are right, the structure of those classes makes them hard to mock, but as they are part of the "SAP Cloud Platform SDK for Service Development" we have no way to change them.
Other approaches might be:
If you want to stay with the unit test approach, you might want to have a look at https://github.com/powermock/powermock. This would allow you to mock final and static classes and methods. However, I have never used it personally, so I'm not sure how easy and comfortable it is to use.
If an integration test would also suit you, you could consider using http://wiremock.org/docs/getting-started/. With that you would be able to set up a "mock server", preparing responses for defined requests, and thereby verify the content of any HTTP call made by your test.
We use WireMock in the SAP Cloud SDK and also provide some integration into our SDK via the MockUtil contained in our testutil-core module.
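To give a flavour of the WireMock approach, a skeleton JUnit 4 test might look like this (the port and paths are made up, and the step that points the query at the mock server is left out):

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.junit.Rule;
import org.junit.Test;

public class QueryHeaderTest {

    // Starts a local mock HTTP server on port 8089 for each test
    @Rule
    public WireMockRule wireMock = new WireMockRule(8089);

    @Test
    public void addsSapLanguageHeader() {
        // Canned response for any matching request
        stubFor(get(urlPathMatching("/api/v2/.*"))
                .willReturn(aResponse().withStatus(200).withBody("{}")));

        // ... build the ODataQuery and execute it against http://localhost:8089 ...

        // Assert on what was actually sent over the wire
        verify(getRequestedFor(urlPathMatching("/api/v2/.*"))
                .withHeader("sap-language", equalTo("fr")));
    }
}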
I hope this helps a bit!
A partial solution is to simulate executing the ODataQuery on a mock HttpClient.
First, the original method needs to be split into two independent parts, one for building the query and one for executing it. This is good design anyway, so no big problem:
private ODataQuery buildQuery(String path, String entity,
        String sapClient, String sapLanguage) {
    return ODataQueryBuilder
            .withEntity(path, entity)
            .withHeader("sap-client", sapClient, true)
            .withHeader("sap-language", sapLanguage, true)
            .withoutMetadata()
            .build();
}

private ODataResponse executeQuery(ODataQuery query) {
    return query.execute();
}
The buildQuery method can now be tested as follows:
@Test
public void addsSapLanguageToHeader() throws ODataException, IOException {
    ODataQuery query = cut.buildQuery("api/v2", "business-partners", "", "fr");
    HttpUriRequest request = getRequest(query);
    assertContainsHeader(request, "sap-language", "fr");
}
The method getRequest produces a fake HttpClient that stubs all methods required to get query.execute(httpClient) to work. It stores the actual request and returns it for further inspection. Sample implementation with Mockito:
private HttpUriRequest getRequest(ODataQuery query) throws ODataException, IOException {
    // stub methods to make the code work
    HttpResponse response = mock(HttpResponse.class);
    when(httpClient.execute(any())).thenReturn(response);
    StatusLine statusLine = mock(StatusLine.class);
    when(response.getStatusLine()).thenReturn(statusLine);
    HttpEntity entity = mock(HttpEntity.class);
    when(response.getEntity()).thenReturn(entity);
    InputStream inputStream = new ByteArrayInputStream("".getBytes(StandardCharsets.UTF_8));
    when(entity.getContent()).thenReturn(inputStream);
    Header[] headers = new Header[0];
    when(response.getAllHeaders()).thenReturn(headers);

    // simulate the execution of the query
    query.execute(httpClient);

    // grab the original argument from the mock for inspection
    ArgumentCaptor<HttpUriRequest> captor = ArgumentCaptor.forClass(HttpUriRequest.class);
    verify(httpClient).execute(captor.capture());
    HttpUriRequest request = captor.getValue();
    return request;
}
This solution is far from perfect, of course.
First, the sheer amount of code needed to make this work shows how fragile this test will be over time. Whenever the Cloud SDK adds a method or a validation to the call sequence, this test will break. Note also that the test is invasive: it tests a private method, while the gold standard is to test only public methods.
Second, the method executeQuery still cannot be tested. The execution paths also differ: the test code uses the .execute(httpClient) variant to run the query, while the original code uses the .execute(destinationName) variant. The two happen to share code, but this may change over time.

Should I open/close different Postgres connections in one Node endpoint? Making this work with OOP

I'm setting up the ability for my Node server to load the proper information from my DB (Postgres) to render a certain client view. I'm currently refactoring my server code to follow an object-oriented approach with class constructors.
I currently have it so that readers are a class of functions responsible for, well, running read queries on my database. I have inherited classes like MainViewReader and MatchViewReader, and they all inherit from a "Reader" class, which instantiates a connection with Postgres using the pg-promise library.
The issue with this is that I can't use two view readers, or they will be opening up duplicate connections; therefore I find myself writing redundant code. So I believe I have two design choices, and I was wondering which is more efficient:
Set the pattern by the table being read rather than by servlet view, i.e. NewsTableReader, MatchTableReader. The pro is that none of the code is redundant and it can be used in different servlets; the con is that I would have to end the connection to Postgres on every instance of the Reader class before instantiating a new one, as such:

const newsTableReader = NewsTableReader()
await newsTableReader.close()

const matchTableReader = MatchTableReader()
await matchTableReader.close()
Just having view readers. The pro is that there is only one persisting connection; the con is that there is lots of redundant code if I'm loading data from the same tables in different views, for example:

const matchViewReader = MatchViewReader()
await matchViewReader.load_news()
await matchViewReader.load_matches()
Which approach is going to affect my performance negatively the most?
You've correctly ascertained that you should not create multiple connection pools with the same connection options. But this doesn't have to influence the structure of your code.
You could create a global pool and pass it to your Reader constructors, as a kind of dependency injection:
class Reader {
  constructor(db) {
    this._db = db
  }
}

class NewsTableReader extends Reader {}
class MatchTableReader extends Reader {}

const pgp = require('pg-promise')(/* library options */)
const db = pgp(/* connection options */)

const newsTableReader = new NewsTableReader(db)
const matchTableReader = new MatchTableReader(db)

await newsTableReader.load()
await matchTableReader.load()
// await Promise.all([newsTableReader.load(), matchTableReader.load()])
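For illustration, fleshing out one of these readers might look like the following sketch (the table and column names are invented; any() is the pg-promise query method that resolves with all returned rows):

class NewsTableReader extends Reader {
  // this._db is the shared pg-promise database object injected above;
  // each query draws a connection from the single shared pool.
  load() {
    return this._db.any('SELECT id, title, body FROM news ORDER BY id DESC')
  }
}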
Another way to go is to use the same classes with the extend event of the pg-promise library:
const pgp = require('pg-promise')({
  extend(obj, dc) {
    obj.newsTableReader = new NewsTableReader(obj);
    obj.matchTableReader = new MatchTableReader(obj);
  }
})

const db = pgp(/* connection options */)

await db.newsTableReader.load()

await db.tx(async t => {
  const news = await t.newsTableReader.load();
  const match = await t.matchTableReader.load();
  return {news, match};
});
The upside of the extend event is that you can use all of the functionality (e.g. transactions and tasks) provided by the pg-promise library across different models. The thing to keep in mind is that it creates new objects on every db.task(), db.tx() and db.connect() call.

Why can't I change the scope of my object with ServiceStack's IoC?

Given the following code from my Configure method:
OrmLiteConnectionFactory dbFactory = new OrmLiteConnectionFactory(
    ConfigUtils.GetConnectionString("Oracle:FEConnection"),
    OracleOrmLiteDialectProvider.Instance);

container.Register<IDbConnectionFactory>(dbFactory).ReusedWithin(ReuseScope.Request); // <== this does NOT work

// But these work
container.Register<IPreprocessorRepository>(c =>
    new CachedPreprocessorRepository(dbFactory, c.Resolve<ICacheClient>())).ReusedWithin(ReuseScope.Default);
container.Register<IPreprocessor>(c =>
    new DirectApiPreprocessor(c.Resolve<IPreprocessorRepository>(), c.Resolve<IValidator<LeadInformation>>())).ReusedWithin(ReuseScope.Default);
How can I make sure that the dbFactory instance used in the other registrations is scoped per request?
Thank you,
Stephen
You can't change the scope of this:
container.Register<IDbConnectionFactory>(dbFactory)
.ReusedWithin(ReuseScope.Request);
Because you're only passing in an instance of an object, not a factory function that the IoC container could use to instantiate new instances itself. All the IoC container can do in this case is return that instance, making it a singleton.
To be able to change the scope, you would need to register a delegate that can create an instance, i.e.:
container.Register<IDbConnectionFactory>(c =>
new OrmLiteConnectionFactory(...))
.ReusedWithin(ReuseScope.Request);
But you never want to do this with any connection or client factories like IDbConnectionFactory or IRedisClientsManager since they're designed to be used as singletons.
i.e. They're thread-safe singleton factories used to create single client/connection instances:
using (var db = container.Resolve<IDbConnectionFactory>().Open())
{
    //...
}

using (var redis = container.Resolve<IRedisClientsManager>().GetClient())
{
    //...
}

How do I call my own service from a request/response filter in ServiceStack?

My problem is...
...I have a DTO like this
[Route("/route/to/dto/{Id}", "GET")]
public class Foo : IReturn<Bar>
{
    public string Id { get; set; }
}
and need to call the service that implements the method with this signature
public Bar Get(Foo)
from a request and/or response filter. I don't know what class implements it (don't want to need to know). What I need is something like the LocalServiceClient class in the example below:
var client = new LocalServiceClient();
Bar bar = client.Get(new Foo());
Does this LocalServiceClient thing exist? JsonServiceClient has a pretty similar interface, but using it would be inefficient (I need to call my own service; I shouldn't need an extra round-trip, even to localhost, just to do this).
I'm aware of the ResolveService method on the Service class, but it requires me to have a service instance and to know which class will handle the request.
I think this LocalServiceClient is possible because I have all the data that a remote client (e.g. JsonServiceClient) needs to call the service - request DTO, route, verb - but couldn't find how to do it. Actually, it should be easier to implement than JsonServiceClient.
JsonServiceClient would do it, but there must be a better way, using the same request context.
What I want to do (skip this if you're not curious about why I'm doing this)
Actually, my DTOs are like this:
[EmbedRequestedLinks]
[Route("/route/to/dto/{Id}", "GET")]
public class MyResponseDto
{
    public string Id { get; set; }
    public EmbeddableLink<AResponseDto> RelatedResource { get; set; }
    public EmbeddableLink<AnotherResponseDto> AnotherRelatedResource { get; set; }
}
EmbedRequestedLinksAttribute is a request/response filter. This filter checks whether there is a query argument named "embed" in the request. If so, the filter needs to "embed" the comma-separated related resources referenced by that argument into the response to this request. EmbeddableLink<T> instances can be obtained by using extension methods like these:
1) public static EmbeddableLink<T> ToEmbeddableLink<T>(this IReturn<T> requestDto)
2) public static EmbeddableLink<T> ToEmbeddableLink<T>(this T resource)
Assume a client places this request:
GET /route/to/dto/123456?embed=relatedResource HTTP/1.1
The service that will handle this request will return an instance of MyResponseDto with EmbeddableLinks created using signature (1). Then my response filter will see the embed query argument and will call the Get method of the appropriate service, replacing the RelatedResource with another instance of EmbeddableLink, this time created using extension method (2):
var client = new LocalServiceClient();
response.RelatedResource = client.Get(response.RelatedResource.RequestDto)
    .ToEmbeddableLink();
The serialization routine of EmbeddableLink takes care of the rest.
In case an embeddable link is not included in the embed list, the serialization routine will call the extension method ToUrl (provided by ServiceStack), which takes a verb and converts a request DTO into a URL. In this example the client will get this response:
{
    "id": "9asc09dcd80a98",
    "relatedResource": { "id": "ioijo0909801", ... },
    "anotherRelatedResource": {
        "$link": { "href": "/route/to/another/dto/1sdf89879s" }
    }
}
I know the creators of ServiceStack think that polymorphic request/responses are a bad thing, but this case seems OK to me because I'm not creating services; instead I'm extending the framework to help me create services the way I (and possibly other users of ServiceStack) need. I'm also creating other hypermedia extensions for ServiceStack. (I hope my boss allows me to publish these extensions on GitHub.)
If you really want to do this, then look at the source code for ServiceStack. Look at the ServiceManager and ServiceController; these classes are responsible for registering and resolving services. You might even be able to use reflection to create services on the fly with the static EndpointHost.Metadata, like so:
var operation = EndpointHost.Metadata.Operations
    .FirstOrDefault(x => x.RequestType == typeof(Person));

if (operation != null)
{
    var svc = Activator.CreateInstance(operation.ServiceType);
    var method = operation.ServiceType.GetMethod("Get");
    var response = method.Invoke(svc, new[] { new Person() });
}
This kinda works, but you will get null reference exceptions if there is other code calling

var httpRequest = RequestContext.Get<IHttpRequest>();
But I would not suggest this.
Instead, create your own business service classes that do all the CRUD operations (POST/PUT/GET etc.) and make the ServiceStack services thin wrappers over them. Then you can call your own services whenever you want without worrying about the HTTP request and ServiceStack. Only use the ServiceStack service when you are dealing with HTTP requests.
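A minimal sketch of that separation might look like this (all names are made up, and the Service base class and DTO conventions are assumed to match your ServiceStack version):

// Request/response DTOs
public class HelloRequest { public string Name { get; set; } }
public class HelloResponse { public string Result { get; set; } }

// Plain business class: no HTTP, no ServiceStack, callable from filters,
// background jobs, tests, or other services.
public class GreetingLogic
{
    public string BuildGreeting(string name)
    {
        return "Hello, " + name;
    }
}

// Thin ServiceStack wrapper: translates the request DTO into a call on
// the business class and wraps the result in a response DTO.
public class HelloService : Service
{
    public object Get(HelloRequest request)
    {
        var logic = new GreetingLogic(); // or resolve it from the IoC container
        return new HelloResponse { Result = logic.BuildGreeting(request.Name) };
    }
}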
You can call the static AppHostBase.Resolve() method, as demonstrated here calling a ServiceStack service from an MVC controller:
var helloService = AppHostBase.Resolve<HelloService>();
helloService.RequestContext = System.Web.HttpContext.Current.ToRequestContext();
var response = (HelloResponse)helloService.Any(new HelloRequest { Name = User.Identity.Name });
However, I would take @kampsj's approach of making your ServiceStack services a thin wrapper around your application service classes and only deal with HTTP/session-specific stuff in the ServiceStack service.
