Performance testing with external dependencies

When performance testing in the microservices world (mainly load testing), what is your approach regarding external dependencies (APIs) that your application relies on but that are not owned or controlled by your team? In my case the external dependencies are owned by teams within the same company. Would you point to the corresponding "real" integration non-prod endpoints, or would you create stubs and mimic their response times in order to match production as closely as possible?
First approach example: a back-end API owned by your team calls an external API to verify a customer. Your team doesn't have control over the customer API, but you still point to its integration testing endpoint when running the load test.
Second approach example: a back-end API owned by your team calls a stub that returns a static response and mimics the response time of the external customer API.
I realise there are pros and cons to the two approaches, and one would be favoured over the other depending on the goals of the testing. But which is your preferred one? It doesn't necessarily have to be a choice between the two mentioned above; it can be a completely different approach.

It is important to identify the system (or application) under test. If you are measuring the performance of only your own microservice, then you can consider stubbing as an option.
However, a performance test is typically done to assess the performance of the system as a whole. The intent is usually to emulate the latency of actual usage. The only way to model this somewhat accurately is to not stub and to use the "real" integration endpoints. This approach has additional advantages, as it can help you identify potential system performance bottlenecks such as chained synchronous calls between your microservices (service A calls B, B calls C, C calls D, and so on). The tests can also be reused for load testing.
In short, you would need to do both to ensure that:
each individual microservice is performing within its SLA;
the various microservices, taken as a whole, are performing within the SLA.
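If you do go with the stub approach from the question, the sketch below shows one way it could look. It uses Express, and the endpoint path, latency figures and payload are illustrative assumptions rather than the real customer API: the stub returns a static body but injects a delay matched to the latency observed in production.

```typescript
// Minimal stub of an external "customer verification" API with simulated latency.
// Endpoint path, latency numbers and response body are assumptions for illustration.
import express from "express";

const app = express();

// Roughly emulate the production latency profile of the real dependency
// (here: 150 ms +/- 50 ms jitter; replace with figures measured in production).
const BASE_LATENCY_MS = 150;
const JITTER_MS = 50;

app.get("/customers/:id/verify", (req, res) => {
  const delay = BASE_LATENCY_MS + (Math.random() * 2 - 1) * JITTER_MS;
  setTimeout(() => {
    // Static response standing in for the real verification result.
    res.json({ customerId: req.params.id, verified: true });
  }, delay);
});

app.listen(4000, () => console.log("customer API stub listening on :4000"));
```

Dedicated stubbing tools such as WireMock or Mountebank provide the same fixed-delay behaviour without hand-rolling it.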

Related

Call a method in the nest.js project with Camunda (looking for an approach)

Let's assume the following situation:
We have several webservices based on Nest.js technology
The services perform CRUD operations in the area of their domain
The services do not have business logic (they can add, change, delete and return data; they know the relationships between entities, but also between domains, e.g. through Apollo Federation)
Everything works fine so far.
However, we face the problem of business processes, validation, business rules and everything that goes with them. So we have to code this logic somehow or use some engine (e.g. Camunda).
As far as I understand it, Camunda can send requests from Service A to Service B in the BPMN process, e.g. via HTTP.
But what if several activities are performed in the same service?
Isn't it better to make such calls within the same service, at the service-layer level? Is that possible in Camunda?
E.g.
WebService1 has a POST Customer/ endpoint which calls CustomerService.AddCustomer(data) and CustomerRoles.AddRole(data). Can we call CustomerRoles.AddRole from Camunda?
My question is mainly about node.js / nestjs.
Forgive me, but I don't think I can describe it more clearly :(
In general you can use Camunda not only at the highest orchestration layer, for the end-to-end business process, but also inside the microservice. Benefits include state management, error handling, retries, exception handling and possible compensation. (What happens if AddCustomer succeeds, but AddRole fails?)
There are orchestration vs choreography considerations. Latency requirements may also be relevant. I recommend these two reads, which illustrate the benefits, trade-offs and design decisions well:
https://blog.bernd-ruecker.com/the-microservice-workflow-automation-cheat-sheet-fc0a80dc25aa
and
https://blog.bernd-ruecker.com/3-common-pitfalls-in-microservice-integration-and-how-to-avoid-them-3f27a442cd07
Why don't you implement a little proof of concept and see what it could look like? If Nest.js is your world, you may like to start with a Camunda 8 SaaS trial and https://github.com/camunda-community-hub/nestjs-zeebe#readme
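To give a rough idea of what "Camunda inside the microservice" could look like with the linked nestjs-zeebe / zeebe-node tooling, here is a sketch of a job worker for the AddRole step. The task type, variable names and the CustomerRoles call are assumptions, and the exact client API differs between zeebe-node versions:

```typescript
// Sketch of a Zeebe job worker running inside the same NestJS service that owns CustomerRoles,
// so the BPMN service task "add-role" is handled in-process instead of over HTTP.
// Task type, variable names and the CustomerRoles call are hypothetical; the client API may vary by version.
import { ZBClient } from "zeebe-node";

const zbc = new ZBClient(); // connection settings taken from ZEEBE_* environment variables by default

zbc.createWorker({
  taskType: "add-role",
  taskHandler: async (job) => {
    const { customerId, role } = job.variables as { customerId: string; role: string };
    // Call the in-process service method rather than issuing an HTTP request to yourself, e.g.:
    // await customerRolesService.addRole(customerId, role);
    console.log(`assigning role ${role} to customer ${customerId}`);
    return job.complete({ roleAssigned: true });
  },
});
```

With this shape, the orchestration concerns (ordering, retries, compensation) live in the BPMN model, while the actual work stays a plain method call inside WebService1.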

How to classify services in microservices?

I am new to microservices. I come from a monolithic background; in my current environment I have different kinds of services for different purposes, like search, file, email and notification. I have taken so many courses, but in them the instructor separates out each entity, gives it its own database and creates an API for it (like a separate shopping cart entity and a product entity). It makes no sense to me: I don't understand what the real-world use of microservices is, or how to carve out the separate components that should each become their own microservice.
Can anyone give Real Project example?
Thanks in advance
Read this and this. Also look here and here. I don't think that anyone will give a link to a real working project, so you can try this.
I am not getting what is real world use of microservices
Mostly, as you have heard in all of those tutorials, the microservices architecture leverages these advantages:
smaller services are easier to maintain and develop;
specific services can be scaled independently rather than the whole project (as with a monolith). For example, you scale service-1 to 4 instances so that request traffic is split across those 4 instances (load balancing), scale service-2 to 2 instances, and so on; these services may also be distributed across different servers and locations;
if one service fails, it does not take down the whole system, since the services are independent;
services can be reused for other scenarios or features;
a small team can work on each service, which keeps both the project and the development flow easy to manage.
It also suffers from disadvantages:
the services are simple and small, but the system as a whole is complex, so the design stage is critical;
performance can be poor, and extra work is required to improve it (different types of caching at different levels);
transactions are complex and their development is time-costly. Imagine that a simple update has to be projected to other services where required, and you have to consider a failure and rollback strategy (SAGA); see the sketch after this list.
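To make the SAGA point concrete, here is a toy sketch of the idea (not a specific library; the step shape is made up): every step that already succeeded is compensated in reverse order when a later step fails.

```typescript
// Toy saga runner: each step has an action and a compensating action (undo).
// The step shape is hypothetical and only illustrates the rollback strategy.
type SagaStep = {
  run: () => Promise<void>;
  compensate: () => Promise<void>;
};

async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.run();
      completed.push(step);
    } catch (err) {
      // A later step failed: undo the steps that already succeeded, newest first.
      for (const done of completed.reverse()) {
        await done.compensate();
      }
      throw err;
    }
  }
}
```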
how to make separate component to build it's own microservice
This is the most challenging part of microservices. You need to study Domain-Driven Design (DDD) in depth:
Decompose by subdomain
Decompose by Business Capabilities
Can anyone give Real Project example?
There are many projects that implement microservices with different patterns. I think you have to start your own and get your hands dirty.

API Architecture - Business logic tightly coupled to routes?

To speed up development of my next Node API I was looking for a suitable framework. In the past I built my APIs with Express only.
One design pattern I have always found useful is to completely separate the business logic from route handling by putting it in services. Those services only accept the required information (like a user id or data) and return a promise resolving to the result of the operation.
This way it is easy to reuse these services in other routes, to combine them, to test them, or to call them based on schedules or other events, totally independently of endpoint calls. Routing and middleware take care of access control, error handling and responding.
Looking at the documentation of those frameworks (sailsjs, keystonejs, ...) I mostly see the business logic tightly coupled to individual routes, directly accepting request objects and handling the responses. Only as an afterthought is there sometimes a way offered to extract "often used code" into helper functions.
Am I missing something? How come this pattern seems to be the standard of API design? Is this a best practice for a reason?
It might have to do with Node.js services being smaller in size. If you're coming from an enterprise background, you're well aware that mixing business logic with controller code doesn't fly in the long run. Perhaps small projects can get away with defying that, but once the size increases, you can't avoid the laws of physics. It's best to separate concerns and keep the codebase maintainable.
I'd also add that below the services it's good to have a separate layer that handles talking to anything outside the process boundary. That way, you can test business logic in isolation by providing appropriate test doubles for the integrations. Here's a longer explanation of how it would work in a Node project: Organize Node.js API project using 3-layer architecture.
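For illustration, the separation the question describes needs no particular framework; a minimal Express sketch (route paths, names and data invented) could look like this, with the service function unaware of HTTP and the route only translating between the request and the service:

```typescript
// Minimal sketch of the route/service split: business logic in a plain function,
// HTTP concerns in the route handler. Names, routes and data are illustrative.
import express from "express";

// Service layer: accepts plain inputs, returns a promise, knows nothing about HTTP.
async function getUserProfile(userId: string): Promise<{ id: string; name: string }> {
  // ...fetch from a database or another service here...
  return { id: userId, name: "Jane Doe" };
}

// Route layer: access control, error handling and responding live here.
const app = express();

app.get("/users/:id", async (req, res) => {
  try {
    const profile = await getUserProfile(req.params.id);
    res.json(profile);
  } catch (err) {
    res.status(500).json({ error: "internal error" });
  }
});

app.listen(3000);
```

The same getUserProfile function can then be called from a scheduled job or another route without touching any request or response objects.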

Questions pertaining to micro-service architecture

I have a couple of questions about microservice architecture. For example, take the following services:
orders,
account,
communication &
management
Question 1: From what I read, I understand that each service is supposed to have ownership of the data pertaining to that service, so orders would have an orders database. How important is that data ownership? Would microservices make sense if they all called into one traditional database, such that all data pertaining to the services lived in one database? If so, are there any implications of structuring the services this way?
Question 2: Services should be able to communicate with one another. How would that statement be any different from simply curling an existing API and basing the logic on that response? Is calling a service more efficient than simply curling the API?
Question 3: Is it worth it? I understand this is a massive generality and is fundamentally predicated on the needs of the business. But once that discussion has been had, was the rebuild worth it, and what challenges can you expect to face?
I will try to answer all the questions.
With respect to all services using the same database: if you do that, you have two main problems. First, the database becomes a bottleneck, because all requests go to the same point. Second, you will have coupled all your services, so if the database goes down or needs an update, all your services are affected (the database becomes a single point of failure).
The communication between services can be whatever your services need (synchronous, asynchronous, via message passing (message broker), etc.); it all depends on the use cases you have to support. The recommended way to avoid temporal coupling is to use a message broker like Kafka. Doing this, your services don't have to know about each other, and if some of them go down the others keep working; when they come back up, they can continue processing the messages they have pending. However, if your services need to respond synchronously, you can define synchronous communication between them and use a circuit breaker to behave properly in case the callee service is down.
A microservices architecture is far more complicated to make work, to monitor and to debug than a traditional monolithic architecture, so it is only worth it if you have very large scalability and availability requirements and/or the system is so large that it requires several teams working on different parts of it and you want to avoid dependencies among them, so that each team can work at its own pace, deploying its own services.
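As a rough sketch of the broker-based option described above (assuming the kafkajs client; broker address, topic name and payload are invented for illustration), the orders service publishes an event and the account service consumes it whenever it is running, picking up pending messages after an outage:

```typescript
// Sketch of asynchronous, broker-based communication between two services using kafkajs.
// Broker address, topic name and payload shape are assumptions for illustration.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "orders", brokers: ["localhost:9092"] });

// Orders service: publish an event instead of calling the account service directly.
async function publishOrderPlaced(orderId: string): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "order-placed",
    messages: [{ key: orderId, value: JSON.stringify({ orderId }) }],
  });
  await producer.disconnect();
}

// Account service: consume events at its own pace; messages published while it
// was down are processed once it reconnects.
async function runAccountConsumer(): Promise<void> {
  const consumer = kafka.consumer({ groupId: "account-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "order-placed", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");
      // ...update account-side data for event.orderId here...
      console.log("processing order", event.orderId);
    },
  });
}
```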

Mocking API responses with C# Selenium WebDriver

I am trying to figure out how (or even if) I can replace the API calls made by my app when running a WebDriver test so that they return stubbed output. The app uses a lot of components that are completely dependent on a time frame or third-party info that will not be consistent or reliable for testing. Currently I have no way to test these elements without using a 'run this test if...' approach, which is far from ideal.
My tests are written in C#.
I have found a JavaScript library called xhr-mock which seems to do roughly what I want, but I can't use it with my current testing solution.
The correct answer to this question may be 'that's not possible', which would be annoying, but after a whole day reading irrelevant articles on Google I fear that may be the outcome.
WebDriver tests are end-to-end, black-box, user-interface tests.
If your page depends on an external gateway, you will have a service and models that wrap that gateway for use throughout your system, and you will likely already be referencing those models in your tests.
Given that the gateway is time-dependent, you should use the service consumed by your API layer in your tests as well, and simply check that the information returned by the gateway at any time is displayed on the page as you would expect it to be. You'll have unit tests to check that the responses are modelled correctly.
As you fear, the obligatory 'this may not be possible': given the level of change you are subject to from your gateway, you may need to reduce your accuracy or introduce some form of refresh in your tests, as the two calls will arrive slightly apart.
You'll likely have a mock or stub API in order to develop the design, given the unpredictable gateway. It would then be up to you whether you use the real or the fake gateway for tests in any given environment. These tests shouldn't be run against production, so I would use a fake gateway for a CI test environment and the real gateway for a manual test environment, where BBT failures don't impact your release pipeline.

Resources