In ServiceStack, how can I do integration testing with multiple endpoints?

We're using ServiceStack for a client project with several distinct problem domains, which we'd prefer to keep separated. We've developed a testing framework that spins up an AppHostHttpListener and sets up an in-memory test database using SQLite and DbUp - but, as you know, a test session's AppDomain can only have one AppHost at a time.
On the other hand, we have two different AppHosts that we want to deploy, let's call them Foo and Bar. Foo accepts requests and passes them to Bar, so Foo -> Bar, and Bar is standalone.
We want to be able to write end-to-end integration tests that exercise instances of both Foo and Bar. With ServiceStack's limitation of one AppHost per AppDomain, we seem to have the following options:
Spin up a new AppDomain for each AppHost inside the test session and control their lifetimes across a MarshalByRef boundary. Not sure how this would perform when sharing 'test connections' between AppHosts, though.
Mock out the external service. This is the textbook answer, but these systems are critical enough that we'd like to see when changes to one service break the other.
Make the endpoints pluggable so that they can be loaded in the same AppHost for testing, but under different sub-URLs. The way I see it, this would require the endpoints to share AuthFeature, IDbConnectionFactory, etc., so we would lose that flexibility.
My questions to you are:
Which option would you go with?
Can you recommend another approach that would enable us to test integration of multiple ServiceStack endpoints in memory?

The only way to test multiple Services in memory is to combine them in the same Test AppHost, which only needs to register the dependencies the integration tests actually use. In-memory integration tests normally have a custom AppHost built for the task; the AppHost itself isn't part of what's being tested.
The alternative is to use IIS Express and start instances of the endpoints used in the integration test before running it.
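As a rough illustration of the combined Test AppHost approach, a minimal sketch (CombinedTestAppHost, FooService, BarService, GetFooResponse, the /foo/1 route and port 2000 are placeholders, and the in-memory OrmLite SQLite registration stands in for the SQLite/DbUp setup described in the question; namespaces assume a recent ServiceStack release):

using Funq;
using NUnit.Framework;
using ServiceStack;
using ServiceStack.Data;
using ServiceStack.OrmLite;

// Combines the services from both problem domains in one self-hosted AppHost.
public class CombinedTestAppHost : AppHostHttpListenerBase
{
    public CombinedTestAppHost()
        : base("Integration Tests", typeof(FooService).Assembly, typeof(BarService).Assembly) {}

    public override void Configure(Container container)
    {
        // Register only the dependencies the integration tests need.
        container.Register<IDbConnectionFactory>(
            new OrmLiteConnectionFactory(":memory:", SqliteDialect.Provider));
    }
}

[TestFixture]
public class FooBarIntegrationTests
{
    const string BaseUri = "http://localhost:2000/";
    CombinedTestAppHost appHost;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        appHost = new CombinedTestAppHost();
        appHost.Init();
        appHost.Start(BaseUri);
    }

    [OneTimeTearDown]
    public void OneTimeTearDown() => appHost.Dispose();

    [Test]
    public void Foo_can_reach_Bar_end_to_end()
    {
        var client = new JsonServiceClient(BaseUri);

        // Exercise a Foo endpoint that internally calls into Bar.
        var response = client.Get<GetFooResponse>("/foo/1");

        Assert.That(response, Is.Not.Null);
    }
}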

Related

What is a good way to mock and spy on an API dependency in Node testing?

I have a question about how best to do end-to-end/integration testing without having to mock an entire API. I've not found any great solutions and I'm starting to wonder if I'm thinking about this the wrong way.
My situation
I have a web client that I want to test which relies heavily on a REST API that is part of our larger system. For the most part, testing against an instance of the real API service seems like the right thing to do in our end-to-end testing scenario, but in some cases (e.g. provoking errors, empty lists, etc.) it would be easier to mock portions of the API than to go in and change the actual state of the API service.
In other cases, I don't need to mock the API, but I do need to confirm that requests were made as expected, essentially a spy, but on an API.
My idea for a solution
I imagined the best way to solve this would be an HTTP proxy controlled by the test suite, sitting between the system-under-test and the API service. The test suite would then:
Configure the system-under-test to use the API proxy
For each test:
  Set up mocks only on relevant endpoints (the rest are proxied to the API service)
  Exercise the system-under-test
  Make assertions by reading spies from the proxy
  Tear down/reset the mocks and spies afterwards
I have multiple use cases for this, one being Cypress end-to-end testing inside a browser, where monkeypatching or dependency injection is not possible because significant portions of the system under test are not executed directly by the test suite.
I found MockServer, which seems to do what I'm looking for, but it's a massive Java program that adds a lot of requirements (e.g. a Java runtime, which translates to CI costs) to an otherwise Node-based environment.
EDIT: An image in MockServer's documentation illustrates one usage which matches my scenario.
My question(s)
Is what I'm considering a good approach to this type of situation? What is a good way (e.g. existing software) to solve this within the Node eco-system?

Performance testing with external dependencies

When performance testing in the microservices world (talking mainly about load testing), what is your approach regarding external dependencies (APIs) that your application relies on but that are not owned/controlled by your team? In my case the external dependencies are owned by teams within the same company. So would you point to the corresponding "real" integration (non-prod) endpoints, or would you create stubs that mimic their response times in order to match production as closely as possible?
First approach example: A back-end API owned by your team calls an external API to verify a customer. Your team doesn't have control over the customer API, but you still point to its integration testing endpoint when running the load test.
Second approach example: A back-end API owned by your team calls a stub that sends a static response and mimics the response time of the external customer API.
I realise there are pros and cons to the two approaches, and one would be favoured over the other depending on the goals of the testing. But which is your preferred one? It doesn't necessarily have to be a choice between the two mentioned above; it can be a completely different approach.
It is important to identify the system (or application) under test. If you are measuring the performance of only your own microservice, then you can consider stubbing as an option.
However, performance testing is typically done to assess the performance of the system as a whole. The intent is usually to emulate the latency of actual usage. The only way to model this somewhat accurately is not to stub and to use the "real" integration endpoints instead. This approach has additional advantages, as it can help you identify potential system performance bottlenecks such as chained synchronous calls between your microservices (Service A calls B, B calls C, C calls D, etc.). The tests can also be reused for load testing.
In short, you would need to do both to ensure:
A microservice is performing within the SLA
The various microservices are performing within the SLA as a whole.
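If you do take the stubbing route for the component-level half of this, the stub can be very small. A minimal sketch, assuming ASP.NET Core minimal APIs, with a made-up /customers/{id} route and a 200 ms delay standing in for the real customer API's observed response time:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;

// Stand-alone stub of the external customer API for load tests.
// The route, payload and 200 ms delay are assumptions; replace them with the
// contract and latency observed against the real service in production.
var app = WebApplication.Create(args);

app.MapGet("/customers/{id}", async (string id) =>
{
    await Task.Delay(200);                       // mimic the real API's typical response time
    return new { Id = id, Status = "VERIFIED" }; // static canned response
});

app.Run("http://localhost:5001");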

Mocking API responses with C# Selenium WebDriver

I am trying to figure out how (or even if) I can replace the API calls made by my app when running a WebDriver test so that they return stubbed output. The app uses a lot of components that are completely dependent on a time frame or third-party info that will not be consistent or reliable for testing. Currently I have no way to test these elements without a 'run this test if...' approach, which is far from ideal.
My tests are written in C#.
I have found a JavaScript library called xhr-mock which seems to roughly do what I want, but I can't use it with my current testing solution.
The correct answer to this question may be 'that's not possible' which would be annoying but, after a whole day reading irrelevant articles on Google I fear that may be the outcome.
WebDriver tests are end-to-end, black-box, user-interface tests.
If your page depends on an external gateway, you will have a service and models wrapping that gateway for use throughout your system, and you will likely already be referencing those models in your tests.
Given the gateway is time-dependent, you should use the service consumed by your API layer in your tests as well, and simply check that the information returned by the gateway at any given time is displayed on the page as you would expect it to be. You'll have unit tests to check that the responses are modelled correctly.
As you fear, the obligatory 'this may not be possible': given the level of change you are subject to from your gateway, you may need to reduce your accuracy or introduce some form of refresh in your tests, as the two calls will arrive slightly apart.
You'll likely have a mock or stub API in order to develop the design, given the unpredictable gateway. It would then be up to you whether you use the real or fake gateway for tests in any given environment. These tests shouldn't be run against production, so I would use a fake gateway for a CI test environment and the real gateway for a manual-test environment, where black-box test failures don't impact your release pipeline.
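To make that concrete, a minimal C# sketch of the wrapping-service idea (ICustomerGateway, FakeCustomerGateway, RealCustomerGateway and the environment switch are invented names for illustration, not anything your app necessarily has):

// Hypothetical wrapper around the time-dependent third-party gateway.
public interface ICustomerGateway
{
    CustomerInfo GetCustomer(string id);
}

public class CustomerInfo
{
    public string Id { get; set; }
    public string Status { get; set; }
}

// Deterministic canned data for a CI test environment.
public class FakeCustomerGateway : ICustomerGateway
{
    public CustomerInfo GetCustomer(string id) =>
        new CustomerInfo { Id = id, Status = "VERIFIED" };
}

// In the app's composition root, pick the implementation per environment,
// e.g. the fake gateway for ci-test and the real one for manual-test:
// if (environment == "ci-test")
//     RegisterGateway(new FakeCustomerGateway());
// else
//     RegisterGateway(new RealCustomerGateway());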

Calling Web API from Web Job or use a class?

I am working on creating a Web Job on Azure; the purpose is to handle some of the workload and perform background server tasks for my website.
My website has several Web API methods which are used by the site itself, but I also want the Web Job to perform the same tasks as those Web API methods once they have finished.
My question is: should I get the Web Job to call this Web API (if possible), or should I move the Web API code into a class and have both the Web API and the Web Job call that class?
I just wondered what was normal practice here.
Thanks
I would suggest you put the common logic in a DLL and have them both share that library, instead of trying to get the WebJob to call the Web API.
I think that will be the simpler way to get what you want (plus it will help you keep things separated so they can be shared - instead of putting too much logic in your Web API).
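As a rough sketch of the shared-library approach (ReportCleanup, the controller and the routes are invented names; each commented section lives in its own project):

// --- Shared class library, referenced by both the Web API and the WebJob ---
public class ReportCleanup
{
    public void Run()
    {
        // ...the work that both the API endpoint and the background job need to do...
    }
}

// --- Web API project (ASP.NET Web API 2) ---
using System.Web.Http;

public class MaintenanceController : ApiController
{
    [HttpPost]
    public IHttpActionResult CleanupReports()
    {
        new ReportCleanup().Run();
        return Ok();
    }
}

// --- WebJob project (a plain console app deployed as a WebJob) ---
public class Program
{
    public static void Main()
    {
        new ReportCleanup().Run();
    }
}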
I think it's player's choice here. Both will run on the same instance(s) in Azure if you choose to deploy them that way. You can either reuse by dogfooding your API or reuse by sharing a class via a DLL. We started off mixed but eventually went with dogfooding the API as the number of WebJobs we use got bigger/more complex. Here are a couple of reasons:
No coupling to the libraries/code used by the API
Easier to move the Web Job into its own solution(s) only dependent on the API and any other libraries we pick for it
Almost free API testing (Dogfooding via our own Client to the API)
We already have logging and other concerns wired up in the API (more reuse)
Both work, though in reality it really comes down to managing the dependencies and the size of the app/solution you are building.
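For comparison, the dogfooding route is just the WebJob calling your own API over HTTP; a minimal sketch (the base address and route are placeholders):

using System;
using System.Net.Http;
using System.Threading.Tasks;

// WebJob entry point that reuses the Web API instead of a shared library.
public class Program
{
    public static async Task Main()
    {
        using (var http = new HttpClient { BaseAddress = new Uri("https://myapp.azurewebsites.net/") })
        {
            // Trigger the same logic the website exposes through its API.
            var response = await http.PostAsync("api/maintenance/cleanup-reports", new StringContent(string.Empty));
            response.EnsureSuccessStatusCode();
        }
    }
}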

Integration tests for single sign-on pages

How do you test pages with single sign-on (SSO) login during integration tests (for instance using Capybara or Cucumber)? For a normal login, you would write a method which visits the login page, fills out the form, and submits it. This is a bit difficult if the login form comes from an external SSO server like Shibboleth or OpenAM/OpenSSO. How is it possible to write integration tests for pages protected by SSO?
A similar problem is integration testing with a separate search server (Solr or Sphinx). You would probably solve it by using some form of mocks or stubs. Can someone give a good example of how to mock or stub an SSO for Cucumber or Capybara? If this is too difficult, then a comparable example for a search server would be helpful, too.
Integration testing of an SSO application is a special case of a more general problem: testing distributed applications. This is a difficult problem and there does not seem to be a magic bullet for it. There are various ways to combine a set of different servers or services and test them as a whole. The two extremes are:
a) Test an instance of the whole system. You don't need any mocks or stubs then, but you do need a complete, full-fledged setup of the entire stack, including a running instance of every server involved. For each test, set up the entire application stack and test it as a whole, i.e. exercise the entire distributed system with all the components involved, which is difficult in general. This only works if every component and every connection is working well.
b) Write an integration test for each component, treat it as a black box, and cover the missing connections with mocks and stubs. In practice, this approach is more common for unit testing, where one writes tests for each MVC layer: model, view, and controller (view and controller often together).
In both cases, you have not yet considered broken connections. In principle, for each external server/service you have to check the following possibilities:
it is down
it is up and behaves well
it is up and replies incorrectly
it is up, but you send it the wrong data
Basically, testing distributed apps is difficult, which is one reason why distributed applications are hard to develop. The more parts and servers a distributed application has, the more difficult it is to set up several full-fledged environments like production, staging, test and development. The larger the system, the more difficult integration testing becomes. In practice, one uses the first approach and creates a small but complete version of the whole application. A typical simple setup would be App Server + DB Server + Search Server.
On your development machine, you would have two different versions of a complete system:
One DB Server with multiple databases (development and test)
One Search Server with multiple indexes (development and test)
The common Ruby plugins for search servers (Thinking Sphinx for Sphinx, Sunspot for Solr) support Cucumber and integration tests. They "turn on" the search server for certain portions of your tests. For the code that does not use the search server, they "stub" the server or mock out the connection to avoid unneeded indexing.
For RSpec tests, it is possible to stub out the authentication methods, for example in a controller test:
before :each do
  @current_user = Factory(:user)
  controller.stub!(:current_user).and_return(@current_user)
  controller.stub!(:logged_in?).and_return(true)
end
This also works for helper and view tests, but not for RSpec request or integration tests.
For Cucumber tests, it is possible to stub out the search server by replacing the connection to it with a stub (for Sunspot and Solr this can be done by replacing Sunspot.session, which encapsulates the connection to Solr).
This all sounds fine; unfortunately, it is a bit hard to transfer this solution to an SSO server. A typical minimal setup would be App Server + DB Server + SSO Server. A complete integration test would mean setting up one SSO server with multiple user data stores (development and test). Setting up an SSO server is already difficult enough; setting one up with multiple user data stores is probably not a very good idea.
One possible solution lies somewhere in the direction of FakeWeb, a Ruby library written by Blaine Cook for faking web requests. It lets you decouple your test environment from live services. Faking the responses of an SSO server is unfortunately a bit hard.
Another possible solution, which I ended up using, is a fake login: add a fake login method that can be called within the integration test. This fake login is a dynamic method added only during the test (through a form of monkey patching). This is a bit messy, but it seems to work.
