What kind of testing is this? Is the following considered end-to-end testing, integration testing, or system testing? If it is none of these, could you elaborate on the types of testing in the context of the code example?
I'm basically calling the endpoint on localhost and asserting on the status/output.
let assert = require("assert");
let http = require("http");

describe("EndToEndTesting", function () {
  describe("GET /users", function () {
    it("should return list of users", function (done) {
      http.request({
        method: "GET",
        hostname: "localhost",
        port: 3000,
        path: "/users"
      }, function (response) {
        let buffers = [];
        response.on("data", buffers.push.bind(buffers));
        response.on("end", function () {
          let body = JSON.parse(Buffer.concat(buffers).toString());
          assert.strictEqual(response.statusCode, 200);
          assert(Array.isArray(body)); // assert on the output as well as the status
          done(); // tell mocha the async test has finished
        });
      }).on("error", done).end();
    });
  });
});
Interesting question; unfortunately, the answer can depend on a number of things.
Let's start with a few definitions:
End-to-end testing is testing the application from start to finish, i.e. a complete flow that a user would be expected to perform.
Integration testing is testing a group of components together as a single item.
System testing is testing the system as a whole against the requirements.
So the first question to consider is: does this test check a complete flow that a user would be expected to perform?
This does mean you need to define who the user is. If, for example, you are building an API that is then re-used by other developers, your user as the author of the API will be different from the end user of the solutions integrating your API.
My best guess is that this isn't an end-to-end test. Looking at the test, it is a web request to get a list of users. If you are building a web UI that can list confidential information, I would expect the person accessing the list to be required to log in. Therefore the end-to-end test would include logging into the system.
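For illustration only, a hypothetical end-to-end version of that test might log in first and reuse the resulting credential. The /login endpoint, its payload, and the token field are assumptions, not part of the original application:
const request = require("supertest");
const assert = require("assert");

describe("End to end: viewing users", function () {
  it("logs in and then lists users", async function () {
    const api = request("http://localhost:3000");

    // Step 1: authenticate the way a real user would (endpoint and payload are assumptions).
    const login = await api
      .post("/login")
      .send({ username: "alice", password: "secret" })
      .expect(200);

    // Step 2: perform the action under test with the credential from step 1.
    const users = await api
      .get("/users")
      .set("Authorization", "Bearer " + login.body.token)
      .expect(200);

    assert(Array.isArray(users.body));
  });
});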
So, next question: what components are being tested? It's hard to tell from the test code, but I am guessing the data is stored in a database somewhere. So yes, this is an integration test, as it checks the interaction between the web component and the database.
Finally, is this a system test? Probably yes, as I cannot see any evidence of it not being run against the system as a whole. Also, assuming a requirement of your solution is to be able to list the users, it is testing the required functionality.
So, in this case I believe it could be either a system test or an integration test. Unfortunately, based on the definitions of these test types, there is often an overlap where tests could be classified as either; it's ultimately up to you which you call it.
Related
I'm developing a REST API where, once you've bought a paid plan and received an apiKey, you can create up to a certain number of apps depending on the plan chosen. I'm using Node.js, and to handle requests I'm using the HTTPS module like this:
https.createServer(options, (req, res) => {
  res.writeHead(200);
  req.on('data', function (data) {
    let command = data.toString();
    var cmd = utils.getCommand(command);
    var cmdResult = "";
    switch (cmd.method) {
      case 'SIGNIN':
        cmdResult = auth.signin(cmd.parameters[0], cmd.parameters[1], cmd.parameters[2]);
        break;
      case 'LOGIN':
        cmdResult = auth.login(cmd.parameters[0], cmd.parameters[1], cmd.parameters[2]);
        break;
    }
    res.end(result.getResult(cmdResult));
  });
}).listen(11200);
For debugging I'm using curl like this:
curl -X POST -d 'SIGNIN|username|password|apikey' https://mycurrentipaddress:11200
The above command performs a sign-in to add a new app that uses the API; the authentication works with the apiKey.
The API will offer an authentication service, a NoSQL DB based on JSON, and push notifications.
Is my idea and its current realization conceptually correct, or does it not make sense?
No, that isn't REST; you should read up on the REST architectural style for a better understanding of that pattern.
That doesn't mean it won't work, but it's not a conventional pattern, which I think is what you're looking for.
There are any number of tutorials on the internet on how to write a RESTful API; just take the time to follow some. I'd also strongly suggest starting with a framework like ExpressJS rather than the Node standard libraries. You'll have plenty of opportunity to learn those in the future, and starting with a framework is a much better path for learning, imo.
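To make the contrast concrete, here is a minimal sketch of how the same operations could be modelled as conventional REST endpoints with Express. The route names, the X-Api-Key header, and the status codes are assumptions, and auth stands in for the question's own module:
const express = require("express");
const auth = require("./auth"); // the question's auth module, path assumed

const app = express();
app.use(express.json()); // JSON request bodies instead of pipe-delimited command strings

// Sign-in modelled as a resource-style endpoint
app.post("/signin", (req, res) => {
  const { username, password } = req.body;
  const apiKey = req.get("X-Api-Key"); // header name is an assumption
  res.status(201).json(auth.signin(username, password, apiKey));
});

app.post("/login", (req, res) => {
  const { username, password } = req.body;
  const apiKey = req.get("X-Api-Key");
  res.json(auth.login(username, password, apiKey));
});

app.listen(11200);
The client call then becomes an ordinary JSON request, e.g. curl -X POST -H 'X-Api-Key: apikey' -H 'Content-Type: application/json' -d '{"username":"u","password":"p"}' https://mycurrentipaddress:11200/signin.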
I'm new to TDD and I wrote a few test functions that check the sign-up and deletion of a user. Before each run I go to the database and delete the user before testing sign-up, and I go to the database to insert dummy user info before testing deletion. So my question is: how does this work in an actual production environment? Every time I want to run the tests, do I go to the database and make all these modifications? And what if a user actually signed up with the credentials below? Then the test would return 200. (I use Jest with Node.js, e2e.)
describe("given user is not found", () => {
it("should return 404", async () => {
await request(app)
.post("/api/v1/auth/signIn")
.send({
email: "s#gmail.com",
password: "s",
})
.expect(404);
});
});```
Other opinions are available but here's what I'd do.
Let's assume your backend is in Java. Before I wrote any code, I would have a test that called the API endpoint with the same input and expected a 404.
Some will tell you to mock the database with a library like Mockito, but I don't think that's necessary. For our purposes a database is nothing but a map: it takes an id and returns an object. So make an interface that describes interactions with your database (saveUser(), loadUser(), that kind of thing). Implement that interface once with your real, database-backed implementation, and once with a test implementation, which is just a class with the same methods and a map doing the actual work. This is called a fake.
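Since the question uses Node rather than Java, here is a rough sketch of the same idea in JavaScript; the method names, the shape of a user, and the buildApp factory are assumptions used only for illustration:
class InMemoryUserStore {
  constructor() {
    this.users = new Map(); // email -> user, standing in for the real database
  }

  async saveUser(user) {
    this.users.set(user.email, user);
    return user;
  }

  async loadUser(email) {
    return this.users.get(email) || null; // null plays the role of "user not found" (404)
  }
}

// In a test you would hand the fake to the app instead of the real store, e.g.:
// const app = buildApp({ userStore: new InMemoryUserStore() }); // buildApp is hypothetical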
Your API can return a 404 if the user is not found or a 200 if it is. Your test stays in the backend, and it's quicker than it was before because you're not hitting a real database.
As far as your example goes, I really don't think you need to set anything up in the database at all. But if you absolutely must, then you could have an endpoint which tears down the database setup and sets it up again, executed at the start of the test. Or a database image that you stand up just for the test. Both are probably overkill.
We are running a microservice architecture and want to set up contract testing in our project. Our consumers do not know which request is handled by which microservice. We want our microservices to select the interactions from the pacts that they should participate in.
Example:
Consumer A writes a test which is testing POST /users.
Consumer A writes a second test for POST /users with different parameters.
Consumer A writes a test for GET /users/$userId.
Consumer A writes a test for GET /articles/$articleId.
Microservice A handles all POST /users requests.
Microservice B handles all GET /users/$userId requests.
Microservice C handles all GET /articles/$articleId requests.
All of the consumer tests only have a single request in their interactions.
We want to put provider tests next to the Microservices. Each microservice should only test the endpoints that it is capable of handling. In this scenario, Microservice A would test all of the POST /users contracts. Microservice B would select the GET /users/$userId contracts and so on.
Is there a way to do so with pactflow.io and nodejs bindings for pact?
No, there is no such in-built feature in Pact that supports that use case.
We've discussed the possibility of publishing expectations for messages this way, but not HTTP (because this is a bit more unusual, unlike message queues like Kafka where there is usually more indirection).
Are you using some form of dynamic API gateway or something?
One challenge you'll face is reverse engineering out the requests themselves in a reliable manner.
Ideas
The only suggestion I have would be to have a proxy on the provider side test that was aware of the different endpoints, and would redirect the requests to the correct provider. But then state handling gets difficult.
You can of course also manually fetch the pacts, and split them, but you'll lose a lot of value that pact has.
I'm not sure if the consumers not knowing about the providers is more of a philosophical thing, a practical thing or otherwise, but obviously the simplest solution is probably making the consumers aware of their providers.
Raise a Feature Request
Perhaps stating your use case more clearly and requesting a feature at https://pact.canny.io/ might be worthwhile, to see how relevant your use case is to the broader community and if it would be worth implementing.
We have found a solution to our particular problem:
The consumer does not know which service acts as a provider, but it knows the (HTTP method, URL) tuple that it calls.
The microservice knows every (HTTP method, URL) that it is responsible for.
We define a provider as the tuple (HTTP method, URL). As a result, a consumer contains many tests for many providers and a microservice also contains many tests for many providers.
Something like this in node.js for the consumer:
const consumer = "MyConsumer";
const providerGetArticles = new Pact({ consumer, provider: 'GET-articles', ... });
const providerGetArticlesArticleId = new Pact({ consumer, provider: 'GET-articles-:articleId', ... });
const providerPostUsers = new Pact({ consumer, provider: 'POST-users', ... });
const providerGetUsers = new Pact({ consumer, provider: 'GET-users', ... });
const providerGetUsersUserId = new Pact({ consumer, provider: 'GET-users-:userId', ... });
providerGetArticles.setup().then(() => {
  providerGetArticles.addInteraction({
    withRequest: { method: 'GET', path: '/articles' },
    ...

providerGetArticlesArticleId.setup().then(() => {
  providerGetArticlesArticleId.addInteraction({
    withRequest: { method: 'GET', path: '/articles/12345' },
    ...

providerPostUsers.setup().then(() => {
  providerPostUsers.addInteraction({
    withRequest: { method: 'POST', path: '/users' },
    ...
And like this for a microservice that handles GET /articles and GET /articles/:articleId
new Verifier({ provider: 'GET-articles', ... }).verifyProvider()...
new Verifier({ provider: 'GET-articles-:articleId', ... }).verifyProvider()...
We can now start the single microservice in isolation and run the provider tests.
For over a year we've seen interesting patterns that don't always show themselves but do repeat on occasion, and we've never been able to figure out why; I'm hoping someone can make sense of it. It may be our approach, it may be the environment (node 8.x & koa), it may be a number of things.
We make two async calls in parallel to our dependencies using the request-promise module.
Simplified code of a single api dependency:
const httpRequest = require("request-promise");
const moment = require("moment");

module.exports = function (url) {
  const tmStart = moment(); // start time used to measure the request latency

  const requestOptions = {
    uri: ...,               // presumably built from `url`
    json: true,
    resolveWithFullResponse: true
  };

  return httpRequest(requestOptions)
    .then(response => {
      const status = response.statusCode;
      const tmDiff = moment().diff(tmStart);
      return createResponseObject({
        status,
        payload: response.body,
      });
    })
    .catch(err => { .... });
};
Parallel calls:
const apiResponses = yield {
  api1: foo(ctx, route),
  api2: bar(ctx)
};
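For reference, a rough equivalent without koa 1.x generators, assuming foo and bar return promises as in the snippet above; note that the combined object only resolves once both calls have finished:
// hypothetical async/await version of the same parallel fan-out
const [api1, api2] = await Promise.all([foo(ctx, route), bar(ctx)]);
const apiResponses = { api1, api2 };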
Yet we've seen situations in our response-time charts where, if one service is slow, the latency seems to be mirrored by the other, separate service. It doesn't matter which services they are; the pattern has been noticed across more than 5 services that may be called in parallel. Does anyone have any ideas what could be causing this apparent shared latency?
If the latency is caused by a temporarily slowed network connection, then it would be logical to expect both parallel requests to feel that same effect. ping or tracert during the slowdown might give you useful diagnostics to see if it's a general transport issue. If your node.js server (which runs Javascript single threaded) was momentarily busy doing something else with the CPU (serving another request, garbage collecting, etc...), then that would affect the apparent responsiveness of API calls just because it took a moment for node.js to get around to processing the waiting responses.
There are tools that monitor the responsiveness of your own http server on a continual basis (you can set whatever monitoring interval you want). If you have a CPU-hog somewhere, those tools would show a similar lag in responsiveness. There are also tools that monitor the health of your network connection which would also show a connectivity glitch. These are the types of monitoring tools that people whose job it is to maintain a healthy server farm might use. I don't have a name handy for either one, but you can presumably find such tools by searching.
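As a rough, home-grown alternative to such a tool, a simple event-loop lag probe can flag the moments when the single Node.js thread is busy; the interval and threshold below are arbitrary choices, not recommendations:
const INTERVAL_MS = 500;
let last = Date.now();

setInterval(() => {
  const now = Date.now();
  const lag = now - last - INTERVAL_MS; // how late the timer fired
  if (lag > 50) {
    console.warn("event loop lag: " + lag + "ms"); // something held up the event loop
  }
  last = now;
}, INTERVAL_MS);
Correlating spikes from a probe like this with the latency spikes in the charts would help distinguish a busy event loop from a genuine network slowdown.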
I have a Loopback API with a model Student.
How do I write unit tests for the node API methods of the Student model without calling the REST API? I can't find any documentation or examples for testing the model through the node API itself.
Can anyone please help?
Example with testing the count method
// With this test file located in ./test/thistest.js
var assert = require('assert');
var app = require('../server');

describe('Student node api', function () {
  it('counts initially 0 students', function (cb) {
    app.models.Student.count({}, function (err, count) {
      if (err) return cb(err);
      assert.deepEqual(count, 0);
      cb(); // tell mocha the async test has finished
    });
  });
});
This way you can test the node API, without calling the REST API.
However, built-in methods are already tested by StrongLoop, so it is pretty much pointless to test those through the node API. But for remote (i.e. custom) methods it can still be interesting.
EDIT:
The reason this way of doing things is not documented explicitly is that, ultimately, you will need to test your complete REST API anyway, to ensure not only that the node API works as expected but also that ACLs are properly configured, return codes are correct, etc. So in the end you write 2 different tests for the same thing, which is a waste of time. (Unless you like writing tests :)
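For completeness, the REST-level counterpart of the count test could look roughly like the sketch below; it assumes supertest is available and that the model is exposed at the default REST root under the plural name /api/Students, which may differ in your project:
var request = require("supertest");
var assert = require("assert");
var app = require("../server");

describe("Student REST api", function () {
  it("counts initially 0 students over REST", function (cb) {
    request(app)
      .get("/api/Students/count") // default LoopBack-style route; adjust to your REST root
      .expect(200)
      .end(function (err, res) {
        if (err) return cb(err);
        assert.deepEqual(res.body.count, 0);
        cb();
      });
  });
});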