I have a JSON API built with Koa which I am trying to cover with integration tests.
A simple test would look like this:
describe("GET: /users", function() {
it ("should respond", function (done) {
request(server)
.get('/api/users')
.expect(200, done);
});
});
Now the issue comes when the actions behind a controller - let's say saveUser at POST /users - use external resources. For instance, I need to validate the user's phone number.
My controller looks like this:
save: async function(ctx, next) {
const userFromRequest = await parse(ctx);
try {
// validate data
await ctx.repo.validate(userFromRequest);
// validate mobile code
await ctx.repo.validateSMSCode(
userFromRequest.mobile_number_verification_token,
userFromRequest.mobile_number.prefix + userFromRequest.mobile_number.number
);
const user = await ctx.repo.create(userFromRequest);
return ctx.data(201, { user });
} catch (e) {
return ctx.error(422, e.message, e.meta);
}
}
I was hoping to be able to mock ctx.repo on the request object, but I can't seem to get hold of it from the test, which means that my tests are actually hitting the phone number verification service.
Are there any ways I could avoid hitting that verification service?
Have you considered using a mocking library like https://github.com/mfncooper/mockery?
Typically, when writing tests requiring external services, I mock the service client library module. For example, using mocha:
const mockery = require('mockery');
const repo = require('your-repo-module');

before(function() {
    mockery.enable();
    // stub out the call that hits the external SMS service
    repo.validateSMSCode = async function() { return true; };
    mockery.registerMock('your-repo-module', repo);
});
This way, every time you require your-repo-module, the mocked module will be loaded rather than the original one. Until you disable the mock, obviously...
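To keep the mock from leaking into other test files, you can tear it down once the suite is done; a minimal sketch:
after(function() {
    // restore normal require() behaviour after this suite
    mockery.deregisterMock('your-repo-module');
    mockery.disable();
});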
app.context is the prototype from which ctx is created. You may
add additional properties to ctx by editing app.context. This is
useful for adding properties or methods to ctx to be used across your
entire app, which may be more performant (no middleware) and/or easier
(fewer require()s) at the expense of relying more on ctx, which could
be considered an anti-pattern.
app.context.someProp = "Some Value";
app.use(async (ctx) => {
console.log(ctx.someProp);
});
For your sample, you can re-define app.context.repo.validateSMSCode like this, assuming you have the following setup lines in your test:
import app from '../app'
import supertest from 'supertest'
app.context.repo.validateSMSCode = async function(token, phoneNumber) {
    // Your logic here.
};
const request = supertest.agent(app.listen())
After this re-definition, the validateSMSCode stub you define in your test will be used instead of the original method.
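Putting it together, a test for the save action could look roughly like this (the stub body and the expected payload/status are assumptions based on the controller shown in the question):
import app from '../app'
import supertest from 'supertest'

describe('POST: /users', function () {
    let request

    before(function () {
        // stub the external SMS verification before any request is made
        app.context.repo.validateSMSCode = async function (token, phoneNumber) {
            return true
        }
        request = supertest.agent(app.listen())
    })

    it('creates a user without hitting the verification service', function (done) {
        request
            .post('/api/users')
            .send({ /* user payload */ })
            .expect(201, done)
    })
})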
https://github.com/koajs/koa/blob/v2.x/docs/api/index.md#appcontext
https://github.com/koajs/koa/issues/652
I'm new to Next.js and I'm trying to understand the suggested structure and how to deal with data between pages or components.
For instance, inside my page home.js, I fetch an internal API called /api/user.js which returns some user data from MongoDB. I am doing this by using fetch() to call the API route from within getServerSideProps(), which passes various props to the page after some calculations.
From my understanding, this is good for SEO, since props get fetched/modified server-side and the page gets them ready to render. But then I read in the Next.js documentation that you should not use fetch() to call an API route in getServerSideProps(). So what am I supposed to do to comply with good practice and good SEO?
The reason I'm not doing the required calculations for home.js in the API route itself is that I need more generic data from this API route, as I will use it in other pages as well.
I also have to consider caching, which client-side is very straightforward using SWR to fetch an internal API, but server-side I'm not yet sure how to achieve it.
home.js:
export default function Page({ prop1, prop2, prop3 }) {
// render etc.
}
export async function getServerSideProps(context) {
const session = await getSession(context)
let data = null
var aArray = [], bArray = [], cArray = []
const { db } = await connectToDatabase()
function shuffle(array) {
var currentIndex = array.length, temporaryValue, randomIndex;
while (0 !== currentIndex) {
randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex -= 1;
temporaryValue = array[currentIndex];
array[currentIndex] = array[randomIndex];
array[randomIndex] = temporaryValue;
}
return array;
}
if (session) {
const hostname = process.env.NEXT_PUBLIC_SITE_URL
const options = { headers: { cookie: context.req.headers.cookie } }
const res = await fetch(`${hostname}/api/user`, options)
const json = await res.json()
if (json.data) { data = json.data }
// do some math with data ...
// connect to MongoDB and do some comparisons, etc.
}
// ... eventually return the computed values as props
return { props: { prop1, prop2, prop3 } }
}
But then I read in the Next.js documentation that you should not use fetch() to call an API route in getServerSideProps().
You want to use the logic that's in your API route directly in getServerSideProps, rather than calling your internal API. That's because getServerSideProps runs on the server just like the API routes (making a request from the server to the server itself would be pointless). You can read from the filesystem or access a database directly from getServerSideProps. Note that this only applies to calls to internal API routes - it's perfectly fine to call external APIs from getServerSideProps.
From Next.js getServerSideProps documentation:
It can be tempting to reach for an API Route when you want to fetch
data from the server, then call that API route from
getServerSideProps. This is an unnecessary and inefficient approach,
as it will cause an extra request to be made due to both
getServerSideProps and API Routes running on the server.
(...) Instead, directly import the logic used inside your API Route
into getServerSideProps. This could mean calling a CMS, database, or
other API directly from inside getServerSideProps.
(Note that the same applies when using getStaticProps/getStaticPaths methods)
Here's a small refactor example that allows you to have logic from an API route reused in getServerSideProps.
Let's assume you have this simple API route.
// pages/api/user
export default async function handler(req, res) {
// Using a fetch here but could be any async operation to an external source
const response = await fetch(/* external API endpoint */)
const jsonData = await response.json()
res.status(200).json(jsonData)
}
You can extract the fetching logic to a separate function (can still keep it in api/user if you want), which is still usable in the API route.
// pages/api/user
export async function getData() {
const response = await fetch(/* external API endpoint */)
const jsonData = await response.json()
return jsonData
}
export default async function handler(req, res) {
const jsonData = await getData()
res.status(200).json(jsonData)
}
But also allows you to re-use the getData function in getServerSideProps.
// pages/home
import { getData } from './api/user'
//...
export async function getServerSideProps(context) {
const jsonData = await getData()
//...
}
You want to use the logic that's in your API route directly in
getServerSideProps, rather than calling your internal API. That's
because getServerSideProps runs on the server just like the API routes
(making a request from the server to the server itself would be
pointless). You can read from the filesystem or access a database
directly from getServerSideProps
I admit what you say is correct, but the problem still exists. Assume your backend is already written and your APIs are secured; pulling the logic out of a finished, secured backend seems annoying and a waste of time and energy. Another disadvantage is that once you pull the logic out of the backend, you must rewrite your own code to handle errors, authenticate users and validate requests, all of which already exist in your backend. I wondered whether it is possible to call APIs within Next.js without pulling the logic out of the middlewares. The answer is yes; here is my solution:
npm i node-mocks-http
import httpMocks from "node-mocks-http";
import newsController from "./api/news/newsController";
import logger from "../middlewares/logger";
import dbConnectMid from "../middlewares/dbconnect";
import NewsCard from "../components/newsCard";
export default function Home({ news }) {
return (
<section>
<h2>Latest News</h2>
<NewsCard news={news} />
</section>
);
}
export async function getServerSideProps() {
let req = httpMocks.createRequest();
let res = httpMocks.createResponse();
// run the Express-style middlewares in order; each one receives a next()
// that recursively invokes the following middleware in the list
async function callMids(req, res, index, ...mids) {
index = index || 0;
if (index <= mids.length - 1)
await mids[index](req, res, () => callMids(req, res, ++index, ...mids));
}
await callMids(
req,
res,
null,
dbConnectMid,
logger,
newsController.sendAllNews
);
return {
props: { news: res._getJSONData() },
};
}
Important note: don't forget to use await next() instead of next() in all of your middlewares if you use my code, or else you will get an error.
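For example, a middleware written for this pattern might look something like this (just a sketch; your actual logger/dbconnect middlewares will differ):
// middlewares/logger.js (sketch)
export default async function logger(req, res, next) {
    console.log(`${req.method} ${req.url}`);
    // awaiting next() makes callMids() resolve only after the downstream
    // middlewares and the controller have finished writing the response
    await next();
}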
Another solution: next-connect has a run method that does something similar to my code, but personally I had some problems with it; here is its link:
next-connect run method to call Next.js APIs in getServerSideProps
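For completeness, the next-connect variant looks roughly like this (a sketch; the exact run() API depends on the next-connect version you use):
import nc from 'next-connect'
import httpMocks from 'node-mocks-http'
import dbConnectMid from '../middlewares/dbconnect'
import newsController from './api/news/newsController'

// same handler shape you would export from the API route
const handler = nc().use(dbConnectMid).get(newsController.sendAllNews)

export async function getServerSideProps() {
    const req = httpMocks.createRequest({ method: 'GET', url: '/api/news' })
    const res = httpMocks.createResponse()
    // run() pushes req/res through the matching middlewares and resolves when they finish
    await handler.run(req, res)
    return { props: { news: res._getJSONData() } }
}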
Just try using useSWR; example below:
import useSWR from 'swr'
import React from 'react';
//important: the fetcher should resolve to the parsed result (json), not the raw Response
const fetcher = (url) => fetch(url).then((res) => res.json());
const Categories = () => {
//getting data and error
const { data, error } = useSWR('/api/category/getCategories', fetcher)
if (error) return <div>Failed to load</div>
if (!data) return <div>Loading...</div>
if (data){
// {data} is completed, it's ok!
//your code here to make something with {data}
return (
<div>
{/* something here, for example {data.name} */}
</div>
)
}
}
export default Categories
Please note that fetch only supports absolute URLs, which is why I don't like to use it.
P.S. According to the docs, you can even use useSWR with SSR.
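A rough sketch of combining SSR with useSWR (assuming SWR 1.x+ and a hypothetical getCategories() helper you can import and run on the server):
import useSWR from 'swr'
// hypothetical server-side helper that reads categories straight from the database
import { getCategories } from '../lib/categories'

const fetcher = (url) => fetch(url).then((res) => res.json())

export default function Categories({ fallbackData }) {
    // renders immediately with the server-fetched data, then revalidates client-side
    const { data, error } = useSWR('/api/category/getCategories', fetcher, { fallbackData })
    if (error) return <div>Failed to load</div>
    if (!data) return <div>Loading...</div>
    return <div>{/* render something with {data} here */}</div>
}

export async function getServerSideProps() {
    const categories = await getCategories()
    return { props: { fallbackData: categories } }
}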
I'm walking through the Javascript demos of pg-promise-demo and I have a question about the route /api/users/:name.
Running this locally works, the user is entered into the database, but is there a reason this wouldn't be a POST? Is there some sort of advantage to creating a user in the database using GET?
// index.js
// --------
app.get('/api/users/:name', async (req, res) => {
    try {
        const data = await db.task('add-user', async (t) => {
            const user = await t.users.findByName(req.params.name);
            return user || t.users.add(req.params.name);
        });
        res.json(data);
    } catch (err) {
        // do something with error
    }
});
For brevity I'll omit the code for t.users.findByName(name) and t.users.add(name) but they use QueryFile to execute a SQL command.
EDIT: Update link to pg-promise-demo.
The reason is explained right at the top of that file:
IMPORTANT:
Do not re-use the HTTP-service part of the code from here!
It is an over-simplified HTTP service with just GET handlers, because:
This demo is to be tested by typing URL-s manually in the browser;
The focus here is on a proper database layer only, not an HTTP service.
I think it is pretty clear that you are not supposed to follow the HTTP implementation of the demo, rather its database layer only. The demo's purpose is to teach you how to organize a database layer in a large application, and not how to develop HTTP services.
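That said, nothing stops you from putting the same database layer behind a POST in your own app. A minimal sketch, reusing the repo methods from the demo and assuming express.json() body parsing is enabled:
// POST /api/users with the name in the request body
app.post('/api/users', async (req, res) => {
    try {
        const user = await db.task('add-user', async (t) => {
            const existing = await t.users.findByName(req.body.name);
            return existing || t.users.add(req.body.name);
        });
        res.status(201).json({ success: true, data: user });
    } catch (err) {
        res.status(500).json({ success: false, error: err.message });
    }
});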
In my SailsJS project I use a Service as an API wrapper to communicate with an external API.
For example, I want to have an init function that takes parameters like baseUrl and apiKey, and after that just call sendRequest(endpoint).
However, today I figured out that a service cannot be instantiated; it is only available globally. Because SailsJS works asynchronously, the variables just get overwritten each time.
So my question in short: what's the right way to use a "class" with class variables, where I can have multiple instances, like I know them from the PHP world?
First I want to instantiate my API wrapper with a project, which contains basic data such as baseUrl and apiKey.
const ApiService = ServiceBase.getApiService(project);
ApiService.getEndpoint(endpoint).then(async (response) => {
    // handle response
});
Function getApiService.
Here I try to set the project variables.
getApiService: (project) => {
this.setConfigParam('apiKey', project.api_key);
this.setConfigParam('baseUrl', project.url);
return ShopApiService;
}
Function getEndpoint (Shortcut for GET Requests)
getEndpoint: async function(endpoint) {
if (ShopApiService.isTokenExpired()) {
await this.authorizeRequest();
}
return this.sendRequest(endpoint);
}
authorizeRequest (Sets accessToken for following requests)
authorizeRequest: async function() {
let response = await this.postRequest('auth/login', {apiKey: module.exports.config.apiKey});
if (response && response.statusCode == 200) {
module.exports.token.updated_at = Date.now();
module.exports.config.accessToken = response.body.accessToken;
return true;
}
return false;
}
My problem is that the asynchronous calls overwrite variables like accessToken or baseUrl, so by the time the request to acquire the access token has finished, it may belong to a different request.
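For clarity, what I'm after is something like keeping the per-project state on an instance instead of on the shared service module. A rough sketch of the idea (field names follow my example above; fetch just stands in for whatever HTTP client is already in use):
// api/ShopApiClient.js - one instance per project, no shared module state
class ShopApiClient {
    constructor({ baseUrl, apiKey }) {
        this.baseUrl = baseUrl;
        this.apiKey = apiKey;
        this.accessToken = null;
        this.tokenUpdatedAt = 0;
    }

    isTokenExpired() {
        // assumption: treat tokens as stale after one hour
        return Date.now() - this.tokenUpdatedAt > 60 * 60 * 1000;
    }

    async authorizeRequest() {
        const response = await this.postRequest('auth/login', { apiKey: this.apiKey });
        if (response && response.statusCode === 200) {
            this.accessToken = response.body.accessToken;
            this.tokenUpdatedAt = Date.now();
            return true;
        }
        return false;
    }

    async getEndpoint(endpoint) {
        if (this.isTokenExpired()) {
            await this.authorizeRequest();
        }
        return this.sendRequest(endpoint);
    }

    async sendRequest(endpoint) {
        const res = await fetch(`${this.baseUrl}/${endpoint}`, {
            headers: { Authorization: `Bearer ${this.accessToken}` }
        });
        return { statusCode: res.status, body: await res.json() };
    }

    async postRequest(endpoint, payload) {
        const res = await fetch(`${this.baseUrl}/${endpoint}`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(payload)
        });
        return { statusCode: res.status, body: await res.json() };
    }
}

module.exports = ShopApiClient;

// usage: each project gets its own isolated instance
// const api = new ShopApiClient({ baseUrl: project.url, apiKey: project.api_key });
// const response = await api.getEndpoint(endpoint);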
I worked on a project with the Spring Boot Java framework where the team automated API docs generation. Every time you ran BDD/integration-style tests, an API Blueprint file was created from the mocha tests, and then an HTML document was generated from that blueprint. I liked this approach, as it has two advantages:
1) API docs are always correct and up-to-date
2) It saves time, because there is no need to write another documentation file (like apidoc).
Has anyone tried this and has a working example for Node projects? I found the api-doc-test plugin, but its documentation is limited. Ideally, I would just like to run:
mocha --recursive
which would generate api-doc.html and place it under test/tmp/.
I have looked at Swagger, but I really don't want to specify endpoint information twice; it would be awesome to write it once in BDD tests and get both results (tests + docs) at the same time.
https://github.com/stackia/test2doc.js
I'm working on this project, which enables generating documents (currently only API Blueprint) from BDD tests, which is exactly what you need.
Test code example:
const doc = require('test2doc')
const request = require('supertest') // We use supertest as the HTTP request library
require('should') // and use should as the assertion library
// For Koa, you should export app.listen() or app.callback() in your app entry
const app = require('./my-express-app.js')
after(function () {
doc.emit('api-documentation.apib')
})
doc.group('Products').is(doc => {
describe('#Products', function () {
doc.action('Get all products').is(doc => {
it('should get all products', function () {
// Write specs towards your API endpoint as you would normally do
// Just decorate with some utility methods
return request(app)
.get(doc.get('/products'))
.query(doc.query({
minPrice: doc.val(10, 'Only products of which price >= this value should be returned')
}))
.expect(200)
.then(res => {
const body = doc.resBody(res.body)
body.desc('List of all products')
.should.not.be.empty()
body[0].should.have.properties('id', 'name', 'price')
body[0].price.desc('Price of this product').should.be.a.Number
})
})
})
})
})
I wonder which way I should organize my routing in Express:
Params parsing in Controller
router.get('/users/:id', UserController.get);

class UserController {
    get(req, res) {
        var id = req.params.id;
        UserModel.get(id, function(user) {
            res.send(user);
        });
    }
}
Params parsing in Route
router.get('/users/:id', function(req, res) {
    var id = req.params.id;
    UserController.get(id, function(user) {
        res.json(user);
    });
});
class UserController {
get(id, fn) {
UserModel.get(id, fn);
}
}
I find the second version (params parsing in the route) easier for:
1) unit testing
2) handling changes in the URL params or request body
But most of the examples I found use the first version. Why?
If you consider a much larger, messier real world application, with route names that no longer match controller names etc., it might be beneficial to place the full routing table (all of the router.xxx calls) in one place, such as a routes.js. For a given url, this makes it much simpler for a new developer to figure out which code handles which url.
If you included all of the parameter parsing in your routes.js, it would become really messy and you'd likely lose some of the benefit of having collected all that into one file in the first place.
That said, there's no reason why you can't have the best of both worlds by separating the routing, the parameter parsing/response formatting, and the controller logic each into their own modules.
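A minimal sketch of that split (file names are just one possible convention, and the model is assumed to expose a promise-based get):
// routes.js - the full routing table in one place
const express = require('express');
const router = express.Router();
const users = require('./handlers/users');

router.get('/users/:id', users.get);

module.exports = router;

// handlers/users.js - HTTP-specific parsing and response formatting only
const UserController = require('../controllers/UserController');

exports.get = async (req, res, next) => {
    try {
        const user = await UserController.get(req.params.id);
        res.json(user);
    } catch (err) {
        next(err);
    }
};

// controllers/UserController.js - plain logic, easy to unit test
const UserModel = require('../models/UserModel');

exports.get = (id) => UserModel.get(id);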