My end-to-end tests run against a local Docker environment of the microservice I'm testing. From the end-to-end test I want to mock out a request made from inside that Docker environment, but nock just doesn't seem to let me.
const nock = require('nock');

const requestNock = nock('https://google.com').post('/foo').reply(200);
makeRequest('127.0.0.1/foo'); // This endpoint then calls https://google.com/foo from inside the container
requestNock.isDone(); // Returns false
Is something like this possible? If not, is there any solution for mocking out an external API's response for a request made from inside a local Docker environment?
This is not something Nock is designed to handle. It works by intercepting outbound HTTP requests inside a single Node process.
Without knowing more about what your Docker service does, it's hard to say what your options are. If the service inside the Docker container has no logic and its only purpose is to proxy requests during your tests, then I'd argue that it should be removed from the test and you can nock out the requests inside the single process. I would call these integration tests.
Since you specified your tests are end-to-end, I have to assume the Docker service is more than a proxy. However, you usually don't want to nock any requests in end-to-end tests, as it negates their advantages over integration tests.
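For contrast, here is a minimal sketch (mine, not from the question) of the case nock is designed for: the outbound request originates in the same Node process as the test, so the interceptor fires.

const nock = require('nock');
const https = require('https');

const scope = nock('https://google.com').post('/foo').reply(200, { ok: true });

// This request is made from the test process itself, so nock can intercept it.
const req = https.request(
  { hostname: 'google.com', path: '/foo', method: 'POST' },
  (res) => {
    console.log(res.statusCode); // 200, served by the interceptor
    console.log(scope.isDone()); // true
  }
);
req.end();

A request made by a process inside the Docker container never passes through this patched https module, which is why isDone() stays false in the setup above.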
Related
I have a Spring Boot + WebFlux application with the following structure:
A controller ControllerA that uses a service ServiceA.
ServiceA uses a WebClient to call a remote microservice, microservice A.
The URL of the endpoint in microservice A is configurable in application.yaml.
Now I would like to test it using SpringBootTest and JUnit 5 Jupiter with the PER_CLASS test lifecycle. For this, I want to mock the remote microservice using OkHttp MockWebServer. The server is started before the first test method (@BeforeAll) and stopped after all tests (@AfterAll).
The problem is that the Spring container starts first, and then the test methods (and their before and after callbacks) are invoked. When the container starts it initializes the beans, and also ServiceA with the URL of microservice A from application.yaml. Only afterwards is MockWebServer started and I can query it for its URL, but at that point it's already too late and ServiceA is already instantiated with the 'wrong' (real) URL.
I got it to work in various ways using various hacks (e.g. fixing the port by overriding the URL with @TestPropertySource and then forcing MockWebServer to use this port), but all of them have disadvantages (what if the port is taken by a parallel CI job?).
As we are only going to have more and more such integration tests in our codebase, we need a good solution for this. I am thinking of writing a custom TestExecutionListener that runs before all of the Spring ones (and starts the mock server), but I'm not sure whether this would work (I don't yet know how I would pass the port to the container, and I have never implemented such listeners).
Question: before I implement listeners or some other custom solution, I wanted to make sure that I really need to. I searched the web a bit and didn't find anything, but I don't think we are the first team to write such tests; I'm probably just searching badly. Does anybody know of a library that solves this problem? Or maybe a neat Spring trick that I'm currently missing but that would help?
My Vue.js front-end makes API calls using Axios.
I'm writing unit tests and integration tests with Jest. I use axios-mock-adapter to mock the API, but I need to mock every endpoint.
To make it easier, I'd like to use json-server. On a development machine there is no problem: I launch json-server, then I launch the Jest tests.
My question is about Travis CI:
How do I launch json-server?
How can I be sure to have an available port?
How/when do I stop json-server?
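One way to sidestep the port and lifecycle questions (a sketch of mine, not from the post; file names and the environment variable are placeholders) is to start json-server programmatically from Jest's globalSetup, so the mock API lives and dies with the test run on Travis as well as locally:

// jest.global-setup.js
const jsonServer = require('json-server');

module.exports = async () => {
  const app = jsonServer.create();
  app.use(jsonServer.defaults());
  app.use(jsonServer.router('db.json'));

  // Listen on port 0 so the OS picks a free port (no clash with parallel CI jobs).
  const server = app.listen(0);
  await new Promise((resolve) => server.once('listening', resolve));

  // Hand the chosen base URL to the test suites and keep a handle for teardown.
  process.env.MOCK_API_URL = `http://127.0.0.1:${server.address().port}`;
  global.__MOCK_SERVER__ = server;
};

// jest.global-teardown.js
// module.exports = async () => global.__MOCK_SERVER__.close();

With globalSetup/globalTeardown wired into jest.config.js, Travis only has to run jest; there is no separate server process to start or stop.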
I'm working on a Vue.js application that retrieves some data with Ajax calls. In the dev environment I play with the application in the browser using the built-in server, and I mock the external resources with an API stubber that runs on its own server. Since both pieces of software use Node, I'm wondering if there's a way to run both with a single command, serving the Vue.js application code along with some static files for the mocked calls (which are not all GETs and require parameters, so sticking the JSON files in the app's public folder wouldn't work).
Edit: let me try to clarify. The Vue.js application will interact with a backend service that is still being developed. On my workstation I can play with the application by just running npm run serve, but since there's no backend service I won't be able to try the most interesting bits. Right now I'm running saray alongside my application, serving some static JSON files that mock the server responses. It works fine, but I'm effectively running two separate HTTP servers, and since I'm new to the whole Vue, npm & JavaScript ecosystem, I was wondering if there's a better way, for instance serving the mock responses from the same (dev) server that's serving the Vue application.
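One possible approach (my sketch, assuming a Vue CLI project with webpack-dev-server 3, where the hook is called before; newer versions rename it) is to register the mock routes on the dev server itself in vue.config.js, so a single npm run serve serves both the app and the mocked calls:

// vue.config.js
module.exports = {
  devServer: {
    // webpack-dev-server exposes its internal Express app here, so arbitrary
    // mock endpoints (non-GET verbs, route parameters) can be added.
    before(app) {
      app.post('/api/login', (req, res) => {
        res.json({ token: 'fake-token' }); // placeholder mock response
      });

      app.get('/api/items/:id', (req, res) => {
        res.json({ id: req.params.id, name: 'mocked item' });
      });
    },
  },
};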
Rationale:
I am using Docker-in-Docker (dind) with the --privileged flag in my CI to build images from source code. I only need the build, tag, pull, and push commands and want to avoid all other commands, such as run (considered the root of all security problems).
Note: I just want to restrict Docker's remote API and not the daemon itself!
My best options so far:
As Docker clients communicate with dind over HTTP (and not a socket), I thought I could put a proxy in front of the dind host and filter all the paths (e.g. POST /containers/create) to limit API access to building/pushing images only.
What I want to avoid:
I would never, ever bind-mount the Docker socket of the host machine!
Update:
It seems that the API routers are hardcoded in Docker daemon.
Update 2:
I went with my best option so far and configured an nginx server that blocks specific paths (e.g. /containers). This works fine for building images, as the build is done inside the dind image and my API restrictions don't break the build process.
HOWEVER: this looks really ugly!
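For comparison, the same path-filtering idea can be sketched outside nginx; the snippet below is a hypothetical Node version using express and http-proxy-middleware, and the dind address, port, and allow-list patterns are my assumptions rather than part of the setup above:

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Only the API paths needed for build, tag, pull, and push are let through.
const allowed = [
  /^(\/v[\d.]+)?\/(_ping|version)$/,     // version negotiation by the CLI
  /^\/v[\d.]+\/build/,                   // docker build
  /^\/v[\d.]+\/images\/create/,          // docker pull
  /^\/v[\d.]+\/images\/.+\/(tag|push)/,  // docker tag / docker push
];

app.use((req, res, next) => {
  if (allowed.some((re) => re.test(req.url))) return next();
  res.status(403).json({ message: 'endpoint blocked by API proxy' });
});

// Everything that survives the allow-list is forwarded to the dind daemon.
app.use(createProxyMiddleware({ target: 'http://dind:2375' }));

app.listen(2376);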
Docker itself doesn't provide any low level security on the API. It's basically an on or off switch. You have access to the entire thing, or not.
Securing API endpoints would require modifying Docker to include authentication and authorisation at a lower granularity or, as you suggested, adding an API proxy in between that implements your security requirements.
Something you might want to look at is Osprey from Mulesoft. It can generate API middleware including authentication mechanisms from a simple RAML definition. I think you can get away with documenting just the components you want to allow through...
#%RAML 0.8
title: Yan Foto Docker API
version: v1
baseUri: https://dind/{version}
securitySchemes:
  - token_auth:
      type: x-my-token
securedBy: [token_auth]
/build:
  post:
    queryParameters:
      dockerfile: string
      t: string
      nocache: string
      buildargs: string
/images:
  /{name}:
    /tag:
      post:
        queryParameters:
          tag: string
Osprey produces the API middleware for you, controlling everything; you then proxy whatever gets through the middleware on to Docker.
You could use OAuth 2.0 scopes if you want to get fancy with permissions.
The Docker client is a bit dumb when it comes to auth, but you can attach custom HTTP headers to each request, which could include a key; these headers can be configured via HttpHeaders in config.json.
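For example, the client-side ~/.docker/config.json could carry such a key on every request (the header name and value below are placeholders), and the proxy in front of dind could reject requests that lack it:

{
  "HttpHeaders": {
    "X-API-Key": "my-secret-token"
  }
}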
From a theoretical perspective, I believe the answer is no. When you build an image, many of the build steps create and run a container with your requested command, so if you manage to disable running containers, the side effect should be to also disable building images. That said, if you restrict running docker commands to a trusted user, and that user builds an untrusted Dockerfile, the result of that build should be isolated to the container, as long as you aren't removing container protections with various CLI options.
Edit: I haven't had the time to play with it myself, but Twistlock may provide the functionality you need without creating and relying on an API proxy.
If there's any. I'm not really into web technologies, but I have to understand some awful code written by someone else in Node.
All Node.js apps are npm modules once you execute npm init. After that point, you can publish the module by executing npm publish, assuming you gave it a unique name in package.json.
If the app isn't meant to return anything, then there is no need to export anything. However, it's almost always worth exporting something, to allow unit testing deeper than just starting the app as an HTTP server and sending it requests.
It is also sometimes useful to modify the way your app runs based on whether it is being required as a module or executed as an app. For example, say I have an Express REST API server. I can run it as a standalone server on api.example.com, and then require it into another application and run it from that application directly to avoid CORS issues, without having to duplicate code or deal with git submodules; instead I simply npm install the API into the application that needs it and attach it just like you would a router: www.example.com/api
app.use('/api', require('@myusername/api.example.com'))
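A common companion pattern (my sketch; file, port, and route names are illustrative) is to export the Express app but only call listen() when the file is executed directly, so the same code works standalone, mounted into another app, or under a test runner:

// server.js
const express = require('express');

const app = express();
app.get('/status', (req, res) => res.json({ ok: true }));

if (require.main === module) {
  // Executed directly with `node server.js`: run as the standalone server.
  app.listen(3000, () => console.log('API listening on port 3000'));
} else {
  // Required as a module: let the caller mount it with app.use('/api', ...) or drive it from tests.
  module.exports = app;
}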