Integration Tests with Docker and Bitbucket pipelines - node.js

I would like to run my integration tests as part of the Bitbucket Pipelines CI. My integration tests exercise a Node.js backend that runs against an empty MongoDB database. To enable this I want to create a Docker image that Bitbucket Pipelines can pull from a Docker image registry.
My bitbucket-pipelines.yml will be something like:
image: <my image with nodejs and a mongodb linked to it>
pipelines:
  default:
    - step:
        script:
          - npm test
Now I only need to create a docker image with nodejs and mongodb configured properly. I am able to build an environment by creating the following docker-compose.yml file:
version: "2"
services:
web:
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
- "9090:8080"
links:
- mongo
mongo:
image: mongo
ports:
- "27018:27017"
My Dockerfile:
FROM node:7.9
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3000
CMD ["npm", "run", "dev"]
Problem - Question
I can run my environment locally with Docker Compose, but how can I build a single image, instead of using Docker Compose, so I can publish that image publicly and use it in my Bitbucket CI? I am still new to Docker, but I already understood from the documentation that installing MongoDB on top of my Node.js image is an anti-pattern.

Bitbucket Pipelines doesn't have native support for Docker Compose yet.
However, you can define up to 3 service containers in bitbucket-pipelines.yml. Documentation is available at: https://confluence.atlassian.com/bitbucket/service-containers-for-bitbucket-pipelines-874786688.html
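For the setup above, a minimal sketch of what that could look like (assuming the tests reach MongoDB on localhost:27017, which is where Pipelines exposes service containers; the node:7.9 image and mongo service mirror the question's Dockerfile and compose file):

image: node:7.9

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - mongo

definitions:
  services:
    mongo:
      image: mongo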

Related

How to build a react/vue application outside of a docker container

I have several applications (Vue and React; each is built on a different version of Node). I want to set up the project deployment so that each project runs in a Docker container with the correct Node version. The build (npm i && npm run build) should happen in the container, but I want to hand the result from the container over to /var/www/project_name on the server itself.
Next, I'll set up a container with nginx which, depending on the subdomain, will serve the desired build.
My question is: how do I return the folder with the built files from the container to the host operating system?
My docker-compose file:
version: "3.1"
services:
redis:
restart: always
image: redis:alpine
container_name: redis
build-adminapp:
build: adminapp/
container_name: adminapp
working_dir: /var/www/adminapp
volumes:
- ./adminapp:/var/www/adminapp
build-clientapp:
build: clientapp/
container_name: clientapp
working_dir: /var/www/clientapp
volumes:
- ./clientapp:/var/www/clientapp`
My Dockerfiles:
FROM node:10-alpine as build
# Create app directory
WORKDIR /var/www/adminapp/
COPY . /var/www/adminapp/
RUN npm install
RUN npm run build
Second Dockerfile:
FROM node:12-alpine as build
# Create app directory
WORKDIR /var/www/clientapp/
COPY . /var/www/clientapp/
RUN npm install
RUN npm run build
If you already have a running container, you can use the docker cp command to move files between the local machine and Docker containers (it also works with stopped containers).
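For example, a sketch under the assumption that npm run build writes its output to dist/ inside the adminapp container (adjust the path to whatever your build actually produces):

# Copy the built files out of the adminapp container onto the host:
docker cp adminapp:/var/www/adminapp/dist /var/www/adminapp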

Cannot run a (node.js, websocket) container on EC2. Log results posted below

I have been trying to figure this out for a while now. I am working with Node and Next.js to implement WebRTC using socket.io. I containerized my project and it runs fine on my local machine. I uploaded it to EC2 by following a YouTube tutorial, but whenever I run the task/container it stops with the log results below, saying it cannot find the 'pages' directory, which I did mount in the compose file.
docker-compose.yml
version: '3'
services:
  app:
    image: webrtc
    build: .
    ports:
      - 3000:3000
    volumes:
      - ./pages:/app/pages
      - ./public:/app/public
      - ./styles:/app/styles
      - ./hooks:/app/hooks
Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY next.config.js ./next.config.js
CMD ["yarn", "dev"]
I think you need to COPY the whole directory, including "pages"; currently you're only copying the config and the package files.
Instead of COPY next.config.js ./next.config.js, try COPY . . if feasible.
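A sketch of the adjusted Dockerfile (assuming a .dockerignore that excludes node_modules, so the host's install doesn't overwrite the one produced by yarn install):

FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# Copy the whole project (pages/, public/, styles/, hooks/, next.config.js) into the image
# so the app no longer depends on bind mounts that don't exist on EC2:
COPY . .
CMD ["yarn", "dev"]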
Otherwise, if using docker-compose with the volumes is required, make sure the mapping to EFS is set up correctly: https://docs.docker.com/cloud/ecs-compose-features/#persistent-volumes
This would be a related matter then: How to mount EFS inside a docker container?

Docker and NodeJS: could not connect to the container

I'm trying to dockerize a simple NodeJS API. I've tested it as a standalone app and it works, but after dockerizing it I can't connect to the container. In the image below you can see two important facts: the container is permanently restarting, and I could not connect to it.
After trying to establish a connection using a GET request, the container begins to restart, and a minute later it is up for a few short seconds.
This is my Dockerfile:
FROM node:lts-buster-slim
# Create app directory
WORKDIR /opt/myapps/noderest01
COPY package.json /opt/myapps/noderest01/package.json
COPY package-lock.json /opt/myapps/noderest01/package-lock.json
RUN npm ci
COPY . /opt/myapps/noderest01
EXPOSE 3005
CMD [ "npm", "run", "dev" ]
And this is my YAML file:
services:
  rest01:
    container_name: rest01
    ports:
      - "3005:3005"
    restart: always
    build: .
    volumes:
      - rest01:/opt/myapps/noderest01
      - rest01nmodules:/opt/myapps/noderest01/node_modules
    networks:
      - node-rest01

volumes:
  rest01:
  rest01nmodules:

networks:
  node-rest01:
I used this command to create the image: docker-compose -f docker-compose.yaml up -d
Surely I need to update my YAML file or Dockerfile to fix this. I've been searching for a while, but I can't find the origin of the problem, so I want to ask for your advice on how to fix and update my Docker files and connect to the container. If you have any suggestions, please let me know.
Best.

Using Docker with Node image to develop a VuejS (NuxtJs) app

The situation
I have to work on a VueJS (NuxtJS) SPA, so I'm trying to use Docker with a Node image to avoid installing Node on my PC, but I can't figure out how to make it work.
The project
The source code is in its own application folder, since it is versioned, and at the root level there is the docker-compose.yaml file.
The folder structure
my-project-folder
├ application
| └ ...
└ docker-compose.yaml
The docker-compose.yaml
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
image: node:lts-alpine
working_dir: /app
volumes:
- ./application:/app
The problem
The container starts but quits immediately with exit status 0 (so it executed correctly), but this way I can't use it to work on the project.
Probably there is something I'm missing about the Node image or Docker in general; what I would like to do is connect to the container to run npm commands like install, run start, etc., and then check the application in the browser at localhost:3000 or wherever it is.
I would suggest using a Dockerfile with Node as the base image and then creating your entrypoint, which runs the application. That will eliminate the need for volumes, which are used when we want to maintain some state for our containers.
Your Dockerfile may look something like this:
FROM node:lts-alpine
RUN mkdir /app
COPY application/ /app/
EXPOSE 3000
CMD npm start --prefix /app
You can then either run it directly with the docker run command or use a docker-compose.yaml like the following:
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
build:
context: .
ports:
- 3000:3000
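Either way, a usage sketch (the nuxt-app tag is only illustrative):

# Build and run directly...
docker build -t nuxt-app .
docker run --rm -p 3000:3000 nuxt-app
# ...or have compose build and start it:
docker-compose up --build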

How to use file from home directory in docker compose secret?

I am trying to build a Docker container with private node packages in it. I have followed this guide to use secrets to reference the npmrc file securely when installing the dependencies. I can get this to work when building the image directly with a command like docker build --secret id=npm,src=$HOME/.npmrc ., but I cannot get it working with docker compose. When running docker compose build, it acts as if there is no npmrc file and gives me a 401 when trying to download dependencies.
I provided a stripped-down version of the Dockerfile and docker-compose.yml below.
Dockerfile
# syntax = docker/dockerfile:1.2
FROM node:14.17.1
COPY . .
RUN --mount=type=secret,id=npm,target=/root/.npmrc yarn --frozen-lockfile --production
EXPOSE 3000
CMD [ "npm", "start" ]
docker-compose.yml
version: '3.7'
services:
  example:
    build: packages/example
    ports:
      - "3000:3000"
    secrets:
      - npm

secrets:
  npm:
    file: ${HOME}/.npmrc
The problem appears to be that my docker-compose.yml specifies secrets for the runtime of the container rather than for build time. Support for build secrets through docker compose has not been implemented yet; here is the outstanding PR: https://github.com/docker/compose/pull/7046
For now, I have to build the image using docker build ... and reference the named image locally in docker-compose.yml instead of building through docker compose.
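A sketch of that workaround (the example:local tag is only illustrative):

# Build manually, passing the secret through BuildKit
# (set DOCKER_BUILDKIT=1 on older Docker versions):
docker build --secret id=npm,src=$HOME/.npmrc -t example:local packages/example

docker-compose.yml then points at the prebuilt image (image: example:local) instead of the build: key.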
Since docker-compose v2.5.0 this is now possible.
Dockerfile:
# syntax=docker/dockerfile:1.2
# (any base image works here; node:14.17.1 matches the question above)
FROM node:14.17.1
RUN --mount=type=secret,id=mysecret,target=/root/mysecret cat /root/mysecret
docker-compose.yml
services:
  my-app:
    build:
      context: .
      secrets:
        - mysecret

secrets:
  mysecret:
    file: ~/.npmrc
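With Compose v2.5.0 or newer, the secret is then mounted during the build step:

docker compose build my-app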
