Run node app in Travis - node.js

I'm currently building a server-side app with Node.js.
To test it, I use Travis, which runs npm test by default.
Now I also want to test whether the dependencies are correct, and therefore I'd like to start the app within Travis with
nodejs app.js
How can I run this task in Travis?

You can run any task just as you would on a Unix shell:
language: node_js
node_js:
- "5"
before_script:
- npm install
script:
- node app.js
- npm test
However, your purpose is already covered by the npm install command. If it fails, or if npm test subsequently fails, the build will not succeed.
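If you want an explicit check that the installed dependency tree is consistent, one option (not part of the original answer) is npm ls, which exits non-zero when packages are missing or invalid:
script:
  - npm ls
  - npm test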
For more complicated setups, where you need to run actual servers, say for API end-to-end testing, I would use docker-compose instead. But that is overkill here.
travis.yml
language: node_js
sudo: required
services:
- docker
cache:
  directories:
    - node_modules
node_js:
- 5
before_install:
- npm install -g node-gyp
before_script:
- npm install
- npm install -g standard
- docker-compose build
- docker-compose up -d
- sleep 3
script:
- npm test
after_script:
- docker-compose kill
docker-compose.yml
api1:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"
api2:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"
mongo:
  image: mongo
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"
redis:
  image: redis
  ports:
    - "6379:6379"
haproxy:
  image: haproxy:1.5
  volumes:
    - ./cluster:/usr/local/etc/haproxy/
  links:
    - "api1"
    - "api2"
  ports:
    - 80:80
    - 70:70
  expose:
    - "80"
    - "70"

The original simple answer is close, but I needed one modification, found in this GitHub issue: https://github.com/travis-ci/travis-ci/issues/1321
language: node_js
node_js:
- "6"
before_script:
- npm install
script:
- node app.js &
- npm test
I needed the ampersand (&) at the end of node app.js to start my server process in the background. Otherwise, Travis runs the server in the foreground, waits, and never gets to npm test.
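If the tests hit the server immediately after it is launched, the background process may not have finished binding its port yet. A short wait, like the sleep used in the docker-compose example above, can be added (3 seconds is an arbitrary guess, assuming the app is ready within that time):
script:
  - node app.js &
  - sleep 3
  - npm test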

Related

Docker - Yarn "There appears to be trouble with the npm registry (returned undefined). Retrying..." when running yarn install

I have 3 containers: one for the MySQL db, one for the Node.js server, and one for the client (create-react-app). This error occurs in the client container.
It had been working previously, until I tried to get everything set up for a production environment using Nginx in its own container (I've since deleted all the code that configured that, but the error still persists).
The error:
Step 9/12 : RUN yarn install
---> Running in 45145c591167
yarn install v1.22.19
info No lockfile found.
[1/4] Resolving packages...
warning @svgr/plugin-svgo > svgo > stable@0.1.8: Modern JS already guarantees Array#sort() is a stable sort, so this library is deprecated. See the compatibility table on MDN: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#browser_compatibility
info There appears to be trouble with the npm registry (returned undefined). Retrying...
It continues retrying until it eventually times out.
Things I've tried so far:
I've tried docker system prune -a and docker volume prune, then running docker-compose up again, but it still fails.
I've also tried deleting node_modules and yarn.lock; it still doesn't work.
If I create a new directory (outside of Docker, just locally on my PC), run yarn init, and add all the dependencies to the package.json file, everything installs correctly. So it appears to be an issue within my Docker containers.
The Code
docker-compose.yml
version: '3.7'
services:
  db:
    container_name: db
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: database_development
      MYSQL_PASSWORD: root
    cap_add:
      - SYS_NICE
  client:
    container_name: client
    build: ./client
    ports:
      - 3000:3000
    volumes:
      - ./client:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
  server:
    container_name: server
    build: ./server
    ports:
      - 8080:8080
      - 4242:4242
    volumes:
      - ./server:/app
      - /app/node_modules
    depends_on:
      - db
    environment:
      - NODE_ENV=development
client Dockerfile
FROM node:16-alpine
RUN apk update && apk upgrade
RUN mkdir /app
WORKDIR /app
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV}
COPY package.json /app
RUN yarn install
COPY . .
EXPOSE 3000
CMD [ "yarn", "start" ]

How to solve docker compose cannot find module '/dockerize'

I have a docker-compose setup with three containers: a mysql db, a serverless app, and test (to test the serverless app with mocha). When executing docker-compose up --build, it seems like the order of execution is messed up and not in the correct sequence. Both the mysql db and the serverless app need to be in a working state for test to run correctly (depends_on barely works).
Hence I tried using the dockerize module to set a timeout and wait on TCP ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile.yml
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless@2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml. You only need to specify image: together with build: if you're planning to push the image to a registry; you don't need to override container_name:; the default command: should come from the Dockerfile CMD; expose: and links: are relics of first-generation Docker networking and aren't necessary; and overwriting the image's code with volumes: means ignoring the image contents entirely and getting host-specific behavior (just using Node on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app
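To run the whole stack once in CI and have the run succeed or fail with the test container, something along these lines can be used (--exit-code-from is a standard Compose flag that also stops the other containers when the named service exits; this is a usage hint beyond the original answer):
docker-compose up --build --exit-code-from test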

Two separate deploy configurations on travis

os:
- osx
language: node_js
node_js:
- '12'
dist: xenial
services:
- xvfb
before_script:
- export DISPLAY=:99.0
install:
- npm set progress=false
- npm install
script:
- ng lint
- npm run build:electron
deploy:
  provider: releases
  api_key: "$GITHUB_OAUTH_TOKEN"
  file_glob: true
  file:
    - "release/*.dmg"
    - "release/*.dmg.blockmap"
  name: Build $(date +'%d.%m.%Y %R')
language: node_js
node_js:
- '12'
branches:
  only:
    - web-app
before_script:
- npm install -g @angular/cli
script:
- npm install
- npm run build
deploy:
  skip_cleanup: true
  provider: firebase
  token:
    secure: ""
I have two config files for Travis.
How can I merge them?
I tried different ways, but I get errors like 'duplicate deploy keyword'.
I want to deploy the first part from the master branch and the second part from web-app.
You can create a Dockerfile.dev file and populate it with your first code block, and put the second code block into a Dockerfile. Then create a docker-compose.yml file (still in your root directory) that connects and runs both files, structured something like this:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "1500:1500"
    volumes:
      - /app/node_modules
      - .:/app
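As for the original question of combining the two Travis configurations, one way this is commonly handled (a hedged sketch, not from the posted answer; the stage names are made up and the deploy details are abbreviated from the question's two files) is a single .travis.yml using build stages with per-job branch conditions, each carrying its own deploy block:
language: node_js
node_js:
  - '12'
jobs:
  include:
    - stage: desktop release
      if: branch = master
      os: osx
      script: npm run build:electron
      deploy:
        provider: releases
        api_key: "$GITHUB_OAUTH_TOKEN"
        file_glob: true
        file: "release/*.dmg"
    - stage: web deploy
      if: branch = web-app
      script: npm run build
      deploy:
        provider: firebase
        token:
          secure: ""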

GitLab AutoDevops Environment Issues

So I am new to GitLab Auto DevOps, having switched from Travis and GitHub. The issue I am currently facing is that when I push and the pipeline kicks in, it doesn't see any of my listed environment variables. I set production and testing environment variables for mongodb and redis, but during the pipeline it tries to connect to localhost for both, totally ignoring the environment variables set in the CI/CD settings. See pictures below:
Dockerfile
WORKDIR /app
COPY package*.json ./
RUN apk add --update alpine-sdk nodejs npm python
RUN LD_LIBRARY_PATH=/usr/local/lib64/:$LD_LIBRARY_PATH && export LD_LIBRARY_PATH && npm i
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: "3.7"
services:
backend:
container_name: dash-loan
environment:
MONGODB_PRODUCTION_URI: ${MONGODB_PRODUCTION_URI}
MONGODB_TEST_URI: ${MONGODB_TEST_URI}
REDIS_PRODUCTION_URL: ${REDIS_PRODUCTION_URL}
REDIS_TEST_URL: ${REDIS_TEST_URL}
PM2_SECRET_KEY: ${PM2_SECRET_KEY}
PM2_PUBLIC_KEY: ${PM2_PUBLIC_KEY}
PM2_MACHINE_NAME: ${PM2_MACHINE_NAME}
PORT: ${PORT}
MODE_ENV: ${NODE_ENV}
restart: always
build: .
ports:
- "8080:3000"
links:
- mongodb
- redis
mongodb:
container_name: mongo
environment:
MONGO_INITDB_DATABASE: dashloan
MONGO_INITDB_ROOT_USERNAME: sampleUser
MONGO_INITDB_ROOT_PASSWORD: samplePassword
restart: always
image: mongo
ports:
- "27017-27019:27017-27019"
volumes:
- ./src/database/init-mongo.js:/docker-entrypoint-point.initdb.d/init-mongo.js:ro
- ./mongo-volume:/data/db
redis:
container_name: redis
restart: always
image: redis:5.0
ports:
- "6379:6379"
volumes:
mongo-volume:
.gitlab-ci.yml
image: node:latest
services:
  - mongo:latest
  - redis:latest
cache:
  paths:
    - node_modules/
job:
  script:
    - npm i
    - npm test
I need help making sure the test pipeline uses the environment variables I set, rather than trying to connect to localhost, which fails.
(Screenshots: the error on the GitLab pipeline; the variables in GitLab; GKE, which is running fine.)
You could use a shell runner instead of a docker runner and then just call docker-compose in before_script.
cache:
  paths:
    - node_modules/
job:
  before_script:
    - docker-compose up -d
  script:
    - npm i
    - npm test
  after_script:
    - docker-compose down
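Alternatively, if you keep the docker runner and the mongo/redis services from the original .gitlab-ci.yml, those services are reachable by their service names rather than localhost, so the connection strings can be supplied as job variables (a sketch; the variable names and database name come from the question, while the URI formats are my assumptions):
job:
  variables:
    MONGODB_TEST_URI: "mongodb://mongo:27017/dashloan"
    REDIS_TEST_URL: "redis://redis:6379"
  script:
    - npm i
    - npm test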

docker-compose up didn't finish npm install.

I'm new to docker-compose and I'd like to use it for my current development.
After I ran docker-compose up -d, everything started up OK and looked good. But my Node.js application wasn't installed correctly. It seems like npm install didn't complete, and I had to do docker exec -it api bash to run npm i manually inside the container.
Here's my docker-compose.yml.
version: '2'
services:
  app:
    build: .
    container_name: sparrow-api-1
    volumes:
      - .:/usr/src/app
      - $HOME/.aws:/root/.aws
    working_dir: /usr/src/app
    environment:
      - SPARROW_EVENT_QUEUE_URL=amqp://guest:guest@rabbitmq:5672
      - REDIS_URL=redis
      - NSOLID_APPNAME=sparrow-api
      - NSOLID_HUB=registry:4001
      - NODE_ENV=local
      - REDIS_PORT=6379
      - NODE_PORT=8081
      - SOCKET_PORT=8002
      - ELASTICSEARCH_URL=elasticsearch
      - STDIN_OPEN=${STDIN_OPEN}
    networks:
      - default
    depends_on:
      - redis
      - rabbitmq
      - elasticsearch
    expose:
      - "8081"
    ports:
      - "8081:8081"
    command: bash docker-command.sh
  redis:
    container_name: redis
    image: redis:3.0.7-alpine
    networks:
      - default
    ports:
      - "6379:6379"
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3.6.2-management
    networks:
      - default
    ports:
      - "15672:15672"
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:1.5.2
    networks:
      - default
    ports:
      - "9200:9200"
      - "9300:9300"
  registry:
    image: nodesource/nsolid-registry
    container_name: registry
    networks:
      - default
    ports:
      - 4001:4001
  proxy:
    image: nodesource/nsolid-hub
    container_name: hub
    networks:
      - default
    environment:
      - REGISTRY=registry:4001
      - NODE_DEBUG=nsolid
  console:
    image: nodesource/nsolid-console
    container_name: console
    networks:
      - default
    environment:
      - NODE_DEBUG=nsolid
      - NSOLID_APPNAME=console
      - NSOLID_HUB=registry:4001
    command: --hub hub:9000
    ports:
      - 3000:3000
# don't forget to create network as well
networks:
  default:
    driver: bridge
Here's my docker-command.sh
#!/usr/bin/env bash
# link the node modules to the root directory of our app, if not exists
modules_link="/usr/src/app/node_modules"
if [ ! -d "${modules_link}" ]; then
ln -s /usr/lib/app/node_modules ${modules_link}
fi
if [ -n "$STDIN_OPEN" ]; then
# if we want to be interactive with our app container, it needs to run in
# the background
tail -f /dev/null
else
nodemon
fi
Here's my Dockerfile
FROM nodesource/nsolid:latest
RUN mkdir /usr/lib/app
WORKDIR /usr/lib/app
COPY [".npmrc", "package.json", "/usr/lib/app/"]
RUN npm install \
&& npm install -g mocha \
&& npm install -g nodemon \
&& rm -rf package.json .npmrc
In your Dockerfile you are running npm install without any arguments first:
RUN npm install \
&& npm install -g mocha \
This will cause a non-zero exit code, and due to the && the following commands are not executed. This should also fail the build, though, so I'm guessing you already had a working image and added the npm instructions later. To rebuild the image, use docker-compose build or simply docker-compose up --build. By default, docker-compose up will only build the image if it does not exist yet.
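For example, to force a rebuild of the app service that ignores cached layers entirely (standard Compose options, shown purely as a usage hint):
docker-compose build --no-cache app
docker-compose up -d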
