Two separate deploy configurations on travis - node.js

os:
  - osx
language: node_js
node_js:
  - '12'
dist: xenial
services:
  - xvfb
before_script:
  - export DISPLAY=:99.0
install:
  - npm set progress=false
  - npm install
script:
  - ng lint
  - npm run build:electron
deploy:
  provider: releases
  api_key: "$GITHUB_OAUTH_TOKEN"
  file_glob: true
  file:
    - "release/*.dmg"
    - "release/*.dmg.blockmap"
  name: Build $(date +'%d.%m.%Y %R')

language: node_js
node_js:
  - '12'
branches:
  only:
    - web-app
before_script:
  - npm install -g @angular/cli
script:
  - npm install
  - npm run build
deploy:
  skip_cleanup: true
  provider: firebase
  token:
    secure: ""
I have two config files for Travis.
How do I merge them?
I tried different ways, but they produce errors like 'duplicate deploy keyword'.
I want to deploy the first part from branch master and the second from web-app.

You can put your first code block into a Dockerfile.dev file and the second code block into a Dockerfile. Then create a docker-compose.yml file (still in your root directory); it will connect and run both files.
Something structured like this:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "1500:1500"
    volumes:
      - /app/node_modules
      - .:/app
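
As for the original Travis question: both files can live in one .travis.yml by using build stages (jobs.include), giving each job its own deploy section and branch condition, which avoids the 'duplicate deploy keyword' error. A minimal, untested sketch reusing the commands from the two configs above:
language: node_js
node_js:
  - '12'
jobs:
  include:
    - stage: desktop release
      if: branch = master
      os: osx
      script:
        - ng lint
        - npm run build:electron
      deploy:
        provider: releases
        api_key: "$GITHUB_OAUTH_TOKEN"
        file_glob: true
        file:
          - "release/*.dmg"
          - "release/*.dmg.blockmap"
        on:
          branch: master
    - stage: web deploy
      if: branch = web-app
      script:
        - npm install -g @angular/cli
        - npm run build
      deploy:
        skip_cleanup: true
        provider: firebase
        token:
          secure: ""
        on:
          branch: web-app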

CircleCI config.yml for nodejs

version: 2
jobs:
  test:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "Running tests"
      - run: npm install
      - run: npm test
  build:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "build project"
      - npm install
      - npm run build
workflows:
  version: 2
  test_build:
    jobs:
      - test
      - build:
        requires:
          - test
The above YAML is my config.yml for CircleCI, but I get this error:
Config does not conform to schema: {:workflows {:test_and_build {:jobs [nil {:build (not (map? nil)), :requires (not (map? a-clojure.lang.LazySeq))}]}}}
Another observation: if I run the jobs in parallel, they run without any errors. That is, if I remove the requires: - test, as shown below:
workflows:
  version: 2
  test_build:
    jobs:
      - test
      - build
build is a job, just like test; in the workflow, its requires key must be nested one level deeper than the job name, like this:
version: 2
jobs:
  test:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "Running tests"
      - run: npm install
      - run: npm test
  build:
    docker:
      - image: circleci/node:12.16
    steps:
      - checkout
      - run: echo "build project"
      - run: npm install
      - run: npm run build
workflows:
  version: 2
  test_build:
    jobs:
      - test
      - build:
          requires:
            - test
I tried this one and it worked. The problem with the previous one seemed to be related to versioning: CircleCI cloud uses 2.1 while CircleCI server uses 2. I also decided to use the node orb this time.
version: 2.1
orbs:
  node: circleci/node@3.0.1
jobs:
  build:
    working_directory: ~/backend_api
    executor: node/default
    steps:
      - checkout
      - node/install-npm
      - node/install-packages:
          app-dir: ~/backend_api
          cache-path: node_modules
          override-ci-command: npm i
      - persist_to_workspace:
          root: .
          paths:
            - .
  test:
    docker:
      - image: cimg/node:current
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Test
          command: npm test
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test:
          requires:
            - build
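
Incidentally, schema errors like the one above can be caught before pushing by validating the file locally with the CircleCI CLI (assuming it is installed):
# check .circleci/config.yml against the schema without running a pipeline
circleci config validate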

GitLab AutoDevops Environment Issues

So I am new to GitLab Auto DevOps, having switched from Travis and GitHub. The issue I am currently facing is that when I push and the pipeline kicks in, it doesn't see any of my listed environment variables. I set production and testing environment variables for MongoDB and Redis, but during the pipeline it tries to connect to localhost for both, totally ignoring the environment variables set in the CI/CD settings. See the pictures below:
Dockerfile
WORKDIR /app
COPY package*.json ./
RUN apk add --update alpine-sdk nodejs npm python
RUN LD_LIBRARY_PATH=/usr/local/lib64/:$LD_LIBRARY_PATH && export LD_LIBRARY_PATH && npm i
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: "3.7"
services:
backend:
container_name: dash-loan
environment:
MONGODB_PRODUCTION_URI: ${MONGODB_PRODUCTION_URI}
MONGODB_TEST_URI: ${MONGODB_TEST_URI}
REDIS_PRODUCTION_URL: ${REDIS_PRODUCTION_URL}
REDIS_TEST_URL: ${REDIS_TEST_URL}
PM2_SECRET_KEY: ${PM2_SECRET_KEY}
PM2_PUBLIC_KEY: ${PM2_PUBLIC_KEY}
PM2_MACHINE_NAME: ${PM2_MACHINE_NAME}
PORT: ${PORT}
MODE_ENV: ${NODE_ENV}
restart: always
build: .
ports:
- "8080:3000"
links:
- mongodb
- redis
mongodb:
container_name: mongo
environment:
MONGO_INITDB_DATABASE: dashloan
MONGO_INITDB_ROOT_USERNAME: sampleUser
MONGO_INITDB_ROOT_PASSWORD: samplePassword
restart: always
image: mongo
ports:
- "27017-27019:27017-27019"
volumes:
- ./src/database/init-mongo.js:/docker-entrypoint-point.initdb.d/init-mongo.js:ro
- ./mongo-volume:/data/db
redis:
container_name: redis
restart: always
image: redis:5.0
ports:
- "6379:6379"
volumes:
mongo-volume:
.gitlab-ci.yml
image: node:latest
services:
  - mongo:latest
  - redis:latest
cache:
  paths:
    - node_modules/
job:
  script:
    - npm i
    - npm test
I need help making sure the test pipeline uses the environment variables I set, instead of trying to connect to localhost, which fails.
[Screenshots: the error in the GitLab pipeline; the variables set in GitLab; the GKE cluster, which is running fine]
You could use a shell runner instead of a docker runner and then just call docker-compose in a before_script.
cache:
  paths:
    - node_modules/
job:
  before_script:
    - docker-compose up -d
  script:
    - npm i
    - npm test
  after_script:
    - docker-compose down
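
Alternatively, if you stay on the docker executor: services declared in .gitlab-ci.yml are reachable from the job by hostname (mongo, redis), not via localhost, so the test variables have to point there. A minimal sketch, assuming the app reads the MONGODB_TEST_URI and REDIS_TEST_URL variables shown above (the URIs are illustrative, not taken from the question):
image: node:latest
services:
  - mongo:latest
  - redis:latest
variables:
  # illustrative values: point the tests at the service hostnames, not localhost
  MONGODB_TEST_URI: "mongodb://mongo:27017/dashloan"
  REDIS_TEST_URL: "redis://redis:6379"
job:
  script:
    - npm i
    - npm test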

Gitlab Pipelines Stages take hours or days to show result (passed, failed) - docker, node app

I'm using GitLab continuous integration (.gitlab-ci.yml) with Docker and docker-compose to build, test, and deploy my Node app, but the build and test stages take a very long time to complete on the GitLab pipeline (in my local Docker setup, builds and tests run smoothly). I don't think this is normal GitLab CI behavior, so I suspect I'm missing something or using a wrong configuration.
Please check the configuration (.gitlab-ci.yml) below and the screenshot of the pipelines at the bottom.
.gitlab-ci.yml
# GitLab CI Docker Image
image: node:6.10.0

# Build - Build necessary JS files
# Test - Run tests
# Deploy - Deploy application to ElasticBeanstalk
stages:
  - build
  - test
  - deploy

# Configuration
variables:
  POSTGRES_DB: f_ci
  POSTGRES_USER: f_user
  POSTGRES_PASSWORD: sucof
services:
  - postgres:latest
cache:
  paths:
    - node_modules/

# Job: Build
# Installs npm packages
# Passes node_modules/ onto next steps using artifacts
build:linux:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
  tags:
    - f-api

# Job: Test
# Run tests against our application
# If this fails, we do not deploy
test:linux:
  stage: test
  variables:
    NODE_ENV: continuous_integration
  script:
    - ./node_modules/.bin/sequelize db:migrate --env=continuous_integration
    - ./node_modules/.bin/sequelize db:seed:all --env=continuous_integration
    - npm test
  tags:
    - f-api

# Job: Deploy
# Staging environment
deploy:staging:aws:
  stage: deploy
  script:
    - apt-get update -qq
    - apt-get -qq install python3 python3-dev
    - curl -O https://bootstrap.pypa.io/get-pip.py
    - python3 get-pip.py
    - pip install awsebcli --upgrade
    - mkdir ~/.aws/
    - touch ~/.aws/credentials
    - printf "[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" >> ~/.aws/credentials
    - touch ~/.aws/config
    - printf "[profile adm-eb-cli]\naws_access_key_id = %s\naws_secret_access_key = %s\n" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" >> ~/.aws/config
    - eb deploy f-api-stg
  environment:
    name: staging
    url: http://f-api-stg.gxnvwbgfma.ap-southeast-1.elasticbeanstalk.com
  tags:
    - f-api
  only:
    - staging

# Job: Deploy
# Production environment
deploy:prod:aws:
  stage: deploy
  script:
    - echo "Deploy to production server"
  environment:
    name: production
    url: http://f-api.gxnvwbgfma.ap-southeast-1.elasticbeanstalk.com
  when: manual
  tags:
    - f-api
  only:
    - master
Dockerfile
FROM node:6.10.0
MAINTAINER Theodore GARSON-CORBEAUX <tgcorbeaux@maltem.com>
# Create app directory
ENV HOME=/usr/src/app
RUN mkdir -p $HOME
WORKDIR $HOME
# Install api dependencies
ADD package.json /usr/src/app/package.json
RUN npm install
ADD . /usr/src/app
EXPOSE 3000
CMD ["npm","start"]
docker-compose.yml
version: "2.1"
services:
db_dev:
image: postgres:latest
ports:
- "49170:5432"
environment:
- POSTGRES_USER=f_user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB=f_dev
healthcheck:
test: "exit 0"
db_test:
image: postgres:latest
ports:
- "49171:5432"
environment:
- POSTGRES_USER=f_user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB=f_test
healthcheck:
test: "exit 0"
app:
build: .
environment:
- NODE_ENV=development
command: "npm start"
ports:
- "49160:3000"
depends_on:
db_dev:
condition: service_healthy
migrate:
build: .
environment:
- NODE_ENV
command: "./node_modules/.bin/sequelize db:migrate --env ${NODE_ENV}"
depends_on:
db_dev:
condition: service_healthy
db_test:
condition: service_healthy
healthcheck:
test: "exit 0"
populate_db:
build: .
environment:
- NODE_ENV
command: "./node_modules/.bin/sequelize db:seed:all --env ${NODE_ENV}"
depends_on:
migrate:
condition: service_healthy
healthcheck:
test: "exit 0"
depopulate_db:
build: .
environment:
- NODE_ENV
command: "./node_modules/.bin/sequelize db:seed:undo:all --env ${NODE_ENV}"
depends_on:
migrate:
condition: service_healthy
healthcheck:
test: "exit 0"
test:
build: .
environment:
- NODE_ENV=test
command: "npm test"
depends_on:
populate_db:
condition: service_healthy
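
One frequent cause of very slow pipelines with a layout like this is shipping node_modules/ around as artifacts: tens of thousands of small files get uploaded after the build stage and downloaded again by every later stage, which can take far longer than the npm install itself. Since the config already caches node_modules/ globally, a hedged sketch of the build job relying on the runner cache alone (assuming every job runs on the same f-api runner, so the cache is shared):
build:linux:
  stage: build
  script:
    - npm install
  # no artifacts block: later stages restore node_modules/ from the cache instead
  tags:
    - f-api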

docker-compose up didn't finish npm install.

I'm new to docker-compose and I'd like to use it for my current development.
After I ran docker-compose up -d, everything started OK and looked good. But my Node.js application wasn't installed correctly. It seems npm install didn't complete, and I had to do docker exec -it api bash and run npm i manually inside the container.
Here's my docker-compose.
version: '2'
services:
  app:
    build: .
    container_name: sparrow-api-1
    volumes:
      - .:/usr/src/app
      - $HOME/.aws:/root/.aws
    working_dir: /usr/src/app
    environment:
      - SPARROW_EVENT_QUEUE_URL=amqp://guest:guest@rabbitmq:5672
      - REDIS_URL=redis
      - NSOLID_APPNAME=sparrow-api
      - NSOLID_HUB=registry:4001
      - NODE_ENV=local
      - REDIS_PORT=6379
      - NODE_PORT=8081
      - SOCKET_PORT=8002
      - ELASTICSEARCH_URL=elasticsearch
      - STDIN_OPEN=${STDIN_OPEN}
    networks:
      - default
    depends_on:
      - redis
      - rabbitmq
      - elasticsearch
    expose:
      - "8081"
    ports:
      - "8081:8081"
    command: bash docker-command.sh
  redis:
    container_name: redis
    image: redis:3.0.7-alpine
    networks:
      - default
    ports:
      - "6379:6379"
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3.6.2-management
    networks:
      - default
    ports:
      - "15672:15672"
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:1.5.2
    networks:
      - default
    ports:
      - "9200:9200"
      - "9300:9300"
  registry:
    image: nodesource/nsolid-registry
    container_name: registry
    networks:
      - default
    ports:
      - 4001:4001
  proxy:
    image: nodesource/nsolid-hub
    container_name: hub
    networks:
      - default
    environment:
      - REGISTRY=registry:4001
      - NODE_DEBUG=nsolid
  console:
    image: nodesource/nsolid-console
    container_name: console
    networks:
      - default
    environment:
      - NODE_DEBUG=nsolid
      - NSOLID_APPNAME=console
      - NSOLID_HUB=registry:4001
    command: --hub hub:9000
    ports:
      - 3000:3000

# don't forget to create network as well
networks:
  default:
    driver: bridge
Here's my docker-command.sh
#!/usr/bin/env bash

# link the node modules to the root directory of our app, if not exists
modules_link="/usr/src/app/node_modules"
if [ ! -d "${modules_link}" ]; then
  ln -s /usr/lib/app/node_modules ${modules_link}
fi

if [ -n "$STDIN_OPEN" ]; then
  # if we want to be interactive with our app container, it needs to run in
  # the background
  tail -f /dev/null
else
  nodemon
fi
Here's my Dockerfile
FROM nodesource/nsolid:latest

RUN mkdir /usr/lib/app
WORKDIR /usr/lib/app
COPY [".npmrc", "package.json", "/usr/lib/app/"]
RUN npm install \
    && npm install -g mocha \
    && npm install -g nodemon \
    && rm -rf package.json .npmrc
In your Dockerfile you are running npm install without any arguments first:
RUN npm install \
    && npm install -g mocha \
If this exits with a non-zero code, the && chain stops and the following commands are not executed. That should also fail the build, though, so I'm guessing you already had a working image and added the npm instructions later. Per default, docker-compose up will only build the image if it does not exist yet; to rebuild it, use docker-compose build or simply docker-compose up --build.
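
For reference, the two rebuild commands (run from the directory containing docker-compose.yml):
# rebuild the images so the edited RUN npm install layer is executed again
docker-compose build

# or rebuild and (re)start the containers in one step
docker-compose up --build -d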

Run node app in Travis

I am currently building a server-side app with Node.js.
To test it, I use Travis, which runs npm test by default.
Now I also want to test whether the dependencies are correct, and therefore start the app within Travis with
nodejs app.js
How can I run this task in Travis?
You can run any task just as you would on a Unix shell:
language: node_js
node_js:
  - "5"
before_script:
  - npm install
script:
  - node app.js
  - npm test
However, your purpose is already covered by the npm install command: if it fails, and subsequently your npm test fails, the build will not succeed.
For more complicated setups, say API end-to-end testing where you need to run actual servers, I would use docker-compose instead. But that is way too much here.
travis.yml
language: node_js
sudo: required
services:
  - docker
cache:
  directories:
    - node_modules
node_js:
  - 5
before_install:
  - npm install -g node-gyp
before_script:
  - npm install
  - npm install -g standard
  - docker-compose build
  - docker-compose up -d
  - sleep 3
script:
  - npm test
after_script:
  - docker-compose kill
docker-compose.yml
api1:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"
api2:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"
mongo:
  image: mongo
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"
redis:
  image: redis
  ports:
    - "6379:6379"
haproxy:
  image: haproxy:1.5
  volumes:
    - ./cluster:/usr/local/etc/haproxy/
  links:
    - "api1"
    - "api2"
  ports:
    - 80:80
    - 70:70
  expose:
    - "80"
    - "70"
The original simple answer is close, but I needed one modification found on this forum: https://github.com/travis-ci/travis-ci/issues/1321
language: node_js
node_js:
  - "6"
before_script:
  - npm install
script:
  - node app.js &
  - npm test
I needed the ampersand (&) at the end of node app.js to start my server process in the background. Otherwise it runs the server in the foreground, waits, and never gets to npm test.
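
One caveat with backgrounding: npm test can start before the server has finished binding its port. Borrowing the sleep trick from the docker-compose answer above, a minimal sketch (3 seconds is an illustrative grace period, not a tested value):
language: node_js
node_js:
  - "6"
before_script:
  - npm install
script:
  - node app.js &
  # give the backgrounded server a moment to start before the tests hit it
  - sleep 3
  - npm test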
