I am trying to build an Angular app as a Docker image and run it with docker-compose.
Here is my Dockerfile:
# Use official node image as the base image
FROM node:16-alpine3.12 as build
# Set the working directory
WORKDIR /usr/local/app
# Add the source code to app
COPY ./ ./
# Install all the dependencies
RUN npm install
# Generate the build of the application
RUN npm run build:prod
# Stage 2: Serve app with nginx server
# Use official nginx image as the base image
FROM nginx:latest
# Copy the build output to replace the default nginx contents.
COPY --from=build /usr/local/app/dist/ /usr/share/nginx/html
#COPY /nginx.conf /etc/nginx/conf.d/default.conf
# Assign permission
# https://stackoverflow.com/questions/49254476/getting-forbidden-error-while-using-nginx-inside-docker
# RUN chown nginx:nginx /usr/share/nginx/html/*
# Expose port 80
EXPOSE 80
EXPOSE 443
It runs correctly if I run docker build and docker run locally. But after pushing to GitHub with the workflow below:
name: Docker Image CI For Web
on:
  push:
    tags:
      - 'web-v*'
jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: edward8520/meekou
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: ./angular/
          file: ./angular/Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
and then running docker-compose with the file below,
version: '3.9'
services:
  meekou-web:
    container_name: 'meekou-web-test'
    image: 'edward8520/meekou:web-v0.0.4'
    ports:
      - "8888:80"
      - "8889:443"
    environment:
      - NODE_ENV=production
    volumes:
      - ./meekou-web/:/usr/share/nginx/html
    expose:
      - "80"
      - "443"
it throws a 403 Forbidden error.
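A likely culprit (my assumption, not confirmed above): the bind mount ./meekou-web/:/usr/share/nginx/html replaces the build output baked into the image with the contents of the host directory, and if that directory is empty nginx has no index.html to serve and answers 403 Forbidden. A minimal sketch of the service without the mount, for comparison:

version: '3.9'
services:
  meekou-web:
    container_name: 'meekou-web-test'
    image: 'edward8520/meekou:web-v0.0.4'
    ports:
      - "8888:80"
      - "8889:443"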
Hello, I have a TypeScript server with a build script that looks like
`"build": "rm -rf build && tsc && cp package*.json build && cp Dockerfile build && npm ci --prefix build --production"`
This creates a new build directory and copies the Dockerfile to it, so the deployed application should be run from the build directory (see the sketch below).
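For context, the resulting layout is roughly the following (a sketch, assuming tsc's outDir points at build/):

build/
├── Dockerfile
├── package.json
├── package-lock.json
├── node_modules/    (from npm ci --prefix build --production)
└── *.js             (compiled output from tsc)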
I want to automate deployment to Cloud Run using GitHub workflows, so I created a .yaml file, but in the run portion I am confused about how to build the Docker image and push it from my build directory:
- name: Enable the necessary APIs and enable docker auth
  run: |-
    gcloud services enable containerregistry.googleapis.com
    gcloud services enable run.googleapis.com
    gcloud --quiet auth configure-docker
- name: Build and tag image
  run: |-
    docker build . --tag "gcr.io/$CLOUD_RUN_PROJECT_ID/$REPO_NAME:$GITHUB_SHA"
- name: Push image to GCR
  run: |-
    docker push gcr.io/$CLOUD_RUN_PROJECT_ID/$REPO_NAME:$GITHUB_SHA
My question is: how can I ensure the docker commands run against the build directory?
In the docker build command, replace the . with build/.
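That is, the build step from the question becomes (same tag, just built from the build/ context):

- name: Build and tag image
  run: |-
    docker build build/ --tag "gcr.io/$CLOUD_RUN_PROJECT_ID/$REPO_NAME:$GITHUB_SHA"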
Here's a full reference example workflow, including the step to deploy the image to Cloud Run.
on:
  push:
    branches:
      - example-build-deploy
name: Build and Deploy a Container
env:
  PROJECT_ID: ${{ secrets.GCP_PROJECT }}
  SERVICE: hello-cloud-run
  REGION: us-central1
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
        with:
          project_id: ${{ env.PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true # Set to true to authenticate the Cloud Run action
      - name: Authorize Docker push
        run: gcloud auth configure-docker
      - name: Build and Push Container
        run: |-
          docker build -t gcr.io/$PROJECT_ID/$SERVICE:$GITHUB_SHA build/
          docker push gcr.io/$PROJECT_ID/$SERVICE:$GITHUB_SHA
      - name: Deploy to Cloud Run
        id: deploy
        uses: google-github-actions/deploy-cloudrun@v0
        with:
          service: ${{ env.SERVICE }}
          image: gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }}
          region: ${{ env.REGION }}
      - name: Show Output
        run: echo ${{ steps.deploy.outputs.url }}
You may also check the full GitHub repository sample here.
I have a multi-container app with source code stored on GitHub. Essentially, there's only one part in active development; the other containers are either stable (like nginx with special settings) or external (like Redis).
My question is: how can I use GitHub Actions for the deployment to Azure App Service?
It's rather well described for a single-container app, and I'm already able to push my image to the Container Registry with an Action. But then I still have to go to the Azure web interface and trigger docker-compose from there. Or, alternatively, I can trigger docker-compose via the Azure CLI from my local machine.
But the actual problem is to trigger docker-compose from the GitHub Action (in order to deploy every time my PR into master is validated).
Any ideas?
As a reference point: my docker-compose.yml is like this:
version: '3'
services:
  nginx:
    image: mycr.azurecr.io/nginx:dev-latest
    ports:
      - "80:80"
      - "2222:2222"
    volumes:
      - asset-volume:/app/static
    depends_on:
      - app
    restart: always
  app:
    image: mycr.azurecr.io/django:dev-latest
    ports:
      - "8000:8000"
    volumes:
      - asset-volume:/app/static
      - app-volume:/app
      - api-documents:/app/documents/storage
  redis:
    image: redis:alpine
  celery:
    restart: always
    command: celery -A mainApp worker -l info
    image: mycr.azurecr.io/django:dev-latest
    volumes:
      - app-volume:/app
    working_dir: /app
    depends_on:
      - app
      - nginx
      - redis
volumes:
  asset-volume:
  app-volume:
The real solution seems to be possible only via the Azure CLI, but for the moment I came up with a partial one. That's why I can't mark my own answer as an "accepted solution".
In my action (see below):
I log in to Azure and to Docker (all credentials are stored in GitHub Secrets)
I build my app from a Dockerfile
Finally, I push the image to the Azure Container Registry with two tags: latest and <date>-<hash>
Then, on Azure: MyApp on AppService > Deployment Center:
Source: Container Registry
Container Type: Docker Compose
Config: the content of the docker-compose.yml
And that's it. After that, on every merge validation, the GitHub CI pushes a ready-to-go image to the Container Registry, and I only need to hit the "Restart" button in the GUI to restart my app; the new latest image will then be loaded by Docker. Of course, it's still a manual action, but it's better than nothing.
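That last manual step could presumably be scripted too; a minimal sketch of an additional workflow step, assuming a web app named my-app in resource group my-rg (both placeholder names):

- name: 'Restart App Service'
  # restarting makes App Service pull the configured :<branch>-latest image again
  run: az webapp restart --name my-app --resource-group my-rg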
My deployment.yml with the action looks like this (I omit non-essential details):
name: 'Deployment to Azure'
on:
  push:
env:
  DEV_URL: 'my-dev-app.example.com'
  DEV_DBNAME: 'app-dev-db'
  PROD_URL: 'my-app.example.com'
  PROD_DBNAME: 'app-dev-db'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: 'Get the branch name'
        id: branch-name
        uses: tj-actions/branch-names@v5
      - name: 'Set RAW_BRANCH variable'
        run: echo "RAW_BRANCH=${{ steps.branch-name.outputs.current_branch }}" >> $GITHUB_ENV
      - name: 'Checkout repo'
        uses: actions/checkout@v2
      - name: 'Sets branch-related variables'
        # `main` -> `prod`, `dev` -> `dev`,
        # everything else -> `feat`:
        run: |
          if [[ $RAW_BRANCH == 'main' ]]; then
            echo "BRANCH=prod" >> $GITHUB_ENV
          elif [[ $RAW_BRANCH == 'dev' ]]; then
            echo "BRANCH=$RAW_BRANCH" >> $GITHUB_ENV
            echo "DBNAME=${{ env.DEV_DBNAME }}" >> $GITHUB_ENV
            echo "URL=${{ env.DEV_URL }}" >> $GITHUB_ENV
          else
            echo "BRANCH=feat" >> $GITHUB_ENV
          fi
          echo "DBNAME=${{ env.DEV_DBNAME }}" >> $GITHUB_ENV
          echo "URL=${{ env.DEV_URL }}" >> $GITHUB_ENV
      - name: 'Set SHA variable'
        run: echo "SHA=$(git rev-parse --short HEAD)" >> $GITHUB_ENV
      - name: 'Set TAG variable'
        run: echo "TAG=${{ env.BRANCH }}-$(date "+%Y.%m.%d")-${{ env.SHA }}" >> $GITHUB_ENV
      - name: 'Login via Azure CLI'
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: 'Login to Container Registry'
        uses: azure/docker-login@v1
        with:
          login-server: ${{ secrets.DOCKER_REGISTRY_SERVER_URL }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: 'build and push'
        run: |
          docker build -t ${{ secrets.DOCKER_REGISTRY_SERVER_URL }}/backend:${{ env.TAG }} \
            -t ${{ secrets.DOCKER_REGISTRY_SERVER_URL }}/backend:${{ env.BRANCH }}-latest .
          docker push --all-tags ${{ secrets.DOCKER_REGISTRY_SERVER_URL }}/backend
And then my Config for, say, the dev instance in the Deployment Center looks like:
version: '3'
services:
  nginx:
    image: myacr.azurecr.io/nginx:latest
    ports:
      - "80:80"
      - "2222:2222"
    volumes:
      - asset-volume-dev:/app/static
    depends_on:
      - app
    restart: always
  app:
    image: myacr.azurecr.io/backend:dev-latest
    ports:
      - "8000:8000"
    volumes:
      - asset-volume-dev:/app/static
      - app-volume-dev:/app
  redis:
    image: redis:alpine
  celery:
    restart: always
    command: celery -A superDuperApp worker -l info
    image: myacr.azurecr.io/backend:dev-latest
    volumes:
      - app-volume-dev:/app
    working_dir: /app
    depends_on:
      - app
      - nginx
      - redis
volumes:
  asset-volume-dev:
  app-volume-dev:
⚠️ NB: For some unclear reason, if I leave the docker-compose.yml in the root directory of the app, then the deployment script will be some weird combination of both that yml and what is written in the Config in the Azure GUI. Therefore I had to remove that docker-compose.yml from the root folder of the repo.
I have to add SSL to my Express backend because of mixed-content errors from my deployment, which obviously already runs on HTTPS. After creating a subdomain for my API, I successfully created Let's Encrypt certificates via certbot on my production machine over SSH.
The problem I have is that I can't access the certificates from my Express server, because I am building it as a Docker image. I am searching for a fast way to add SSL to my backend. I have found some articles about nginx, but as I am new to Docker, Express, etc., I can't find a good guide that supports my configuration. Because of that I have tried to keep nginx out of the environment; I'm not really sure if this is the way to go.
I am currently stuck at loading the certificates via a volume into the Docker image, so that they are available in my Express config.
Dockerfile
FROM node:14-alpine
ENV NODE_ENV=production SERVER_PORT_HTTP=8080 SERVER_PORT_HTTPS=443
WORKDIR /usr/src/app
# Copy working directory from repository
COPY ["package.json", "package-lock.json*", "./"]
# for reading the ssl certificate created by certbot on host machine
VOLUME [ "/etc/letsencrypt/live/api.example.com" ]
# Needed for next step
COPY prisma ./prisma/
RUN npm install
# Prisma must generate types before starting the server
RUN npm i -g @prisma/cli
RUN prisma generate
COPY . .
EXPOSE ${SERVER_PORT_HTTP} ${SERVER_PORT_HTTPS}
CMD [ "npm", "run", "start" ]
Express app.js
if (process.env.NODE_ENV?.trim() === 'production') {
    const https = require('https');
    const fs = require('fs');

    let privateKey = fs.readFileSync(__dirname + '/privatekey.pem');
    let certificate = fs.readFileSync(__dirname + '/certificate.pem');

    https.createServer({ key: privateKey, cert: certificate }, app).listen(process.env.SERVER_PORT_HTTPS, function () {
        console.log('Listening on port ' + process.env.SERVER_PORT_HTTPS);
    });
} else {
    app.listen(process.env.SERVER_PORT_HTTP, function () {
        console.log('Listening on port ' + process.env.SERVER_PORT_HTTP);
    });
}
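For reference, certbot names the files privkey.pem and fullchain.pem under /etc/letsencrypt/live/<domain>/, so if that directory is mounted into the container, the reads would look something like this (a sketch, assuming the path from the Dockerfile above):

// hypothetical adjustment: read the certbot-generated files from the mounted path
let privateKey = fs.readFileSync('/etc/letsencrypt/live/api.example.com/privkey.pem');
let certificate = fs.readFileSync('/etc/letsencrypt/live/api.example.com/fullchain.pem');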
GitHub Workflow
name: GHCR-DigitalOcean
on:
  push:
    branches:
      - main
    paths:
      - "backend/**"
jobs:
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./backend
    steps:
      # Checkout the Repository
      - name: Checking out the repository
        uses: actions/checkout@v2
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}
      - name: Pushing Image to Github Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          version: latest
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
  deploy_to_digital_ocean_droplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry
    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            docker kill $(docker ps -q -a)
            docker system prune -a
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
            docker pull ghcr.io/${{ github.repository }}:latest
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
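Since the certificates live on the droplet, one option (a sketch, not verified against this setup) is to bind-mount them into the container in the docker run line above. Note that a Dockerfile VOLUME instruction only declares a mount point and does not mount anything from the host by itself, and that certbot's live/ directory holds symlinks into archive/, so mounting all of /etc/letsencrypt avoids broken links:

docker run -d -p 80:8080 -p 443:443 \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  -t ghcr.io/${{ github.repository }}:latest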
So I am new to GitLab Auto DevOps, having switched from Travis and GitHub. The issue I am currently facing is that when I push and the pipeline kicks in, it doesn't see any of the environment variables I set. I set production and testing environment variables for MongoDB and Redis, but during the pipeline it tries to connect to localhost for both, totally ignoring the environment variables set in the CI/CD settings. See pictures below:
Dockerfile
FROM alpine
# NOTE: the FROM line is missing from the original post; a plain alpine base is assumed here, since nodejs/npm are installed via apk below
WORKDIR /app
COPY package*.json ./
RUN apk add --update alpine-sdk nodejs npm python
RUN LD_LIBRARY_PATH=/usr/local/lib64/:$LD_LIBRARY_PATH && export LD_LIBRARY_PATH && npm i
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: "3.7"
services:
backend:
container_name: dash-loan
environment:
MONGODB_PRODUCTION_URI: ${MONGODB_PRODUCTION_URI}
MONGODB_TEST_URI: ${MONGODB_TEST_URI}
REDIS_PRODUCTION_URL: ${REDIS_PRODUCTION_URL}
REDIS_TEST_URL: ${REDIS_TEST_URL}
PM2_SECRET_KEY: ${PM2_SECRET_KEY}
PM2_PUBLIC_KEY: ${PM2_PUBLIC_KEY}
PM2_MACHINE_NAME: ${PM2_MACHINE_NAME}
PORT: ${PORT}
MODE_ENV: ${NODE_ENV}
restart: always
build: .
ports:
- "8080:3000"
links:
- mongodb
- redis
mongodb:
container_name: mongo
environment:
MONGO_INITDB_DATABASE: dashloan
MONGO_INITDB_ROOT_USERNAME: sampleUser
MONGO_INITDB_ROOT_PASSWORD: samplePassword
restart: always
image: mongo
ports:
- "27017-27019:27017-27019"
volumes:
- ./src/database/init-mongo.js:/docker-entrypoint-point.initdb.d/init-mongo.js:ro
- ./mongo-volume:/data/db
redis:
container_name: redis
restart: always
image: redis:5.0
ports:
- "6379:6379"
volumes:
mongo-volume:
.gitlab-ci.yml
image: node:latest
services:
  - mongo:latest
  - redis:latest
cache:
  paths:
    - node_modules/
job:
  script:
    - npm i
    - npm test
I need help making sure the test pipeline uses the environment variables I set, instead of trying to connect to localhost, which fails.
(Screenshots: the error on the GitLab pipeline; the variables in GitLab; GKE, which is running fine.)
You could use a shell runner instead of a docker runner and then just call docker-compose in before_script:
cache:
  paths:
    - node_modules/
job:
  before_script:
    - docker-compose up -d
  script:
    - npm i
    - npm test
  after_script:
    - docker-compose down
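One note on the variables (an assumption, since the runner setup isn't shown): docker-compose substitutes ${MONGODB_PRODUCTION_URI}-style references from the shell environment, and GitLab exports the CI/CD settings variables into the job environment on a shell runner, so the compose file from the question should pick them up. A quick sanity check could be added before starting the stack:

job:
  before_script:
    - env | grep -E 'MONGODB|REDIS' || true   # hypothetical check that the CI/CD variables arrived
    - docker-compose up -d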
I'm using GitLab continuous integration (.gitlab-ci.yml) with Docker and docker-compose to build, test and deploy my Node app, but build and test take a very long time to complete on the GitLab pipeline (in my local Docker setup, builds and tests run smoothly). I think this is not normal behavior for GitLab CI, so I suspect I'm missing something or using a wrong configuration.
Please check the configuration (.gitlab-ci.yml) below and the screenshot of the pipelines at the bottom.
.gitlab-ci.yml
# GitLab CI Docker Image
image: node:6.10.0

# Build - Build necessary JS files
# Test - Run tests
# Deploy - Deploy application to ElasticBeanstalk
stages:
  - build
  - test
  - deploy

# Configuration
variables:
  POSTGRES_DB: f_ci
  POSTGRES_USER: f_user
  POSTGRES_PASSWORD: sucof
services:
  - postgres:latest
cache:
  paths:
    - node_modules/

# Job: Build
# Installs npm packages
# Passes node_modules/ onto next steps using artifacts
build:linux:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
  tags:
    - f-api

# Job: Test
# Run tests against our application
# If this fails, we do not deploy
test:linux:
  stage: test
  variables:
    NODE_ENV: continuous_integration
  script:
    - ./node_modules/.bin/sequelize db:migrate --env=continuous_integration
    - ./node_modules/.bin/sequelize db:seed:all --env=continuous_integration
    - npm test
  tags:
    - f-api

# Job: Deploy
# Staging environment
deploy:staging:aws:
  stage: deploy
  script:
    - apt-get update -qq
    - apt-get -qq install python3 python3-dev
    - curl -O https://bootstrap.pypa.io/get-pip.py
    - python3 get-pip.py
    - pip install awsebcli --upgrade
    - mkdir ~/.aws/
    - touch ~/.aws/credentials
    - printf "[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" >> ~/.aws/credentials
    - touch ~/.aws/config
    - printf "[profile adm-eb-cli]\naws_access_key_id = %s\naws_secret_access_key = %s\n" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" >> ~/.aws/config
    - eb deploy f-api-stg
  environment:
    name: staging
    url: http://f-api-stg.gxnvwbgfma.ap-southeast-1.elasticbeanstalk.com
  tags:
    - f-api
  only:
    - staging

# Job: Deploy
# Production environment
deploy:prod:aws:
  stage: deploy
  script:
    - echo "Deploy to production server"
  environment:
    name: production
    url: http://f-api.gxnvwbgfma.ap-southeast-1.elasticbeanstalk.com
  when: manual
  tags:
    - f-api
  only:
    - master
Dockerfile
FROM node:6.10.0
MAINTAINER Theodore GARSON-CORBEAUX <tgcorbeaux@maltem.com>
# Create app directory
ENV HOME=/usr/src/app
RUN mkdir -p $HOME
WORKDIR $HOME
# Install api dependencies
ADD package.json /usr/src/app/package.json
RUN npm install
ADD . /usr/src/app
EXPOSE 3000
CMD ["npm","start"]
docker-compose.yml
version: "2.1"
services:
db_dev:
image: postgres:latest
ports:
- "49170:5432"
environment:
- POSTGRES_USER=f_user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB=f_dev
healthcheck:
test: "exit 0"
db_test:
image: postgres:latest
ports:
- "49171:5432"
environment:
- POSTGRES_USER=f_user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB=f_test
healthcheck:
test: "exit 0"
app:
build: .
environment:
- NODE_ENV=development
command: "npm start"
ports:
- "49160:3000"
depends_on:
db_dev:
condition: service_healthy
migrate:
build: .
environment:
- NODE_ENV
command: "./node_modules/.bin/sequelize db:migrate --env ${NODE_ENV}"
depends_on:
db_dev:
condition: service_healthy
db_test:
condition: service_healthy
healthcheck:
test: "exit 0"
populate_db:
build: .
environment:
- NODE_ENV
command: "./node_modules/.bin/sequelize db:seed:all --env ${NODE_ENV}"
depends_on:
migrate:
condition: service_healthy
healthcheck:
test: "exit 0"
depopulate_db:
build: .
environment:
- NODE_ENV
command: "./node_modules/.bin/sequelize db:seed:undo:all --env ${NODE_ENV}"
depends_on:
migrate:
condition: service_healthy
healthcheck:
test: "exit 0"
test:
build: .
environment:
- NODE_ENV=test
command: "npm test"
depends_on:
populate_db:
condition: service_healthy