I have to add SSL to my Express backend because of mixed-content errors from my deployment, which already runs on HTTPS. After creating a subdomain for my API, I successfully created Let's Encrypt certificates via certbot on my production machine over SSH.
The problem is that I can't access the certificates from my Express server, because I am building it as a Docker image. I am looking for a fast way to add SSL to my backend. I have found some articles about nginx, but as I am new to Docker, Express etc., I can't find a good guide that supports my configuration. Because of that I have tried to keep nginx out of the environment; I'm not really sure if this is the way to go.
I am currently stuck at loading the certificates via a volume into the Docker container, so that they are available to my Express config.
Dockerfile
FROM node:14-alpine
ENV NODE_ENV=production SERVER_PORT_HTTP=8080 SERVER_PORT_HTTPS=443
WORKDIR /usr/src/app
# Copy working directory from repository
COPY ["package.json", "package-lock.json*", "./"]
# for reading the ssl certificate created by certbot on host machine
VOLUME [ "/etc/letsencrypt/live/api.example.com" ]
# Needed for next step
COPY prisma ./prisma/
RUN npm install
# Prisma must generate types before starting the server
RUN npm i -g @prisma/cli
RUN prisma generate
COPY . .
EXPOSE ${SERVER_PORT_HTTP} ${SERVER_PORT_HTTPS}
CMD [ "npm", "run", "start" ]
Express app.js
if (process.env.NODE_ENV?.trim() === 'production') {
  const https = require('https');
  const fs = require('fs');
  let privateKey = fs.readFileSync(__dirname + '/privatekey.pem');
  let certificate = fs.readFileSync(__dirname + '/certificate.pem');
  https.createServer({ key: privateKey, cert: certificate }, app).listen(process.env.SERVER_PORT_HTTPS, function () {
    console.log('Listening on port ' + process.env.SERVER_PORT_HTTPS);
  });
} else {
  app.listen(process.env.SERVER_PORT_HTTP, function () {
    console.log('Listening on port ' + process.env.SERVER_PORT_HTTP);
  });
}
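For reference, a minimal sketch of how the production branch could read the certificates once they are bind-mounted into the container, reusing the app object from above. The mount path and the privkey.pem/fullchain.pem file names follow certbot's default layout, but the exact paths here are assumptions:

const https = require('https');
const fs = require('fs');
// Assumed mount point inside the container (see the docker run example below).
const certDir = '/etc/letsencrypt/live/api.example.com';
https.createServer({
  // certbot's default output files: the private key and the full certificate chain
  key: fs.readFileSync(certDir + '/privkey.pem'),
  cert: fs.readFileSync(certDir + '/fullchain.pem'),
}, app).listen(process.env.SERVER_PORT_HTTPS);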
GitHub Workflow
name: GHCR-DigitalOcean

on:
  push:
    branches:
      - main
    paths:
      - "backend/**"

jobs:
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./backend
    steps:
      # Checkout the repository
      - name: Checking out the repository
        uses: actions/checkout@v2
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}
      - name: Pushing Image to GitHub Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          version: latest
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

  deploy_to_digital_ocean_droplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry
    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            docker kill $(docker ps -q -a)
            docker system prune -a -f
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
            docker pull ghcr.io/${{ github.repository }}:latest
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
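Note that the VOLUME instruction in the Dockerfile only declares a mount point; it does not pull in the host's certbot files by itself. Below is a minimal sketch of the final deploy command with a read-only bind mount added. It mounts all of /etc/letsencrypt rather than just the live/ subfolder, because the files under live/ are symlinks into ../../archive/ and would dangle if mounted alone:

docker run -d \
  -p 80:8080 -p 443:443 \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  -t ghcr.io/${{ github.repository }}:latest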
Related
Hello, I have a TypeScript server with a build script that looks like:
"build": "rm -rf build && tsc && cp package*.json build && cp Dockerfile build && npm ci --prefix build --production"
This creates a new build directory and copies the Dockerfile into it, so the deployed application should run from the build directory.
I want to automate deployment to Cloud Run using GitHub workflows, so I created a .yaml file, but in the run portion I am confused about how I can build the Docker image and push it from my build directory:
- name: Enable the necessary APIs and enable docker auth
  run: |-
    gcloud services enable containerregistry.googleapis.com
    gcloud services enable run.googleapis.com
    gcloud --quiet auth configure-docker
- name: Build and tag image
  run: |-
    docker build . --tag "gcr.io/$CLOUD_RUN_PROJECT_ID/$REPO_NAME:$GITHUB_SHA"
- name: Push image to GCR
  run: |-
    docker push gcr.io/$CLOUD_RUN_PROJECT_ID/$REPO_NAME:$GITHUB_SHA
My question is: how can I ensure the docker commands run from the build directory?
On the docker build command, replace the . with build/.
Here's a full reference of an example workflow, including the step to deploy the image to Cloud Run.
on:
  push:
    branches:
      - example-build-deploy

name: Build and Deploy a Container

env:
  PROJECT_ID: ${{ secrets.GCP_PROJECT }}
  SERVICE: hello-cloud-run
  REGION: us-central1

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
        with:
          project_id: ${{ env.PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true # Set to true to authenticate the Cloud Run action
      - name: Authorize Docker push
        run: gcloud auth configure-docker
      - name: Build and Push Container
        run: |-
          docker build -t gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }} build/
          docker push gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }}
      - name: Deploy to Cloud Run
        id: deploy
        uses: google-github-actions/deploy-cloudrun@v0
        with:
          service: ${{ env.SERVICE }}
          image: gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }}
          region: ${{ env.REGION }}
      - name: Show Output
        run: echo ${{ steps.deploy.outputs.url }}
You may also check the full GitHub repository sample here.
I am trying to configure Angular with a Docker image and run it with docker-compose.
Here is my Dockerfile:
# Use official node image as the base image
FROM node:16-alpine3.12 as build
# Set the working directory
WORKDIR /usr/local/app
# Add the source code to app
COPY ./ ./
# Install all the dependencies
RUN npm install
# Generate the build of the application
RUN npm run build:prod
# Stage 2: Serve app with nginx server
# Use official nginx image as the base image
FROM nginx:latest
# Copy the build output to replace the default nginx contents.
COPY --from=build /usr/local/app/dist/ /usr/share/nginx/html
#COPY /nginx.conf /etc/nginx/conf.d/default.conf
# Assign permission
# https://stackoverflow.com/questions/49254476/getting-forbidden-error-while-using-nginx-inside-docker
# RUN chown nginx:nginx /usr/share/nginx/html/*
# Expose port 80
EXPOSE 80
EXPOSE 443
It runs correctly if I run docker build and docker run locally. But after pushing to GitHub with the YAML below:
name: Docker Image CI For Web

on:
  push:
    tags:
      - 'web-v*'

jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: edward8520/meekou
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: ./angular/
          file: ./angular/Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
And then run docker-compose with the file below:
version: '3.9'
services:
  meekou-web:
    container_name: 'meekou-web-test'
    image: 'edward8520/meekou:web-v0.0.4'
    ports:
      - "8888:80"
      - "8889:443"
    environment:
      - NODE_ENV=production
    volumes:
      - ./meekou-web/:/usr/share/nginx/html
    expose:
      - "80"
      - "443"
It throws a 403 Forbidden error.
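One thing worth checking here, as a sketch rather than a definitive diagnosis: the volumes entry bind-mounts the host's ./meekou-web/ directory over /usr/share/nginx/html, which hides the build output that the Dockerfile copied into the image. If that host directory is empty or unreadable, nginx has nothing to serve and typically answers 403. The compose file without the mount would look like this:

version: '3.9'
services:
  meekou-web:
    container_name: 'meekou-web-test'
    image: 'edward8520/meekou:web-v0.0.4'
    ports:
      - "8888:80"
      - "8889:443"
    environment:
      - NODE_ENV=production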
I have a Nuxt.js application on my Ubuntu server. I use my terminal to enter the server and run commands on it. I can't run the command "npm run build" because my VPS has low memory and the build command freezes. So I decided to build on my PC, copy the built folder to the VPS, and then run the application.
What should I write in GitHub Actions to perform these steps (see the sketch after this list)?
- npm run build
- copy the built folder from my machine to the VPS using SSH (or a password, it doesn't matter) into a specific folder on the VPS
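A minimal sketch of such a workflow, building on the Actions runner and copying the output over SSH. It assumes appleboy/scp-action for the copy step, dist as the build output folder, and placeholder secret names and target path:

name: Build and Copy to VPS
on:
  push:
    branches: [ master ]
jobs:
  build-and-copy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: '14.x'
      - run: npm ci
      - run: npm run build
      # Copy the build output to the VPS over SSH.
      - uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          source: "dist/"
          target: "/home/user/webapps/myapp"  # assumed target path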
What I did last time, and it worked for me:

name: Deployment Setup
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  job-one:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Testing VPS connection and deploy project
        uses: appleboy/ssh-action@master
        with:
          host: 114.12.587.105
          port: 1234
          username: new-user
          key: ${{ secrets.PRIVATE_KEY }}
          script: |
            cd /home/kentforth/webapps/myapp
            git pull
            npm install --production
            quasar build
            sudo service nginx restart
EDIT:
Here is my deploy.yml file:
name: 'test my project'
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x, 14.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm run build
Here is what I get in GitHub Actions: it seems GitHub Actions tries to find the path /home/runner/work/my project name, but I do not have such a directory.
How can I specify the folder on my local machine where the "npm run build" command should run?
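For reference: GitHub Actions always runs against the checkout on its own runner (/home/runner/work/...), not against a folder on your local machine. The directory a run step executes in can be set with working-directory — a minimal sketch, assuming the app lives in a hypothetical frontend subfolder of the repository:

steps:
  - uses: actions/checkout@v2
  - name: Build inside a subfolder of the checkout
    run: npm run build
    working-directory: ./frontend  # assumed subfolder name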
I created a YAML action to deploy the React app to an S3 bucket:
name: Production Build
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-s3-bucket: ${{ secrets.AWS_PRODUCTION_BUCKET_NAME }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
        env:
          JWT_SECRET: ${{ secrets.JWT_SECRET }}
          REACT_APP_AMPLITUDE: ${{ secrets.REACT_APP_AMPLITUDE }}
      - name: Build React App
        run: npm install && npm run build
      - name: Deploy app build to S3 bucket
        run: aws s3 sync ./dist/ s3://firmconnect.io --acl public-read --follow-symlinks --delete --exclude '.git/*'
npm run build executes npm run client:build && npm run server:build in my package.json.
npm run client:build executes "webpack --config webpack.client.build.config.js".
npm run server:build executes "webpack --config webpack.server.build.config.js".
I see that both build config files create a client and a server folder, but this must not be the right way to build the production application, because no index.html file is being created, nor any assets. How do you properly deploy a React application using webpack to Amazon S3?
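On the missing index.html: webpack does not emit one by default; client builds usually add html-webpack-plugin for that. A minimal sketch of what the client config might include — the entry path, output folder, and template name are assumptions, not the asker's actual config:

// webpack.client.build.config.js (sketch)
const HtmlWebpackPlugin = require('html-webpack-plugin');
const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/client/index.js', // assumed entry point
  output: {
    path: path.resolve(__dirname, 'client'),
    filename: '[name].[contenthash].js',
  },
  plugins: [
    // Emits client/index.html with the bundle's script tags injected.
    new HtmlWebpackPlugin({ template: './src/client/index.html' }), // assumed template
  ],
};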
In my CI I need to make sure that my code works with MongoDB, so I'm using the official mongo Docker image and passing my desired credentials as environment variables for it.
build-on-mongo:
  runs-on: ${{ matrix.os }}
  env:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: password
    MONGO_INITDB_DATABASE: vote-stuysu-org
  services:
    mongo:
      image: mongo
      ports:
        # Maps tcp port 27017 on service container to the host
        - 27017:27017
  strategy:
    matrix:
      os: [ubuntu-20.04]
  steps:
    - uses: actions/checkout@v2
    - uses: actions/setup-node@v1
      with:
        node-version: '14.x'
    - name: Install dependencies
      run: npm install
    - name: Test
      run: npm test
      env:
        MONGO_URL: mongodb://root:password@localhost:27017/vote-stuysu-org
However, in the test step, an error gets logged saying:
(node:2158) UnhandledPromiseRejectionWarning: MongoError: Authentication failed.
I'm using Mongoose alongside Node.js, so this is the code responsible for authentication:
mongoose.connect(process.env.MONGO_URL, {
  useUnifiedTopology: true,
  useNewUrlParser: true
});
I don't think my connection URI is wrong, but I'm not sure why the authentication is failing.
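One detail worth checking, since it commonly causes exactly this error: in GitHub Actions, env set at the job level applies to the steps, not to service containers, so the MONGO_INITDB_* variables above are likely never reaching the mongo image and the root user is never created. They would need to be set on the service itself, roughly like this:

services:
  mongo:
    image: mongo
    # Service containers take their environment here, not from the job's env.
    env:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
      MONGO_INITDB_DATABASE: vote-stuysu-org
    ports:
      - 27017:27017

The connection string may then also need ?authSource=admin appended, since the root user is created in the admin database rather than in vote-stuysu-org.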