ASP.NET Unable to configure HTTPS endpoint - Docker & Linux

I'm having an issue with my Docker images. A month ago they worked well, but I made a minor change (an HTML edit on a minor page) and tried to rebuild a new Docker image.
But when I deploy the new image, I get the following error message:
System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found or is out of date.
To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.
at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, Action`1 configureOptions)
at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IEnumerable`1 listenOptions, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.BindAsync(CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
Here is a summary of the setup:
I tried to build the Docker image manually, and also automatically with my CI/CD (Azure DevOps). Both produce the same error.
I checked for any change in the Git history... nothing.
Here is the Dockerfile I use:
### >>> GLOBALS
ARG ENVIRONMENT="Production"
ARG PROJECT="SmartPixel.SoCloze.Web"
# debian buster - AMD64
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
### >>> IMPORTS
ARG ENVIRONMENT
ARG PROJECT
ARG NUGET_CACHE=https://api.nuget.org/v3/index.json
ARG NUGET_FEED=https://api.nuget.org/v3/index.json
# Copy sources
COPY src/ /app/src
ADD common.props /app
WORKDIR /app
RUN apt-get update
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN npm install /app/src/SmartPixel.Core.Blazor/
# Installs the required dependencies on top of the base image
# Publish a self-contained image
RUN apt-get update && apt-get install -y libgdiplus libc6-dev && dotnet dev-certs https --clean;\
dotnet dev-certs https && dotnet dev-certs https --trust;\
dotnet publish --self-contained --runtime linux-x64 -c Debug -o out src/${PROJECT};
# Execute
# Start a new image from the .NET SDK image
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS runtime
### >>> IMPORTS
ARG ENVIRONMENT
ARG PROJECT
ENV ASPNETCORE_ENVIRONMENT=${ENVIRONMENT}
ENV ASPNETCORE_URLS="http://+:80;https://+:443;https://+:44390"
ENV PROJECT="${PROJECT}.dll"
# Make logs a volume for persistence
VOLUME /app/Logs
# App directory
WORKDIR /app
# Copy our build from the previous stage in /app
COPY --from=build /app/out ./
RUN apt-get update && apt-get install -y ffmpeg libgdiplus libc6-dev
# Ports
EXPOSE 80
EXPOSE 443
EXPOSE 44390
# Execute
ENTRYPOINT dotnet ${PROJECT}
What is strange is that old images (more than a month old) all work, but the rebuilt ones do not.
Here is the Docker Compose file:
version: '3.3'
services:
  web:
    image: registry.gitlab.com/mycorp/socloze.web:1.1.1040
    volumes:
      - keys-vol:/root/.aspnet
      - logs-vol:/app/Logs
      - sitemap-vol:/data/sitemap/
    networks:
      - haproxy-net
      - socloze-net
    configs:
      - source: socloze-web-conf
        target: /app/appsettings.json
    logging:
      driver: json-file
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  haproxy-net:
    external: true
  socloze-net:
    external: true
volumes:
  keys-vol:
    driver: local
    driver_opts:
      device: /data/socloze/web/keys
      o: bind
      type: none
  logs-vol:
    driver: local
    driver_opts:
      device: /data/socloze/web/logs
      o: bind
      type: none
  sitemap-vol:
    driver: local
    driver_opts:
      device: /data/sitemap
      o: bind
      type: none
configs:
  socloze-web-conf:
    external: true
What can be the cause if:
old images are working perfectly
new images are producing this error
no code change, no change in the Dockerfile
the host OS is Debian; the Docker images are Debian-based (buster)
Do you have any idea? I've been searching for weeks for a solution!

You need to add a couple more environment variables, and you might also have to mount a certificate volume:
Environment Variables and their values:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=https://+:443;http://+:80
- ASPNETCORE_Kestrel__Certificates__Default__Password=password
- ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
and volumes:
- ~/.aspnet/https:/https:ro
Depending on how you plan to add these environment variables and mount the volumes (e.g. with docker run or through docker-compose), you will have to play with adding double quotes to the parameter lists in the right spots.
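For reference, here is a minimal sketch of how these settings could fit together in a compose file, assuming you first export a dev certificate on the host (the password and certificate path are the placeholder values from this answer; the image name is taken from the question):

# On the host: export a dev certificate to ~/.aspnet/https
dotnet dev-certs https -ep ${HOME}/.aspnet/https/aspnetapp.pfx -p password

# docker-compose.yml (relevant parts only)
services:
  web:
    image: registry.gitlab.com/mycorp/socloze.web:1.1.1040
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ~/.aspnet/https:/https:ro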

Related

How to connect to docker daemon using docker in docker without privilege mode

I'm new to Bitbucket Pipelines and am trying to use it the way I use GitLab CI.
I've now run into an issue while trying to build Docker-in-Docker containers using dind.
error during connect: Post "http://docker:2375/v1.24/auth": dial tcp: lookup docker on 100.100.2.136:53: no such host
The above error appeared, and from my research I believe it comes from the Docker daemon.
Atlassian has stated there is no intention to support privileged mode, so I'm looking for other options.
bitbucket-pipelines.yml
definitions:
  services:
    docker: # can only be used with a self-hosted runner
      image: docker:23.0.0-dind
pipelines:
  default:
    - step:
        name: 'Login'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - echo $ACR_REGISTRY_PASSWORD | docker login -u $ACR_REGISTRY_USERNAME registry-intl.ap-southeast-1.aliyuncs.com --password-stdin
    - step:
        name: 'Build'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - docker build -t $ACR_REGISTRY:latest .
          - docker tag $(docker images | awk '{print $1}' | awk 'NR==2') $ACR_REGISTRY:$CI_PIPELINE_ID
          - docker push $ACR_REGISTRY:$CI_PIPELINE_ID
    - step:
Dockerfile
FROM node:14.17.0
RUN mkdir /app
# Working directory
WORKDIR /app
# Copy package.json
COPY ["package.json","./"]
# Expose port 80
EXPOSE 80
# Install git
RUN npm install git
# Install dependencies
RUN npm install
# Copy the remaining source code
COPY . .
# Pull the database schema with Prisma
RUN npx prisma db pull
# Install the Prisma client
RUN npm i @prisma/client
# Build
RUN npm run build
CMD [ "npm","run","dev","node","build/server.ts"]

How to dockerize aspnet core application and postgres sql with docker compose

I'm in the process of integrating a Dockerfile into my previous sample project so that everything is automated for easy code sharing and execution. I have a dockerizing problem and have tried to solve it, but to no avail. I hope someone can help. Thank you. Here is my problem:
My repository: https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate
Branch for dockerizing (my problem is on macOS):
https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate/pull/1
Dockerfile:
# syntax=docker/dockerfile:1
FROM node:16.11.1
FROM mcr.microsoft.com/dotnet/sdk:5.0
RUN apt-get update && \
apt-get install -y wget && \
apt-get install -y gnupg2 && \
wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y build-essential nodejs
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
RUN dotnet tool restore
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Docker compose:
version: "3.9"
services:
web:
container_name: backendnet5
build: .
ports:
- "5005:5000"
depends_on:
- database
database:
container_name: postgres
image: postgres:latest
ports:
- "5433:5433"
environment:
- POSTGRES_PASSWORD=admin
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
Commands:
docker-compose build
docker compose up
Problems:
I guess the problem is not being able to run the command dotnet ef database update for my migrations. Many thanks for any help.
In your appsettings.json file, you say that the database hostname is 'localhost'. In a container, localhost means the container itself.
Docker Compose creates a bridge network where you can address each container by its service name.
Your connection string is
User ID=postgres;Password=admin;Host=localhost;Port=5432;Database=sample_db;Pooling=true;
but should be
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;
You also map port 5433 on the database container to the host, but Postgres listens on port 5432. If you want to expose it on host port 5433, the mapping in the docker compose file should be 5433:5432. This is not what's causing your issue, though; it just prevents you from connecting to the database from the host, if you need to do that.
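Putting both fixes together, the database service and the connection string would look roughly like this (a sketch based on the files above):

# docker-compose.yml (database service)
  database:
    container_name: postgres
    image: postgres:latest
    ports:
      - "5433:5432"   # host port 5433 -> container port 5432
    environment:
      - POSTGRES_PASSWORD=admin
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

# appsettings.json connection string (service name as hostname)
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;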

Docker / docker-compose workflow: angular changes not being reflected

When I make changes to my app source code and rebuild my docker images, the changes are not being reflected in the updated containers. I have:
Checked that the changes are being pulled to the remote machine correctly
Cleared the browser cache and double checked with different browsers
Checked that the development build files are not being pulled onto the remote machine by mistake
Banged my head against a number of nearby walls
Every time I pull new code from the repo or make a local change, I do the following in order to do a fresh rebuild:
sudo docker ps -a
sudo docker rm <container-id>
sudo docker image prune -a
sudo docker-compose build --no-cache
sudo docker-compose up -d
But despite all that, the changes do not make it through. I simply don't know how it isn't working, as the output during the build appears to be using the local files. Where can it be getting the old files from? I've checked and double-checked that the local source has changed.
Docker-compose:
version: '3'
services:
  angular:
    build: angular
    depends_on:
      - nodejs
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/usr/share/nginx/html
      - ./dhparam:/etc/ssl/certs
      - ./nginx-conf/prod:/etc/nginx/conf.d
    networks:
      - app-net
  nodejs:
    build: nodejs
    ports:
      - "8080:8080"
volumes:
  certbot-etc:
  certbot-var:
  web-root:
Angular Dockerfile:
FROM node:14.2.0-alpine AS build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/
RUN apk update && apk add --no-cache bash git
RUN npm install
COPY . /app
RUN ng build --outputPath=./dist --configuration=production
### prod ###
FROM nginx:1.17.10-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I found it. I had followed a tutorial to get HTTPS working, and that's where the named volumes came in. It's a two-step process, and it needed all those named volumes for the first step, but the web-root volume is what was screwing things up; deleting it solved my problem. At least I understand Docker volumes better now...
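In other words, the fix amounts to dropping the web-root mount from the angular service, so nginx serves the files baked into the image by COPY --from=build instead of stale volume contents (a sketch of the relevant part):

  angular:
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      # web-root:/usr/share/nginx/html removed - it was shadowing the freshly built files
      - ./dhparam:/etc/ssl/certs
      - ./nginx-conf/prod:/etc/nginx/conf.d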

Azure Container generates error '503' reason 'site unavailable'

The issue
I built a docker image on my local machine running Windows 10
Running docker-compose build, docker-compose up -d and docker-compose logs -f generates the expected results (no errors)
The app runs correctly when started with winpty docker container run -i -t -p 8000:8000 --rm altf1be.plotly.docker-compose:2019-12-17
I uploaded the docker image to a private Azure Container Registry
I deployed the web app based on the docker image: Azure Portal > Container registry > Repositories > altf1be.plotly.docker-compose > v2019-12-17 > context menu > Deploy to web app
I run the web app and get The service is unavailable
What is wrong with my method?
Thank you in advance for the time you invest in this issue.
docker-compose.yml
version: '3.7'
services:
  twikey-plot_ly_service:
    # container_name: altf1be.plotly.docker-container-name
    build: .
    image: altf1be.plotly.docker-compose:2019-12-17
    command: gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
    ports:
      - 8000:8000
    env_file: .env.staging
.env.staging
apiUrl=https://api.beta.alt-f1.be
authorizationUrl=/api/auth/authorization/code
serverUrl=https://dunningcashflow-api.alt-f1.be
transactionFeedUrl=/creditor/tx
api_token=ANICETOKEN
Dockerfile
# read the Dockerfile reference documentation
# https://docs.docker.com/engine/reference/builder
# build the docker
# docker build -t altf1be.plotly.docker-compose:2019-12-17 .
# https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image#use-a-docker-image-from-any-private-registry-optional
# Use the docker images used by Microsoft on Azure
FROM mcr.microsoft.com/oryx/python:3.7-20190712.5
LABEL Name=altf1.be/plotly Version=1.19.0
LABEL maintainer="abo+plotly_docker_staging@alt-f1.be"
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
# copy the code from the local drive to the docker
ADD . /code/
# non interactive front-end
ARG DEBIAN_FRONTEND=noninteractive
# update the software repository
ENV SSH_PASSWD 'root:!astrongpassword!'
RUN apt-get update && apt-get install -y \
apt-utils \
# enable SSH
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
RUN chmod u+x /code/init_container.sh
# update the python packages and libraries
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --upgrade wheel
RUN pip3 install -r requirements.txt
# copy sshd_config file. See https://man.openbsd.org/sshd_config
COPY sshd_config /etc/ssh/
EXPOSE 8000 2222
ENV PORT 8000
ENV SSH_PORT 2222
# install dependencies
ENV ACCEPT_EULA=Y
ENV APPENGINE_INSTANCE_CLASS=F2
ENV apiUrl=https://api.beta.alt-f1.be
ENV serverUrl=https://dunningcashflow-api.alt-f1.be
ENV DOCKER_REGISTRY altf1be.azurecr.io
ENTRYPOINT ["/code/init_container.sh"]
/code/init_container.sh
gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
app/conf/gunicorn.conf.docker.staging.py
# -*- coding: utf-8 -*-
workers = 1
# print("workers: {}".format(workers))
bind = '0.0.0.0'
timeout = 600
log_level = "debug"
reload = True
print(
f"workers={workers} bind={bind} timeout={timeout} --log-level={log_level} --reload={reload}"
)
Screenshots in the original post show: the container settings, the application settings, the web app returning 'the service is unavailable', Kudu returning 'the service is unavailable', and Kudu HTTP pings on port 8000 (the app was not running).
ERROR - Container altf1be-plotly-docker_0_ee297002 for site altf1be-plotly-docker has exited, failing site start
ERROR - Container altf1be-plotly-docker_0_ee297002 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
From what I can see, you are using the wrong environment variable; it should be WEBSITES_PORT. You missed the s after WEBSITE. Add it and try to deploy the image again.
Also, Azure Web App will not set the environment variables for you the way your docker-compose file does with the env_file option. So I suggest you build the image with docker build and test it locally, setting the environment variables through -e. When the image runs well, push it to ACR and deploy it from ACR, again setting the environment variables in the Web App.
Or you can still use a docker-compose file to deploy your image to the Web App: replace env_file and build with environment and image once the image is in ACR.
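A sketch of that workflow, using the image tag from the question (the resource group and app name are placeholders):

# Build and test locally, passing the variables from .env.staging via -e
docker build -t altf1be.plotly.docker-compose:2019-12-17 .
docker run -it --rm -p 8000:8000 \
  -e apiUrl=https://api.beta.alt-f1.be \
  -e serverUrl=https://dunningcashflow-api.alt-f1.be \
  altf1be.plotly.docker-compose:2019-12-17

# After deploying from ACR, tell App Service which port the container listens on
az webapp config appsettings set \
  --resource-group <resource-group> --name <app-name> \
  --settings WEBSITES_PORT=8000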

Docker x NodeJS - Issue with node_modules

I'm a web developer currently working on a Next.js project (a framework for server-side rendering React). I'm using a Docker config on this project, and I discovered an issue when I add or remove dependencies. When I add a dependency, build my project, and bring it up with docker-compose, the new dependency isn't added to my Docker image. I have to clean my Docker system with docker system prune to reset everything; then I can build and bring up my project, and the dependency finally appears in my Docker container.
I use Dockerfile to configure my image and different docker-compose files to set different configurations depending on my environments. Here is my configuration:
Dockerfile
FROM node:10.13.0-alpine
# SET environment variables
ENV NODE_VERSION 10.13.0
ENV YARN_VERSION 1.12.3
# Install Yarn
RUN apk add --no-cache --virtual .build-deps-yarn curl \
&& curl -fSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" \
&& tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ \
&& ln -snf /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn \
&& ln -snf /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg \
&& rm yarn-v$YARN_VERSION.tar.gz \
&& apk del .build-deps-yarn
# Create app directory
RUN mkdir /website
WORKDIR /website
ADD package*.json /website
# Install app dependencies
RUN yarn install
# Build source files
COPY . /website/
RUN yarn run build
docker-compose.yml (dev env)
version: "3"
services:
app:
container_name: website
build:
context: .
ports:
- "3000:3000"
- "3332:3332"
- "9229:9229"
volumes:
- /website/node_modules/
- .:/website
command: yarn run dev 0.0.0.0 3000
environment:
SERVER_URL: https://XXXXXXX.com
Here are the commands I use to run my Docker environment:
docker-compose build --no-cache
docker-compose up
I suppose something is wrong in my Docker configuration, but I can't catch it. Do you have any ideas?
Thanks!
Your volumes are currently not set up to do what you intend. The setup below means that you are overriding the contents of the /website directory in the container with your local . directory.
volumes:
  - /website/node_modules/
  - .:/website
I'm sure your intention is to map your local directory into the container first, and then override node_modules with the original contents of the image's node_modules directory, i.e. /website/node_modules/.
Changing the order of your volumes like below should solve the issue.
volumes:
  - .:/website
  - /website/node_modules/
You are explicitly telling Docker you want this behavior. When you say:
volumes:
  - /website/node_modules/
You are telling Docker you don't want to use the node_modules directory that's baked into the image. Instead, it should create an anonymous volume to hold the node_modules directory (which has some special behavior on its first use) and persist the data there, even if other characteristics like the underlying image change.
That means if you change your package.json and rebuild the image, Docker will keep using the volume version of your node_modules directory. (Similarly, the bind mount of .:/website means everything else in the last half of your Dockerfile is essentially ignored.)
I would remove the volumes: block in this setup to respect the program that's being built in the image. (I'd also suggest moving the command: to a CMD line in the Dockerfile.) Develop and test your application without using Docker, and build and deploy an image once it's essentially working, but not before.
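A sketch of that suggestion, with the command moved into the Dockerfile and the volumes block removed (extra ports trimmed for brevity):

# Dockerfile (final lines)
COPY . /website/
RUN yarn run build
CMD ["yarn", "run", "dev", "0.0.0.0", "3000"]

# docker-compose.yml
version: "3"
services:
  app:
    container_name: website
    build:
      context: .
    ports:
      - "3000:3000"
    environment:
      SERVER_URL: https://XXXXXXX.com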
