Dockerfile to run a .NET Core console app on Linux (not self-contained)

I have a .NET Core 2.1 console application and I want to build it into a Docker container that will be deployed to Linux (Alpine).
If this app produced an .exe on Windows (and was self-contained, of course), my Dockerfile might look like this:
COPY /output/ /bin/
CMD ["/bin/sample.exe"]
As I want to make this a portable app (and the .NET Core runtime is already on the Linux box), should my Dockerfile look like this:
FROM microsoft/dotnet:2.1-runtime
COPY /output/ /bin/
CMD ["dotnet", "/bin/sample.dll"]
Thanks for any advice in advance!

Yes. Microsoft has a dotnet Docker samples repository, and their Dockerfile looks like the following:
FROM microsoft/dotnet:2.1-runtime
COPY /output /bin/
ENTRYPOINT ["dotnet", "/bin/dotnetapp.dll"]
They also have an Alpine-based example of a Dockerfile.
For more information, see the Docker documentation on ENTRYPOINT vs. CMD.
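Putting those pieces together, a minimal Alpine-based Dockerfile for this kind of framework-dependent deployment could look like the following sketch (it assumes the published output is in ./output and the entry assembly is named sample.dll, matching the question):

```dockerfile
# Alpine variant of the 2.1 runtime image; a framework-dependent app
# only needs the runtime, not the SDK
FROM microsoft/dotnet:2.1-runtime-alpine
WORKDIR /app
# Copy the published output (path assumed from the question)
COPY /output/ .
# Launch the portable app via the dotnet host
ENTRYPOINT ["dotnet", "sample.dll"]
```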

Related

How to develop node.js apps with docker on Windows?

I am developing a Node.js app using Windows 10 WSL with a remote container in Visual Studio Code.
What are the best practices for Dockerfile and docker-compose.yml at this time?
Since we are in the development phase, we don't want to COPY or ADD the program source code in the Dockerfile (it's not practical to recreate the image every time we change one line).
I use Docker Compose to bind-mount the folder containing the source code on the Windows side as a volume, but in that case the source code folder and all the files seen from the Docker container are owned by root.
In the Docker container, Node.js runs as the non-root node user.
For the above reasons, Node.js does not have write permission to the folders you bind.
Please let me know how to solve this problem.
I found a way to specify a UID or GID, but I could not apply it because I am binding from Windows.
You can optionally mount the Node code using NFS in Docker Compose.
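As a sketch of that approach (the server address and export path below are placeholder assumptions, not from the question): define a named volume backed by NFS in docker-compose.yml and mount it into the service, so file ownership is governed by the NFS export options rather than by the Windows bind-mount:

```yaml
version: "3.8"
services:
  app:
    image: node:10
    working_dir: /app
    command: yarn dev
    volumes:
      - code:/app

volumes:
  code:
    driver: local
    driver_opts:
      type: nfs
      # addr and the export path are placeholders for your NFS server
      o: "addr=192.168.1.10,rw,nolock,soft"
      device: ":/exported/src"
```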

GitHub Actions: How to use Docker image from GitHub registry in a Docker action?

I recently created my first Docker container action, and it works perfectly as intended. The Dockerfile for this action uses python:3.9-slim. It looks like this:
FROM python:3.9-slim
COPY repo ./repo
COPY scripts ./scripts
COPY requirements.txt setup.py ./
COPY entrypoint.sh /entrypoint.sh
RUN python setup.py develop
ENTRYPOINT ["/entrypoint.sh"]
I'm now thinking of publishing the Docker image from the above Dockerfile on Docker/GitHub registry and using it in the Docker action. Essentially the idea is to simplify the image build process, so the action can run faster.
I have a couple of questions related to doing this, #1 is the question in the title:
I found this page which explains how to have a workflow to publish the Docker image. If I publish to the GitHub registry, how can I go about using it? For images on Docker Hub, that seems straightforward.
With this Docker container action, I want to make sure that it uses the version specified in uses: username/action-repo@v???. Thus, I think it would make sense to have the FROM in this new Dockerfile pinned to the specific image tag that will be used. Is that the best way to go about it?
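For question 1, one approach (a sketch, assuming the image is published publicly to the GitHub Container Registry under a hypothetical ghcr.io/username/action-repo name) is to drop the Dockerfile from the action entirely and point action.yml at the pre-built image with the docker:// prefix:

```yaml
# action.yml -- hypothetical names; the tag pins the image version
name: 'My Container Action'
description: 'Runs the pre-built container image'
runs:
  using: 'docker'
  image: 'docker://ghcr.io/username/action-repo:v1.0.0'
```

Because the tag is baked into action.yml, each tagged release of the action repository can reference the matching image version, which is one way to address question 2.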

Cannot run an instance of a docker image on azure container registry

I have created a simple ASP.NET Core web application using the Visual Studio templates. I have then created a Dockerfile which looks like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "WebApplication.dll"]
I have then built an image out of this using:
docker build -t webapplication:dev .
and then created and run a container from this using:
docker run -d -p 8080:80 --name myapp webapplication:dev
This all works locally. I have then pushed this to Azure Container Registry, and the push succeeded. However, when I try to run an instance of this container, I get an error in Azure saying "The supported Windows versions are: '10.0.14393,10.0.17763'".
I don't understand why I am getting this error. The image works locally (I can browse to localhost:8080 and get a valid response back). Is this something to do with ACR? What is the workaround? Is this something to do with my Windows version (I am on 10.0.18363 Build 18363)? Is my system too new for this to work, which seems unlikely? Has anyone seen this?
For Windows containers, the version of the OS on the host must match the version of the OS in the container. You should choose an appropriate base image so that your container is able to run on Azure.
https://samcogan.com/windows-containers-and-azure states:
OS Version
Due to limitations in how Windows implements the container run time, you require that the host machine is running the same version of Windows as in your container. I don't just mean the same family, but the same version. So if your host is running Server 2016 v1803, then your containers also need to run Server 2016 v1803, they cannot be running v1709. This is very problematic as it means rebuilding your images any time your host OS changes. It's even more problematic for those trying to use cloud providers, as often you won't know what OS is running on the machines you are hosted on.
You can combat this issue by running your containers as HyperV containers, which wraps your container in a thin virtual machine to abstract it from the OS, but this adds complexity.
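Before pushing, it can also be worth confirming which OS the image actually targets (a sketch using the image name from the question; requires a local Docker daemon):

```shell
# Print the OS/architecture recorded in the image config.
# A Linux image reports "linux", in which case the Windows
# version-matching constraint above does not apply.
docker image inspect --format '{{.Os}}/{{.Architecture}}' webapplication:dev
```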

Docker - accessing files inside container from host

I am new to docker.
I ran a node:10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it in an IDE running on the Windows host. I also want changes I make in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks,
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container, it will be a regular folder in which it can create files; for you, it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately.
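On a Linux or macOS host, one common way around those permission issues (a sketch; the image, paths, and entry file are illustrative) is to run the container as your host user so files created in the mount keep your ownership:

```shell
# $(id -u)/$(id -g) are your host UID/GID, so files written to the
# mounted directory are owned by you rather than by root
docker run --rm -u "$(id -u):$(id -g)" \
  -v /a/local/dir:/a/dir/in/your/container \
  -w /a/dir/in/your/container \
  node:10 node index.js
```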
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
However, it sounds to me like you are using docker for development, so you should mount a volume instead, that is, you mount a directory on the host as a volume in docker, so anything written to that directory will show up in the container, and vice versa.
For instance if you have the code base that you develop against in C:\src on your windows machine, then you run docker like docker run -v c:\src:/app where /app is the location that node is looking in. However, for Windows there are a few things to consider since Docker is not native in Windows, so have a look at the documentation first.
I think you should mount a volume for the source code and edit your code from your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here Docker will start a container from the node:10 image with the working directory set to /app, mount the current directory to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.

How do I build an application with a docker image?

I have a Docker image that I am trying to use to cross-compile an application for Windows. However, whenever I enter a container from the image, it does not show my filesystem, so I cannot reach my source code.
How do I build with a docker image? Or am I missing something?
If I understand right, the image contains your development environment, and you only need a way for the container to see your code on the host machine at runtime. The answer is in the question then.
Just start your container with the source directory mounted:
docker run --rm -it -v "%my_src_dir%":/workspace centos:6.6 /bin/sh
Then, inside the container, cd /workspace to continue development.
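Once that works, the same mount can be used non-interactively to run the build in one shot (a sketch; make is a placeholder for whatever build command the image provides):

```shell
# Build directly instead of dropping into a shell first
docker run --rm -v "%my_src_dir%":/workspace -w /workspace centos:6.6 make
```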
