I have created a simple ASP.NET Core web application using the Visual Studio templates. I have then created a Dockerfile which looks like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "WebApplication.dll"]
I have then built an image out of this using:
docker build -t webapplication:dev .
and then created and ran a container from it using:
docker run -d -p 8080:80 --name myapp webapplication:dev
This all works locally. I have then pushed the image to Azure Container Registry, and it was pushed successfully. However, when I try to run an instance of this container, I get an error in Azure saying "The supported Windows versions are: '10.0.14393,10.0.17763'"
I don't understand why I am getting this error. The image works locally (I can browse to localhost:8080 and I get a valid response back). Is this something to do with ACR? What is the workaround? Is this something to do with my Windows version (I am on 10.0.18363 Build 18363)? Is my system too new for this to work, which seems unlikely? Has anyone seen this?
For Windows containers, the version of the OS on the host must match the version of the OS in the container. You should choose an appropriate base image so that your container is able to run on Azure.
https://samcogan.com/windows-containers-and-azure states:
OS Version
Due to limitations in how Windows implements the container runtime, the host machine must be running the same version of Windows as the container. I don't just mean the same family, but the same version. So if your host is running Server 2016 v1803, then your containers also need to run Server 2016 v1803; they cannot be running v1709. This is very problematic, as it means rebuilding your images any time your host OS changes. It's even more problematic for those trying to use cloud providers, as often you won't know what OS is running on the machines you are hosted on.
You can combat this issue by running your containers as Hyper-V containers, which wraps your container in a thin virtual machine to abstract it from the host OS, but this adds complexity.
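For example, if you are deliberately building a Windows container, one possible fix is to pin both stages of the Dockerfile to a Windows release that Azure lists as supported, e.g. 1809 (10.0.17763). This is only a sketch; it assumes the nanoserver-1809 tags exist for your .NET Core version:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1809 AS build-env
# ... restore, copy and publish steps unchanged ...
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1809
# ... runtime steps unchanged ...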
Related
I am developing a Node.js app using Windows 10 WSL with Remote - Containers in Visual Studio Code.
What are the best practices for Dockerfile and docker-compose.yml at this time?
Since we are in the development phase, we don't want to COPY or ADD the program source code in the Dockerfile (it's not practical to recreate the image every time we change one line).
I use Docker Compose to bind-mount the folder with the source code on the Windows side as a volume, but in that case, the source code folder and the files seen from the Docker container all have root ownership.
In the Docker container, Node.js runs as the non-root node user.
For the above reasons, Node.js does not have write permission to the folders I bind.
Please let me know how to solve this problem.
I found a way to specify a UID or GID, but I could not do so because I am binding from Windows.
You can optionally mount the Node code using NFS in Docker Compose.
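A minimal docker-compose sketch of that idea (the NFS server address and export path below are placeholders, not values from the question):
version: "3.7"
services:
  app:
    image: node:10
    volumes:
      - nodesrc:/app
volumes:
  nodesrc:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,rw,nfsvers=4
      device: ":/exports/app-src"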
I am new to Docker.
I ran a node:10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it up in an IDE running on the Windows host. Once that is done, I also want changes I make to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks,
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container, it will be a regular folder, and it will create files in it. For you, it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note though that you can run into permission issues that you will need to figure out separately.
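For the node:10 scenario in the question, a minimal sketch might look like this (the host path and port are assumptions about your setup):
docker run -it -v C:\path\to\your\code:/usr/src/app -w /usr/src/app -p 3000:3000 node:10 bash
Anything you edit under C:\path\to\your\code in your IDE is then immediately visible to the file watcher running against /usr/src/app inside the container.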
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
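For illustration, copying in either direction looks like this (the container name and paths are made up for the example):
docker cp mycontainer:/app/src ./src-from-container
docker cp ./src mycontainer:/app/src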
However, it sounds to me like you are using Docker for development, so you should mount a volume instead: you mount a directory on the host as a volume in the container, so anything written to that directory shows up in the container, and vice versa.
For instance, if the code base you develop against is in C:\src on your Windows machine, then you run Docker like docker run -v c:\src:/app, where /app is the location that Node is looking in. However, on Windows there are a few things to consider, since Docker is not native to Windows, so have a look at the documentation first.
I think you should mount a volume for the source code and edit your code from your IDE normally:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here Docker will start a container from the node:10 image, set the working dir to /app, mount the current dir to /app, and run yarn dev at startup as the node user (a non-root user). A docker-compose equivalent is sketched below.
Hope this is helpful.
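If you prefer docker-compose, a rough equivalent (the service name and paths are assumptions) would be:
services:
  app:
    image: node:10
    user: node
    working_dir: /app
    volumes:
      - .:/app
    command: yarn dev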
I have a .NET Core 2.1 console application and I want to build it into a Docker container that will be deployed to Linux (Alpine).
If this had an output of an .exe on Windows (and was self-contained, of course), my Dockerfile might look like this:
COPY /output/ /bin/
CMD ["/bin/sample.exe"]
As I want to make this a portable app (and have the .NET Core runtime on the Linux box already), should my Dockerfile look like this:
FROM microsoft/dotnet:2.1-runtime
COPY /output/ /bin/
CMD ["dotnet", "/bin/sample.dll"]
Thanks for any advice in advance!
Yes, Microsoft has a dotnet Docker samples repository. Their Dockerfile looks like the following:
FROM microsoft/dotnet:2.1-runtime
COPY /output /bin/
ENTRYPOINT ["dotnet", "/bin/dotnetapp.dll"]
They also have an Alpine-based example of a Dockerfile.
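As a sketch of what the Alpine variant might look like (assuming the 2.1-runtime-alpine tag and the same assembly name as above):
FROM microsoft/dotnet:2.1-runtime-alpine
COPY /output /bin/
ENTRYPOINT ["dotnet", "/bin/dotnetapp.dll"]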
For more information on ENTRYPOINT vs. CMD, see the Docker documentation.
I have a Docker image that I am trying to use to cross-compile an application for Windows. However, whenever I enter a container running that image, it does not show my filesystem, so I cannot reach my source code.
How do I build using a Docker image? Or am I missing something?
If I understand right, the image contains your development environment, and you only need a way for the container to see your code on the host machine at runtime. The answer is in the question then.
Just start your container with the source directory mounted:
docker run --rm -it -v%my_src_dir%:/workspace centos:6.6 /bin/sh
Then inside the container, you cd /workspace to continue development.
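If you just want to run the build without an interactive shell, the same mount works in one shot (the image name and make target are assumptions about your toolchain):
docker run --rm -v %my_src_dir%:/workspace -w /workspace your-cross-compile-image make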
I have this problem: it seems that MongoDB v3.4 can't be installed on 32-bit systems, so not on my Raspberry Pi running Raspbian.
I tried to run a 64-bit image with Docker (is that even possible?), but when I try to pull the official mongo Docker image (https://hub.docker.com/_/mongo/), it says no matching manifest for linux/arm in the manifest list entries.
I also tried pulling custom rpi-mongodb images, but they all run MongoDB 2.4, and my server can't run with that version.
How can I run MongoDB v3.4 on my Raspberry Pi?
Since the Raspberry Pi's architecture is ARM, only images that were built for the ARM architecture can be used on the RPi, and there are very few of those ARM images.
The only choice is to build a new image ourselves. The problem is that we cannot do this the regular way (a Dockerfile whose FROM points at a non-ARM image) and build it on our PC's architecture. The main trick is to use a CI server (e.g. Travis) to build your Dockerfile, and we must register QEMU in the build agent (see the sketch after the steps below).
I have succeeded in building an OpenMediaVault Docker image for the RPi based on this tutorial.
The idea is:
Look at the Dockerfile of MongoDB 3.4 and adapt its content to our own Dockerfile.
Create our Dockerfile to build an ARM image for the RPi:
FROM resin/rpi-raspbian # resin provides some useful ARM-arch base images
# Your adapted content from
# MongoDB 3.4 Dockerfile
# ....
Create a .travis.yml as described in the tutorial mentioned above.
Go to your favorite CI service and link your git repo to it.
Let the CI build and push the image to Docker Hub.
Pull the image from Docker Hub to your RPi.
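The QEMU registration step mentioned above is typically a single command run in the build agent before docker build (a sketch, assuming the commonly used multiarch/qemu-user-static image):
docker run --rm --privileged multiarch/qemu-user-static:register --reset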
Another solution is to build Docker images on Resin.io. This solution has a drawback: we cannot push the built image to Docker Hub and pull it elsewhere. I'll just leave the doc link here, since covering it would make my answer too long.
If an older version is OK (2.4.10)...
Clone this Git repository onto your local Raspberry Pi (install Git first), then run the docker build as per the readme on the web page to create the Docker image, and then create/start a Docker container from that image:
Git repository for mongodb Dockerfile
Once the image is built and a container is started from it, you should be able to log on directly to the MongoDB container and interact with the mongo client to issue commands against the database, for example:
docker exec -i -t yourMongodbContainerName /bin/bash
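From the bash prompt inside the container you can then start the client with mongo, or run it directly without a shell (container name as in the example above):
docker exec -it yourMongodbContainerName mongo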
The only problem I found concerns stopping the container, which fails: Docker kills the container after the timeout (a longer timeout makes no difference). This 'unclean shutdown' means a restart of the container fails, as MongoDB complains about a lock file being in a bad state. You can see this in the logs:
docker logs yourMongodbContainerName
The failure can be managed by:
1. ensuring no apps are accessing the database, then
2. stopping the MongoDB container, then
3. deleting the lock file in the container at /data/db/ (typically mapped to the Docker host using -v, because containers are obviously transient), and finally
4. restarting the MongoDB container as part of a re-deploy.
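A minimal sketch of that recovery sequence (the container name and host data directory are assumptions):
docker stop yourMongodbContainerName
rm /path/on/host/data/db/mongod.lock
docker start yourMongodbContainerName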
The version of MongoDB is 2.4.10. I'm connecting via Node.js, and the 2.2.35 client drivers from npm were the latest I found that worked.