How do I build an application with a docker image? - linux

I have a Docker image that I am trying to use to cross-compile an application for Windows. However, whenever I enter a container from the image, it does not show my filesystem, so I cannot reach my source code.
How do I build with a Docker image? Or am I missing something?

If I understand right, the image contains your development environment, and all you are missing is a way for the container to see the code on your host machine at runtime.
Just start your container with the source directory mounted:
docker run --rm -it -v "$my_src_dir":/workspace centos:6.6 /bin/sh
Then, inside the container, cd /workspace and continue development there.
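For example, assuming the build is driven by make (the build command here is just a placeholder for whatever your cross-compiler setup expects), you can also run the whole build non-interactively in one shot:
docker run --rm -v "$my_src_dir":/workspace -w /workspace centos:6.6 make
The -w flag sets the working directory inside the container, so the build starts directly in your mounted source tree.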

Related

Determine the source of read-only docker violation

I have a container running a complex, multi-threaded piece of software.
I am now in the process of trying to make it run in read-only mode (with a read-write /tmp directory mounted).
The problem is that the software fails to start as it attempts to modify several files.
Is there a way to determine which files are being modified?
For example:
docker run --name alpine --read-only --rm --tty alpine
docker exec alpine touch hello.txt  # fails with an access violation
I want to be able to tell that something inside the container tried to modify hello.txt.
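One way to surface this, as a sketch rather than a definitive answer: attempted writes to a read-only filesystem fail with EROFS, so tracing file syscalls with strace shows exactly which paths the software touches. This assumes a debug variant of your image with strace installed (my-image-debug and /path/to/my-software below are hypothetical names), plus the ptrace capability:
docker run --rm -it --read-only --tmpfs /tmp --cap-add=SYS_PTRACE my-image-debug \
  sh -c 'strace -f -e trace=file -o /tmp/trace.log /path/to/my-software; grep EROFS /tmp/trace.log'
Every line in the grep output is a path the software tried to modify on the read-only filesystem.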

GitHub Actions: How to use Docker image from GitHub registry in a Docker action?

I recently created my first Docker container action, and it works perfectly as intended. The Dockerfile for this action uses python:3.9-slim. It looks like this:
FROM python:3.9-slim
COPY repo ./repo
COPY scripts ./scripts
COPY requirements.txt setup.py ./
COPY entrypoint.sh /entrypoint.sh
RUN python setup.py develop
ENTRYPOINT ["/entrypoint.sh"]
I'm now thinking of publishing the Docker image built from the above Dockerfile to the Docker or GitHub registry and using it in the Docker action. Essentially the idea is to simplify the image build process so the action can run faster.
I have a couple of questions related to doing this, #1 is the question in the title:
I found this page which explains how to set up a workflow that publishes the Docker image. If I publish to the GitHub registry, how can I go about using it? For images on Docker Hub, that seems straightforward.
With this Docker container action, I want to make sure that it uses the version specified in uses: username/action-repo@v???. Thus, I think it would make sense to have the FROM in this new Dockerfile pinned to the specific image tag that will be used. Is that the best way to go about it?
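On the first question, a minimal sketch of both options (the owner, repository, and tag names below are placeholders, not from the original post). An image published to the GitHub Container Registry can be referenced in the action's Dockerfile, assuming the package is public or the runner is authenticated:
FROM ghcr.io/username/action-repo:v1
ENTRYPOINT ["/entrypoint.sh"]
Alternatively, action.yml can point at the image directly with the docker:// prefix, skipping the build entirely:
runs:
  using: docker
  image: docker://ghcr.io/username/action-repo:v1
Pinning FROM (or image:) to the exact released tag, as suggested in the second question, keeps the action's behaviour tied to the version users reference.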

Cannot run an instance of a docker image on azure container registry

I have created a simple ASP.NET Core web application using the Visual Studio templates. I have then created a Dockerfile which looks like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "WebApplication.dll"]
I have then built an image out of this using:
docker build -t webapplication:dev .
and then created and run a container from this using:
docker run -d -p 8080:80 --name myapp webapplication:dev
This all works locally. I have then pushed this out to Azure Container Registry, and it was pushed successfully. However, when I try to run an instance of this container, I get an error in Azure saying "The supported Windows versions are: '10.0.14393,10.0.17763'".
I don't understand why I am getting this error. The image works locally (I can check by browsing to localhost:8080 and verifying that I get a valid response back, which I do). Is this something to do with ACR? What is the workaround? Is this something to do with my version of Windows (I am on 10.0.18363, Build 18363)? Is my system too new for this to work, which seems unlikely? Has anyone seen this?
For Windows containers, the version of the OS on the host must match the version of the OS in the container. You should choose an appropriate base image so that your container is able to run on Azure.
https://samcogan.com/windows-containers-and-azure states:
OS Version
Due to limitations in how Windows implements the container run time, you require that the host machine is running the same version of Windows as in your container. I don't just mean the same family, but the same version. So if your host is running Server 2016 v1803, then your containers also need to run Server 2016 v1803, they cannot be running v1709. This is very problematic as it means rebuilding your images any time your host OS changes. It's even more problematic for those trying to use cloud providers, as often you won't know what OS is running on the machines you are hosted on.
You can combat this issue by running your containers as HyperV containers, which wraps your container in a thin virtual machine to abstract it from the OS, but this adds complexity.
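As a quick check (not part of the original answer), you can inspect which OS and OS version an image was actually built for before pushing it, since that is what Azure validates against:
docker image inspect --format '{{.Os}} {{.OsVersion}}' webapplication:dev
# a Linux image prints "linux" with an empty version;
# a Windows image prints e.g. "windows 10.0.17763.1158"
If the output says windows with a version outside Azure's supported list, rebuild against a base image whose version is supported.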

how docker works if the OS environment changes?

I am very new to Docker and I have some very basic questions about it. Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently I am using Linux), I write a Dockerfile, build an image of the application, and from that Docker image I create a Docker container. Now my container is created. Now suppose a teammate of mine wants to deploy that Docker image to another system running Windows or macOS. What is the procedure? Do I need to write the Dockerfile again, or pull the app from GitHub again and follow the same steps I mentioned above? I ask because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the Docker image on Windows or macOS.
And secondly, where is the image file located? I know it will not simply sit in my local filesystem. How can I see the files/folders of the Docker image?
I know these are very simple questions to ask, but any help is highly appreciated. Thanks.
Suppose I clone a simple project from GitHub and want to create a Docker image of that application. According to my OS (currently Linux), I write a Dockerfile, build an image of the application, and from that image I create a container. Now my container is created.
Just to be sure that this is clear: think of the "Docker image" as "a recipe" and the "Docker container" as "a cake". You can make as many cakes as you like with a given recipe. The recipe is what you share if you want to be able to re-bake the cakes.
Now suppose a teammate of mine wants to deploy that Docker image to another system running Windows or macOS. What is the procedure? Do I need to write the Dockerfile again, or pull the app from GitHub again and follow the same steps I mentioned above?
And thus it's the image that you share with other developers, not the container. This can be done either by pushing the image to an online registry (e.g. https://hub.docker.com/) or by recreating the image every time from the Dockerfile.
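As a sketch of the first option (the image and account names below are placeholders):
# tag the local image for your registry account and push it
docker tag myapp:latest username/myapp:1.0
docker push username/myapp:1.0
# a teammate on any OS with Docker installed can then pull and run it
docker pull username/myapp:1.0
docker run username/myapp:1.0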
Because in my Dockerfile I have given commands according to my OS, but somebody wants to deploy the Docker image on Windows or macOS.
I would have to see what exactly you are doing, but it's good practice to make Docker images independent of the host, or at least to make the host-specific parts configurable during the first creation of the image or the execution of the container.
To give a concrete example: in our company we have a private REST API written in PHP. Everything runs on Docker, whether in development or in production. Our production images can be run on any OS; however, our dev image is built slightly differently depending on the OS. Why? Because we need to configure the debugger.
If the image is built on Linux, the PHP setting xdebug.remote_host needs to point to localhost; however, when using Docker for Mac, the setting needs to be docker.for.mac.localhost.
The Dockerfile looks partially like this:
FROM adsdaq/print-engine-fpm:7.3
ARG DOCKER_HOST_ADDR
ENV PHP_XDEBUG_REMOTE_HOST ${DOCKER_HOST_ADDR:-localhost}
COPY etc/dev/php/adsdaq.ini $PHP_INI_DIR/conf.d/
And in the adsdaq.ini we have
xdebug.remote_host = ${PHP_XDEBUG_REMOTE_HOST}
And to simplify the life of our devs, we have a Makefile which handles OS detection:
DOCKER_HOST ?= localhost
OPEN_BROWSER ?= open
UNAME_S := $(shell uname -s)
USERID=$(shell id -u)
GROUPID=$(shell id -g)
## Define variable depending on OS used, use xdg-open command
ifeq ($(UNAME_S),Linux)
OPEN_BROWSER = xdg-open
else ifeq ($(UNAME_S),Darwin)
ifneq (,$(wildcard /var/run/docker.sock))
DOCKER_HOST = docker.for.mac.localhost
endif
else
$(warning Your OS "$(UNAME_S)" is not supported and could not work as expected!)
endif
As shown here, the image is built differently on Linux than on macOS for dev purposes, and that is fine, as we don't need to push those images to any repository manager.
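To tie the two pieces together: the build invocation is not shown in the original answer, but presumably the Makefile passes the detected host through to the Dockerfile's build argument along these lines (the target name here is an assumption):
build:
	docker build --build-arg DOCKER_HOST_ADDR=$(DOCKER_HOST) -t print-engine-dev .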
If you need to share the image on a repository, then I would make sure that the configuration can be changed dynamically through parameters and/or the use of an entrypoint script.
And secondly, where is the image file located? I know it will not simply sit in my local filesystem. How can I see the files/folders of the Docker image?
You cannot see the files/folders of the Docker image directly. To see what's in the image, you need to run a container from it, as that gives you an instance of it! Remember the cake/recipe analogy: you cannot see the content of the cake until you have baked it using the recipe.
You can, however, see all images stored on your machine by running docker images.
Hope this helps figure things out. Don't hesitate to share your Dockerfile if you need more assistance.
You don't need to change anything. Let's say your current host is Linux, on which you are running Docker, and you made a Dockerfile. Now let's say you move to another PC which is running Windows. If Docker is running on that Windows machine and you want to build/run or do anything else with your image file, you can do that without changing anything.
For a more detailed/specific answer, you will have to post the Dockerfile.

Docker - accessing files inside container from host

I am new to docker.
I ran a node-10 image, and inside the running container I cloned a repository and ran the app, which started a server with a file watcher. I need to access the codebase inside the container and open it up in an IDE running on the Windows host. Once that is done, I also want changes I make to the files in the IDE to trigger the file watcher in the container.
Any help is appreciated. Thanks,
The concept you are looking for is called volumes. You need to start a container and mount a host directory inside it. For the container, it will be a regular folder, and it will create files in it. For you, it will also be a regular folder. Changes made by either side will be visible to the other.
docker run -v /a/local/dir:/a/dir/in/your/container <image>
Note, though, that you can run into permission issues that you will need to figure out separately.
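A common mitigation for those permission issues, offered here as an assumption rather than part of the original answer, is to run the container with your host user and group IDs so files created in the mounted directory stay owned by you:
docker run -u "$(id -u):$(id -g)" -v /a/local/dir:/a/dir/in/your/container <image>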
It depends on what you want to do with the files.
There is the docker cp command that you can use to copy files to/from a container.
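For a one-off copy (the container name and paths below are placeholders), docker cp works in both directions:
# host -> container
docker cp ./myfile.txt mycontainer:/app/myfile.txt
# container -> host
docker cp mycontainer:/app/src ./src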
However, it sounds to me like you are using Docker for development, so you should mount a volume instead; that is, you mount a directory on the host as a volume in Docker, so anything written to that directory will show up in the container, and vice versa.
For instance, if the code base you develop against is in C:\src on your Windows machine, then you run docker like docker run -v c:\src:/app, where /app is the location that node is looking in. However, for Windows there are a few things to consider, since Docker is not native on Windows, so have a look at the documentation first.
I think you should mount a volume for the source code and edit your code from your IDE as usual:
docker run -it -v "$PWD":/app -w /app -u node node:10 yarn dev
Here Docker will start a container from the node:10 image, mount the current directory to /app, set the working directory to /app, and run yarn dev at startup as the node user (a non-root user).
Hope this is helpful.
