I want to integrate Python into a .NET Core image because I need to execute Python scripts. When I build this Dockerfile, lots of dangling images are created.
(screenshot of the docker images output showing the dangling <none> images)
Also, is there a proper way to integrate a Python interpreter? For example, I will receive a URL in the .NET Core container and then want to pass that URL to the Python container. How can I achieve this? I am new to Docker.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
RUN apt-get update && apt-get install -y --no-install-recommends wget
RUN apt-get update && apt-get install -y python3.7 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p tmp
WORKDIR /tmp
RUN wget https://github.com/projectdiscovery/subfinder/releases/download/v2.4.4/subfinder_2.4.4_linux_amd64.tar.gz
RUN tar -xvf subfinder_2.4.4_linux_amd64.tar.gz
RUN mv subfinder /usr/local/bin/
#Cleanup
#wget cleanup
RUN rm -f subfinder_2.4.4_linux_amd64.tar.gz
FROM base AS final
RUN mkdir app
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY Publish/ /app/
ENTRYPOINT ["dotnet", "OnePenTest.Api.dll"]
Images are immutable, so any change you make results in a new image being created. Your compose file specifies a build step, so the build is rerun whenever you start the containers.
If any of the files you COPY into the image have changed, the cached layers are invalidated and a new image is built without removing the old one; that old, now-untagged image is what shows up as dangling.
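You can list these dangling images explicitly; they show up as <none>:<none> in the image list:
docker images --filter "dangling=true"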
You can use the commands below to build:
sudo docker-compose build --force-rm [--build-arg key=val...] [SERVICE...]
docker build --rm -t <tag> .
The --force-rm option always removes intermediate containers, even after a failed build, whereas --rm (enabled by default) removes them only after a successful build.
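Note that these flags only affect intermediate containers; dangling images that have already accumulated can be removed with the standard prune command:
docker image prune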
Alternatively, configure your docker-compose.yaml and then run these commands:
$ docker compose down --remove-orphans # if needed to stop all running containers
$ docker compose build
$ docker compose up --no-build
That is, add the --no-build flag to docker compose up.
ref: https://docs.docker.com/engine/reference/commandline/compose_up/
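As for passing a URL from the .NET Core container to a Python container: the usual approach is to run them as two services on the same Compose network and call the Python service over HTTP, using its service name as the hostname. Below is a minimal docker-compose.yml sketch; the service names, folder layout, and port are made up, and it assumes the Python side exposes a small HTTP API (e.g. with Flask):
version: "3.8"
services:
  api:                      # the ASP.NET Core app
    build: .
    ports:
      - "80:80"
  scanner:                  # a Python HTTP service that runs your scripts
    build: ./scanner
    expose:
      - "5000"
From inside the api container, the Python service is reachable at http://scanner:5000/... because Compose provides DNS resolution for service names.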
Related
I am having trouble with Azure and Docker: my local machine image behaves differently from the image I push to ACR. While trying to deploy to the web app, I get this error:
ERROR - failed to register layer: error processing tar file (exit status 1): Container ID 397546 cannot be mapped to a host ID
In trying to fix it, I have come to find out that Azure has a limit of 65000 on UID numbers. Easy enough: just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
This works great locally for changing the UIDs of the affected files from 397546 to 0. I run a command in the container's CLI:
find / -uid 397546
It finds none of the files it found before. Yay! I even navigate to the directories containing the affected files and do a quick ls -n to double-check, and sure enough the UIDs are now 0 on all of them. Good to go?
Next step: push to the cloud. When I push and restart the app service, I still get the exact same error above. I have confirmed multiple times that I am indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped; please help.
The Dockerfile is below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]
So basically we made a shell script that builds the web bundle BEFORE building the Docker image. We then take the static JS from the build/web folder and host it on the server. There is no need to download all of Flutter inside the image. It makes pipelines a little harder, but at least it works.
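A minimal sketch of such a pre-build script (the script name and image tag are made up; it assumes the Flutter CLI is available on the build host):
#!/bin/sh
# build.sh: compile the web bundle on the host, then bake it into the image
set -e
flutter build web               # writes the static site to build/web
docker build -t patient-web .   # the Dockerfile then only has to COPY build/web
This also sidesteps the UID problem, since the Flutter SDK cache (with its oddly-owned files) never enters the image.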
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preesed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
RUN cd /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cd /etc/nginx/sites-enabled
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
I'm trying to build a Docker container that runs a Python script. I want the code to be cloned from git when I build the image. I'm using this Dockerfile as a base and added the following BEFORE the first line:
FROM debian:buster-slim AS intermediate
RUN apt-get update
RUN apt-get install -y git
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/
RUN echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan [git hostname] >> /root/.ssh/known_hosts
RUN git clone git@...../myApp.git
... then added the following directly after the first line:
# Copy only the repo from the intermediate image
COPY --from=intermediate /myApp /myApp
... then at the end I added this to install some dependencies:
RUN set -ex; \
apt-get update; \
apt-get install -y gcc g++ unixodbc-dev libpq-dev; \
\
pip install pyodbc; \
pip install paramiko; \
pip install psycopg2
And I changed the command to run to:
CMD ["python3 /myApp/main.py"]
If I add "RUN ls -l /myApp" at the end of the Dockerfile, before the CMD, it lists all the files I would expect during the build. But when I use "docker run" to run the image, it gives me the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "python3 /myApp/main.py": stat python3 /myApp/main.py: no such file or directory": unknown.
My build command is:
docker build --file ./Dockerfile --tag my_app --build-arg SSH_PRIVATE_KEY="$(cat sshkey)" .
Then run with docker run my_app
There is probably some docker fundamental that I am misunderstanding, but I can't seem to figure out what it is.
This is hard to answer without seeing your command line or your docker-compose.yml (if any). A common mistake is to map a host volume into the container at a non-empty location; in that case, the container's files are hidden by the contents of the host folder.
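For example, a (hypothetical) bind mount like this would hide everything the image placed under /myApp behind the contents of the host directory:
docker run -v "$(pwd)":/myApp my_app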
In your case, though, the build and run commands you show don't mount anything, so the real problem is the CMD: in exec form, the executable and each argument must be separate array elements. As written, Docker looks for a single binary literally named "python3 /myApp/main.py". The last CMD should be like this:
CMD ["python3", "/myApp/main.py"]
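Alternatively, the shell form works too, at the cost of running the process under /bin/sh:
CMD python3 /myApp/main.py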
I am having trouble extracting a .tar.gz file and accessing its contents in a Docker image. I've looked around Stack Overflow, but the solutions didn't fix my problem. Below are my folder structure and my Dockerfile. I've made an image called modus.
Folder structure:
- modus
Dockerfile
ModusToolbox_2.1.0.1266-linux-install.tar.gz
Dockerfile:
FROM ubuntu:latest
USER root
RUN apt-get update -y && apt-get upgrade -y && apt-get install git -y
COPY ./ModusToolbox_2.1.0.1266-linux-install.tar.gz /root/
RUN cd /root/ && tar -C /root/ -zxzf ModusToolbox_2.1.0.1266-linux-install.tar.gz
I've been running the commands below, but when I check /root/ the extracted files aren't there:
docker build .
docker run -it modus
root@e19d081664e4:/# cd root
root@e19d081664e4:/root# ls
<prints nothing>
There should be a folder called ModusToolBox, but I can't find it anywhere. Any help is appreciated.
P.S. I have tried changing ADD to COPY, but neither works.
You didn't provide a tag with the -t option, but you're using a tag in docker run -it modus. By doing that you run some other modus image, not the one you just built. Docker should print something like Successfully built <IMAGE_ID> at the end of the build; run docker run -it <IMAGE_ID> to run the newly built image if you don't want to provide a tag.
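Or simply tag the image when you build it and run it by that tag:
docker build -t modus .
docker run -it modus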
I am having a problem building an ASP.NET Core app with Angular and Node.js on my Mac using Docker and Visual Studio.
This is the current Error:
> /Applications/Visual
> Studio.app/Contents/Resources/lib/monodevelop/AddIns/docker/MonoDevelop.Docker/MSbuild/Sdks/Microsoft.Docker.Sdk/build/Microsoft.VisualStudio.Docker.Compose.targets(5,5):
> Error: Building fjord.karve.experiment.server No build stage in current
> context.
>
> For more troubleshooting information, go to
> http://aka.ms/DockerToolsTroubleshooting (docker-compose)
Here is my current Dockerfile attempt:
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY Fjord.sln ./
COPY fjord.karve.experiment.server/fjord.karve.experiment.server/Fjord.Karve.Experiment.Server.csproj fjord.karve.experiment.server/fjord.karve.experiment.server/
COPY /Users/dan/Projects/Fjord/fjord.karve.experiment.server/fjord.karve.experiment.server/nuget.config fjord.karve.experiment.server/fjord.karve.experiment.server/
COPY Fjord.Domain/Fjord.Domain.csproj Fjord.Domain/
COPY Fjord.Karve.Command/Fjord.Karve.Command.csproj Fjord.Karve.Command/
COPY fjord.karve.experiment.server/DAL/Fjord.DAL.csproj fjord.karve.experiment.server/DAL/
COPY /Users/dev/Projects/Fjord/fjord.karve.experiment.server/DAL/nuget.config fjord.karve.experiment.server/DAL/
COPY Fjord.Karve.Query/Fjord.Karve.Query.csproj Fjord.Karve.Query/
RUN dotnet restore -nowarn:msb3202,nu1503
COPY . .
WORKDIR /src/fjord.karve.experiment.server/fjord.karve.experiment.server
RUN apt-get update && \
apt-get install -y wget && \
apt-get install -y gnupg2 && \
wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y build-essential nodejs
RUN dotnet build -c Release -o /app
FROM build AS publish
RUN dotnet publish -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Fjord.Karve.experiment.Server.dll"]
I should probably mention that I tried to go back to the drawing board a few minutes earlier with the following commands, which may have caused some unexpected issues:
docker images
docker rmi $(docker images -a -q)
The documentation says that the first FROM may only be preceded by ARG instructions. Your Dockerfile starts with WORKDIR and EXPOSE before any FROM, which is why Docker reports that there is no build stage in the current context.
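A minimal sketch of how the top of the file should look (the base image tag is an assumption, chosen to match the microsoft/aspnetcore-build:2.0 build image used later; adjust it to your runtime version):
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
With a first stage named base in place, the later FROM base AS final line also resolves correctly.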
I have a Node.js app that connects to a blockchain on the same server. Normally I use 127.0.0.1 plus the port number (each chain gets a different port).
I decided to put the chain and the app in the same container, so that the frontend developers don't have to bother with setting up the chain.
However, when I build the image the chain is supposed to start, yet when I run the image it isn't running. Furthermore, when I go into the container and try to run it manually, it says "besluitChain2@xxx.xx.x.2:PORT", so I thought that instead of 127.0.0.1 I needed to connect to 127.0.0.2, but that doesn't seem to work either.
I'm sure connecting like this isn't new, and it should work the same way with a database. Can anyone help? The first piece of advice I need is how to debug these images, because I have no idea where it goes wrong.
Here is my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
ADD workfolder/app /root/applications/app
ADD .multichain /root/.multichain
RUN npm install \
&& apt-get upgrade -q -y \
&& apt-get dist-upgrade -q -y \
&& apt-get install -q -y wget curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& cd /tmp \
&& wget http://www.multichain.com/download/multichain-1.0-beta-1.tar.gz \
&& tar -xvzf multichain-1.0-beta-1.tar.gz \
&& cd multichain-1.0-beta-1 \
&& mv multichaind multichain-cli multichain-util /usr/local/bin \
&& cd /tmp \
&& rm -Rf multichain*
RUN multichaind Chain -daemon
RUN cd /root/applications/app && npm install
CMD cd /root/applications/app && npm start
EXPOSE 8080
By the way, due to policies I can only connect to the server on port 80 to check whether it works. When I run the Docker image I can reach /api-docs, but not any of the endpoints that start interacting with the blockchain.
I decided to put the chain and the app in the same container
That was a mistake, I think.
Docker is not a virtual machine. It's a virtual application or process instance.
A Docker container runs a Linux distro under the hood, but this is a detail that should be ignored when thinking about the purpose of Docker.
You should think of a Docker container as a single application process, not as a full virtual machine that generally runs multiple processes. This is evidenced by the way Docker shuts the container down once the main process exits (the process with PID 1).
I've got a longer post about this, here: https://derickbailey.com/2016/08/29/so-youre-saying-docker-isnt-a-virtual-machine/
Additionally, the RUN multichaind instruction in your Dockerfile doesn't run the chain in your image / container. It tells Docker to run that command during the build process.
A Dockerfile is a list of instructions for building an image. The wording here is important. An image is not executed, it is built. An image is a static, immutable template from which a Container is executed.
RUN multichaind Chain -daemon
By putting this RUN instruction in your image, you start the chain temporarily, but it is halted (forcefully) as soon as that image layer finishes building. It will not remain running, because an image is not executed; it is built.
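If the chain were meant to be what the container runs, it would belong in CMD (executed when a container starts), not RUN (executed while the image builds), and without -daemon so the process stays in the foreground as PID 1, for example:
CMD ["multichaind", "Chain"]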
My advice is to put the chain in a separate image.
You'll have one image for the chain, and one for the node.js app.
You can use docker-compose to make it easier to run containers from both of these at the same time, or you can run containers from them manually. Either way, you need two images.
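A minimal docker-compose.yml sketch of that two-image setup (the folder layout, chain name, and port numbers here are hypothetical):
version: "3"
services:
  chain:
    build: ./chain            # image whose CMD runs multichaind in the foreground
    expose:
      - "7447"                # the chain's network port
  app:
    build: ./app
    ports:
      - "80:8080"             # host port 80 -> app port 8080, per your constraint
    depends_on:
      - chain
The app then connects to chain:7447 (the service name, resolved by Compose's built-in DNS) instead of 127.0.0.1.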