How to view data in an image in Azure Container Registry - Azure

I am new to the Azure portal. I pushed a Docker image to Azure Container Registry and then created a web app that uses that image. Now I want to see all the data in the image; I need it because every time my app runs it creates new files, and I need to view those files.
Can someone please tell me how I can do that?
This is my Dockerfile:
FROM continuumio/miniconda3:latest
RUN apt-get update && apt-get install -y \
ca-certificates \
curl
ARG NODE_VERSION=14.16.0
ARG NODE_PACKAGE=node-v$NODE_VERSION-linux-x64
ARG NODE_HOME=/opt/$NODE_PACKAGE
ENV NODE_PATH $NODE_HOME/lib/node_modules
ENV PATH $NODE_HOME/bin:$PATH
RUN curl https://nodejs.org/dist/v$NODE_VERSION/$NODE_PACKAGE.tar.gz | tar -xzC /opt/
COPY environment.yml .
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN conda install -y -c conda-forge librosa
WORKDIR /with
RUN mkdir /app1
COPY model /app1/model
COPY assets /app1/assets
COPY build /app1/build
COPY node_modules /app1/node_modules
RUN apt-get update && apt-get install -y openssh-server \
&& echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY sshd_config /etc/ssh/
EXPOSE 8080 2222
# CMD ["python", "/app1/model/main.py", "/app1/assets/uploads/file.txt"]
CMD ["node", "/app1/build/server.js"]

It seems you pushed your image into ACR and then deployed it to the Web App. If you want to see the data inside the image (here it's better to call it a container), you need to SSH into the running container in the Web App. To do this, make sure SSH is enabled in the image. If not, follow the steps here to enable it. If SSH is already enabled, then you can follow the steps here to SSH into the container and see the data you want.
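Once SSH is set up, the session can be opened from the Azure CLI. A minimal sketch, assuming the Dockerfile above (which attempts to run sshd on port 2222 with the root:Docker! password App Service expects); the resource group and app names are placeholders:

```shell
# Open an SSH session into the running Web App container
az webapp ssh --resource-group <my-resource-group> --name <my-webapp>

# Once inside, the files the app generates can be inspected, e.g.:
#   ls -l /app1/assets/uploads
```

Note that files written to the container filesystem generally do not survive a restart of the Web App; if the generated files need to persist, it is worth verifying the storage/persistence settings for your plan (for example, mounting Azure storage to a path).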

Related

Why is this container missing one file in its volume mount?

Title is the question.
I'm hosting many Docker containers on a rather large Linux EC2 instance. One container in particular needs access to a file that gets transferred to the host before run time. The file in question is copied from a Windows file server to the EC2 instance using Control-M.
When the container image runs, we give it -v to specify a volume mount with a path on the host to that transferred file.
The file is not found in the container. If I make a new file in the container, the new file appears on the host. When I make a file on the host, it appears in the container. When I make a copy of the transferred file using cp -p, the copied file DOES show up in the container, but the original still does not.
I don't understand why this happens. My suspicion is that it has something to do with the file being on a Windows server before Control-M copies it to the EC2 instance.
Details:
The file lives in the path /folder_path/project_name/resources/file.txt
Its permissions are -rwxrwxr-x 1 pyadmin pyadmin, where pyadmin maps to the container's root user.
It's approximately 38mb in size and when I run file file.txt I get the output ASCII text, with CRLF line terminators.
The repo also has a resources folder with files already in it when it is cloned, but none of their names conflict.
Docker Version: 20.10.13
Dockerfile:
FROM python:3.9.11-buster
SHELL ["/bin/bash", "-c"]
WORKDIR /folder_path/project_name
RUN apt-get autoclean && apt-get update && apt-get install -y unixodbc unixodbc-dev && apt-get upgrade -y
RUN python -m pip install --upgrade pip poetry
COPY . .
RUN python -m pip install --upgrade pip poetry && \
poetry config virtualenvs.create false && \
poetry install
ENTRYPOINT [ "python" ]
Command to start container:
docker run --pull always --rm \
-v /folder_path/project_name/logs:/folder_path/project_name/logs \
-v /folder_path/project_name/extracts:/folder_path/project_name/extracts \
-v /folder_path/project_name/input:/folder_path/project_name/input \
-v /folder_path/project_name/output:/folder_path/project_name/output \
-v /folder_path/project_name/resources:/folder_path/project_name/resources \
my-registry.com/folder_path/project_name:image_tag
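One thing worth checking (an assumption, not a confirmed diagnosis): if Control-M delivers the file onto a mount that appears on the host after the container starts, or the file sits on a different filesystem layered over that path, the container's private bind mount won't see it; cp -p creates a fresh inode on the underlying filesystem, which would explain why the copy shows up while the original does not. A quick way to compare on the host:

```shell
# Does file.txt live on the same device as its parent directory?
stat -c 'inode=%i dev=%d' /folder_path/project_name/resources/file.txt
stat -c 'inode=%i dev=%d' /folder_path/project_name/resources

# Is anything separately mounted at or under the resources path?
findmnt -R /folder_path/project_name/resources

# If a mount does appear there after 'docker run', shared propagation lets
# the container see it (rshared is something to test, not a guaranteed fix):
docker run --rm \
  --mount type=bind,source=/folder_path/project_name/resources,target=/folder_path/project_name/resources,bind-propagation=rshared \
  my-registry.com/folder_path/project_name:image_tag
```

If the two stat calls report different dev numbers, the file and its directory are on different filesystems, which points at a mount-layering explanation rather than a Windows/CRLF issue.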

Azure ACR Flask webapp deployment

I'm trying to deploy a Flask app on Azure using the container registry. I used Azure DevOps to pipeline building and pushing the image. Then I configured a web app to pull this image from ACR. But I'm running into one error after another. The container is timing out and crashing, and there are some errors in the application log as well: 'No module named flask'.
This is my Dockerfile
FROM python:3.7.11
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
WORKDIR /app
ADD . /app/
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN python -m pip install --upgrade pip
RUN pip3 install Flask==2.0.2
RUN pip3 install --no-cache-dir -r /app/requirements.txt
RUN conda install -y python=3.7
RUN conda install -c rdkit rdkit -y
EXPOSE 5000
ENV NAME cheminformatics
CMD python app.py
I had to install Miniconda because the rdkit package can only be installed from conda. I also added a PORT: 5000 key-value to the configuration of the web app, but it hasn't loaded even once. I've been at this for 2 hours now. Previously, I built images on my local machine and pushed them to Docker Hub and ACR, and I was able to pull those images, but it's the first time I've used DevOps and I'm not sure what's going wrong here. Can someone please help?
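One common cause of 'No module named flask' with this kind of Dockerfile (an assumption, not a confirmed diagnosis) is that the pip3 that installed Flask and the python that runs app.py are different interpreters, since both the system Python and Miniconda end up on PATH. A quick check against the built image; the image name is a placeholder:

```shell
# Which interpreter does the CMD resolve, and does it see Flask?
docker run --rm <acr-login-server>/<image>:<tag> sh -c '
  echo "python resolves to: $(command -v python)"
  echo "pip3 resolves to:   $(command -v pip3)"
  python -c "import flask; print(\"flask\", flask.__version__)"
'
```

If python resolves into /root/miniconda3/bin while pip3 installed into the system Python (or the other way around), installing Flask and requirements.txt with the same interpreter the CMD uses is the usual fix.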

Attempting to host a flutter project on Azure App Services using a docker image; local image and cloud image behave differently

I am having trouble with Azure and Docker where my local machine image behaves differently from the image I push to ACR. While trying to deploy to the web, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): container ID 397546 cannot be mapped to a host ID
So in trying to fix it, I have found out that Azure has a limit of 65000 on UID numbers. Easy enough, just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
It works great locally for changing the UIDs of the affected files from 397546 to 0. I run a command in the container's CLI:
find / -uid 397546
It finds none of the files it found before. Yay! I even navigate to the directories where the affected files are and do a quick
ls -n to double-check; sure enough, the UIDs are now 0 on all of them. Good to go?
Next step: push to the cloud. When I push and restart the App Service, I still get the same exact error above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped, guys. Please help.
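One possible explanation (an assumption worth testing, not a confirmed diagnosis): image layers are immutable, so a later RUN chown only adds a new layer on top; the earlier layer created when Flutter downloaded its artifacts still records files owned by UID 397546, and the "failed to register layer" error can occur while that earlier layer is processed, even though the file as seen in the running container is owned by root. A multi-stage build avoids shipping that layer at all, because COPY --chown rewrites ownership as the final layers are written. A sketch along the lines of the Dockerfile above:

```dockerfile
# Builder stage: the Flutter SDK and its high-UID cache files stay here
# and are never pushed as part of the final image
FROM ubuntu:20.04 AS build-env
RUN apt-get update && apt-get install -y curl git unzip
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
COPY . /src
WORKDIR /src
RUN flutter pub get && flutter build web

# Final stage: only the build output is copied, owned by root,
# so no shipped layer ever contains UID 397546
FROM ubuntu:20.04
COPY --from=build-env --chown=root:root /src/build/web /usr/local/bin/app
COPY --chown=root:root ./startup /usr/local/bin/app/server
WORKDIR /usr/local/bin/app
EXPOSE 4040
RUN chmod +x /usr/local/bin/app/server/server.sh
ENTRYPOINT ["/usr/local/bin/app/server/server.sh"]
```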
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]
So basically we made a shell script that builds the web bundle BEFORE building the Docker image. We then serve the static JS from the build/web folder on the server. No need to download all of Flutter inside the image. It makes pipelines a little harder, but at least it works.
New Dockerfile:
FROM ubuntu:20.04 as build-env
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preseed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
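The "build web before docker build" flow described above might be scripted roughly like this (the script name, registry, and tag are placeholders, not the poster's actual pipeline):

```shell
#!/bin/sh
set -e
# Build the static web bundle on the host (or CI agent) first,
# so the image never needs the Flutter SDK
flutter build web --release
# Then bake only the build output into the image and push it
docker build -t <registry>.azurecr.io/patient-app:latest .
docker push <registry>.azurecr.io/patient-app:latest
```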

Getting too many dangling images on docker-compose up

I want to integrate Python into a .NET Core image, as I need to execute Python scripts.
When I execute this Dockerfile, lots of dangling images are created.
Also, is there a proper way to integrate a Python interpreter? For example, I will receive a URL in the .NET Core container and then want to pass that URL to the Python container. How can I achieve this? I am new to Docker.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
RUN apt-get update && apt-get install -y --no-install-recommends wget
RUN apt-get update && apt-get install -y python3.7 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /tmp
RUN wget https://github.com/projectdiscovery/subfinder/releases/download/v2.4.4/subfinder_2.4.4_linux_amd64.tar.gz
RUN tar -xvf subfinder_2.4.4_linux_amd64.tar.gz
RUN mv subfinder /usr/local/bin/
#Cleanup
#wget cleanup
RUN rm -f subfinder_2.4.4_linux_amd64.tar.gz
FROM base AS final
RUN mkdir app
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY Publish/ /app/
ENTRYPOINT ["dotnet", "OnePenTest.Api.dll"]
As images are immutable, any change you make results in a new image being created. Your compose file specifies the build command, so it reruns the build when you start the containers.
And if any files pulled in with a COPY change, the current image cache is no longer used and Docker builds a new image without erasing the old one, which is left dangling.
You can use the commands below to build:
sudo docker-compose build --force-rm [--build-arg key=val...] [SERVICE...]
docker build --rm -t <tag> .
The option --force-rm removes intermediate containers after a successful build.
Just configure your docker-compose.yaml, then run it with these commands:
$ docker compose down --remove-orphans # if needed to stop all running containers
$ docker compose build
$ docker compose up --no-build
Add the flag --no-build to docker compose up.
ref: https://docs.docker.com/engine/reference/commandline/compose_up/
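Independent of the build flags, dangling layers that have already accumulated can be cleaned up at any time; prune only removes untagged images, so tagged images are safe:

```shell
# Remove all dangling (untagged) images
docker image prune -f

# Or limit the cleanup to dangling images older than 24 hours
docker image prune -f --filter "until=24h"
```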

Docker not extracting .tar.gz and cannot find the file in the image

I am having trouble extracting a .tar.gz file and accessing its files in a Docker image. I've looked around Stack Overflow, but the solutions didn't fix my problem... Below are my folder structure and my Dockerfile. I've made an image called modus.
Folder structure:
- modus
Dockerfile
ModusToolbox_2.1.0.1266-linux-install.tar.gz
Dockerfile:
FROM ubuntu:latest
USER root
RUN apt-get update -y && apt-get upgrade -y && apt-get install git -y
COPY ./ModusToolbox_2.1.0.1266-linux-install.tar.gz /root/
RUN cd /root/ && tar -C /root/ -zxf ModusToolbox_2.1.0.1266-linux-install.tar.gz
I've been running the commands below, but when I try to check /root/ the extracted files aren't there...
docker build .
docker run -it modus
root@e19d081664e4:/# cd root
root@e19d081664e4:/root# ls
<prints nothing>
There should be a folder called ModusToolbox, but I can't find it anywhere. Any help is appreciated.
P.S.
I have tried changing ADD to COPY, but neither works.
You didn't provide a tag option with -t, but you're using a tag in
docker run -it modus. By doing that you run some other modus image, not the one you have just built. Docker should say something
like Successfully built <IMAGE_ID> at the end of the build; run
docker run -it <IMAGE_ID> to run the newly built image if you don't want to provide a tag.
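Concretely, building with an explicit tag makes docker run -it modus refer to the image just built:

```shell
# Build and tag in one step, from the directory containing the Dockerfile
docker build -t modus .
# Now 'modus' names the freshly built image
docker run -it modus
# Inside the container, /root should contain the extracted ModusToolbox folder
```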
