We are building a container image for a GCP Cloud Run service from the python:3.9-slim base image, and a recently reported vulnerability in the expat package (CVE-2022-40674) is blocking the whole CI/CD process.
We tried upgrading the package with RUN apt-get update && apt-get upgrade -y as well, but it still resolves to version 2.2.10, which contains the vulnerability. I assume the fixed version, 2.4.8-2, is only in the unstable release, which is why the upgrade does not pick it up. Is there any interim solution to this issue?
Error:
CRITICAL 9.8 projects/goog-vulnz/notes/CVE-2022-40674 expat 2.2.10
Related urls:
https://security-tracker.debian.org/tracker/source-package/expat
https://tracker.debian.org/pkg/expat
Dockerfile:
FROM python:3.9-slim
RUN apt-get update && apt-get upgrade -y
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
ARG APP_ENV
ENV APP_ENV=${APP_ENV}
RUN echo $APP_ENV
ENV APP_HOME /app
ENV PYTHONPATH /app
ENV CLOUD True
WORKDIR $APP_HOME
COPY . ./
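One interim workaround, until the fixed package reaches the stable Debian repository, could be to build a patched expat from source inside the image. The sketch below is an assumption-laden example: it presumes the upstream 2.4.9 release (the first upstream release containing the fix for CVE-2022-40674) is still downloadable from the libexpat GitHub releases page.

```dockerfile
FROM python:3.9-slim
# Hypothetical interim fix: compile a patched expat instead of waiting for
# the fixed .deb. The version and download URL are assumptions; verify them.
RUN apt-get update \
    && apt-get install -y --no-install-recommends wget build-essential \
    && wget -q https://github.com/libexpat/libexpat/releases/download/R_2_4_9/expat-2.4.9.tar.gz \
    && tar xzf expat-2.4.9.tar.gz \
    && cd expat-2.4.9 \
    && ./configure --prefix=/usr \
    && make && make install \
    && cd .. \
    && rm -rf expat-2.4.9 expat-2.4.9.tar.gz \
    && apt-get purge -y wget build-essential \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
```

Note that scanners which key off dpkg package metadata may still flag the packaged libexpat version even after this; in that case the remaining options are to wait for the Debian security update or to switch to a base image whose distro already ships the fixed package.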
Related
I created a CI/CD DevOps pipeline for deploying a Django app. After each deployment I manually SSH into the Azure App Service and run the following commands to install the Linux dependencies:
apt-get update && apt install -y libxrender1 libxext6
apt-get install -y libfontconfig1
After every deployment, these packages are removed automatically. Is there any way to install these Linux dependencies permanently?
I suppose you are using Azure App Service on Linux, which hosts your application in its own customized Docker image.
Unfortunately, you cannot customize the built-in Azure App Service Docker image, but you can run App Service on Linux with your own custom Docker image that includes your Linux dependencies.
https://github.com/Azure-Samples/docker-django-webapp-linux
Dockerfile example:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
EXPOSE 8000
EXPOSE 27017
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt \
&& apt-get update \
&& apt-get install -y libxrender1 libxext6 libfontconfig1
WORKDIR /app
COPY . /app
RUN chmod u+x /usr/local/bin/init.sh
ENTRYPOINT ["init.sh"]
init.sh example:
#!/bin/bash
set -e
python /app/manage.py runserver 0.0.0.0:8000
https://learn.microsoft.com/en-us/azure/developer/python/tutorial-containerize-deploy-python-web-app-azure-01
I'm trying to deploy a Flask app on Azure using a container registry. I used Azure DevOps to pipeline building and pushing the image, then configured a web app to pull the image from ACR. But I keep running into one error after another: the container is timing out and crashing, and the application log shows 'No module named flask'.
This is my Dockerfile:
FROM python:3.7.11
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
WORKDIR /app
ADD . /app/
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN python -m pip install --upgrade pip
RUN pip3 install Flask==2.0.2
RUN pip3 install --no-cache-dir -r /app/requirements.txt
RUN conda install python=3.7
RUN conda install -c rdkit rdkit -y
EXPOSE 5000
ENV NAME cheminformatics
CMD python app.py
I had to install Miniconda because the rdkit package can only be installed from conda. I also added a PORT: 5000 key-value to the web app's configuration, but it hasn't loaded even once. I've been at this for two hours now. Previously, I built images on my local machine, pushed them to Docker Hub and ACR, and was able to pull those images, but it's the first time I've used DevOps and I'm not sure what's going wrong here. Can someone please help?
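On the port setting mentioned above: for custom containers on App Service for Linux, the app setting that tells the platform which container port to forward traffic to is WEBSITES_PORT, not PORT. A sketch with hypothetical resource names:

```shell
# Hypothetical resource group and web app names.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-flask-app \
  --settings WEBSITES_PORT=5000
```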
I am having trouble with Azure and Docker: my local image behaves differently from the image I push to ACR. While trying to deploy to the web app, I get this error:
ERROR - failed to register layer: error processing tar file(exit status 1): Container ID 397546 cannot be mapped to a host IDErr: 0, Message: mapped to a host ID
So, in trying to fix it, I have found out that Azure has a limit of 65000 on uid numbers. Easy enough: just change ownership of the affected files to root, right?
Not so. I put the following command into my Dockerfile:
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
This works great locally for changing the uids of the affected files from 397546 to 0. I run a command in the container's CLI:
find / -uid 397546
It finds none of the files it found before. Yay! I even navigate to the directories where the affected files are and run a quick ls -n to double-check, and sure enough the uids are now 0 on all of them. Good to go?
Next step: push to the cloud. When I push and restart the App Service, I still get the same exact error above. I have confirmed on multiple fronts that it is indeed pushing the correct image to the cloud.
All of this means that somehow my local image and the cloud image are behaving differently.
I am stumped; please help.
The Dockerfile is as below:
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 psmisc
RUN apt-get clean
# Clone the flutter repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Enable flutter web
RUN flutter upgrade
RUN flutter config --enable-web
# Run flutter doctor
RUN flutter doctor -v
# Change ownership to root of affected files
RUN chown -R root:root /usr/local/flutter/bin/cache/artifacts/gradle_wrapper/
# Copy the app files to the container
COPY ./build/web /usr/local/bin/app
COPY ./startup /usr/local/bin/app/server
COPY ./pubspec.yaml /usr/local/bin/app/pubspec.yaml
# Set the working directory to the app files within the container
WORKDIR /usr/local/bin/app
# Get App Dependencies
RUN flutter pub get
# Build the app for the web
# Document the exposed port
EXPOSE 4040
# Set the server startup script as executable
RUN ["chmod", "+x", "/usr/local/bin/app/server/server.sh"]
# Start the web server
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]
So basically we made a shell script that builds the web bundle BEFORE building the Docker image. We then host the static JS from the build/web folder on the server, so there is no need to download all of Flutter inside the image. It makes the pipeline a little harder, but at least it works.
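A minimal sketch of that pre-build step (the image name is hypothetical): build the static bundle on the host, then build the image from the build/web output.

```shell
#!/bin/bash
set -e
# Build the Flutter web bundle on the host so the image only needs
# the static output in build/web, not the whole Flutter SDK.
flutter build web
# Build the image that copies build/web and serves it.
docker build -t my-flutter-web .
```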
New Dockerfile:
FROM ubuntu:20.04
RUN apt-get update && \
apt-get install -y --no-install-recommends apt-utils && \
apt-get -y install sudo
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preseed tzdata, update the package index, upgrade packages, and install needed software
RUN echo "tzdata tzdata/Areas select US" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/US select Colorado" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get install -y curl git wget unzip libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3 nginx nano vim
RUN apt-get clean
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
# Configure nginx and remove secret files
RUN mv /app/build/web/ /var/www/html/patient
RUN cp -f /app/default /etc/nginx/sites-enabled/default
RUN cd /app/ && rm -r .dart_tool .vscode assets bin ios android google_place lib placepicker test .env .flutter-plugins .flutter-plugins-dependencies .gitignore .metadata analysis_options.yaml flutter_01.png pubspec.lock pubspec.yaml README.md
# Record the exposed port
EXPOSE 5000
# Start the python server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
I'm developing a new IoT Agent according to https://iotagent-node-lib.readthedocs.io/en/latest/howto/index.html
and I'm trying to run the following code: node index.js
but a warning shows up: (node:6176) [DEP0097] DeprecationWarning: Using a domain property in MakeCallback is deprecated. Use the async_context variant of MakeCallback or the AsyncResource class instead.
DEP0097 - Deprecation Warning (NODE.JS)
Any suggestions how to solve it?
thank you!
It appears that you are running your IoT Agent under Node 14. The Domain API has been deprecated, and this is the cause of your warning. There is code within the IoT Agent Node lib that currently uses the Domain API for communication with Telefonica Steelskin.
The IoT Agents are tested against LTS Node releases (currently 10 and 12; see here), and Node 14 doesn't reach LTS until October 2020, so I would expect an interim release to fix the issue.
In the meantime, you can run your agent under an earlier version of Node, for example via Docker. An example can be found in the Custom IoT Agent tutorial Dockerfile:
ARG NODE_VERSION=10.17.0-slim
FROM node:${NODE_VERSION}
COPY . /opt/iotXML/
WORKDIR /opt/iotXML
RUN \
apt-get update && \
apt-get install -y git && \
npm install pm2@3.2.2 -g && \
echo "INFO: npm install --production..." && \
npm install --production && \
# Remove Git and clean apt cache
apt-get clean && \
apt-get remove -y git && \
apt-get -y autoremove && \
chmod +x docker/entrypoint.sh
USER node
ENV NODE_ENV=production
# Expose 4041 for NORTH PORT, 7896 for HTTP PORT
EXPOSE ${IOTA_NORTH_PORT:-4041} ${IOTA_HTTP_PORT:-7896}
ENTRYPOINT ["docker/entrypoint.sh"]
CMD ["--", "config.js"]
The Node version can be altered by passing NODE_VERSION as a build argument in the docker build command.
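For example, to build against a different Node base image (the tag and image name here are illustrative):

```shell
docker build --build-arg NODE_VERSION=12.22.12-slim -t custom-iot-agent .
```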
The problem has to do with Docker Toolbox for Windows. I installed Linux and the custom IoT Agent worked. I think the problem occurred because of the environment variables: for some reason, Windows 10 doesn't load the variables from the project's .env file.
I'm installing Ghostscript into a Docker image and want to use it with ghostscript4js, which requires at least Ghostscript 9.21 for some functionality.
I'm using the following in my Dockerfile, which installs Ghostscript 9.06:
FROM node:7
ARG JOB_TOKEN
RUN apt-get update && \
apt-get install -y pdftk
ENV APP_DIR="/usr/src/app" \
JOB_TOKEN=${JOB_TOKEN} \
GS4JS_HOME="/usr/lib"
COPY ./ ${APP_DIR}
# Step 1: Install App
# -------------------
WORKDIR ${APP_DIR}
# Step 2: Install Python, GhostScript and npm packages
# -------------------
ARG CACHE_DATE=2017-01-01
RUN \
apt-get update && \
apt-get install -y build-essential make gcc g++ python python-dev python-pip python-virtualenv && \
apt-get -y install ghostscript && apt-get clean && \
apt-get install -y libgs-dev && \
rm -rf /var/lib/apt/lists/*
RUN npm install
# Step 3: Start App
# -----------------
CMD ["npm", "run", "start"]
How do I install or upgrade to a higher Ghostscript version in a docker image?
It seems the distro you are using (since you use apt-get) is only on 9.06. Not surprising; many distros remain behind the curve, especially long-term-support ones.
If you want to use a more recent version of Ghostscript, you could nag the packager to update. And, you know, 9.06 is five years old now.
Failing that, you'll have to build it yourself: git clone the Ghostscript repository, cd into ghostpdl, run ./autogen.sh, then make and make install. That gets the current bleeding-edge source; for a release version, you'll have to pull from one of the tags (we tag the source for each release).
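The build-from-source steps above can be sketched as Dockerfile instructions. The repository URL and tag name below are assumptions; check the Ghostscript site for the exact tag of the release you want.

```dockerfile
# Build Ghostscript from source at a tagged release instead of using apt.
# The clone URL and tag name are assumptions; verify them against the
# official Ghostscript repository before use.
RUN apt-get update \
    && apt-get install -y git build-essential autoconf automake libtool \
    && git clone --depth 1 --branch ghostpdl-9.22 git://git.ghostscript.com/ghostpdl.git \
    && cd ghostpdl \
    && ./autogen.sh \
    && make && make install
```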
Or build it yourself locally and put the result somewhere your Docker image can retrieve it from.
IMO, if you are going to use a version other than the one provided by your distro's packager, you may as well use the current release. That's currently 9.22, and it will be 9.23 in a few weeks.