in docker python cannot find app - python-3.x

When I run "docker-compose up" on Windows 10 docker client, I get the following errors:
counterapp_web_1 | File "manage.py", line 7, in <module>
counterapp_web_1 | from app import app
counterapp_db_1 | 2018-01-26T05:09:23.517693Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
counterapp_web_1 | ImportError: No module named 'app'
Here is my dockerfile:
FROM python:3.4.5-slim
## make a local directory
RUN mkdir /counter_app
ENV PATH=$PATH:/counter_app
ENV PYTHONPATH /counter_app
# set "counter_app" as the working directory from which CMD, RUN, ADD references
WORKDIR /counter_app
# now copy all the files in this directory to /counter_app
ADD . .
# pip install the local requirements.txt
RUN pip install -r requirements.txt
# Listen to port 5000 at runtime
EXPOSE 5000
# Define our command to be run when launching the container
CMD ["python", "manage.py", "runserver"]
There is a manage.py that imports from the folder app; under app, there is an __init__.py.

Double-check whether this error on docker run is caused by your directory name:
A package is imported via its directory, so if you want to import it as app, the __init__.py file must live in a directory named app.
A simpler alternative is to rename __init__.py to app.py.
So the folder must be named app, not counter_app.
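For reference, a minimal sketch of a layout that makes from app import app resolve. The Flask specifics below are an assumption (manage.py runserver suggests Flask-Script); only the names manage.py and app come from the question:
# app/__init__.py
from flask import Flask
app = Flask(__name__)  # the object that manage.py imports

# manage.py (sketch; the original uses a runserver command)
from app import app

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # 5000 matches the EXPOSE in the Dockerfile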

Related

How to import modules into a file in docker-compose

Currently, I am getting this error when I run my server in docker:
import {dataSchema} from "./data-model.js"
        ^^^^^^^^^^
SyntaxError: The requested module './data-model.js' does not provide an export named 'dataSchema'
Despite having exported it like so:
module.exports = {
dataSchema,
}
And importing like so:
import {dataSchema} from "./data-model.js"
My dockerfile looks like this:
FROM node:12
WORKDIR /usr/src/server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
The file that imports dataSchema is at the same directory level as the file that exports it. I cannot use the CJS syntax for this.
Currently, I'm trying to console.log() the dataSchema. I'm assuming this isn't working due to problems with my Dockerfile. The next thing I suspect is that I'm supposed to copy the data-model.js file explicitly, but I don't see why the COPY . . in my Dockerfile wouldn't already do that.
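For what it's worth, the error message points at module syntax rather than Docker: module.exports is CommonJS, and an ESM named import only sees declarations made with the export keyword. A minimal sketch of the ESM form (the schema body is illustrative):
// data-model.js - ESM named export instead of module.exports
export const dataSchema = { /* schema definition */ };

// importing file, same directory
import { dataSchema } from "./data-model.js";
console.log(dataSchema);
If that is the cause, COPY . . is copying data-model.js just fine; the import simply fails at runtime for the reason above.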

Not able to use terraform command on OpenFaas Python 3 Function deployment

I am using an OpenFaaS Python 3 function with Terraform to create a bucket in my AWS account. I am trying it locally: I created a local cluster using k3s and installed the OpenFaaS CLI.
Steps:
Created a local cluster and deployed faas-netes.
Created a new project using the OpenFaaS python3 template.
Exposed the port.
Updated the gateway key under provider in my project's YAML file with the new host.
I am creating my .tf files with Python only, and I am sure the files are created with the proper content, as I checked by executing ls and printing the file data after closing the file.
Checked that terraform is installed in the image and working by executing terraform --version.
I have already changed my Dockerfile to install terraform while building the image.
OS: Ubuntu 20.04, Docker: 20.10.15, faas-cli: 0.14.2
Dockerfile
FROM --platform=${TARGETPLATFORM:-linux/amd64} ghcr.io/openfaas/classic-watchdog:0.2.0 as watchdog
FROM --platform=${TARGETPLATFORM:-linux/amd64} python:3-alpine
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Allows you to add additional packages via build-arg
ARG ADDITIONAL_PACKAGE
COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
RUN chmod +x /usr/bin/fwatchdog
RUN apk --no-cache add ca-certificates ${ADDITIONAL_PACKAGE}
RUN apk add --no-cache wget
RUN apk add --no-cache unzip
RUN wget https://releases.hashicorp.com/terraform/1.1.9/terraform_1.1.9_linux_amd64.zip
RUN unzip terraform_1.1.9_linux_amd64.zip
RUN mv terraform /usr/bin/
RUN rm terraform_1.1.9_linux_amd64.zip
# Add non root user
RUN addgroup -S app && adduser app -S -G app
WORKDIR /home/app/
COPY index.py .
COPY requirements.txt .
RUN chown -R app /home/app && \
mkdir -p /home/app/python && chown -R app /home/app
USER app
ENV PATH=$PATH:/home/app/.local/bin:/home/app/python/bin/
ENV PYTHONPATH=$PYTHONPATH:/home/app/python
RUN pip install -r requirements.txt --target=/home/app/python
RUN mkdir -p function
RUN touch ./function/__init__.py
WORKDIR /home/app/function/
COPY function/requirements.txt .
RUN pip install -r requirements.txt --target=/home/app/python
WORKDIR /home/app/
USER root
COPY function function
# Allow any user-id for OpenShift users.
RUN chown -R app:app ./ && \
chmod -R 777 /home/app/python
USER app
ENV fprocess="python3 index.py"
EXPOSE 8080
HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
CMD ["fwatchdog"]
Handler.py
import os
import subprocess

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    print(os.system("terraform --version"))
    with open('terraform-local/bucket.tf', 'w') as resourcefile:
        resourcefile.write('resource "aws_s3_bucket" "ABHINAV" { bucket = "new"}')
        resourcefile.close()
    with open('terraform-local/main.tf', 'w') as mainfile:
        mainfile.write('provider "aws" {' + '\n')
        mainfile.write('access_key = ""' + '\n')
        mainfile.write('secret_key = ""' + '\n')
        mainfile.write('region = "ap-south-1"' + '\n')
        mainfile.write('}')
        mainfile.close()
    print(os.system("ls"))
    os.system("terraform init")
    os.system("terraform plan")
    os.system("terraform apply -auto-approve")
    return req
But I am still not able to use terraform commands like terraform init, and the bucket is not created on AWS.
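A hedged debugging sketch, not a confirmed fix: os.system discards error detail and inherits the function's working directory (/home/app/ per the Dockerfile above), while the .tf files are written under terraform-local/. Running terraform with an explicit cwd and captured output should surface the real failure (only the terraform-local directory name comes from the handler; the rest is an assumption):
import os
import subprocess

def run_terraform(workdir="terraform-local"):
    # make sure the directory holding the .tf files exists before terraform runs
    os.makedirs(workdir, exist_ok=True)
    for args in (["terraform", "init"],
                 ["terraform", "plan"],
                 ["terraform", "apply", "-auto-approve"]):
        # cwd pins terraform to the directory containing the .tf files;
        # captured output shows why a command failed instead of a bare exit code
        result = subprocess.run(args, cwd=workdir, capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            print(result.stderr)
            break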

Deploying Docker Container Registry on Azure App Service Issue

I am unable to run my container image from the Container Registry on Azure App Service. I have a Flask app, and the following is its Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy project
WORKDIR /usr/src/app
COPY . /usr/src/app/
# expose port 80
EXPOSE 80
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:80", "app:app"]
I have deployed the docker image on the Container Registry. I have also set WEBSITES_PORT to 80 under App Service -> Application Settings.
Even after doing that, I get the following error:
ERROR - Container XYZ didn't respond to HTTP pings on port: 80, failing site start.
I have tried running it locally and it works fine. But, it just does not seem to work on the Azure App service. Any help is highly appreciated.
I don't see an issue in the code you posted but to verify, here is a configuration for a Flask app with a Gunicorn server that works on a containerized Azure App Service:
app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello World!</p>"
Dockerfile
FROM python:3.8-slim-buster
ADD app.py app.py
ADD requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN python3 -m pip install -r requirements.txt
EXPOSE 80
CMD ["gunicorn", "--bind=0.0.0.0:80", "app:app"]
requirements.txt
flask
gunicorn
I assume you selected "Docker Container" when you created the Azure App Service?
And then simply chose your image?
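As a quick sanity check before involving Azure, the same image can be exercised locally (the image tag is illustrative):
docker build -t flask-azure-test .
docker run -p 8080:80 flask-azure-test
curl http://localhost:8080
If this responds locally but the App Service ping still fails, the usual suspects are a mismatch between the port the container listens on and WEBSITES_PORT, or a container that starts too slowly for the platform's ping window.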

Permission denied when writing logs in Docker

I was trying to write a Docker log file on Ubuntu 20.04 by
sudo docker logs CONTAINER_ID >output.log
But it returned
-bash: output.log: Permission denied
How to solve the permission problem to save the logs? Is the problem inside the container or outside of it?
P.S. I started this container with docker run -d -v ~/desktop/usercode/Docker:/code -p 5000:5000 flask_app:1.0, and the Dockerfile is as below:
## Base Python Image for App
FROM python:3.9-rc-buster
# Setting up Docker environment
# Setting Work directory for RUN CMD commands
WORKDIR /code
# Export env variables.
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
###
#Copy requirements file from current directory to file in
#containers code directory we have just created.
COPY requirements.txt requirements.txt
#Run and install all required modules in container
RUN pip3 install -r requirements.txt
#Copy current directory files to containers code directory
COPY . .
#RUN app.
CMD ["flask", "run"]
And, the images are:
REPOSITORY   TAG             IMAGE ID       CREATED          SIZE
flask_app    1.0             90b2840f4d5d   29 minutes ago   895MB
python       3.9-rc-buster   50625b35cf42   9 months ago     884MB
The problem is outside the container: the redirection is performed by your shell, as your user, before sudo runs, so the shell first tries to create output.log in your current directory and fails if you lack write permission there. It should work if you redirect to a directory you own:
docker logs CONTAINER_ID > ~/output.log
This creates the log file in your home directory; for example, if your username is USER1, the file is created at /home/USER1.
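If the current directory itself requires root, an alternative that keeps the write on the privileged side (a sketch assuming sudo is available):
sudo docker logs CONTAINER_ID 2>&1 | sudo tee output.log > /dev/null
Here tee runs as root, so it can create the file where your user cannot, and 2>&1 also captures the container's stderr, which docker logs emits as a separate stream.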

Cannot dockerize app with docker compose with secrets says file not found but file exists

I am trying to dockerize an API that uses Firebase. The credentials file is proving difficult to handle; I'll be deploying using docker-compose. My files are:
docker-compose:
version: "3.7"
services:
  api:
    restart: always
    build: .
    secrets:
      - source: google_creds
        target: auth_file
    env_file: auth.env
    ports:
      - 1234:8990
secrets:
  google_creds:
    file: key.json
the key.json is the private key file
The Dockerfile looks like:
FROM alpine
# Install the required packages
RUN apk add --update git go musl-dev
# Install the required dependencies
RUN go get github.com/gorilla/mux
RUN go get golang.org/x/crypto/sha3
RUN go get github.com/lib/pq
RUN go get firebase.google.com/go
# Setup the proper workdir
WORKDIR /root/go/src/secure-notes-api
# Copy individual files at the end to leverage caching
COPY ./LICENSE ./
COPY ./README.md ./
COPY ./*.go ./
COPY db db
RUN go build
#Executable command needs to be static
CMD ["/root/go/src/secure-notes-api/secure-notes-api"]
I've set the GOOGLE_APPLICATION_CREDENTIALS env from my auth.env to: /run/secrets/auth_file
The program panics with:
panic: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open "/run/secrets/auth_file": no such file or directory
I've tried:
Mounting a volume to a path and setting the env var to that - results in the same error
Copying the key into the docker image (out of desperation) - resulted in the same error
Overriding the start command to cat the secret file - this worked; I could see the entire file being output
Curiously enough, if I mount a volume, shell into it and execute the binary manually, it works perfectly well.
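One detail worth checking, offered as an assumption rather than a confirmed fix: Go's file-open errors do not normally put quotes around the path, so the quotes in open "/run/secrets/auth_file" suggest the quote characters are part of the value itself. docker-compose env_file values are read verbatim, not shell-parsed, so a quoted assignment keeps its quotes. A minimal auth.env would be:
GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/auth_file
with no quotes around the path.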
