I wrote a Python Flask API that takes an image as input, uploads it to an S3 bucket, and then processes it in a function. When I run it locally, it works just fine, but when I run it from a Docker image, it raises botocore.exceptions.NoCredentialsError: Unable to locate credentials.
My python code:
import boto3

s3BucketName = "textract_bucket"
region_name = 'ap-southeast-1'
aws_access_key_id = 'advdsav'
aws_secret_access_key = 'sfvsdvsdvsdvvfbdf'

session = boto3.Session(
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
)
s3 = session.resource('s3')

# Amazon Textract client
textractmodule = boto3.client('textract', region_name=region_name)

def extract_text(doc_name):
    response = textractmodule.detect_document_text(
        Document={
            'S3Object': {
                'Bucket': s3BucketName,
                'Name': doc_name,
            }
        })
    extracted_items = []
    for item in response["Blocks"]:
        if item["BlockType"] == "LINE":
            extracted_items.append(item["Text"])
    return extracted_items
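For reference, the LINE-filtering loop in extract_text can be exercised against a stubbed Textract response without calling AWS at all. The response shape below follows the code above; the sample block contents are made up:

```python
# Stubbed detect_document_text response (sample data is invented;
# a real response carries many more fields per block).
fake_response = {
    "Blocks": [
        {"BlockType": "PAGE"},
        {"BlockType": "LINE", "Text": "Invoice #123"},
        {"BlockType": "WORD", "Text": "Invoice"},
        {"BlockType": "LINE", "Text": "Total: $40"},
    ]
}

# Same filtering logic as extract_text: keep only LINE blocks.
extracted_items = []
for item in fake_response["Blocks"]:
    if item["BlockType"] == "LINE":
        extracted_items.append(item["Text"])

print(extracted_items)  # → ['Invoice #123', 'Total: $40']
```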
The flask API:
@app.route('/text_extract', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        img = request.files['file']
        file = secure_filename(img.filename)
        bucket.Object(file).put(Body=img)
        output = extract_text(file)
        return {'results': output}

app.run(host="0.0.0.0")
Dockerfile:
FROM python:3.7
RUN apt update
RUN apt install -y libgl1-mesa-glx
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
The docker commands that I ran are:
docker build -t text_extract .
and then: docker run -p 5000:5000 text_extract
When I run the API and do a post request, I get the botocore.exceptions.NoCredentialsError error.
How can I fix this? Thanks
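One common cause, as a sketch: the container has no AWS credentials unless you pass them in, since boto3's default credential chain can't see the host's ~/.aws directory from inside Docker. Environment variables are one way to supply them (the values below are placeholders):

```shell
docker run -p 5000:5000 \
  -e AWS_ACCESS_KEY_ID=... \
  -e AWS_SECRET_ACCESS_KEY=... \
  -e AWS_DEFAULT_REGION=ap-southeast-1 \
  text_extract
```

Alternatively, the host's credentials file can be mounted read-only with `-v ~/.aws:/root/.aws:ro`. Note that the textract client in the code above is also created without the session's credentials, so it relies entirely on this default chain.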
I need some help.
I'm trying to send email using Rails and the default mail service. In development everything works, but after dockerizing the project I get the error: "wrong authentication type 'plain'".
------------------------ My Dockerfile ------------------------
FROM ruby:3.1.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN gem update bundler
RUN bundle install
COPY . .
ENV RAILS_ENV production
EXPOSE 3000
CMD rails server -b 0.0.0.0 -p 3000
------------------------ My .env file ------------------------
SMTP_ADDRESS='smtp.gmail.com'
SMTP_PORT=587
SMTP_AUTHENTICATION='plain'
SMTP_USER_NAME='login'
SMTP_PASSWORD='password'
DATABASE_NAME='dbname'
DATABASE_USERNAME='dbuser'
DATABASE_PASSWORD='dbpassword'
DATABASE_PORT=5432
DATABASE_HOST='host.docker.internal'
------------------------ My production.rb file ------------------------
config.action_mailer.delivery_method = :smtp
host = 'example.com' #replace with your own url
config.action_mailer.default_url_options = { host: host }
config.action_mailer.perform_caching = false
config.action_mailer.raise_delivery_errors = true
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  :address => ENV['SMTP_ADDRESS'],
  :port => ENV['SMTP_PORT'],
  :authentication => ENV['SMTP_AUTHENTICATION'],
  :user_name => ENV['SMTP_USER_NAME'],
  :password => ENV['SMTP_PASSWORD'],
  :enable_starttls_auto => true,
  :openssl_verify_mode => 'none' # SSL is activated but no certificate is installed, so clients must accept the untrusted host.
}
I think you may need to pass the ENV variables into the container — either via the Dockerfile, or, if you have a docker-compose file, there. The .env file on your host is not automatically available inside the container.
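As a sketch of the docker-compose option (the service name "web" is assumed; the variable names match the .env file above), env_file loads the whole file into the container's environment:

```yaml
# docker-compose.yml (fragment)
services:
  web:
    build: .
    env_file:
      - .env        # injects SMTP_* and DATABASE_* into the container
    ports:
      - "3000:3000"
```

With plain docker run, the equivalent is `docker run --env-file .env ...`.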
I'm running a Node.js app in a Docker container on Windows 10. When I try to get data from an Oracle database via a GET request (the database connection is in the Node.js code), I get the message:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/node-oracledb/INSTALL.html for help
When I make the same GET request without the container (running the server directly), the data is returned fine.
Dockerfile:
FROM node:latest
WORKDIR /app
COPY package*.json app.js ./
RUN npm install
COPY . .
EXPOSE 9000
CMD ["npm", "start"]
connection to oracle:
async function send2db(sql_command, res) {
  console.log("IN");
  console.log(sql_command);
  try {
    await oracledb.createPool({
      user: dbConfig.user,
      password: dbConfig.password,
      connectString: dbConfig.connectString,
    });
    console.log("Connection pool started");
    const result = await executeSQLCommand(sql_command
      // { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    return result;
  } catch (err) {
    // console.log("init() error: " + err.message);
    throw err;
  }
}
From Docker for Oracle Database Applications in Node.js and Python here is one solution:
FROM node:12-buster-slim
WORKDIR /opt/oracle
RUN apt-get update && \
apt-get install -y libaio1 unzip wget
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
unzip instantclient-basiclite-linuxx64.zip && \
rm -f instantclient-basiclite-linuxx64.zip && \
cd instantclient* && \
rm -f *jdbc* *occi* *mysql* *jar uidrvci genezi adrci && \
echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
ldconfig
You would want to use a later Node.js version now. The referenced link shows installs on other platforms too.
On Ubuntu 20.04 LTS, I am getting this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='docker.for.linux.localhost', port=8000): Max retries exceeded with url: /api/user
I have made two Docker images, one for Flask and one for Django.
In flask app,
my main.py:
from dataclasses import dataclass
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS
from sqlalchemy import UniqueConstraint
import requests

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = 'mysql://root:root@db/main'
CORS(app)
db = SQLAlchemy(app)

@dataclass
class Product(db.Model):
    id: int
    title: str
    image: str

    id = db.Column(db.Integer, primary_key=True, autoincrement=False)
    title = db.Column(db.String(200))
    image = db.Column(db.String(200))

@dataclass
class ProductUser(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer)
    product_id = db.Column(db.Integer)
    UniqueConstraint('user_id', 'product_id', name='user_product_unique')

@app.route('/api/products')
def index():
    return jsonify(Product.query.all())

@app.route('/api/products/<int:id>/like', methods=['POST'])
def like(id):
    req = requests.get('http://docker.for.linux.localhost:8000/api/user')
    return jsonify(req.json())

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
my Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r requirements.txt
COPY . /app
my docker-compose.yaml file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python main.py'
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python3 -u consumer.py'
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
When I enter the URL http://localhost:8001/api/products/1/like in Postman and send it as a POST request, I get the error mentioned above.
Can anyone tell me how to fix this?
I am using Windows 10, and for me http://docker.for.win.localhost:8000/api/user worked. You can try lin in place of win, or check and confirm the correct abbreviation for Linux.
req = requests.get('http://docker.for.win.localhost:8000/api/user')
json = req.json()
I am using Ubuntu 22.04 LTS, and for me http://172.17.0.1:8000/api/user worked.
To get 172.17.0.1, I ran ifconfig in a terminal and looked for the docker0 interface.
I want to send a POST request from one container to another; both are Flask apps.
When I press the send button in the form, the request fails with this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=5000): Max
retries exceeded with url: /users/ (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe0e5738ac0>: Failed to
establish a new connection: [Errno 111] Connection refused'))
I'm trying to run it just on localhost. When I use docker-compose up, everything is OK until I try to send the request.
app 1 code (app, that im trying to send request from):
from settings import app, db
from flask import jsonify, request
from models import User
@app.route('/users/', methods=['POST'])
def users_post():
    if request.json:
        new_user = User(
            email=request.json['email'],
            first_name=request.json['first_name'],
            last_name=request.json['last_name'],
            password=request.json['password'])
        db.session.add(new_user)
        db.session.commit()
        return jsonify({'msg': 'user successfully added'})
    else:
        return jsonify({'msg': 'request should be in json format'})

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
dockerfile container 1:
FROM python:3
COPY . ./app
WORKDIR /app
RUN pip3 install -r requirements.txt
EXPOSE 5000 5050
CMD ["python3", "app.py"]
app 2 code:
@app.route('/', methods=['GET', 'POST'])
def users_get():
    if request.method == 'POST':
        data = {
            'email': request.form['email'],
            'first_name': request.form['first_name'],
            'last_name': request.form['last_name'],
            'password': request.form['password']
        }
        r = requests.post('http://0.0.0.0:5000/users/', data=data)
        print(r.text)
    return render_template('index.html')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5050)
dockerfile in app 2 is similar to first one.
docker-compose
version: '3'
services:
  web:
    build: ./core
    command: python3 app.py
    volumes:
      - .:/core
    ports:
      - "5000:5000"
    links:
      - new_app
  new_app:
    build: ./new_app
    command: python3 app.py
    volumes:
      - .:/new_app
    ports:
      - "5050:5050"
What did I miss?
app1 is missing the port; you should add it:
app.run(debug=True, host='0.0.0.0', port=5000)
When calling app1 from app2, you should use its service hostname (web) instead of 0.0.0.0:
r = requests.post('http://web:5000/users/', data=data)
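To keep the URL working both inside and outside compose, the host can be read from an environment variable. The API_HOST name below is an assumption for illustration, not something from the original code:

```python
import os

# Inside docker-compose, the service name "web" resolves via Docker's DNS;
# outside compose, override with e.g. API_HOST=localhost.
API_HOST = os.environ.get("API_HOST", "web")
url = f"http://{API_HOST}:5000/users/"
```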
I am trying to set up the docker services following the link https://testdriven.io/courses/microservices-with-docker-flask-and-react/part-one-postgres-setup/
The code I already have:
Code structure
docker-compose-dev.yml
services/
  users/
    manage.py
    Dockerfile-dev
    entrypoint.sh
    project/
      __init__.py
      config.py
      db/
        create.sql
        Dockerfile
docker-compose.yml
version: '3.7'
services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_APP=project/__init__.py
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev  # new
      - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test  # new
    depends_on:  # new
      - users-db
  users-db:  # new
    build:
      context: ./services/users/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
manage.py:
from flask.cli import FlaskGroup
from project import app, db  # new

cli = FlaskGroup(app)

# new
@cli.command('recreate_db')
def recreate_db():
    db.drop_all()
    db.create_all()
    db.session.commit()

if __name__ == '__main__':
    cli()
Dockerfile:
# base image
FROM python:3.7.2-alpine
# new
# install dependencies
RUN apk update && \
apk add --virtual build-deps gcc python-dev musl-dev && \
apk add postgresql-dev && \
apk add netcat-openbsd && \
apk add bind-tools
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# new
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# add app
COPY . /usr/src/app
# new
# run server
CMD ["/usr/src/app/entrypoint.sh"]
entrypoint.sh
echo "Waiting for postgres..."
while ! nc -z users-db 5432; do
sleep 0.1
done
echo "PostgreSQL started"
python manage.py run -h 0.0.0.0
config.py
import os  # new

class BaseConfig:
    """Base configuration"""
    TESTING = False
    SQLALCHEMY_TRACK_MODIFICATIONS = False  # new

class DevelopmentConfig(BaseConfig):
    """Development configuration"""
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')  # new

class TestingConfig(BaseConfig):
    """Testing configuration"""
    TESTING = True
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_TEST_URL')  # new

class ProductionConfig(BaseConfig):
    """Production configuration"""
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')  # new
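How APP_SETTINGS selects one of these classes can be sketched without Flask. This is a simplified stand-in: resolving the class via globals() is only an illustration, while Flask's real app.config.from_object also imports dotted paths like project.config.DevelopmentConfig:

```python
import os

# Minimal mirror of the config classes above.
class BaseConfig:
    TESTING = False

class DevelopmentConfig(BaseConfig):
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')

class TestingConfig(BaseConfig):
    TESTING = True

# Simulate APP_SETTINGS pointing at the testing config and resolve it by name.
os.environ['APP_SETTINGS'] = 'TestingConfig'
config_cls = globals()[os.environ['APP_SETTINGS']]
```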
__init__.py:
import os
from flask import Flask, jsonify
from flask_restful import Resource, Api
from flask_sqlalchemy import SQLAlchemy

# instantiate the app
app = Flask(__name__)
api = Api(app)

# set config
app_settings = os.getenv('APP_SETTINGS')
app.config.from_object(app_settings)

# instantiate the db
db = SQLAlchemy(app)

# model
class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    username = db.Column(db.String(128), nullable=False)
    email = db.Column(db.String(128), nullable=False)
    active = db.Column(db.Boolean(), default=True, nullable=False)

    def __init__(self, username, email):
        self.username = username
        self.email = email

class UsersPing(Resource):
    def get(self):
        return {
            'status': 'success',
            'message': 'pong!'
        }
create.sql
CREATE DATABASE users_prod;
CREATE DATABASE users_dev;
CREATE DATABASE users_test;
Dockerfile in db folder:
# base image
FROM postgres:11.2-alpine
# run create.sql on init
ADD create.sql /docker-entrypoint-initdb.d
The app is expected to build successfully and run on port 5001.
However, docker-compose logs gives the following output:
users_1 | standard_init_linux.go:207: exec user process caused "exec format error"
users-db_1 | 2019-05-09 15:07:37.245 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
users-db_1 | 2019-05-09 15:07:37.245 UTC [1] LOG: listening on IPv6 address "::", port 5432
users-db_1 | 2019-05-09 15:07:37.282 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
users-db_1 | 2019-05-09 15:07:37.325 UTC [18] LOG: database system was shut down at 2019-05-09 15:07:35 UTC
users-db_1 | 2019-05-09 15:07:37.386 UTC [1] LOG: database system is ready to accept connections
It turned out I had omitted the #!/bin/sh line in entrypoint.sh.
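That matches the "exec format error": without a shebang, the kernel has no way to tell which interpreter should run the script. The fixed entrypoint.sh is the same file as above with the interpreter line added:

```shell
#!/bin/sh

echo "Waiting for postgres..."

while ! nc -z users-db 5432; do
  sleep 0.1
done

echo "PostgreSQL started"

python manage.py run -h 0.0.0.0
```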