Docker: Failed to establish a new connection: [Errno 111] Connection refused (python-3.x)

I want to send a POST request from one container to another; both are Flask apps.
When I press the send button in the form, the request fails with this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=5000): Max
retries exceeded with url: /users/ (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe0e5738ac0>: Failed to
establish a new connection: [Errno 111] Connection refused'))
I'm trying to run it just on localhost. When I use docker-compose up, everything is OK until I try to send the request.
App 1 code (the app the request is sent to):
from settings import app, db
from flask import jsonify, request
from models import User

@app.route('/users/', methods=['POST'])
def users_post():
    if request.json:
        new_user = User(
            email=request.json['email'],
            first_name=request.json['first_name'],
            last_name=request.json['last_name'],
            password=request.json['password'])
        db.session.add(new_user)
        db.session.commit()
        return jsonify({'msg': 'user successfully added'})
    else:
        return jsonify({'msg': 'request should be in json format'})

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
Dockerfile for container 1:
FROM python:3
COPY . ./app
WORKDIR /app
RUN pip3 install -r requirements.txt
EXPOSE 5000 5050
CMD ["python3", "app.py"]
App 2 code:
import requests
from flask import render_template, request
from settings import app  # assuming app is created the same way as in app 1

@app.route('/', methods=['GET', 'POST'])
def users_get():
    if request.method == 'POST':
        data = {
            'email': request.form['email'],
            'first_name': request.form['first_name'],
            'last_name': request.form['last_name'],
            'password': request.form['password']
        }
        r = requests.post('http://0.0.0.0:5000/users/', data=data)
        print(r.text)
    return render_template('index.html')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5050)
The Dockerfile for app 2 is similar to the first one.
docker-compose.yml:
version: '3'
services:
  web:
    build: ./core
    command: python3 app.py
    volumes:
      - .:/core
    ports:
      - "5000:5000"
    links:
      - new_app
  new_app:
    build: ./new_app
    command: python3 app.py
    volumes:
      - .:/new_app
    ports:
      - "5050:5050"
What did I miss?

App 1 is missing the port; you should add it:
app.run(debug=True, host='0.0.0.0', port=5000)
When calling app 1 from app 2, use its Compose service name as the host instead of 0.0.0.0:
r = requests.post('http://web:5000/users/', data=data)
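Compose puts both services on a shared network where each service name resolves to that container's address, which is why web works as a hostname. A quick sanity check, as a minimal sketch to run inside the new_app container (service names as defined in the compose file above):

import socket
import requests

# 'web' should resolve to the other container's IP on the Compose network
print(socket.gethostbyname('web'))

# note: app 1 reads request.json, so sending json=... instead of
# form-encoded data=... is needed for it to take the success branch
r = requests.post('http://web:5000/users/', json={
    'email': 'a@b.c', 'first_name': 'A', 'last_name': 'B', 'password': 'x'})
print(r.status_code, r.text)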

Related

Error 500 when trying to send file from remote server (python code) to an SFTP server

I have a Flask API in a Docker container where I do the following:
from flask import Flask, request
import os
import json
import paramiko
import subprocess

app = Flask(__name__)

@app.route("/")
def hello():
    return "Service up in K8S!"

@app.route("/get", methods=['GET'])
def get_ano():
    print("Test liveness")
    return "Pod is alive !"

@app.route("/run", methods=['POST'])
def run_dump_generation():
    rules_str = request.headers.get('database')
    print(rules_str)
    postgres_bin = r"/usr/bin/"
    dump_file = "database_dump.sql"
    os.environ['PGPASSWORD'] = 'XXXXX'
    print('Before dump generation')
    with open(dump_file, "w") as f:
        result = subprocess.call([
            os.path.join(postgres_bin, "pg_dump"),
            "-Fp",
            "-d", "XX",
            "-U", "XX",
            "-h", "XX",
            "-p", "XX"
        ], stdout=f)
    print('After dump generation')
    transport = paramiko.Transport(("X", X))
    transport.connect(username="X", password="X")
    sftp = transport.open_sftp_client()
    remote_file = '/data/database_dump.sql'
    sftp.put('database_dump.sql', remote_file)
    print("SFTP object", sftp)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
When I run the app in Kubernetes and send a POST request, I get the error: "POST /run HTTP/1.1" 500
Here is the requirements.txt:
Flask==2.0.1
paramiko==3.0.0
The error comes from transport = paramiko.Transport(("X", X)). The same code works locally, so I don't understand why I get this error on Kubernetes. No prints show up in the logs, which I assume is because of the error 500. I guess it is not possible with this code to send a file from this container to the SFTP server (which runs OpenSSH).
What can I do?
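One way to surface the underlying exception despite the bare 500, as a minimal sketch: wrap the transfer in try/except so the real traceback lands in the pod logs:

import traceback

try:
    transport = paramiko.Transport(("X", X))  # placeholders as above
    transport.connect(username="X", password="X")
except Exception:
    traceback.print_exc()  # prints the actual failure to stdout/logs
    raise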
---- UPDATE ----
I think I have found the problem. From the Flask pod I am trying to send a file to the SFTP server, so I have to modify the following code to allow this type of transfer. It is an SFTP server with OpenSSH.
Here is the code to modify:
transport = paramiko.Transport(("X", X))
transport.connect(username="X", password="X")
sftp = transport.open_sftp_client()
remote_file = '/data/database_dump.sql'
sftp.put('database_dump.sql', remote_file)
print("SFTP object", sftp)
The SFTP server (with OpenSSH) runs on Alpine, and the Flask code is in an Alpine container too.
UPDATE: I tried the following:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
session = ssh.connect(hostname="X", port=X, username='X', password="X")
print(ssh)
But I get the following error:
File "c:/Users/X/dump_generator_api/t.py", line 32, in <module>
session = ssh.connect(hostname="X", port=X, username='X', password="X")
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 449, in connect
self._auth(
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 780, in _auth
raise saved_exception
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 767, in _auth
self._transport.auth_password(username, password)
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\transport.py", line 1567, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\auth_handler.py", line 259, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
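The AuthenticationException means the server actively rejected the username/password, so at this point the problem is the credentials or the allowed authentication methods on the OpenSSH side rather than this code. For reference, once authentication succeeds, the SSHClient approach can open the SFTP session directly; a minimal sketch with the same placeholders:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname="X", port=22, username="X", password="X")  # placeholders

sftp = ssh.open_sftp()  # SFTP session over the authenticated transport
sftp.put('database_dump.sql', '/data/database_dump.sql')
sftp.close()
ssh.close()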

amqplib Error: "Frame size exceeds frame max" inside Docker container

I am trying to build a simple application with a backend on Node.js + TS and RabbitMQ, based on Docker. So there are 2 containers: a rabbitmq container and a backend container with 2 servers running - producer and consumer. Now I am trying to get access to the rabbitmq server, but I get the error "Frame size exceeds frame max".
The full code is:
My producer server code is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';

const producer = express();

const sendRabbitMq = () => {
    amqplib.connect('amqp://localhost', function(error0: any, connection: any) {
        if (error0) {
            console.log('Some error...')
            throw error0
        }
    })
}

producer.post('/send', (_req, res) => {
    sendRabbitMq();
    console.log('Done...');
    res.send("Ok")
})

export { producer };
It is imported into the main file index.ts and run from there.
Maybe I also have some bad configuration in Docker. My Dockerfile is:
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose file contains:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would really appreciate your help.
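A likely culprit, for what it's worth: inside the Compose network, localhost is the backend container itself, so the connection string should target the rabbitmq service name; and the default amqplib import is the promise API, whose connect takes no callback (the callback signature belongs to amqplib/callback_api). A minimal sketch under those assumptions, using the credentials from the compose file above:

import amqplib from 'amqplib';

const sendRabbitMq = async () => {
    // 'rabbitmq' is the Compose service name; user/user comes from
    // RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS above
    const connection = await amqplib.connect('amqp://user:user@rabbitmq:5672');
    const channel = await connection.createChannel();
    await channel.assertQueue('tasks');
    channel.sendToQueue('tasks', Buffer.from('hello'));
    await channel.close();
    await connection.close();
};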

Authenticate aws configure credentials inside Docker

I wrote a Python Flask API that takes an image as input, uploads it to an S3 bucket and then processes it in a function. When I run it locally, it works just fine. But when I run it through the Docker image, it gives botocore.exceptions.NoCredentialsError: Unable to locate credentials.
My python code:
import boto3

s3BucketName = "textract_bucket"
region_name = 'ap-southeast-1'
aws_access_key_id = 'advdsav',
aws_secret_access_key = 'sfvsdvsdvsdvvfbdf'

session = boto3.Session(
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
)
s3 = session.resource('s3')

# Amazon Textract client
textractmodule = boto3.client('textract', region_name=region_name)

def extract_text(doc_name):
    response = textractmodule.detect_document_text(
        Document={
            'S3Object': {
                'Bucket': s3BucketName,
                'Name': doc_name,
            }
        })
    extracted_items = []
    for item in response["Blocks"]:
        if item["BlockType"] == "LINE":
            extracted_items.append(item["Text"])
    return extracted_items
The flask API:
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
bucket = s3.Bucket(s3BucketName)  # assumed: the bucket handle used below

@app.route('/text_extract', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        img = request.files['file']
        file = secure_filename(img.filename)
        bucket.Object(file).put(Body=img)
        output = extract_text(file)
        return {'results': output}

app.run(host="0.0.0.0")
Dockerfile:
FROM python:3.7
RUN apt update
RUN apt install -y libgl1-mesa-glx
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
The docker commands that I ran are:
docker build -t text_extract .
and then: docker run -p 5000:5000 text_extract
When I run the API and send a POST request, I get the botocore.exceptions.NoCredentialsError error.
How can I fix this? Thanks
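One thing that stands out: the Textract client is created with boto3.client(...) directly, so it uses the default credential chain (environment variables, config files, instance roles) rather than the keys passed to the Session, and inside the container that chain is empty. A sketch that builds the client from the session and reads the keys from environment variables injected at run time:

import os
import boto3

# assumes the keys are passed in via, e.g.:
#   docker run -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... -p 5000:5000 text_extract
session = boto3.Session(
    aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
    region_name='ap-southeast-1',
)
s3 = session.resource('s3')
textractmodule = session.client('textract')  # shares the session credentials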

requests.exceptions.ConnectionError: HTTPConnectionPool(host='docker.for.linux.localhost', port=8000): Max retries exceeded with url: /api/user

On Ubuntu 20.04 LTS, I am getting this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='docker.for.linux.localhost', port=8000): Max retries exceeded with url: /api/user
I have made two Docker images, one for Flask and one for Django.
In the Flask app, my main.py:
from dataclasses import dataclass
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS
from sqlalchemy import UniqueConstraint
import requests

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = 'mysql://root:root@db/main'

CORS(app)
db = SQLAlchemy(app)

@dataclass
class Product(db.Model):
    id: int
    title: str
    image: str

    id = db.Column(db.Integer, primary_key=True, autoincrement=False)
    title = db.Column(db.String(200))
    image = db.Column(db.String(200))

@dataclass
class ProductUser(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer)
    product_id = db.Column(db.Integer)
    UniqueConstraint('user_id', 'product_id', name='user_product_unique')

@app.route('/api/products')
def index():
    return jsonify(Product.query.all())

@app.route('/api/products/<int:id>/like', methods=['POST'])
def like(id):
    req = requests.get('http://docker.for.linux.localhost:8000/api/user')
    return jsonify(req.json())

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
my Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r requirements.txt
COPY . /app
my docker-compose.yaml file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python main.py'
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python3 -u consumer.py'
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
When I send a POST request in Postman to
http://localhost:8001/api/products/1/like
I get the error mentioned above.
Can anyone tell me how to fix this?
I am using Windows 10, and for me http://docker.for.win.localhost:8000/api/user worked. You can try using lin, or check and confirm the Linux abbreviation for this.
req = requests.get('http://docker.for.win.localhost:8000/api/user')
json = req.json()
I am using Ubuntu 22.04 LTS, and for me http://172.17.0.1:8000/api/user worked.
To get 172.17.0.1, I ran ifconfig in a terminal and looked for the docker0 interface.
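A related option on Linux, assuming Docker 20.10 or newer: host.docker.internal can be mapped to the host gateway explicitly, so the code does not need a hard-coded bridge IP. A sketch of the compose change:

services:
  backend:
    # ...existing configuration...
    extra_hosts:
      - "host.docker.internal:host-gateway"

With that in place, the Flask code would call http://host.docker.internal:8000/api/user.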

Docker, Ubuntu 18.04 python3.7.2: standard_init_linux.go:207: exec user process caused "exec format error"

I am trying to set up the docker services following the link https://testdriven.io/courses/microservices-with-docker-flask-and-react/part-one-postgres-setup/
The code I already have:
Code structure
docker-compose-dev.yml
services/
    users/
        manage.py
        Dockerfile-dev
        entrypoint.sh
        project/
            __init__.py
            config.py
            db/
                create.sql
                Dockerfile
docker-compose.yml
version: '3.7'
services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_APP=project/__init__.py
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev  # new
      - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test  # new
    depends_on:  # new
      - users-db
  users-db:  # new
    build:
      context: ./services/users/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
manage.py:
from flask.cli import FlaskGroup
from project import app, db  # new

cli = FlaskGroup(app)

# new
@cli.command('recreate_db')
def recreate_db():
    db.drop_all()
    db.create_all()
    db.session.commit()

if __name__ == '__main__':
    cli()
Dockerfile:
# base image
FROM python:3.7.2-alpine
# new
# install dependencies
RUN apk update && \
    apk add --virtual build-deps gcc python-dev musl-dev && \
    apk add postgresql-dev && \
    apk add netcat-openbsd && \
    apk add bind-tools
# set working directory
WORKDIR /usr/src/app
# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# new
# add entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# add app
COPY . /usr/src/app
# new
# run server
CMD ["/usr/src/app/entrypoint.sh"]
entrypoint.sh
echo "Waiting for postgres..."
while ! nc -z users-db 5432; do
sleep 0.1
done
echo "PostgreSQL started"
python manage.py run -h 0.0.0.0
config.py
import os  # new

class BaseConfig:
    """Base configuration"""
    TESTING = False
    SQLALCHEMY_TRACK_MODIFICATIONS = False  # new

class DevelopmentConfig(BaseConfig):
    """Development configuration"""
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')  # new

class TestingConfig(BaseConfig):
    """Testing configuration"""
    TESTING = True
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_TEST_URL')  # new

class ProductionConfig(BaseConfig):
    """Production configuration"""
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL')  # new
__init__.py:
import os
from flask import Flask, jsonify
from flask_restful import Resource, Api
from flask_sqlalchemy import SQLAlchemy

# instantiate the app
app = Flask(__name__)
api = Api(app)

# set config
app_settings = os.getenv('APP_SETTINGS')
app.config.from_object(app_settings)

# instantiate the db
db = SQLAlchemy(app)

# model
class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    username = db.Column(db.String(128), nullable=False)
    email = db.Column(db.String(128), nullable=False)
    active = db.Column(db.Boolean(), default=True, nullable=False)

    def __init__(self, username, email):
        self.username = username
        self.email = email

class UsersPing(Resource):
    def get(self):
        return {
            'status': 'success',
            'message': 'pong!'
        }
create.sql
CREATE DATABASE users_prod;
CREATE DATABASE users_dev;
CREATE DATABASE users_test;
Dockerfile in db folder:
# base image
FROM postgres:11.2-alpine
# run create.sql on init
ADD create.sql /docker-entrypoint-initdb.d
The app is expected to build successfully and run on port 5001.
However, docker-compose logs gives the following output:
users_1 | standard_init_linux.go:207: exec user process caused "exec format error"
users-db_1 | 2019-05-09 15:07:37.245 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
users-db_1 | 2019-05-09 15:07:37.245 UTC [1] LOG: listening on IPv6 address "::", port 5432
users-db_1 | 2019-05-09 15:07:37.282 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
users-db_1 | 2019-05-09 15:07:37.325 UTC [18] LOG: database system was shut down at 2019-05-09 15:07:35 UTC
users-db_1 | 2019-05-09 15:07:37.386 UTC [1] LOG: database system is ready to accept connections
I have omitted the #!/bin/sh part in the entrypoint.sh.
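If the shebang is missing from the actual file and not just from the post, that by itself explains the "exec format error": without a #!/bin/sh first line the kernel cannot tell which interpreter should run the script when Docker execs it. A minimal sketch of the entrypoint with the shebang restored:

#!/bin/sh
# the shebang tells the kernel to run this script with /bin/sh

echo "Waiting for postgres..."

while ! nc -z users-db 5432; do
    sleep 0.1
done

echo "PostgreSQL started"

python manage.py run -h 0.0.0.0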
