Authentication failure when connecting to a GCP PostgreSQL Cloud SQL instance via Cloud SQL Proxy using SQLAlchemy in Python on an Ubuntu machine - python-3.x

I have created a PostgreSQL Cloud SQL instance in GCP, and I have created a user and a DB for it. I can connect to it via the cloud_sql_proxy tool:
$ cloud_sql_proxy -instances=project_name:REGION:instance_name=tcp:5432 -credential_file=/path/to/key.json
I can then successfully connect to the instance via psql and run queries, insert data, etc., on the command line:
$ psql "host=127.0.0.1 port=5432 sslmode=disable dbname=myDBname user=myUser"
Password:
psql (10.18 (Ubuntu 10.18-0ubuntu0.18.04.1), server 13.3)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
Type "help" for help.
myDBname=>SELECT * FROM MyTable;
My issue is that when I try to use the sqlalchemy library with the sample code provided in the SQLAlchemy example, like this:
import sqlalchemy

db_config = {
    "pool_size": 5,
    "max_overflow": 2,
    "pool_timeout": 30,    # 30 seconds
    "pool_recycle": 1800,  # 30 minutes
}

def init_tcp_connection_engine(db_config):
    db_user = "myUser"
    db_pass = "myPassword"
    db_name = "myDBname"
    db_hostname = "127.0.0.1"
    db_port = 5432

    pool = sqlalchemy.create_engine(
        # Equivalent URL:
        # postgresql+pg8000://<db_user>:<db_pass>@<db_host>:<db_port>/<db_name>
        sqlalchemy.engine.url.URL.create(
            drivername="postgresql+pg8000",
            username=db_user,   # e.g. "my-database-user"
            password=db_pass,   # e.g. "my-database-password"
            host=db_hostname,   # e.g. "127.0.0.1"
            port=db_port,       # e.g. 5432
            database=db_name,   # e.g. "my-database-name"
        ),
        **db_config
    )
    # [END cloud_sql_postgres_sqlalchemy_create_tcp]
    pool.dialect.description_encoding = None
    return pool

def main():
    db = init_tcp_connection_engine(db_config)
    with db.connect() as conn:
        rows = conn.execute("SELECT * FROM MyTable;").fetchall()
        for row in rows:
            print(row)

if __name__ == "__main__":
    main()
I get the error of
Exception has occurred: ProgrammingError (note: full exception trace is shown but execution is paused at: <module>)
(pg8000.dbapi.ProgrammingError) {'S': 'FATAL', 'V': 'FATAL', 'C': '28P01', 'M': 'password authentication failed for user "myUser"', 'F': 'auth.c', 'L': '347', 'R': 'auth_failed'}
(Background on this error at: https://sqlalche.me/e/14/f405)
Any idea what is wrong and how I can resolve this?

I changed the password via the web UI, pasted it into the code, and it worked.
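A small guard against this class of mistake is to read the credentials from the environment instead of hard-coding them, so a rotated password only has to be updated in one place. A minimal sketch (the DB_USER, DB_PASS and DB_NAME variable names are assumptions, not part of the Cloud SQL sample):

import os
import sqlalchemy

# Read credentials from the environment so a password rotated in the
# Cloud SQL console only needs to be exported again, not edited in code.
db_user = os.environ["DB_USER"]
db_pass = os.environ["DB_PASS"]
db_name = os.environ["DB_NAME"]

engine = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL.create(
        drivername="postgresql+pg8000",
        username=db_user,
        password=db_pass,
        host="127.0.0.1",  # the local cloud_sql_proxy listener
        port=5432,
        database=db_name,
    )
)

# Fail fast with a clear error if the credentials are stale.
with engine.connect() as conn:
    conn.execute("SELECT 1")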

Related

Error 500 when trying to send file from remote server (python code) to an SFTP server

I have a Flask API in a Docker container where I do the following:
from flask import Flask, request
import os
import json
import paramiko
import subprocess

app = Flask(__name__)

@app.route("/")
def hello():
    return "Service up in K8S!"

@app.route("/get", methods=['GET'])
def get_ano():
    print("Test liveness")
    return "Pod is alive !"

@app.route("/run", methods=['POST'])
def run_dump_generation():
    rules_str = request.headers.get('database')
    print(rules_str)

    postgres_bin = r"/usr/bin/"
    dump_file = "database_dump.sql"
    os.environ['PGPASSWORD'] = 'XXXXX'

    print('Before dump generation')
    with open(dump_file, "w") as f:
        result = subprocess.call([
            os.path.join(postgres_bin, "pg_dump"),
            "-Fp",
            "-d", "XX",
            "-U", "XX",
            "-h", "XX",
            "-p", "XX"
        ],
        stdout=f
        )
    print('After dump generation')

    transport = paramiko.Transport(("X", X))
    transport.connect(username="X", password="X")
    sftp = transport.open_sftp_client()
    remote_file = '/data/database_dump.sql'
    sftp.put('database_dump.sql', remote_file)
    print("SFTP object", sftp)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
When I run the app in Kubernetes and send a POST request, I get the error: "POST /run HTTP/1.1" 500
Here is the requirements.txt:
Flask==2.0.1
paramiko==3.0.0
The error comes from transport = paramiko.Transport(("X", X)). The same code works locally, so I don't understand why I get this error on Kubernetes. No print output shows up in the logs, which I assume is because of the 500 error. Perhaps it is simply not possible with this code to send a file from this container to the SFTP server (which runs OpenSSH).
What can I do ?
---- UPDATE ----
I think I have found the problem. From a Flask pod I am trying to send a file to the SFTP server, so I have to modify the following code to "allow" this type of transfer. It is an SFTP server with OpenSSH.
Here is the code to modify:
transport = paramiko.Transport(("X", X))
transport.connect(username="X", password="X")
sftp = transport.open_sftp_client()
remote_file = '/data/database_dump.sql'
sftp.put('database_dump.sql', remote_file)
print("SFTP object", sftp)
The SFTP server (with OpenSSH) runs on Alpine, and the Flask code is in an Alpine container too.
UPDATE: I also tried the following:
ssh=paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
session = ssh.connect(hostname="X", port=X, username='X', password="X")
print(ssh)
But I get the following error:
File "c:/Users/X/dump_generator_api/t.py", line 32, in <module>
session = ssh.connect(hostname="X", port=X, username='X', password="X")
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 449, in connect
self._auth(
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 780, in _auth
raise saved_exception
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\client.py", line 767, in _auth
self._transport.auth_password(username, password)
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\transport.py", line 1567, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\site-packages\paramiko\auth_handler.py", line 259, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
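When debugging an AuthenticationException like this from inside a pod, two checks usually narrow it down: (a) confirming the pod can reach the SFTP host at all, since Kubernetes networking often differs from a local machine, and (b) turning on paramiko's own logging to see where the SSH handshake fails. A rough sketch, with the same "X" placeholders as above:

import socket
import paramiko

host, port = "X", 22  # placeholders, as in the question

# 1. Basic TCP reachability from inside the pod: DNS or NetworkPolicy
#    problems show up here, before any SSH is attempted.
socket.create_connection((host, port), timeout=5).close()

# 2. Log the full SSH negotiation to a file, including which auth
#    methods the server offers and why password auth was rejected.
paramiko.util.log_to_file("paramiko_debug.log")

transport = paramiko.Transport((host, port))
transport.connect(username="X", password="X")
print("authenticated:", transport.is_authenticated())
transport.close()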

Login problems connecting with SQL Server in nodejs

I'm working on macOS with SQL Server, using a Docker image to run it:
docker run -d --name sqlserver -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=myStrongPass' -e 'MSSQL_PID=Developer' -p 1433:1433 microsoft/mssql-server-linux:2017-latest
I can connect successfully in the Azure Data Studio GUI with the following configuration.
But the connection does not work in my Node.js code using the mssql module:
const poolConnection = new sql.ConnectionPool({
  database: 'myDbTest',
  server: 'localhost',
  port: 1433,
  password: '*******',
  user: 'sa',
  connectionTimeout: 5000,
  options: {
    encrypt: false,
  },
});
const [error, connection] = await to(poolConnection.connect());
The error is always the same:
ConnectionError: Login failed for user 'sa'
This is my first time working with SQL Server, and it's confusing that I can connect correctly in the Azure Data Studio GUI but can't do it in code.
I've tried creating new login users with CREATE LOGIN and giving them privileges based on other posts here on Stack Overflow, but nothing seems to work.
UPDATE:
I realized that I can connect correctly if I put master in the database key.
Example:
const poolConnection = new sql.ConnectionPool({
  database: 'master', // <- update here
  server: 'localhost',
  port: 1433,
  password: '*******',
  user: 'sa',
  connectionTimeout: 5000,
  options: {
    encrypt: false,
  },
});
1) DB that I can connect to.
2) DB that I want to connect to but can't.
Container error
2020-03-18 03:59:14.11 Logon Login failed for user 'sa'. Reason: Failed to open the explicitly specified database 'DoctorHoyCRM'. [CLIENT: 172.17.0.1]
I suspect a lot of people miss the sa password complexity requirement:
The password should follow the SQL Server default password policy, otherwise the container can not setup SQL server and will stop working. By default, the password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols. You can examine the error log by executing the docker logs command.
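To see how a password like 'myStrongPass' (from the first docker run above) runs afoul of that policy, here is a small, hypothetical checker that mirrors the documented rules; it is not part of any SQL Server tooling:

import string

def meets_sa_policy(password):
    # At least 8 characters, drawing on at least 3 of the 4 sets:
    # uppercase, lowercase, digits, symbols.
    if len(password) < 8:
        return False
    sets = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(sets) >= 3

print(meets_sa_policy("myStrongPass"))      # False - only upper and lower case
print(meets_sa_policy("myStr0ngP4ssw0rd"))  # True - upper, lower and digits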
An example based on: Quickstart: Run SQL Server container images with Docker
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=myStr0ngP4ssw0rd" -e "MSSQL_PID=Developer" -p 1433:1433 --name sqlserver -d mcr.microsoft.com/mssql/server:2017-latest
docker start sqlserver
Checking that the docker image is running (it should not say "Exited" under STATUS)...
docker ps -a
# CONTAINER ID   IMAGE                                        COMMAND                  CREATED          STATUS          PORTS                    NAMES
# af9f01eacab2   mcr.microsoft.com/mssql/server:2017-latest   "/opt/mssql/bin/nonr…"   45 seconds ago   Up 34 seconds   0.0.0.0:1433->1433/tcp   sqlserver
Testing from within the docker container that SQL Server is installed and running...
docker exec -it sqlserver /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U "sa" -P "myStr0ngP4ssw0rd" \
    -Q "select @@VERSION"
# --------------------------------------------------------------------
# Microsoft SQL Server 2017 (RTM-CU19) (KB4535007) - 14.0.3281.6 (X64)
# Jan 23 2020 21:00:04
# Copyright (C) 2017 Microsoft Corporation
# Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)
Finally, testing from NodeJS...
const sql = require('mssql');

const config = {
  user: 'sa',
  password: 'myStr0ngP4ssw0rd',
  server: 'localhost',
  database: 'msdb',
};

sql.on('error', err => {
  console.error('err: ', err);
});

sql.connect(config).then(pool => {
  return pool.request()
    .query('select @@VERSION')
}).then(result => {
  console.dir(result)
}).catch(err => {
  console.error('err: ', err);
});
$ node test.js
tedious deprecated The default value for `config.options.enableArithAbort` will change from `false` to `true` in the next major version of `tedious`. Set the value to `true` or `false` explicitly to silence this message. node_modules/mssql/lib/tedious/connection-pool.js:61:23
{
  recordsets: [ [ [Object] ] ],
  recordset: [
    {
      '': 'Microsoft SQL Server 2017 (RTM-CU19) (KB4535007) - 14.0.3281.6 (X64) \n' +
        '\tJan 23 2020 21:00:04 \n' +
        '\tCopyright (C) 2017 Microsoft Corporation\n' +
        '\tDeveloper Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)'
    }
  ],
  output: {},
  rowsAffected: [ 1 ]
}
Hope this helps.

Python SnowflakeOperator setup snowflake_default

Good day. I cannot find how to do the basic setup for airflow.contrib.operators.snowflake_operator.SnowflakeOperator to connect to Snowflake; snowflake.connector.connect works fine.
When I do it with SnowflakeOperator :
op = snowflake_operator.SnowflakeOperator(sql = "create table test(*****)", task_id = '123')
I get the
airflow.exceptions.AirflowException: The conn_id `snowflake_default` isn't defined
I tried to insert it into the backend SQLite db:
INSERT INTO connection(
conn_id, conn_type, host
, schema, login, password
, port, is_encrypted, is_extra_encrypted
) VALUES (*****)
But after it I get an error:
snowflake.connector.errors.ProgrammingError: 251001: None: Account must be specified.
Passing an account kwarg into the SnowflakeOperator constructor does not help. It seems I cannot pass account into the db or into the constructor, but it's required.
Please help me: what data should I insert into the backend local db to be able to connect via SnowflakeOperator?
Go to Admin -> Connections and update the snowflake_default connection like this:
Based on the source code (airflow/contrib/hooks/snowflake_hook.py:53), we need to add extras like this:
{
    "schema": "schema",
    "database": "database",
    "account": "account",
    "warehouse": "warehouse"
}
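Alternatively, Airflow can pick up the whole connection from an environment variable named AIRFLOW_CONN_<CONN_ID>, which avoids editing the metadata DB or the UI at all. A sketch, assuming the usual URI form where unrecognized query parameters land in the connection's extras (all values are placeholders):

import os

# Airflow resolves AIRFLOW_CONN_<CONN_ID> before querying the metadata
# database; query parameters end up in the connection's "extra" field.
os.environ["AIRFLOW_CONN_SNOWFLAKE_DEFAULT"] = (
    "snowflake://my_user:my_password"
    "@my_account_xyz.my_region_abc.snowflakecomputing.com/"
    "?account=my_account_xyz&warehouse=my_warehouse&region=my_region_abc"
)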
With this context:
$ airflow version
2.2.3
$ pip install snowflake-connector-python==2.4.1
$ pip install apache-airflow-providers-snowflake==2.5.0
You have to specify the Snowflake Account and Snowflake Region twice like this:
airflow connections add 'my_snowflake_db' \
    --conn-type 'snowflake' \
    --conn-login 'my_user' \
    --conn-password 'my_password' \
    --conn-port 443 \
    --conn-schema 'public' \
    --conn-host 'my_account_xyz.my_region_abc.snowflakecomputing.com' \
    --conn-extra '{ "account": "my_account_xyz", "warehouse": "my_warehouse", "region": "my_region_abc" }'
Otherwise it doesn't work, throwing the Python exception:
snowflake.connector.errors.ProgrammingError: 251001: 251001: Account must be specified
I think this might be due to the airflow command parameter --conn-host expecting a full domain with subdomain (the my_account_xyz.my_region_abc part), whereas for Snowflake these are usually specified as query parameters, similar to this template (although I did not check all the combinations of the airflow connections add command and the DAG execution):
"snowflake://{user}:{password}@{account}{region}{cloud}/{database}/{schema}?role={role}&warehouse={warehouse}&timezone={timezone}"
Then a dummy Snowflake DAG like the one below (just running SELECT 1;) will find its own way to the Snowflake cloud service and work:
import datetime
from datetime import timedelta

from airflow.models import DAG
# https://airflow.apache.org/docs/apache-airflow-providers-snowflake/stable/operators/snowflake.html
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

my_dag = DAG(
    "example_snowflake",
    start_date=datetime.datetime.utcnow(),
    default_args={"snowflake_conn_id": "my_snowflake_db"},
    schedule_interval="0 0 1 * *",
    tags=["example"],
    catchup=False,
    dagrun_timeout=timedelta(minutes=10),
)

sf_task_1 = SnowflakeOperator(
    task_id="sf_task_1",
    dag=my_dag,
    sql="SELECT 1;",
)
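To smoke-test the connection outside of a DAG run, the provider's hook can be called directly; a sketch assuming the same conn id as above:

from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook

# Runs a trivial query through the configured connection; this raises the
# same "Account must be specified" error if the extras are still incomplete.
hook = SnowflakeHook(snowflake_conn_id="my_snowflake_db")
print(hook.get_first("SELECT 1;"))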

Abnormal behavior of python package eve

I have installed the eve package on my Windows machine, but every time I shut down the machine and try to load the eve package I get a module-not-found error.
On a re-installation attempt (by the way, I used the latest pip version to install), I get:
from eve import Eve
app=Eve()
app.run()
The error points to the second line.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-79-46d1b24866c8> in <module>()
30 # host = '127.0.0.1'
31
---> 32 app = Eve()
33 # app.run()
34
~\AppData\Local\Continuum\anaconda3\lib\site-packages\eve\flaskapp.py in __init__(self, import_name, settings, validator, data, auth, redis, url_converters, json_encoder, media, **kwargs)
158 self.settings = settings
159
--> 160 self.load_config()
161 self.validate_domain_struct()
162
~\AppData\Local\Continuum\anaconda3\lib\site-packages\eve\flaskapp.py in load_config(self)
275
276 try:
--> 277 self.config.from_pyfile(pyfile)
278 except:
279 raise
~\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\config.py in from_pyfile(self, filename, silent)
128 try:
129 with open(filename, mode='rb') as config_file:
--> 130 exec(compile(config_file.read(), filename, 'exec'), d.__dict__)
131 except IOError as e:
132 if silent and e.errno in (
~\AppData\Local\Continuum\anaconda3\lib\site-packages\bokeh\settings.py in <module>()
9 from os.path import join, abspath, isdir
10
---> 11 from .util.paths import ROOT_DIR, bokehjsdir
12
13
ModuleNotFoundError: No module named 'config'
Moreover, I find that there is no folder "lib", only "Lib". If this is the problem, how do I rectify it?
However, the code below "works", but it runs for only microseconds instead of running as a back-end server with APIs:
from eve import Eve
app=Eve
app.run
The settings.py file:
# Let's just use the local mongod instance. Edit as needed.
# Please note that MONGO_HOST and MONGO_PORT could very well be left
# out as they already default to a bare bones local 'mongod' instance.
MONGO_HOST = 'localhost'
MONGO_PORT = 27017
MONGO_DBNAME = 'apitest'

# Enable reads (GET), inserts (POST) and DELETE for resources/collections
# (if you omit this line, the API will default to ['GET'] and provide
# read-only access to the endpoint).
RESOURCE_METHODS = ['GET', 'POST', 'DELETE']

# Enable reads (GET), edits (PATCH), replacements (PUT) and deletes of
# individual items (defaults to read-only item access).
ITEM_METHODS = ['GET', 'PATCH', 'PUT', 'DELETE']

people = {
    # 'title' tag used in item links.
    'item_title': 'person',

    # by default the standard item entry point is defined as
    # '/people/<ObjectId>/'. We leave it untouched, and we also enable an
    # additional read-only entry point. This way consumers can also perform GET
    # requests at '/people/<lastname>/'.
    'additional_lookup': {
        'url': 'regex("[\w]+")',
        'field': 'lastname'
    },

    'cache_control': 'max-age=10,must-revalidate',
    'cache_expires': 10,

    'resource_methods': ['GET', 'POST'],

    # Schema definition, based on Cerberus grammar. Check the Cerberus project
    # (https://github.com/pyeve/cerberus) for details.
    'schema': {
        'firstname': {
            'type': 'string',
            'minlength': 1,
            'maxlength': 10,
        },
        'lastname': {
            'type': 'string',
            'minlength': 1,
            'maxlength': 15,
            'required': True,
            # talk about hard constraints! For the purpose of the demo
            # 'lastname' is an API entry-point, so we need it to be unique.
            'unique': True,
        },
        # 'role' is a list, and can only contain values from 'allowed'.
        'role': {
            'type': 'list',
            'allowed': ["author", "contributor", "copy"],
        },
        # An embedded 'strongly-typed' dictionary.
        'location': {
            'type': 'dict',
            'schema': {
                'address': {'type': 'string'},
                'city': {'type': 'string'}
            },
        },
        'born': {
            'type': 'datetime',
        },
    }
}

DOMAIN = {
    'people': people,
}
So, what could be the solution to this problem?
Any help is appreciated.
I don't have this issue after a quick test. Let me share all my steps; let me know if anything is different.
1) Enter Anaconda Prompt
2) conda create -n eswar python=3.6
3) conda activate eswar
4) pip install eve
5) python
5.1) import eve
5.2) exit()
6) shutdown windows machine
7) restart windows machine
8) enter anaconda prompt
9) conda activate eswar
10) python
11) from eve import Eve
12) everything looks fine.
Did you forget to activate your env after the restart?
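Beyond re-activating the environment, it may be worth ruling out a settings-file collision: the traceback shows Flask's from_pyfile executing bokeh's settings.py, so Eve appears to be picking up the wrong file. Eve accepts an explicit settings path (the settings argument visible in the traceback's __init__ signature); the path below is a placeholder:

from eve import Eve

# Point Eve at the intended settings file explicitly instead of letting it
# search the working directory, where another settings.py can shadow it.
app = Eve(settings=r"C:\path\to\your\project\settings.py")

if __name__ == "__main__":
    app.run()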

Elixir postgrex with poolboy example on Windows fails with 'module DBConnection.Poolboy not available'

I am exploring using Elixir for fast Postgres data imports of mixed types (CSV, JSON). Being new to Elixir, I am following the example given in the YouTube video "Fast Import and Export with Elixir and Postgrex - Elixir Hex package showcase" (https://www.youtube.com/watch?v=YQyKRXCtq4s). The basic mix application works up to the point where Poolboy is introduced, i.e. Postgrex successfully loads records into the database using a single connection.
When I try to follow the Poolboy configuration and test it by running
FastIoWithPostgrex.import("./data_with_ids.txt")
in iex or on the command line, I get the following error, for which I cannot determine the cause (username and password removed):
** (UndefinedFunctionError) function DBConnection.Poolboy.child_spec/1 is
undefined (module DBConnection.Poolboy is not available)
DBConnection.Poolboy.child_spec({Postgrex.Protocol, [types:
Postgrex.DefaultTypes, name: :pg, pool: DBConnection.Poolboy, pool_size: 4,
hostname: "localhost", port: 9000, username: "XXXX", password:
"XXXX", database: "ASDDataAnalytics-DEV"]})
(db_connection) lib/db_connection.ex:383: DBConnection.start_link/2
(fast_io_with_postgrex) lib/fast_io_with_postgrex.ex:8:
FastIoWithPostgrex.import/1
I am running this on Windows 10, connecting to a PostgreSQL 10.x Server through a local SSH tunnel. Here is the lib/fast_io_with_postgrex.ex file:
defmodule FastIoWithPostgrex do
  @moduledoc """
  Documentation for FastIoWithPostgrex.
  """

  def import(filepath) do
    {:ok, pid} = Postgrex.start_link(name: :pg,
      pool: DBConnection.Poolboy,
      pool_size: 4,
      hostname: "localhost",
      port: 9000,
      username: "XXXX", password: "XXXX", database: "ASDDataAnalytics-DEV")

    File.stream!(filepath)
    |> Stream.map(fn line ->
      [id_str, word] = line |> String.trim |> String.split("\t", trim: true, parts: 2)
      {id, ""} = Integer.parse(id_str)
      [id, word]
    end)
    |> Stream.chunk_every(10_000, 10_000, [])
    |> Task.async_stream(fn word_rows ->
      Enum.each(word_rows, fn word_sql_params ->
        Postgrex.transaction(:pg, fn conn ->
          IO.inspect Postgrex.query!(conn, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
          # IO.inspect Postgrex.query!(pid, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
        end, pool: DBConnection.Poolboy, pool_timeout: :infinity, timeout: :infinity)
      end)
    end, timeout: :infinity)
    |> Stream.run
  end # def import(file)
end
Here is the mix.exs file:
defmodule FastIoWithPostgrex.MixProject do
  use Mix.Project

  def project do
    [
      app: :fast_io_with_postgrex,
      version: "0.1.0",
      elixir: "~> 1.7",
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  # Run "mix help compile.app" to learn about applications.
  def application do
    [
      extra_applications: [:logger, :poolboy, :connection]
    ]
  end

  # Run "mix help deps" to learn about dependencies.
  defp deps do
    [
      # {:dep_from_hexpm, "~> 0.3.0"},
      # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
      {:postgrex, "~>0.14.1"},
      {:poolboy, "~>1.5.1"}
    ]
  end
end
Here is the config/config.exs file:
# This file is responsible for configuring your application
# and its dependencies with the aid of the Mix.Config module.
use Mix.Config
config :fast_io_with_postgrex, :postgrex,
  database: "ASDDataAnalytics-DEV",
  username: "XXXX",
  password: "XXXX",
  name: :pg,
  pool: DBConnection.Poolboy,
  pool_size: 4
# This configuration is loaded before any dependency and is restricted
# to this project. If another project depends on this project, this
# file won't be loaded nor affect the parent project. For this reason,
# if you want to provide default values for your application for
# 3rd-party users, it should be done in your "mix.exs" file.
# You can configure your application as:
#
# config :fast_io_with_postgrex, key: :value
#
# and access this configuration in your application as:
#
# Application.get_env(:fast_io_with_postgrex, :key)
#
# You can also configure a 3rd-party app:
#
# config :logger, level: :info
#
# It is also possible to import configuration files, relative to this
# directory. For example, you can emulate configuration per environment
# by uncommenting the line below and defining dev.exs, test.exs and such.
# Configuration from the imported file will override the ones defined
# here (which is why it is important to import them last).
#
# import_config "#{Mix.env()}.exs"
Any assistance with finding the cause of this error would be greatly appreciated!
I didn't want to go too deep into why this isn't working, but that example is a little old, and the poolboy 1.5.1 you pull with deps.get is from 2015, while the example uses Elixir 1.4.
Also, if you look at Postgrex's mix.exs deps, you will notice that your freshly installed lib (0.14) depends on elixir_ecto/db_connection 2.x.
The code you are referring to uses Postgrex 0.13.x, which depends on {:db_connection, "~> 1.1"}, so I would expect incompatibilities.
I would play with the versions of the libs in the example code's mix.lock file, and with the Elixir version, if I wanted to see that working.
Maybe try lowering the Postgrex version first to something from around that time (maybe between 0.12.2 and the locked version of the example).
Also, the version of Elixir might have some play here; check this.
Greetings!
have fun
EDIT:
You can use DBConnection.ConnectionPool instead of Poolboy and get away with using the latest Postgrex and Elixir versions. I'm not sure about the performance difference, but you can compare; just do this:
In config/config.exs (check if you need passwords, etc.):
config :fast_io_with_postgrex, :postgrex,
  database: "fp",
  name: :pg,
  pool: DBConnection.ConnectionPool,
  pool_size: 4
And in lib/fast_io_with.....ex replace both Postgrex.start_link(... lines with:
{:ok, pid} =
  Application.get_env(:fast_io_with_postgrex, :postgrex)
  |> Postgrex.start_link
That gives me:
mix run -e 'FastIoWithPostgrex.import("./data_with_ids.txt")'
1.76s user 0.69s system 106% cpu 2.294 total
on Postgrex 0.14.1 and Elixir 1.7.3
Thank you! Using your advice, I got the original example to work by downgrading the dependency versions in the mix.exs file and adding a dependency on an earlier version of db_connection:
# Run "mix help deps" to learn about dependencies.
defp deps do
[
# {:dep_from_hexpm, "~> 0.3.0"},
# {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
{:postgrex, "0.13.5"},
{:db_connection, "1.1.3"},
{:poolboy, "~>1.5.1"}
]
end
I will also try your suggestion of replacing Poolboy with the new pool manager in the later version of db_connection, to see if that works as well.
I'm sure a lot of thought went into the architecture change; still, there is very little out there on why Poolboy was once so popular and yet, in the latest version of db_connection, is not even supported as a connection type.
