puppet ruby gem null-pointer segfault - linux

Why does the pg gem crash when called inside the Puppet environment, but not when called from a plain Ruby environment? What have I missed?
Server: Ubuntu 18.04 latest
Puppet 5.5 masterless
postgres 10
$ /opt/puppetlabs/puppet/bin/gem install pg
Fetching: pg-1.1.4.gem (100%)
Building native extensions. This could take a while...
Successfully installed pg-1.1.4
Parsing documentation for pg-1.1.4
Installing ri documentation for pg-1.1.4
Done installing documentation for pg after 0 seconds
1 gem installed
$ run-puppet
/opt/puppetlabs/puppet/lib/ruby/gems/2.4.0/gems/pg-1.1.4/lib/pg.rb:56: [BUG] Segmentation fault at 0x0000000000000000
ruby 2.4.9p362 (2019-10-02 revision 67824) [x86_64-linux]
A similar code snippet that actually tests the connection between the Ruby environment and the host services:
require 'pg'
conn = PG.connect(dbname: "puppetdb", password: 'password2', host: 'localhost', user: 'user2')
conn.close
Puppet::Functions.create_function(:lookup_ssl) do
  begin
    require 'pg'
  rescue LoadError
    raise Puppet::DataBinding::LookupError, "Error loading pg gem library."
  end

  dispatch :up do
    param 'Hash', :options
    param 'Hash', :search
  end

  def up(options, search)
    data = []
    conn = PG.connect(dbname: options['database'], password: options['password'], host: options['host'], user: options['user'])
    conn.close
    return data
  end
end

A friend of mine had this issue and resolved it by using Postgres 9.6, reinstalling pg with:
gem install pg -- --with-pg-config=POSTGRES_PATH/9.6/bin/pg_config

Related

Ansible Molecule fails to create the docker instance

I am trying to get Molecule working with Docker on a Centos7 host.
The 'molecule create' command fails to create the docker instance and gives the error: 'Unsupported parameters for (community.docker.docker_container) module: ....'
Checking with 'docker ps -a' confirms the instance was not created.
I can confirm that starting containers manually with 'docker run' works fine.
It is only Molecule that fails to start the container.
I am stuck on this, and any help is highly appreciated.
Here is the complete error message:
TASK [Create molecule instance(s)] *********************************************
changed: [localhost] => (item=centos7)
TASK [Wait for instance(s) creation to complete] *******************************
FAILED - RETRYING: Wait for instance(s) creation to complete (300 retries left).
failed: [localhost] (item={'started': 1, 'finished': 0, 'ansible_job_id': '353704502720.15356', 'results_file': '/home/guest/.ansible_async/353704502720.15356', 'changed': True, 'failed': False, 'item': {'image': 'test-runner', 'name': 'centos7', 'pre_build_image': True}, 'ansible_loop_var': 'item'}) => {"ansible_job_id": "353704502720.15356", "ansible_loop_var": "item", "attempts": 2, "changed": false, "finished": 1, "item": {"ansible_job_id": "353704502720.15356", "ansible_loop_var": "item", "changed": true, "failed": false, "finished": 0, "item": {"image": "test-runner", "name": "centos7", "pre_build_image": true}, "results_file": "/home/guest/.ansible_async/353704502720.15356", "started": 1}, "msg": "Unsupported parameters for (community.docker.docker_container) module: command_handling Supported parameters include: api_version, auto_remove, blkio_weight, ca_cert, cap_drop, capabilities, cgroup_parent, cleanup, client_cert, client_key, command, comparisons, container_default_behavior, cpu_period, cpu_quota, cpu_shares, cpus, cpuset_cpus, cpuset_mems, debug, default_host_ip, detach, device_read_bps, device_read_iops, device_requests, device_write_bps, device_write_iops, devices, dns_opts, dns_search_domains, dns_servers, docker_host, domainname, entrypoint, env, env_file, etc_hosts, exposed_ports, force_kill, groups, healthcheck, hostname, ignore_image, image, init, interactive, ipc_mode, keep_volumes, kernel_memory, kill_signal, labels, links, log_driver, log_options, mac_address, memory, memory_reservation, memory_swap, memory_swappiness, mounts, name, network_mode, networks, networks_cli_compatible, oom_killer, oom_score_adj, output_logs, paused, pid_mode, pids_limit, privileged, published_ports, pull, purge_networks, read_only, recreate, removal_wait_timeout, restart, restart_policy, restart_retries, runtime, security_opts, shm_size, ssl_version, state, stop_signal, stop_timeout, sysctls, timeout, tls, tls_hostname, tmpfs, tty, ulimits, user, userns_mode, 
uts, validate_certs, volume_driver, volumes, volumes_from, working_dir", "stderr": "/tmp/ansible_community.docker.docker_container_payload_1djctes1/ansible_community.docker.docker_container_payload.zip/ansible_collections/community/docker/plugins/modules/docker_container.py:1193: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.\n", "stderr_lines": ["/tmp/ansible_community.docker.docker_container_payload_1djctes1/ansible_community.docker.docker_container_payload.zip/ansible_collections/community/docker/plugins/modules/docker_container.py:1193: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead."]}
$ pip list | egrep "mole|docker|ansible"
ansible 2.10.7
ansible-base 2.10.17
ansible-compat 2.2.0
ansible-core 2.12.0
docker 6.0.0
molecule 4.0.1
molecule-docker 2.0.0
$ docker --version
Docker version 20.10.14, build a224086
$ ansible --version
ansible 2.10.17
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/guest/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/guest/mole3/mole3/lib/python3.8/site-packages/ansible
executable location = /home/guest/mole3/mole3/bin/ansible
python version = 3.8.11 (default, Sep 1 2021, 12:33:46) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)]
$ molecule --version
molecule 4.0.1 using python 3.8
ansible:2.10.17
delegated:4.0.1 from molecule
docker:2.0.0 from molecule_docker requiring collections: community.docker>=3.0.0-a2
$ cat molecule/default/molecule.yml
---
dependency:
  name: galaxy
  enabled: False
driver:
  name: docker
platforms:
  - name: centos7
    image: test-runner
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible

How do I programmatically check the release status of the installed node.js version?

I want to include a check in a setup script that validates the installed node.js version is LTS. Something like this would be ideal
$ node -vvv
v16.3.0
Release v16
Status Active LTS
Codename Gallium
Initial Release 2021-04-20
Active LTS Start 2021-10-26
Maintenance LTS Start 2022-10-18
End-Of-Life 2024-04-30
Basically some way to get the information available on https://nodejs.org/en/about/releases/ but in a script. If there's some HTTP endpoint that provides this stuff in JSON that would work too.
I've looked through available options in the node, npm, nvm, and yarn CLI tools but none of them seem like they can do this.
One source of JSON info about nodejs releases is this endpoint:
https://nodejs.org/download/release/index.json
There, you get an array of objects that each look like this:
{
  version: 'v16.14.1',
  date: '2022-03-16',
  files: [
    'aix-ppc64', 'headers',
    'linux-arm64', 'linux-armv7l',
    'linux-ppc64le', 'linux-s390x',
    'linux-x64', 'osx-arm64-tar',
    'osx-x64-pkg', 'osx-x64-tar',
    'src', 'win-x64-7z',
    'win-x64-exe', 'win-x64-msi',
    'win-x64-zip', 'win-x86-7z',
    'win-x86-exe', 'win-x86-msi',
    'win-x86-zip'
  ],
  npm: '8.5.0',
  v8: '9.4.146.24',
  uv: '1.43.0',
  zlib: '1.2.11',
  openssl: '1.1.1m+quic',
  modules: '93',
  lts: 'Gallium',
  security: false
},
If you filter that array by the lts property (any value other than false) and then sort by version, you will find the latest LTS release version and you can compare that to what is installed locally.
Here's a piece of nodejs code that gets that JSON, filters out items that aren't lts, then sorts by version. The top of the resulting array will be the latest LTS release version number. You could then programmatically compare that to what is installed locally. You could either just use process.version from within this script or you could run a child_process that captures the output from node -v.
import got from 'got';

// 'v10.16.3'
let versionRegex = /v(\d+)\.(\d+)\.(\d+)/;

// convert version string to a number for easier sorting
function calcVersion(x) {
    const match = x.match(versionRegex);
    if (!match) {
        throw new Error(`version regex failed to match version string '${x}'`);
    }
    return (+match[1] * 1000000) + (+match[2] * 1000) + (+match[3]);
}

const data = await got("https://nodejs.org/download/release/index.json").json();
const lts = data.filter(item => item.lts);

// for performance reasons when sorting,
// precalculate an actual version number from the version string
lts.forEach(item => item.numVersion = calcVersion(item.version));
lts.sort((a, b) => b.numVersion - a.numVersion);

console.log("All LTS versions - sorted newest first");
console.log(lts.map(item => item.version));
console.log("Info about newest LTS version");
console.log(lts[0]);
console.log(`Newest LTS version: ${lts[0].version}`);
console.log(`Local version: ${process.version}`);
When I run this on my system at this moment, I get this output:
All LTS versions - sorted newest first
[
'v16.14.1', 'v16.14.0', 'v16.13.2', 'v16.13.1', 'v16.13.0',
'v14.19.0', 'v14.18.3', 'v14.18.2', 'v14.18.1', 'v14.18.0',
'v14.17.6', 'v14.17.5', 'v14.17.4', 'v14.17.3', 'v14.17.2',
'v14.17.1', 'v14.17.0', 'v14.16.1', 'v14.16.0', 'v14.15.5',
'v14.15.4', 'v14.15.3', 'v14.15.2', 'v14.15.1', 'v14.15.0',
'v12.22.11', 'v12.22.10', 'v12.22.9', 'v12.22.8', 'v12.22.7',
'v12.22.6', 'v12.22.5', 'v12.22.4', 'v12.22.3', 'v12.22.2',
'v12.22.1', 'v12.22.0', 'v12.21.0', 'v12.20.2', 'v12.20.1',
'v12.20.0', 'v12.19.1', 'v12.19.0', 'v12.18.4', 'v12.18.3',
'v12.18.2', 'v12.18.1', 'v12.18.0', 'v12.17.0', 'v12.16.3',
'v12.16.2', 'v12.16.1', 'v12.16.0', 'v12.15.0', 'v12.14.1',
'v12.14.0', 'v12.13.1', 'v12.13.0', 'v10.24.1', 'v10.24.0',
'v10.23.3', 'v10.23.2', 'v10.23.1', 'v10.23.0', 'v10.22.1',
'v10.22.0', 'v10.21.0', 'v10.20.1', 'v10.20.0', 'v10.19.0',
'v10.18.1', 'v10.18.0', 'v10.17.0', 'v10.16.3', 'v10.16.2',
'v10.16.1', 'v10.16.0', 'v10.15.3', 'v10.15.2', 'v10.15.1',
'v10.15.0', 'v10.14.2', 'v10.14.1', 'v10.14.0', 'v10.13.0',
'v8.17.0', 'v8.16.2', 'v8.16.1', 'v8.16.0', 'v8.15.1',
'v8.15.0', 'v8.14.1', 'v8.14.0', 'v8.13.0', 'v8.12.0',
'v8.11.4', 'v8.11.3', 'v8.11.2', 'v8.11.1', 'v8.11.0',
... 74 more items
]
Info about newest LTS version
{
  version: 'v16.14.1',
  date: '2022-03-16',
  files: [
    'aix-ppc64', 'headers',
    'linux-arm64', 'linux-armv7l',
    'linux-ppc64le', 'linux-s390x',
    'linux-x64', 'osx-arm64-tar',
    'osx-x64-pkg', 'osx-x64-tar',
    'src', 'win-x64-7z',
    'win-x64-exe', 'win-x64-msi',
    'win-x64-zip', 'win-x86-7z',
    'win-x86-exe', 'win-x86-msi',
    'win-x86-zip'
  ],
  npm: '8.5.0',
  v8: '9.4.146.24',
  uv: '1.43.0',
  zlib: '1.2.11',
  openssl: '1.1.1m+quic',
  modules: '93',
  lts: 'Gallium',
  security: false,
  numVersion: 16014001
}
Newest LTS version: v16.14.1
Local version: v16.13.2
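If the goal is just a yes/no check that the locally installed version is an LTS release, the same index data can feed a much smaller helper. This is a sketch with a hypothetical isLts function and inline sample data shaped like the real endpoint's entries; in practice you would pass the full fetched array together with process.version:

```javascript
// Sketch: a version is LTS if its entry in the release index has a
// truthy `lts` field (a codename string like 'Gallium'); non-LTS
// releases carry `lts: false`.
function isLts(releases, version) {
    const entry = releases.find((r) => r.version === version);
    return Boolean(entry && entry.lts);
}

// Sample entries shaped like https://nodejs.org/download/release/index.json
const sample = [
    { version: 'v16.14.1', lts: 'Gallium' },
    { version: 'v17.8.0', lts: false },
];

console.log(isLts(sample, 'v16.14.1')); // true
console.log(isLts(sample, 'v17.8.0'));  // false
```

A setup script could then exit non-zero when isLts(data, process.version) is false.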

Jest detects open redis client on travis-ci

I encountered some difficulties with redis testing on travis-ci.
Here is the redis setup code,
async function getClient() {
    const redisClient = createClient({
        socket: {
            url: redisConfig.connectionString,
            reconnectStrategy: (currentNumberOfRetries: number) => {
                if (currentNumberOfRetries > 1) {
                    throw new Error("max retries reached");
                }
                return 1000;
            },
        },
    });
    try {
        await redisClient.connect();
    } catch (e) {
        console.log(e);
    }
    return redisClient;
}
Here is the travis config, note that I run npm install redis because it is listed as a peer dependency.
language: node_js
node_js:
  - "14"
dist: focal # ubuntu 20.04
services:
  - postgresql
  - redis-server
addons:
  postgresql: "13"
  apt:
    packages:
      - postgresql-13
env:
  global:
    - PGUSER=postgres
    - PGPORT=5432 # for some reason, unlike what the documentation says, the port is 5432
  jobs:
    - NODE_ENV=ci
cache:
  directories:
    - node_modules
before_install:
  - sudo sed -i -e '/local.*peer/s/postgres/all/' -e 's/peer\|md5/trust/g' /etc/postgresql/*/main/pg_hba.conf
  - sudo service postgresql restart
  - sleep 1
  - postgres --version
  - pg_lsclusters # shows port of postgresql, ubuntu-specific command
install:
  - npm i
  - npm i redis
before_script:
  - sudo psql -c 'create database orm_test;' -p 5432 -U postgres
script:
  - npm run test-detectopen
The first issue is a missing client.connect function, even though connecting works on my local machine with redis-server running.
console.log
TypeError: redisClient.connect is not a function
at Object.getClient (/home/travis/build/sunjc826/mini-orm/src/connection/redis/index.ts:21:23)
at Function.init (/home/travis/build/sunjc826/mini-orm/src/data-mapper/index.ts:33:30)
at /home/travis/build/sunjc826/mini-orm/src/lib-test/tests/orm.test.ts:25:20
at Promise.then.completed (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/utils.js:390:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/utils.js:315:10)
at _callCircusHook (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/run.js:181:40)
at _runTestsForDescribeBlock (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/run.js:47:7)
at run (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/run.js:25:3)
at runAndTransformResultsToJestFormat (/home/travis/build/sunjc826/mini-orm/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:166:21)
The second is an open-handle issue; on my local machine, even if the connection fails, Jest does not give such an error and exits cleanly.
Jest has detected the following 1 open handle potentially keeping Jest from exiting:
● TCPWRAP
7 |
8 | async function getClient() {
> 9 | const redisClient = createClient({
| ^
10 | socket: {
11 | url: redisConfig.connectionString,
12 | reconnectStrategy: (currentNumberOfRetries: number) => {
at RedisClient.Object.<anonymous>.RedisClient.create_stream (node_modules/redis/index.js:196:31)
at new RedisClient (node_modules/redis/index.js:121:10)
at Object.<anonymous>.exports.createClient (node_modules/redis/index.js:1023:12)
at Object.getClient (src/connection/redis/index.ts:9:23)
at Function.init (src/data-mapper/index.ts:33:30)
at src/lib-test/tests/orm.test.ts:25:20
at TestScheduler.scheduleTests (node_modules/@jest/core/build/TestScheduler.js:333:13)
at runJest (node_modules/@jest/core/build/runJest.js:387:19)
at _run10000 (node_modules/@jest/core/build/cli/index.js:408:7)
at runCLI (node_modules/@jest/core/build/cli/index.js:261:3)
It turns out that this is likely caused by redis being a peer dependency.
Listing out node-redis versions, I'm guessing the version tagged latest (3.1.2 as of the time of writing) was installed instead of version 4+.
So, I moved redis to regular dependencies instead.
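The v3/v4 mismatch can also be caught early with a small guard: node-redis v4 clients expose a connect() method, while v3 clients connect implicitly on creation and have no such method. This is an illustrative sketch; the guard function and the fake clients are hypothetical, not part of the original code:

```javascript
// Throws a descriptive error if the client looks like node-redis v3
// (no connect() method) rather than the v4+ API this code expects.
function assertV4Client(client) {
    if (typeof client.connect !== 'function') {
        throw new Error('node-redis v4+ required: this client has no connect() method');
    }
    return client;
}

// Fake clients standing in for what createClient() returns in each major:
const v4Like = { connect: async () => {} };
const v3Like = { on: () => {} };

console.log(assertV4Client(v4Like) === v4Like); // true

try {
    assertV4Client(v3Like);
} catch (e) {
    console.log(e.message); // a clear message instead of a TypeError later
}
```

Failing fast like this would have turned the opaque "redisClient.connect is not a function" CI failure into a message naming the dependency problem.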

Login problems connecting with SQL Server in nodejs

I'm working in osx with SQL Server using a docker image to be able to use it, running:
docker run -d --name sqlserver -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=myStrongPass' -e 'MSSQL_PID=Developer' -p 1433:1433 microsoft/mssql-server-linux:2017-latest
I can connect successfully in Azure Data Studio GUI with the following configuration
But the connection does not work in my nodejs code using the mssql module.
const poolConnection = new sql.ConnectionPool({
    database: 'myDbTest',
    server: 'localhost',
    port: 1433,
    password: '*******',
    user: 'sa',
    connectionTimeout: 5000,
    options: {
        encrypt: false,
    },
});
const [error, connection] = await to(poolConnection.connect());
The error always is the same:
ConnectionError: Login failed for user 'sa'
It is my first time working with SQL Server, and it is confusing that I can connect correctly in the Azure Data Studio GUI but can't do it from code.
I have tried creating new login users with CREATE LOGIN and giving them privileges based on other posts here on Stack Overflow, but nothing seems to work.
UPDATE:
I realized that I can connect correctly if I put master in the database key.
Example:
const poolConnection = new sql.ConnectionPool({
    database: 'master', // <- update here
    server: 'localhost',
    port: 1433,
    password: '*******',
    user: 'sa',
    connectionTimeout: 5000,
    options: {
        encrypt: false,
    },
});
1) DB that I can connect to.
2) DB that I want to connect to but can't.
Container error
2020-03-18 03:59:14.11 Logon Login failed for user 'sa'. Reason: Failed to open the explicitly specified database 'DoctorHoyCRM'. [CLIENT: 172.17.0.1]
I suspect a lot of people miss the sa password complexity requirement:
The password should follow the SQL Server default password policy, otherwise the container can not setup SQL server and will stop working. By default, the password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols. You can examine the error log by executing the docker logs command.
An example based on: Quickstart: Run SQL Server container images with Docker
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=myStr0ngP4ssw0rd" -e "MSSQL_PID=Developer" -p 1433:1433 --name sqlserver -d mcr.microsoft.com/mssql/server:2017-latest
docker start sqlserver
Checking that the docker image is running (it should not say "Exited" under STATUS)...
docker ps -a
# CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
# af9f01eacab2 mcr.microsoft.com/mssql/server:2017-latest "/opt/mssql/bin/nonr…" 45 seconds ago Up 34 seconds 0.0.0.0:1433->1433/tcp sqlserver
Testing from within the docker container that SQL Server is installed and running...
docker exec -it sqlserver /opt/mssql-tools/bin/sqlcmd \
-S localhost -U "sa" -P "myStr0ngP4ssw0rd" \
-Q "select @@VERSION"
# --------------------------------------------------------------------
# Microsoft SQL Server 2017 (RTM-CU19) (KB4535007) - 14.0.3281.6 (X64)
# Jan 23 2020 21:00:04
# Copyright (C) 2017 Microsoft Corporation
# Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)
Finally, testing from NodeJS...
const sql = require('mssql');

const config = {
    user: 'sa',
    password: 'myStr0ngP4ssw0rd',
    server: 'localhost',
    database: 'msdb',
};

sql.on('error', err => {
    console.error('err: ', err);
});

sql.connect(config).then(pool => {
    return pool.request()
        .query('select @@VERSION');
}).then(result => {
    console.dir(result);
}).catch(err => {
    console.error('err: ', err);
});
$ node test.js
tedious deprecated The default value for `config.options.enableArithAbort` will change from `false` to `true` in the next major version of `tedious`. Set the value to `true` or `false` explicitly to silence this message. node_modules/mssql/lib/tedious/connection-pool.js:61:23
{
recordsets: [ [ [Object] ] ],
recordset: [
{
'': 'Microsoft SQL Server 2017 (RTM-CU19) (KB4535007) - 14.0.3281.6 (X64) \n' +
'\tJan 23 2020 21:00:04 \n' +
'\tCopyright (C) 2017 Microsoft Corporation\n' +
'\tDeveloper Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)'
}
],
output: {},
rowsAffected: [ 1 ]
}
Hope this helps.

Elixir postgrex with poolboy example on Windows fails with 'module DBConnection.Poolboy not available'

I am exploring using Elixir for fast Postgres data imports of mixed types (CSV, JSON). Being new to Elixir, I am following the example given in the YouTube video "Fast Import and Export with Elixir and Postgrex - Elixir Hex package showcase" (https://www.youtube.com/watch?v=YQyKRXCtq4s). The basic mix application works up to the point where Poolboy is introduced, i.e. Postgrex successfully loads records into the database using a single connection.
When I try to follow the Poolboy configuration, and test it by running
FastIoWithPostgrex.import("./data_with_ids.txt")
in iex or the command line, I get the following error, for which I can not determine the cause (username and password removed):
** (UndefinedFunctionError) function DBConnection.Poolboy.child_spec/1 is
undefined (module DBConnection.Poolboy is not available)
DBConnection.Poolboy.child_spec({Postgrex.Protocol, [types:
Postgrex.DefaultTypes, name: :pg, pool: DBConnection.Poolboy, pool_size: 4,
hostname: "localhost", port: 9000, username: "XXXX", password:
"XXXX", database: "ASDDataAnalytics-DEV"]})
(db_connection) lib/db_connection.ex:383: DBConnection.start_link/2
(fast_io_with_postgrex) lib/fast_io_with_postgrex.ex:8:
FastIoWithPostgrex.import/1
I am running this on Windows 10, connecting to a PostgreSQL 10.x Server through a local SSH tunnel. Here is the lib/fast_io_with_postgrex.ex file:
defmodule FastIoWithPostgrex do
  @moduledoc """
  Documentation for FastIoWithPostgrex.
  """

  def import(filepath) do
    {:ok, pid} = Postgrex.start_link(name: :pg,
      pool: DBConnection.Poolboy,
      pool_size: 4,
      hostname: "localhost",
      port: 9000,
      username: "XXXX", password: "XXXX", database: "ASDDataAnalytics-DEV")

    File.stream!(filepath)
    |> Stream.map(fn line ->
      [id_str, word] = line |> String.trim |> String.split("\t", trim: true, parts: 2)
      {id, ""} = Integer.parse(id_str)
      [id, word]
    end)
    |> Stream.chunk_every(10_000, 10_000, [])
    |> Task.async_stream(fn word_rows ->
      Enum.each(word_rows, fn word_sql_params ->
        Postgrex.transaction(:pg, fn conn ->
          IO.inspect Postgrex.query!(conn, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
          # IO.inspect Postgrex.query!(pid, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
        end, pool: DBConnection.Poolboy, pool_timeout: :infinity, timeout: :infinity)
      end)
    end, timeout: :infinity)
    |> Stream.run
  end # def import(file)
end
Here is the mix.exs file:
defmodule FastIoWithPostgrex.MixProject do
  use Mix.Project

  def project do
    [
      app: :fast_io_with_postgrex,
      version: "0.1.0",
      elixir: "~> 1.7",
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  # Run "mix help compile.app" to learn about applications.
  def application do
    [
      extra_applications: [:logger, :poolboy, :connection]
    ]
  end

  # Run "mix help deps" to learn about dependencies.
  defp deps do
    [
      # {:dep_from_hexpm, "~> 0.3.0"},
      # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
      {:postgrex, "~>0.14.1"},
      {:poolboy, "~>1.5.1"}
    ]
  end
end
Here is the config/config.exs file:
# This file is responsible for configuring your application
# and its dependencies with the aid of the Mix.Config module.
use Mix.Config
config :fast_io_with_postgrex, :postgrex,
  database: "ASDDataAnalytics-DEV",
  username: "XXXX",
  password: "XXXX",
  name: :pg,
  pool: DBConnection.Poolboy,
  pool_size: 4
# This configuration is loaded before any dependency and is restricted
# to this project. If another project depends on this project, this
# file won't be loaded nor affect the parent project. For this reason,
# if you want to provide default values for your application for
# 3rd-party users, it should be done in your "mix.exs" file.
# You can configure your application as:
#
# config :fast_io_with_postgrex, key: :value
#
# and access this configuration in your application as:
#
# Application.get_env(:fast_io_with_postgrex, :key)
#
# You can also configure a 3rd-party app:
#
# config :logger, level: :info
#
# It is also possible to import configuration files, relative to this
# directory. For example, you can emulate configuration per environment
# by uncommenting the line below and defining dev.exs, test.exs and such.
# Configuration from the imported file will override the ones defined
# here (which is why it is important to import them last).
#
# import_config "#{Mix.env()}.exs"
Any assistance with finding the cause of this error would be greatly appreciated!
I didn't want to go too deep into why this isn't working, but that example is a little old: the poolboy 1.5.1 you pull with deps.get is from 2015, and the example uses Elixir 1.4.
Also, if you look at Postgrex's mix.exs deps, you will notice that your freshly installed lib (0.14) depends on elixir-ecto/db_connection 2.x.
The code you are referring to uses Postgrex 0.13.x, which depends on {:db_connection, "~> 1.1"}. So I would expect incompatibilities.
I would play with the versions of the libs you see in the example code's mix.lock file, and the Elixir version, if I wanted to see that working.
Maybe try lowering the Postgrex version first to something around that time (maybe between 0.12.2 and the locked version of the example).
Also, the version of Elixir might have some play here, check this
Greetings!
have fun
EDIT:
You can use DBConnection.ConnectionPool instead of poolboy and get away with using the latest Postgrex and Elixir versions. I'm not sure about the performance difference, but you can compare. Just do this:
In config/config.exs (check if you need passwords, etc.):
config :fast_io_with_postgrex, :postgrex,
  database: "fp",
  name: :pg,
  pool: DBConnection.ConnectionPool,
  pool_size: 4
And in lib/fast_io_with.....ex replace both Postgrex.start_link(... lines with:
{:ok, pid} = Application.get_env(:fast_io_with_postgrex, :postgrex)
|> Postgrex.start_link
That gives me:
mix run -e 'FastIoWithPostgrex.import("./data_with_ids.txt")'
1.76s user 0.69s system 106% cpu 2.294 total
on Postgrex 0.14.1 and Elixir 1.7.3
Thank you! Using your advice, I got the original example to work by downgrading the dependency versions in the mix.exs file and adding a dependency on an earlier version of db_connection:
# Run "mix help deps" to learn about dependencies.
defp deps do
  [
    # {:dep_from_hexpm, "~> 0.3.0"},
    # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
    {:postgrex, "0.13.5"},
    {:db_connection, "1.1.3"},
    {:poolboy, "~>1.5.1"}
  ]
end
I will also try your suggestion of changing the code to replace Poolboy with the new pool manager in the later version of db_connection to see if that works as well.
I'm sure a lot of thought went into the architecture change; however, I must say there is very little out there about why Poolboy was once so popular, yet in the latest version of db_connection it is not even supported as a connection type.
