Unable to access postgres running inside docker container using psql command on terminal - node.js

I have Postgres running inside a Docker container, and I am able to access the database using the steps below:
docker exec -it {container-id} bash
psql -U test postsdb
and I am in:
postsdb=#
I have also installed Kitematic and am able to verify that the Docker published IP:PORT is localhost:5432,
but my app is not able to connect to Postgres, giving the error:
SequelizeConnectionError: role "coder" does not exist
To debug this I tried connecting to the DB using the Postgres URL from the command line, and strangely it gives the same error.
The command I used, which is configured in my app as well, is:
psql postgres://test:password#localhost:5432/postsdb
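As an aside, the libpq connection URI format is postgres://USER:PASSWORD@HOST:PORT/DBNAME — note the @ between the credentials and the host. A hedged sketch of two quick checks (it is an assumption, not confirmed by the question, that the separator or the port mapping is the problem; psql defaults to the operating-system account name when it reads no user from the URL, which could explain an error about a role like "coder" that appears nowhere in the URL):
# Canonical libpq URI shape; the @ separates credentials from host.
psql postgres://test:password@localhost:5432/postsdb
# Cross-check which host port Docker actually published for the container.
docker port {container-id}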

Related

How to connect to Postgres inside a Docker-based web application from pgAdmin anywhere outside the Docker image?

I have the following container:
admin@PC:/$ docker ps -a
returns
CONTAINER ID   IMAGE           COMMAND            CREATED      STATUS      PORTS                               NAMES
9c0adfffff     hg/sample:1.1   "/usr/sbin/init"   8 days ago   Up 7 days   0.0.0.0:80->80/tcp, :::80->80/tcp   agitated_euclid
This container is a Spring Boot web app that maps the application on 80:80. So, the problem is how to make the PostgreSQL used by this application, inside the same Docker container, accessible from:
the host Linux machine containing the Docker with this container? and,
any computer with a pgAdmin interface to connect to this Docker PostgreSQL?
Currently I'm using the sudo docker exec -it 9c0adfffff bash command to connect to the container's terminal and accessing the database using psql, but that doesn't satisfy my current requirement.
I also tried docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres from this answer, but I think this fires up a new container, and that is also not what I need. I need to access the database of the existing container, whose web app is currently running on port 80.
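A hedged workaround sketch, since Docker cannot add a published port to an already-running container: snapshot the container and start a copy that also publishes 5432. This assumes the PostgreSQL inside the image listens on all interfaces (listen_addresses = '*') and that pg_hba.conf allows host connections; the image name sample-with-db is hypothetical:
# Snapshot the running container into a new image.
docker commit 9c0adfffff sample-with-db
# Stop the original so port 80 is free, then run the copy with both ports published.
docker stop 9c0adfffff
docker run -d -p 80:80 -p 5432:5432 sample-with-db
# pgAdmin on any machine can then connect to <docker-host-ip>:5432.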

Not able to run kubectl on remote using python

I am trying to run kubectl commands on a remote server from my local machine, but it seems kubectl is trying to run against localhost, while all other commands (ls, date) run fine on the remote.
command = 'kubectl get pod -n namespace'
stdin, stdout, stderr = ssh.exec_command(command)
Error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Any idea how to run kubectl on the remote so I can connect to a pod and run some command inside it?
Check whether your SSH connection is as a non-root user; this error comes when you try to execute kubectl commands as a non-root user.
Solutions:
Add sudo as a prefix to the commands. This will solve the issue with the current user itself.
Make the SSH connection with the root user.
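For example, both options as shell one-liners that could equally be passed to ssh.exec_command (remote-host is a placeholder; this assumes the kubeconfig was set up for root, as is common with kubeadm installs — kubectl falls back to localhost:8080 precisely when it finds no kubeconfig for the current user):
# Option 1: prefix with sudo, so kubectl reads root's kubeconfig.
ssh admin@remote-host 'sudo kubectl get pod -n namespace'
# Option 2: make the SSH connection as root directly.
ssh root@remote-host 'kubectl get pod -n namespace'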

Docker-Compose Postgres 5432 Bind: Address already in use error

Alright so I've been searching for 3-4 days now and I can't seem to find a solution to my error. I have found similar errors, and some that seemed to be describing my exact issue but none of the solutions worked.
I'm trying to get a simple postgres, express, node app up and running. I want to be able to run docker-compose up -d and have it build all of the images, volumes, etc. Then I will run a bash script that will seed my postgres DB with data. However, I keep getting an error with the ports I'm using. I've removed all of my images and containers, and even reinstalled Docker, but I can't figure it out. I removed everything from my docker-compose except for the postgres service and it still doesn't work.
version: '3'
services:
  postgres:
    image: postgres:10.0
    volumes:
      - /var/lib/postgresql/data
    ports:
      - '5432:5432'
Then on my host machine I simply plan on running the following bash script.
#!/bin/bash
host="postgres.dev"
adminUser="postgres"
psql -h $host -U $adminUser -c "select 1 from pg_database where datname = 'table_name';" | grep -q 1 || psql -h $host -U $adminUser -f ./"$(dirname $0)"/init-local-db.sql
I know that this approach should work since I'm patterning it after a work project...which works for them. But here's the error I get:
Creating test-api ... done
ERROR: for pg-db  Cannot start service postgres: driver failed programming external connectivity on endpoint pg-db (b3e5697cd563264250479682ec83d4a232d0d4bd679a787ad2089e944dda9e2f): Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
ERROR: for postgres  Cannot start service postgres: driver failed programming external connectivity on endpoint pg-db (b3e5697cd563264250479682ec83d4a232d0d4bd679a787ad2089e944dda9e2f): Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
ERROR: Encountered errors while bringing up the project.
I know some people say to simply change the port number to '5433:5432', but my problem with this is that when you install postgres its default port is 5432, and I know it's not necessary to change it (because the work project doesn't). Any thoughts?
Update (next morning):
Okay, I still have no idea why the error popped up in the first place, as I used lsof -i tcp:5432 (along with netstat) and nothing came up as using that port. I put my computer in suspend mode and went to bed. In the morning I changed my Postgres version to 9.6 to see if that was it, and everything worked. I then switched it back to Postgres 10.0 and again everything worked. Hopefully it won't come back, but I have no idea why it appeared in the first place.
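For completeness, the checks mentioned above as concrete commands (a sketch; flags differ slightly between Linux and macOS):
# Who is holding 5432 on the host?
sudo lsof -i tcp:5432
sudo netstat -tlnp | grep 5432   # Linux; on macOS use: netstat -an | grep 5432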
There is only one reason you may be getting this error: you have PostgreSQL installed on your local machine and it's running, occupying port 5432.
You have the following options:
Disable (and remove from startup) PostgreSQL on your local machine; your Docker Compose will then run:
sudo service postgresql stop
sudo update-rc.d postgresql disable
Use a different port in docker-compose. There is nothing wrong with using '5433:5432'. Other services in your docker-compose will still connect to postgres on port 5432; from your local machine you'll be able to address postgres at localhost:5433.
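A small sketch of the second option; only the host side of the mapping changes, the container port stays 5432:
# docker-compose.yml ports entry:
#   ports:
#     - '5433:5432'
# From the host, connect through the remapped port:
psql -h localhost -p 5433 -U postgres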

How to connect mongodb to the application server?

I have 2 EC2 instances running, one for the application and the other for MongoDB.
I have set up the MongoDB instance as follows:
1.) Installed Docker
2.) docker run -d -p 27017:27017 --name mongodb -v ~/data:/data/db mongo
3.) Container created; checked that the mongod service is started.
4.) Connected to it with mongo: docker run -it --link mongodb:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/mydatabase"'
After executing the above command the mongo shell is started, where I found the URI mongodb://xx.xxx.xx.xx:27017/mydatabase
5.) I have checked my EC2 instance's ip:27017, which connects me to MongoDB.
Question 1: I want to connect it to my application server, which is running on a different EC2 instance, so I have provided the URI above, mongodb://xx.xxx.xx.xx:27017/mydatabase, in the config file, but when connecting to the app server I am getting a 502 Bad Gateway error.
How do I connect this? Is the process correct?
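A hedged first step: a 502 from the app server usually means the application process itself is failing, often because it cannot reach the database. Assuming the URI above, verify from the application instance that the MongoDB port is reachable at all (the MongoDB instance's security group must allow inbound 27017 from the app instance):
# From the application EC2 instance: is 27017 reachable?
nc -zv xx.xxx.xx.xx 27017
# If it is, the same URI the app uses should also work from the mongo shell:
mongo mongodb://xx.xxx.xx.xx:27017/mydatabase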

Node npm test to seeded postgres, Docker network container seeing varying results

I am running an Angular/Node npm test against a seeded Postgres database. This works in one environment but not another.
I have exported the images and moved them from one environment to the other, and the results indicate that the difference lies in the environments, not the Docker images. I also have everything checked into git and have done clean builds of the images.
pg=$(docker run -d postgres-seeded)
docker run -it --net=container:$pg nodeapp npm test
Here's the matrix of what I've seen work and not work:
machine                                                       | npm test result
mac os X - docker 1.10.3                                      | success
webstorm local to docker to postgres-seeded - mac os X docker | success
amazon ami on AWS docker 1.9.1                                | error below
amazon ami on AWS docker 1.10.3                               | error below
ubuntu 14.04 on AWS docker 1.10.3                             | success on one instance, error below
The error, when it fails, is that it is not able to connect to Postgres:
4) AccessPermissionsRoutes "before all" hook:
Error: The genericPool is not initialized.
at Pool_PG.Pool.acquire (/usr/src/app/node_modules/knex/lib/pool.js:57:14)
at /usr/src/app/node_modules/knex/lib/client.js:33:10
at tryCatcher (/usr/src/app/node_modules/knex/node_modules/bluebird/js/main/util.js:26:23)
at Promise._resolveFromResolver (/usr/src/app/node_modules/knex/node_modules/bluebird/js/main/promise.js:480:31)
at new Promise (/usr/src/app/node_modules/knex/node_modules/bluebird/js/main/promise.js:70:37)
at Client_PG.Client.acquireConnection (/usr/src/app/node_modules/knex/lib/client.js:31:10)
I was able to solve this issue. It has something to do with Postgres being left in an unstable state after seeding; it doesn't happen all the time, so I'm not sure when it does or doesn't.
So the solution is to use a NEW postgres image every time, and seed the data with a volume container.
seeded=$(docker run --name seeded_1 -v /var/lib/postgresql/data -d busybox true)
pg=$(docker run --volumes-from $seeded -d postgres)
docker run -e DATABASE_URL=postgres://postgres@localhost:5432 --net=container:$pg seeding-image
where seeding-image is your custom docker image that seeds a postgres database.
Recreate and reseed the Postgres container and try again. Most likely that will fix the connection issue you see in the code.
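Putting the answer's steps together as one rerunnable sketch (names as above; the rm and the sleep are assumptions added for rerunnability, not part of the original answer):
# Fresh seed volume and fresh postgres on every run.
docker rm -f seeded_1 2>/dev/null
seeded=$(docker run --name seeded_1 -v /var/lib/postgresql/data -d busybox true)
pg=$(docker run --volumes-from "$seeded" -d postgres)
sleep 5   # crude wait for postgres to accept connections; timing varies
docker run -e DATABASE_URL=postgres://postgres@localhost:5432 --net=container:"$pg" seeding-image
docker run -it --net=container:"$pg" nodeapp npm test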
