I am following this post to run a SQL script to create a default table.
Here is my docker-compose file:
version: "3"
services:
  web:
    build: .
    volumes:
      - ./:/usr/src/app/
      - /usr/src/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - postgres
  postgres:
    container_name: postgres
    build:
      context: .
      dockerfile: pgDockerfile
    environment:
      POSTGRES_PASSWORD: test
      POSTGRES_USER: test
    ports:
      - 5432:5432
Here is my pgDockerfile
FROM postgres:9.6-alpine
# copy init sql
ADD 00-initial.sql /docker-entrypoint-initdb.d/
Here is my SQL script:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS test (
    id text NOT NULL,
    title varchar(200) NOT NULL
);
I can build and run docker-compose up, and I see the following message:
postgres | CREATE DATABASE
postgres |
postgres | CREATE ROLE
postgres |
postgres |
postgres | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/00-initial-data.sql
postgres | CREATE EXTENSION
postgres | CREATE TABLE
postgres |
postgres |
postgres | LOG: received fast shutdown request
postgres | LOG: aborting any active transactions
postgres | waiting for server to shut down....LOG: autovacuum launcher shutting down
postgres | LOG: shutting down
postgres | LOG: database system is shut down
postgres | done
postgres | server stopped
postgres |
postgres | PostgreSQL init process complete; ready for start up.
postgres |
postgres | LOG: database system was shut down at 2017-03-22 21:36:16 UTC
postgres | LOG: MultiXact member wraparound protections are now enabled
postgres | LOG: database system is ready to accept connections
postgres | LOG: autovacuum launcher started
It seems like the DB is shut down after the table is created. I think that's the standard process for the Postgres Docker image, but when I log in to Postgres, I don't see the table that is supposed to be there.
I log in through:
docker exec -it $(my postgres container id) sh
#su - postgres
#psql
# \d => No relations found.
I am not sure if this is the right way to create default data for Postgres.
I think your Docker setup works as intended; it's just not doing quite what you think. When POSTGRES_USER is set but POSTGRES_DB is not, the image defaults to using the user name as the database name, so your table is in the test database!
Use \l to list your databases, then \c test to connect to the right one. Once connected, \d will list your relations!
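If you would rather make the database name explicit instead of inheriting it from the user name, the image also honours POSTGRES_DB in the compose file (a sketch; only the environment block changes relative to the question's file):

```yaml
postgres:
  build:
    context: .
    dockerfile: pgDockerfile
  environment:
    POSTGRES_PASSWORD: test
    POSTGRES_USER: test
    POSTGRES_DB: test   # explicit database name; defaults to POSTGRES_USER when omitted
```

The init script in /docker-entrypoint-initdb.d/ then runs against that database on first start.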
Related
I use the mcr.microsoft.com/mssql/server:2017 docker container to run a mssql server. I tried to change the collation like this:
echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Unfortunately I get this error:
No passwd entry for user 'mssql'
How is it possible to fix this error?
I created a new user with useradd mssql, but now I get this error if I run the command:
sqlservr: Unable to open /var/opt/mssql/.system/instance_id: File: pal.cpp:566 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
/opt/mssql/bin/sqlservr: PAL initialization failed. Error: 101
It looks like the latest mcr.microsoft.com/mssql/server image fixes this issue. If you insist on the old one, the following procedure fixes the user/permission issues:
cake@cake:~/20211012$ docker run --rm -it mcr.microsoft.com/mssql/server:2017-latest /bin/bash
SQL Server 2019 will run as non-root by default.
This container is running as user root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
root@4fd0bdf1d21c:/# useradd mssql
root@4fd0bdf1d21c:/# mkdir -p /var/opt/mssql
root@4fd0bdf1d21c:/# chmod -R 777 /var/opt/mssql
root@4fd0bdf1d21c:/# echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Enter the collation: Configuring SQL Server...
The SQL Server End-User License Agreement (EULA) must be accepted before SQL
Server can start. The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746388.
You can accept the EULA by specifying the --accept-eula command line option,
setting the ACCEPT_EULA environment variable, or using the mssql-conf tool.
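As an aside, for the Linux images both the EULA acceptance and the collation can be supplied as environment variables when the container is first created, which avoids running mssql-conf by hand (a sketch; the SA password here is a placeholder):

```
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
    -e "MSSQL_COLLATION=SQL_Latin1_General_CP1_CI_AS" \
    -p 1433:1433 -d mcr.microsoft.com/mssql/server:2017-latest
```

MSSQL_COLLATION is only applied on first initialization of /var/opt/mssql, so it needs a fresh data directory.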
I am trying to run Airflow on my Windows machine using Docker. Here is the link that I am following from the official docs - https://airflow.apache.org/docs/apache-airflow/2.0.1/start/docker.html.
I have created the directory structure as expected and also downloaded the docker-compose YAML file. On running 'docker-compose up airflow-init' as suggested by the documentation, I get the error below:
airflow-init_1 |
airflow-init_1 | [2021-07-03 10:19:29,721] {cli_action_loggers.py:105} WARNING - Failed to log action with (psycopg2.errors.UndefinedTable) relation "log" does not exist
airflow-init_1 | LINE 1: INSERT INTO log (dttm, dag_id, task_id, event, execution_dat...
airflow-init_1 | ^
airflow-init_1 |
airflow-init_1 | [SQL: INSERT INTO log (dttm, dag_id, task_id, event, execution_date, owner, extra) VALUES (%(dttm)s, %(dag_id)s, %(task_id)s, %(event)s, %(execution_date)s, %(owner)s, %(extra)s) RETURNING log.id]
airflow-init_1 | [parameters: {'dttm': datetime.datetime(2021, 7, 3, 10, 19, 29, 712157, tzinfo=Timezone('UTC')), 'dag_id': None, 'task_id': None, 'event': 'cli_upgradedb', 'execution_date': None, 'owner': 'airflow', 'extra': '{"host_name": "7f142ce11611", "full_command": "[\'/home/airflow/.local/bin/airflow\', \'db\', \'upgrade\']"}'}]
From the logs it's clear that the log table does not exist and Airflow is trying to insert into it. I'm not sure why, though, or how this error can be fixed. I am using the original docker-compose file that is published on the Airflow doc page.
This is the current status of my Airflow Docker image: on trying to access the Airflow UI at http://localhost:8080/admin/, I get the "Airflow 404 = lots of circles" error.
This is just a warning: the Airflow CLI tries to add an audit entry to the log table before the tables get created.
I have the same warning on a fresh DB initially, but then the output continues. You should get something like this at the end (I ran it with the just-released 2.1.1, which I recommend you start with):
airflow-init_1 | [2021-07-03 15:54:01,449] {manager.py:784} WARNING - No user yet created, use flask fab command to do it.
airflow-init_1 | Upgrades done
airflow-init_1 | [2021-07-03 15:54:06,899] {manager.py:784} WARNING - No user yet created, use flask fab command to do it.
airflow-init_1 | Admin user airflow created
airflow-init_1 | 2.1.1
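If the output does not continue and init genuinely fails, it is also worth double-checking the quick-start prerequisites for 2.0/2.1: the mounted directories must exist and .env must carry your host UID so the containers can write to the mounts. A minimal setup sketch (directory names follow the official compose file):

```shell
# Run in the directory that holds docker-compose.yaml,
# before `docker-compose up airflow-init`
mkdir -p ./dags ./logs ./plugins
echo "AIRFLOW_UID=$(id -u)" > .env
```

Without the UID in .env, the containers can fail with permission errors on the mounted logs directory.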
You need to initialize the Airflow DB:
docker exec -ti airflow-webserver airflow db init && echo "Initialized airflow DB"
Then create an Admin user:
docker exec -ti airflow-webserver airflow users create --role Admin --username {AIRFLOW_USER} --password {AIRFLOW_PASSWORD} -e {AIRFLOW_USER_EMAIL} -f {FIRST_NAME} -l {LAST_NAME} && echo "Created airflow admin user"
I configured Odoo on AWS EC2, connecting to PostgreSQL on RDS. When I run the command ./odoo-bin --config=/etc/odoo.conf and try to access it from a browser, I get the following error:
ERROR odoo_db odoo.modules.loading: Database odoo_db not initialized, you can force it with `-i base`
File "/opt/odoo/odoo/odoo/modules/registry.py", line 176, in __getitem__
return self.models[model_name]
KeyError: 'ir.http' - - -
I'm also getting this error:
STATEMENT: SELECT latest_version FROM ir_module_module WHERE name='base'
ERROR odoo_db odoo.sql_db: bad query: SELECT latest_version FROM ir_module_module WHERE name='base'
ERROR: relation "ir_module_module" does not exist
In the command line run:
./odoo-bin --addons-path=addons --database=odoo --db_user=odoo --db_password=odoo --db_host=localhost --db_port=5432 -i base
Explicitly give the DB name, user and password; the "-i base" option initialises the Odoo database, as the error message itself suggests.
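Equivalently, these settings can live in the config file passed with --config (a sketch; the values mirror the command line above, and db_name pins the instance to one database):

```ini
[options]
addons_path = addons
db_host = localhost
db_port = 5432
db_user = odoo
db_password = odoo
db_name = odoo
```

With that in place, a one-off run of ./odoo-bin --config=/etc/odoo.conf -i base initialises the database.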
At first glance, the issue is that the DB has been created in Postgres but lacks the required Odoo setup records, i.e. the base setup. You can verify this by accessing the DB directly and checking the number of tables or browsing some of them.
It sometimes happens that when you create a DB, specifically one with a name similar to a DB you created before and deleted later (it is dropped from PG but still leaves traces in the session or the DB location path), it does not get initialized properly.
Solution:
1. Create a sample DB with a different name (at least the first 4 characters completely different) and check.
2. Initialize the DB from the odoo.conf file: add db_name = < Your DB Name > (for experiment purposes use a completely different name), then restart the Odoo services and check.
Hope it will help. Enjoy troubleshooting!
First do what @FaisalAnsari says here (quoted below):

Go to RDS and create a database in PostgreSQL, and configure the server.conf file as given below.

;This is the password that allows database operations:
;admin_passwd = admin
db_host = rds_endpoint (after creating the database you will get the rds_endpoint)
db_port = False
db_user = "user name which is created by you to the database"
db_password = "password which is created"
;addons_path = /home/deadpool/workspace/odoo_13_community/custom_addons, /home/deadpool/workspace/odoo_13_community/custom_addons
Then go to the command line and do the following.
1. Stop your odoo instance:
~$ service odoo stop
2. Enable a login shell for the user odoo:
~$ chsh -s /bin/bash odoo
3. Execute odoo from the command line as the user odoo:
~$ runuser -l odoo -c "odoo -i base -d YourRDSDatabase --db_host YourAmazonRDSHost.Address.rds.amazonaws.com -r YourRDSDatabaseUserName -w YourRDSDatabasePassword --stop-after-init"
4. After the initialization finishes, start the odoo service:
~$ service odoo start
Troubleshooting :
If odoo doesn't start correctly, make sure the database user in your RDS instance has privileges at least on the database you are using:
~$ psql --host=YourAmazonRDSHost.Address.rds.amazonaws.com --port=5432 --username=YourRDSDatabaseUserName --password --dbname=YourRDSDatabase
and when you are inside postgresql, type the following:
grant all privileges on database YourRDSDatabase to YourRDSDatabaseUserName;
\q
and try again from step 3.
Hope that Helps!!
I'm having problems creating models with Sequelize. Or, more accurately, I can create models and migrate them, but I cannot access, modify or delete them via psql.
I'm creating the model like this:
sequelize model:generate --name Device --attributes pid:string
output:
Sequelize CLI [Node: 4.6.1, CLI: 3.0.0, ORM: 4.22.6]
WARNING: This version of Sequelize CLI is not fully compatible with
Sequelize v4. https://github.com/sequelize/cli#sequelize-support
New model was created at /home/matti/workspace/db_demo/server/models/device.js .
New migration was created at /home/matti/workspace/db_demo/server/migrations/20171114134820-Device.js .
migrating like this:
$ sequelize db:migrate
Sequelize CLI [Node: 4.6.1, CLI: 3.0.0, ORM: 4.22.6]
WARNING: This version of Sequelize CLI is not fully compatible with
Sequelize v4. https://github.com/sequelize/cli#sequelize-support
Loaded configuration file "server/config/config.json". Using
environment "development". sequelize deprecated String based operators
are now deprecated. Please use Symbol based operators for better
security, read more at
http://docs.sequelizejs.com/manual/tutorial/querying.html#operators
node_modules/sequelize/lib/sequelize.js:236:13
== 20171114134820-create-device: migrating =======
== 20171114134820-create-device: migrated (0.044s)
after this I can see the tables in psql:
device-dev=# \d
List of relations
Schema | Name | Type | Owner
--------+----------------+----------+-------
public | Devices | table | matti
public | Devices_id_seq | sequence | matti
public | SequelizeMeta | table | matti
However, I cannot describe the table:
device-dev=# \d Devices
Did not find any relation named "Devices".
my user credentials (using it as matti):
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
matti | Superuser, Create DB | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
used versions:
Sequelize:
Sequelize CLI [Node: 4.6.1, CLI: 3.0.0, ORM: 4.22.6]
postgres:
psql (PostgreSQL) 10.1
nodejs:
v4.6.1
npm:
2.15.9
I'm working in Ubuntu 16.04.
I've reverted the migration and dropped the DB just to be certain, but it didn't have any effect. Nor did restarting Postgres.
I can create, list and delete databases as the user "matti" from psql, but not the ones generated by Sequelize.
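One detail worth knowing about the failing \d above: PostgreSQL folds unquoted identifiers to lower case, while Sequelize migrations create mixed-case names such as Devices, so in psql (and in SQL statements) the name has to be double-quoted to match exactly:

```sql
\d "Devices"
SELECT * FROM "Devices";
```

Without the quotes, \d Devices looks for a relation literally named devices, which does not exist here.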
I am fairly new to playing around with Docker, so hopefully this is all my fault, but I am trying to get a multi-host Apache Cassandra ring set up using Docker Compose.
I have the following docker-compose.yml file
version: '2'
services:
  cassandra-1:
    hostname: cassandra-1
    image: cassandra:latest
    command: /bin/bash -c "sleep 1 && echo ' -- Pausing to let system catch up ... -->' && /docker-entrypoint.sh cassandra -f"
    expose:
      - 7000
      - 7001
      - 7199
      - 9042
      - 9160
    # volumes: # uncomment if you desire mounts, also uncomment cluster.sh
    #   - ./data/cassandra-1:/var/lib/cassandra:rw
  cassandra-2:
    hostname: cassandra-2
    image: cassandra:latest
    command: /bin/bash -c "sleep 20 && echo ' -- Pausing to let system catch up ... -->' && /docker-entrypoint.sh cassandra -f"
    environment:
      - CASSANDRA_SEEDS=cassandra-1
    links:
      - cassandra-1:cassandra-1
    expose:
      - 7000
      - 7001
      - 7199
      - 9042
      - 9160
    # volumes: # uncomment if you desire mounts, also uncomment cluster.sh
    #   - ./data/cassandra-2:/var/lib/cassandra:rw
This example attempts to start a 1st Cassandra node (cassandra-1), and then a 2nd node (cassandra-2) in another container that should be able to use the 1st node as its seed node via the standard Cassandra environment variable CASSANDRA_SEEDS.
However when I run this, I get this sort of exception
cassandra-2_1 | WARN 07:00:35 Seed provider couldn't lookup host cassandra-1
cassandra-2_1 | Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: The seed provider lists no seeds.
cassandra-2_1 | The seed provider lists no seeds.
cassandra-2_1 | ERROR 07:00:35 Exception encountered during startup: The seed provider lists no seeds.
cass2_cassandra-2_1 exited with code 3
Where the attempt to start the 2nd Cassandra node (cassandra-2) ALWAYS fails, and also ends up killing the 1st:
cass1_cassandra-1_1 exited with code 137
If I split the docker-compose.yml file into two parts, with the 1st Cassandra node in one file, and just start that node using docker-compose up, that works fine.
Please also note that when I take the two-separate-docker-compose.yml-files route, one for "cassandra-1" and another for "cassandra-2", I AM making sure that the 2nd file uses "external_links" rather than "links". But the result is the same.
I have scoured the web for other examples and everyone seems to be doing it the same way as I am. But mine just doesn't work.
Did you make sure the first node finished startup before the second node attempts to join the cluster?
In my experience, 20 seconds is not enough for the first node to finish startup. Make the second node sleep for something like 60 seconds before joining the cluster.
command: /bin/bash -c "echo ' -- Pausing to let system catch up ...' && sleep 60 && /docker-entrypoint.sh cassandra -f"
Also, in the above command I swapped echo and sleep, which makes more sense when you are carefully watching the terminal output.
Logs of cassandra-1 should show something like
INFO 21:21:06 Node /172.18.0.2 state jump to NORMAL
INFO 21:23:04 Handshaking version with /172.18.0.4
INFO 21:23:07 Node /172.18.0.4 is now part of the cluster
INFO 21:23:07 InetAddress /172.18.0.4 is now UP
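As an alternative to tuning sleep durations, Compose file format 2.1 can gate startup on a healthcheck, so the second node only starts once the first is actually answering (a sketch; the cqlsh probe and the timings are illustrative choices, not tuned values):

```yaml
version: '2.1'
services:
  cassandra-1:
    hostname: cassandra-1
    image: cassandra:latest
    healthcheck:
      test: ["CMD-SHELL", "cqlsh -e 'describe cluster'"]
      interval: 15s
      timeout: 10s
      retries: 10
  cassandra-2:
    hostname: cassandra-2
    image: cassandra:latest
    environment:
      - CASSANDRA_SEEDS=cassandra-1
    depends_on:
      cassandra-1:
        condition: service_healthy
```

Note that the condition form of depends_on exists in the 2.x file formats but was dropped in the original version 3 format.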
View container logs by executing this command in your terminal:
docker logs [OPTIONS] CONTAINER_ID
Find the container ID by executing docker ps.
If things still don't work, try to add mem_limit: 1024m to both of your Cassandra container definitions. Maybe startup fails due to limited memory resources.
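The mem_limit suggestion would look like this in the services section of the compose file (1024m is a starting point, not a tuned figure):

```yaml
  cassandra-1:
    image: cassandra:latest
    mem_limit: 1024m
  cassandra-2:
    image: cassandra:latest
    mem_limit: 1024m
```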
This seems to only be an issue with Docker for Windows. I tried the original file on a Mac and it worked just fine