My installation of Odoo 14 isn't working. It fails with an error like:
psycopg2.OperationalError: FATAL: role "admin" does not exist
Here is my config file:
db_host = localhost
db_maxconn = 64
db_name = False
db_password = paroli321
db_port = 5432
db_sslmode = prefer
db_template = template0
db_user = admin
The error says that there is no PostgreSQL role named "admin", so create one with the following command:
$ createuser admin -W --interactive
Shall the new role be a superuser? (y/n) <-- no
Shall the new role be allowed to create databases? (y/n) <-- yes
Shall the new role be allowed to create more new roles? (y/n) <-- no
Password: <-- type the password here; in your case it is "paroli321"
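Alternatively, a minimal sketch of the same fix from psql, assuming a default setup where you can become the postgres OS user:

$ sudo -u postgres psql
postgres=# CREATE ROLE admin WITH LOGIN CREATEDB PASSWORD 'paroli321';

Afterwards, restart the Odoo service so it reconnects with the new role.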
I have created a PostgreSQL Cloud SQL instance in GCP, and I have created a user and a DB for it. I can connect to it via the cloud_sql_proxy tool:
$ cloud_sql_proxy -instances=project_name:REGION:instance_name=tcp:5432 -credential_file=/path/to/key.json
I can then successfully connect to the instance via psql and run queries, insert data, etc. on the command line:
$ psql "host=127.0.0.1 port=5432 sslmode=disable dbname=myDBname user=myUser"
Password:
psql (10.18 (Ubuntu 10.18-0ubuntu0.18.04.1), server 13.3)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
Type "help" for help.
myDBname=>SELECT * FROM MyTable;
My issue is that when I try to use the sqlalchemy library with the sample code provided in the SQLAlchemy examples, like this:
import sqlalchemy
import os

db_config = {
    "pool_size": 5,
    "max_overflow": 2,
    "pool_timeout": 30,  # 30 seconds
    "pool_recycle": 1800,  # 30 minutes
}

def init_tcp_connection_engine(db_config):
    db_user = "myUser"
    db_pass = "myPassword"
    db_name = "myDBname"
    db_hostname = "127.0.0.1"
    db_port = 5432

    pool = sqlalchemy.create_engine(
        # Equivalent URL:
        # postgresql+pg8000://<db_user>:<db_pass>@<db_host>:<db_port>/<db_name>
        sqlalchemy.engine.url.URL.create(
            drivername="postgresql+pg8000",
            username=db_user,    # e.g. "my-database-user"
            password=db_pass,    # e.g. "my-database-password"
            host=db_hostname,    # e.g. "127.0.0.1"
            port=db_port,        # e.g. 5432
            database=db_name     # e.g. "my-database-name"
        ),
        **db_config
    )
    # [END cloud_sql_postgres_sqlalchemy_create_tcp]
    pool.dialect.description_encoding = None
    return pool

def main():
    db = init_tcp_connection_engine(db_config)
    with db.connect() as conn:
        rows = conn.execute("SELECT * FROM MyTable;").fetchall()
        for row in rows:
            print(row)

if __name__ == "__main__":
    main()
I get the error:
Exception has occurred: ProgrammingError (note: full exception trace is shown but execution is paused at: <module>)
(pg8000.dbapi.ProgrammingError) {'S': 'FATAL', 'V': 'FATAL', 'C': '28P01', 'M': 'password authentication failed for user "myUser"', 'F': 'auth.c', 'L': '347', 'R': 'auth_failed'}
(Background on this error at: https://sqlalche.me/e/14/f405)
Any idea what is wrong and how I can resolve this?
I changed the password via the web UI, pasted it into the code, and it worked.
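If you prefer the command line to the web UI, the same reset can be done with gcloud (a sketch; the instance and user names are the placeholders from the question, and myNewPassword is whatever you choose):

$ gcloud sql users set-password myUser --instance=instance_name --password=myNewPassword

Then update db_pass in the code to match.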
I am trying to enable Kerberos authentication for our website. The idea is to have users logged into a Windows AD domain get automatic login (and initial account creation).
Before I tackle the Windows side of things, I wanted to get it working locally.
So I made a test KDC/KADMIN container using git@github.com:ist-dsi/docker-kerberos.git
The webserver is in a local docker container with nginx and the spnego module compiled in.
The KDC/KADMIN container is at 172.17.0.2 and accessible from my webserver container.
Here is my local krb5.conf:
default_realm = SERVER.LOCAL
[realms]
SERVER.LOCAL = {
kdc_ports = 88,750
kadmind_port = 749
kdc = 172.17.0.2:88
admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
and the krb5.conf on the webserver container:
[libdefaults]
default_realm = SERVER.LOCAL
default_keytab_name = FILE:/etc/krb5.keytab
ticket_lifetime = 24h
kdc_timesync = 1
ccache_type = 4
forwardable = false
proxiable = false
[realms]
LOCALHOST.LOCAL = {
kdc_ports = 88,750
kadmind_port = 749
kdc = 172.17.0.2:88
admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
Here are the principals and the keytab config (the keytab is copied to the web container as /etc/krb5.keytab):
rep ~/project * rep_krb_test $ kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2
Authenticating as principal kadmin/admin@SERVER.LOCAL with password.
kadmin: list_principals
K/M@SERVER.LOCAL
kadmin/99caf4af9dc5@SERVER.LOCAL
kadmin/admin@SERVER.LOCAL
kadmin/changepw@SERVER.LOCAL
krbtgt/SERVER.LOCAL@SERVER.LOCAL
noPermissions@SERVER.LOCAL
rep_movsd@SERVER.LOCAL
kadmin: q
rep ~/project * rep_krb_test $ ktutil
ktutil: addent -password -p rep_movsd@SERVER.LOCAL -k 1 -f
Password for rep_movsd@SERVER.LOCAL:
ktutil: wkt krb5.keytab
ktutil: q
rep ~/project * rep_krb_test $ kinit -C -p rep_movsd@SERVER.LOCAL
Password for rep_movsd@SERVER.LOCAL:
rep ~/project * rep_krb_test $ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: rep_movsd@SERVER.LOCAL
Valid starting Expires Service principal
02/07/20 04:27:44 03/07/20 04:27:38 krbtgt/SERVER.LOCAL@SERVER.LOCAL
The relevant nginx config:
server {
    location / {
        uwsgi_pass django;
        include /usr/lib/proj/lib/wsgi/uwsgi_params;
        auth_gss on;
        auth_gss_realm SERVER.LOCAL;
        auth_gss_service_name HTTP;
    }
}
Finally, /etc/hosts has:
# use alternate local IP address
127.0.0.2 server.local server
Now I try to access this with curl:
* Trying 127.0.0.2:80...
* Connected to server.local (127.0.0.2) port 80 (#0)
* gss_init_sec_context() failed: Server krbtgt/LOCAL@SERVER.LOCAL not found in Kerberos database.
* Server auth using Negotiate with user ''
> GET / HTTP/1.1
> Host: server.local
> User-Agent: curl/7.71.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
....
As you can see, it is trying to use the SPN "krbtgt/LOCAL@SERVER.LOCAL", whereas kinit has "krbtgt/SERVER.LOCAL@SERVER.LOCAL" as the SPN.
How do I get this to work?
Thanks in advance.
So it turns out that I needed
auth_gss_service_name HTTP/server.local;
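For reference, a sketch of the corrected location block, keeping the other directives from the question unchanged:

location / {
    uwsgi_pass django;
    include /usr/lib/proj/lib/wsgi/uwsgi_params;
    auth_gss on;
    auth_gss_realm SERVER.LOCAL;
    auth_gss_service_name HTTP/server.local;
}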
Some other tips for issues encountered:
Make sure the keytab file is readable by the web server process (user www-data or whichever user the server runs as)
Make sure the keytab principals are in the correct order
Use export KRB5_TRACE=/dev/stderr and curl to test - Kerberos gives a very detailed log of what it's doing and why it fails; for example:
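A sketch of that test (the host name is the test entry from /etc/hosts above):

$ export KRB5_TRACE=/dev/stderr
$ curl --negotiate -u : -v http://server.local/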
I have a KVM host. I'm using Terraform to create some virtual servers using the KVM (libvirt) provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
uri = "qemu+ssh://root#192.168.60.7"
}
resource "libvirt_volume" "ubuntu-qcow2" {
count = 1
name = "ubuntu-qcow2-${count.index+1}"
pool = "default"
source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
resource "libvirt_network" "vm_network" {
name = "vm_network"
mode = "bridge"
bridge = "br0"
addresses = ["192.168.60.224/27"]
dhcp {
enabled = true
}
}
# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = "data.template_file.user_data.rendered"
network_config = "data.template_file.network_config.rendered"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_config.yaml")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
The server gets created successfully, and I can ping it from the host on which I ran the Terraform script. However, I cannot log in through SSH, even though I pass my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.60.86
ubuntu#192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any reason why I cannot log in? Any way to debug? I cannot log in to the server to debug it. I tried setting the password in the cloud-config file, but that also does not work.
*** Additional information
1) the rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
I also faced the same problem, because I was missing the first line
#cloud-config
in the cloud-init file.
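For example, the user data should begin like this (a sketch based on the question's cloud_config.yaml):

#cloud-config
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    # remaining keys as in the question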
You need to add a libvirt_cloudinit_disk resource to add the ssh-key to the VM;
code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
  count = length(var.hostname)
  name  = "${var.hostname[count.index]}-commoninit.iso"
  #name = "${var.hostname}-commoninit.iso"
  # pool = "default"
  user_data      = data.template_file.user_data[count.index].rendered
  network_config = data.template_file.network_config.rendered
}
Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without double quotes!
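Putting that together, the commoninit resource from the question would look roughly like this (a sketch; only the quoting changes):

resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  pool           = "default"
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
}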
I'm working on macOS with SQL Server, using a Docker image to be able to run it:
docker run -d --name sqlserver -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=myStrongPass' -e 'MSSQL_PID=Developer' -p 1433:1433 microsoft/mssql-server-linux:2017-latest
I can connect successfully in the Azure Data Studio GUI.
But the connection does not work in my Node.js code using the mssql module.
const poolConnection = new sql.ConnectionPool({
  database: 'myDbTest',
  server: 'localhost',
  port: 1433,
  password: '*******',
  user: 'sa',
  connectionTimeout: 5000,
  options: {
    encrypt: false,
  },
});
const [error, connection] = await to(poolConnection.connect());
The error is always the same:
ConnectionError: Login failed for user 'sa'
This is my first time working with SQL Server, and it confuses me that I can connect correctly in the Azure Data Studio GUI but I can't do it in code.
I'm trying to create new login users with CREATE LOGIN and give them privileges based on other posts here on Stack Overflow, but nothing seems to work.
UPDATE:
I realized that I can connect correctly if I put master in the database key.
Example:
const poolConnection = new sql.ConnectionPool({
  database: 'master', // <-- updated here
  server: 'localhost',
  port: 1433,
  password: '*******',
  user: 'sa',
  connectionTimeout: 5000,
  options: {
    encrypt: false,
  },
});
1) DB that I can connect to.
2) DB that I want to connect to but can't.
Container error:
2020-03-18 03:59:14.11 Logon Login failed for user 'sa'. Reason: Failed to open the explicitly specified database 'DoctorHoyCRM'. [CLIENT: 172.17.0.1]
I suspect a lot of people miss the sa password complexity requirement:
The password should follow the SQL Server default password policy, otherwise the container can not setup SQL server and will stop working. By default, the password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols. You can examine the error log by executing the docker logs command.
An example based on: Quickstart: Run SQL Server container images with Docker
docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=myStr0ngP4ssw0rd" -e "MSSQL_PID=Developer" -p 1433:1433 --name sqlserver -d mcr.microsoft.com/mssql/server:2017-latest
docker start sqlserver
Checking that the docker image is running (it should not say "Exited" under STATUS)...
docker ps -a
# CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
# af9f01eacab2 mcr.microsoft.com/mssql/server:2017-latest "/opt/mssql/bin/nonr…" 45 seconds ago Up 34 seconds 0.0.0.0:1433->1433/tcp sqlserver
Testing from within the docker container that SQL Server is installed and running...
docker exec -it sqlserver /opt/mssql-tools/bin/sqlcmd \
-S localhost -U "sa" -P "myStr0ngP4ssw0rd" \
-Q "select ##VERSION"
# --------------------------------------------------------------------
# Microsoft SQL Server 2017 (RTM-CU19) (KB4535007) - 14.0.3281.6 (X64)
# Jan 23 2020 21:00:04
# Copyright (C) 2017 Microsoft Corporation
# Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)
Finally, testing from NodeJS...
const sql = require('mssql');

const config = {
  user: 'sa',
  password: 'myStr0ngP4ssw0rd',
  server: 'localhost',
  database: 'msdb',
};

sql.on('error', err => {
  console.error('err: ', err);
});

sql.connect(config).then(pool => {
  return pool.request()
    .query('select @@VERSION');
}).then(result => {
  console.dir(result);
}).catch(err => {
  console.error('err: ', err);
});
$ node test.js
tedious deprecated The default value for `config.options.enableArithAbort` will change from `false` to `true` in the next major version of `tedious`. Set the value to `true` or `false` explicitly to silence this message. node_modules/mssql/lib/tedious/connection-pool.js:61:23
{
  recordsets: [ [ [Object] ] ],
  recordset: [
    {
      '': 'Microsoft SQL Server 2017 (RTM-CU19) (KB4535007) - 14.0.3281.6 (X64) \n' +
        '\tJan 23 2020 21:00:04 \n' +
        '\tCopyright (C) 2017 Microsoft Corporation\n' +
        '\tDeveloper Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)'
    }
  ],
  output: {},
  rowsAffected: [ 1 ]
}
Hope this helps.
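If the goal is to reach a specific database such as DoctorHoyCRM rather than master, here is a hedged sketch of creating it along with a dedicated login (crmUser is a hypothetical name, and the password is the example one from above):

-- create the database the connection string asks for
CREATE DATABASE DoctorHoyCRM;
GO
-- crmUser is a hypothetical login; pick a password that meets the policy
CREATE LOGIN crmUser WITH PASSWORD = 'myStr0ngP4ssw0rd';
GO
USE DoctorHoyCRM;
GO
CREATE USER crmUser FOR LOGIN crmUser;
ALTER ROLE db_owner ADD MEMBER crmUser;
GO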
I start a new mongo instance, create a user, and authorize it, but when I run "show collections", the system says that the user is not authorized. I do not know why.
# mongo admin
MongoDB shell version: 2.4.3
connecting to: admin
Server has startup warnings:
Thu May 23 18:23:56.735 [initandlisten]
Thu May 23 18:23:56.735 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
Thu May 23 18:23:56.735 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
Thu May 23 18:23:56.735 [initandlisten] ** See http://dochub.mongodb.org/core/32bit
Thu May 23 18:23:56.735 [initandlisten]
> db = db.getSiblingDB("admin")
admin
> db.addUser({user:"sa",pwd:"sa",roles:["userAdminAnyDatabase"]})
{
    "user" : "sa",
    "pwd" : "75692b1d11c072c6c79332e248c4f699",
    "roles" : [
        "userAdminAnyDatabase"
    ],
    "_id" : ObjectId("519deedff788eb914bc429b5")
}
> show collections\
Thu May 23 18:26:50.103 JavaScript execution failed: SyntaxError: Unexpected token ILLEGAL
> show collections
Thu May 23 18:26:52.418 JavaScript execution failed: error: {
"$err" : "not authorized for query on admin.system.namespaces",
"code" : 16550
} at src/mongo/shell/query.js:L128
> db.auth("sa","sa")
1
> show collections
Thu May 23 18:27:22.307 JavaScript execution failed: error: {
"$err" : "not authorized for query on admin.system.namespaces",
"code" : 16550
} at src/mongo/shell/query.js:L128
I had the same problem, but I found this tutorial and it helped me.
http://www.hacksparrow.com/mongodb-add-users-and-authenticate.html
use:
db.addUser('sa', 'sa')
instead of
db.addUser({user:"sa",pwd:"sa",roles:["userAdminAnyDatabase"]})
As Robert says, admin users only have rights to administer, not to write to databases.
So you have to create a custom user for your database. There are different ways; I chose the dbOwner way.
(I use Ubuntu Server, mongo 2.6.3 and Robomongo)
So to do this, first create your admin user as mongo says:
type mongo in your linux shell
and these commands in the mongo shell:
use admin
db.createUser({user:"mongoadmin",pwd:"chooseyouradminpassword",roles:[{role:"userAdminAnyDatabase",db:"admin"}]})
db.auth("mongoadmin","chooseyouradminpassword")
exit
edit the mongo conf file with :
nano /etc/mongod.conf
You can use vi if nano is not installed.
activate authentication by uncommenting/adding the line auth=true
if you want to use Robomongo from another machine, change the line bind_ip=127.0.0.1 to bind_ip=0.0.0.0 (you should add more protection in production); the resulting lines are sketched below.
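A sketch of the relevant lines in /etc/mongod.conf after those edits (old-style config format, as used by these MongoDB versions):

# enable authentication
auth=true
# listen on all interfaces so Robomongo can connect from another machine
bind_ip=0.0.0.0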
type in linux shell :
service mongod restart
mongo
And in mongo shell :
use admin
db.auth("mongoadmin","pwd:"chooseyouradminpassword")
use doomnewdatabase
db.createUser({user:"doom",pwd:"chooseyourdoompassword",customData:{desc:"Just me as I am"},roles : [{role:"dbOwner",db:"doomnewdatabase"}]})
db.auth("doom","chooseyourdoompassword")
show collections
(customData is not required).
If you want to try if it works, type this in the mongo shell :
db.products.insert( { item: "card", qty: 15 } )
show collections
db.products.find()
Good luck ! Hope it will help you and others !
I had searched for this information for hours.
I had the same problem and this is how I solved it:
db = db.getSiblingDB('admin')
db.addUser(
    { user: "mongoadmin",
      pwd: "adminpass",
      roles: ['clusterAdmin', 'userAdminAnyDatabase', 'readAnyDatabase'] } )
For MongoDB version 2.6 use:
db.createUser(
    {
        user: "testUser",
        pwd: "password",
        roles: [{role: "readWrite", db: "yourdatabase"}]
    })
See the docs
I solved it like so
for MongoDB 2.6+ (currently 3):
db.createUser(
    {
        user: "username",
        pwd: "password",
        roles: [ { role: "root", db: "admin" } ]
    }
)
Note that for the role field, instead of userAdminAnyDatabase we use root.
I would try granting the read role to the user. userAdminAnyDatabase only grants the ability to administer users, not to read data.
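On MongoDB 2.6+ that could look like the following sketch (the question itself runs 2.4, where roles are assigned through addUser instead):

use admin
db.grantRolesToUser("sa", [ { role: "read", db: "admin" } ])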