SPNEGO uses wrong KRBTGT principal name - linux

I am trying to enable Kerberos authentication for our website. The idea is to have users logged into a Windows AD domain get automatic login (and initial account creation).
Before I tackle the Windows side of things, I wanted to get it working locally.
So I made a test KDC/KADMIN container using git@github.com:ist-dsi/docker-kerberos.git
The webserver is in a local docker container with nginx and the spnego module compiled in.
The KDC/KADMIN container is at 172.17.0.2 and accessible from my webserver container.
Here is my local krb.conf:
[libdefaults]
default_realm = SERVER.LOCAL
[realms]
SERVER.LOCAL = {
kdc_ports = 88,750
kadmind_port = 749
kdc = 172.17.0.2:88
admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
And the krb.conf on the webserver container:
[libdefaults]
default_realm = SERVER.LOCAL
default_keytab_name = FILE:/etc/krb5.keytab
ticket_lifetime = 24h
kdc_timesync = 1
ccache_type = 4
forwardable = false
proxiable = false
[realms]
LOCALHOST.LOCAL = {
kdc_ports = 88,750
kadmind_port = 749
kdc = 172.17.0.2:88
admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
Here are the principals and the keytab setup (the keytab is copied to the web container as /etc/krb5.keytab):
rep ~/project * rep_krb_test $ kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2
Authenticating as principal kadmin/admin@SERVER.LOCAL with password.
kadmin: list_principals
K/M@SERVER.LOCAL
kadmin/99caf4af9dc5@SERVER.LOCAL
kadmin/admin@SERVER.LOCAL
kadmin/changepw@SERVER.LOCAL
krbtgt/SERVER.LOCAL@SERVER.LOCAL
noPermissions@SERVER.LOCAL
rep_movsd@SERVER.LOCAL
kadmin: q
rep ~/project * rep_krb_test $ ktutil
ktutil: addent -password -p rep_movsd@SERVER.LOCAL -k 1 -f
Password for rep_movsd@SERVER.LOCAL:
ktutil: wkt krb5.keytab
ktutil: q
rep ~/project * rep_krb_test $ kinit -C -p rep_movsd@SERVER.LOCAL
Password for rep_movsd@SERVER.LOCAL:
rep ~/project * rep_krb_test $ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: rep_movsd@SERVER.LOCAL
Valid starting Expires Service principal
02/07/20 04:27:44 03/07/20 04:27:38 krbtgt/SERVER.LOCAL@SERVER.LOCAL
The relevant nginx config:
server {
    location / {
        uwsgi_pass django;
        include /usr/lib/proj/lib/wsgi/uwsgi_params;
        auth_gss on;
        auth_gss_realm SERVER.LOCAL;
        auth_gss_service_name HTTP;
    }
}
Finally, /etc/hosts has:
# use alternate local IP address
127.0.0.2 server.local server
Now I try to access this with curl:
* Trying 127.0.0.2:80...
* Connected to server.local (127.0.0.2) port 80 (#0)
* gss_init_sec_context() failed: Server krbtgt/LOCAL@SERVER.LOCAL not found in Kerberos database.
* Server auth using Negotiate with user ''
> GET / HTTP/1.1
> Host: server.local
> User-Agent: curl/7.71.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
....
As you can see, it is trying to use the SPN "krbtgt/LOCAL@SERVER.LOCAL", whereas klist shows "krbtgt/SERVER.LOCAL@SERVER.LOCAL" as the service principal.
How do I get this to work?
Thanks in advance.

So it turns out that I needed
auth_gss_service_name HTTP/server.local;
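With that change, the location block from the question becomes:
server {
    location / {
        uwsgi_pass django;
        include /usr/lib/proj/lib/wsgi/uwsgi_params;
        auth_gss on;
        auth_gss_realm SERVER.LOCAL;
        # service class plus host, so the right SPN is looked up in the keytab
        auth_gss_service_name HTTP/server.local;
    }
}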
Some other tips for issues encountered:
Make sure the keytab file is readable by the web server process (www-data, or whatever user nginx runs as)
Make sure the keytab principals are in the correct order
Use export KRB5_TRACE=/dev/stderr and curl to test; Kerberos gives a very detailed log of what it is doing and why it fails (see the example below)
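For example, a quick test from the webserver container (a sketch; it assumes the keytab path above and a client with a fresh TGT):
# show keytab entries, their order and enctypes
klist -kte /etc/krb5.keytab
# verbose Kerberos tracing on stderr
export KRB5_TRACE=/dev/stderr
# "-u : --negotiate" makes curl do SPNEGO using the ticket cache
curl -v --negotiate -u : http://server.local/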

Related

Odoo 14 : psycopg2.OperationalError: FATAL: role "admin" does not exist

My installation of Odoo 14 isn't working. It fails with:
psycopg2.OperationalError: FATAL: role "admin" does not exist
Here is my config file:
db_host = localhost
db_maxconn = 64
db_name = False
db_password = paroli321
db_port = 5432
db_sslmode = prefer
db_template = template0
db_user = admin
The error says that there is no PostgreSQL role named "admin". Create one with the following command:
$createuser admin -W --interactive
Shall the new role be a superuser? (y/n) <-- no
Shall the new role be allowed to create databases? (y/n) <-- yes
Shall the new role be allowed to create more new roles? (y/n) <-- no
Password: <-- type the password here, in your case "paroli321"
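Equivalently, the role can be created non-interactively via psql (a sketch, assuming you can run commands as the postgres OS user; the password is the one from the config above):
sudo -u postgres psql -c "CREATE ROLE admin WITH LOGIN CREATEDB PASSWORD 'paroli321';"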

KVM with Terraform: SSH permission denied (Cloud-Init)

I have a KVM host and I'm using Terraform to create some virtual servers with the KVM (libvirt) provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
uri = "qemu+ssh://root#192.168.60.7"
}
resource "libvirt_volume" "ubuntu-qcow2" {
count = 1
name = "ubuntu-qcow2-${count.index+1}"
pool = "default"
source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
resource "libvirt_network" "vm_network" {
name = "vm_network"
mode = "bridge"
bridge = "br0"
addresses = ["192.168.60.224/27"]
dhcp {
enabled = true
}
}
# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = "data.template_file.user_data.rendered"
network_config = "data.template_file.network_config.rendered"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_config.yaml")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
The server is created successfully and I can ping it from the host on which I ran the Terraform script. I cannot log in through SSH, though, despite the fact that I pass my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.80.86
ubuntu@192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any reason why I cannot log in? Any way to debug? I cannot log in to the server to investigate. I tried setting the password in the cloud-config file, but that also does not work.
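(For debugging, two usual starting points, sketched here; the domain name argument is a placeholder:)
# verbose output shows which key is offered and why it is rejected
ssh -i homelab -vvv ubuntu@192.168.60.86
# on the KVM host: attach to the VM console and inspect cloud-init's output
virsh console <domain-name>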
*** Additional information
1) The rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
I also faced the same problem; it was because I was missing the first line,
#cloud-config
in the cloud-init file.
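In other words, the rendered user_data has to start with that header. A minimal sketch (user name and key are placeholders):
#cloud-config
users:
  - name: ubuntu
    ssh-authorized-keys:
      - ssh-rsa AAAA... user@example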
You need to add a libvirt_cloudinit_disk resource to add the ssh-key to the VM. Code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
  count          = length(var.hostname)
  name           = "${var.hostname[count.index]}-commoninit.iso"
  #name          = "${var.hostname}-commoninit.iso"
  # pool = "default"
  user_data      = data.template_file.user_data[count.index].rendered
  network_config = data.template_file.network_config.rendered
}
Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without the double quotes!
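Applied to the resource from the question:
resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  pool           = "default"
  # expressions, not quoted strings
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
}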

Problem with Puppet tagmail puppetlabs module

I'm using Puppet 6.14.0 and the tagmail module 3.2.0 on CentOS 7.
Below is my config on the master:
[master]
dns_alt_names=*******
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puippetlabs/code
confdir = /etc/puppetlabs/puppet
reports = puppetdb,console,tagmail
tagmap = $confdir/tagmail.conf
tagmail.conf (using a local SMTP server; I'm able to telnet to it, see the quick check after the next block):
[transport]
reportfrom = **********
smtpserver = localhost
smtpport = 25
smtphelo = localhost
[tagmap]
all: my_email_address
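A quick way to confirm that the local SMTP server actually accepts and delivers mail is a manual session (a sketch; the addresses are placeholders):
telnet localhost 25
EHLO localhost
MAIL FROM:<puppet@localhost>
RCPT TO:<my_email_address>
DATA
Subject: tagmail test

test body
.
QUIT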
And below is my config on one managed node:
[main]
certname = *********
server = *********
environment =uat
runinterval = 120
[agent]
report = true
pluginsync = true
But I'm not receiving any reports from tagmail.
Is someone having the same problem, or am I missing something in my config?

docker - minio - The access key ID you provided does not exist in our records

I have a Dockerfile that waits for a database with wait-for-it.sh and then runs a MinIO server.
I read the secrets from /run/secrets and create MINIO_SECRET_KEY and MINIO_ACCESS_KEY from them.
The MinIO server is up, but I cannot connect with a MinIO client (the JS client); I get the following error:
The access key ID you provided does not exist in our records
My client code:
const accessKey = fileService.readFile(configService.get('minio').access_key_file);
const secretKey = fileService.readFile(configService.get('minio').secret_key_file);
this.minioClient = new Minio.Client({
  endPoint: configService.get('minio').host,
  port: configService.get('minio').port,
  useSSL: configService.get('minio').useSSL,
  accessKey: accessKey.trim(),
  secretKey: secretKey.trim()
});
My docker entrypoint (bash):
docker_secrets_env() {
    # paths of the Docker secret files
    ACCESS_KEY_FILE="$MINIO_ACCESS_KEY_FILE"
    SECRET_KEY_FILE="$MINIO_SECRET_KEY_FILE"
    # export the credentials only when both secret files are present
    if [ -f "$ACCESS_KEY_FILE" ] && [ -f "$SECRET_KEY_FILE" ]; then
        MINIO_ACCESS_KEY="$(cat "$ACCESS_KEY_FILE")"
        export MINIO_ACCESS_KEY
        MINIO_SECRET_KEY="$(cat "$SECRET_KEY_FILE")"
        export MINIO_SECRET_KEY
    fi
}
docker_secrets_env
# wait for mongo, then start the server
./wait-for-it.sh mongo:27017 --timeout=0 --strict -- \
    minio server /data &
Thanks.
Try to access it directly at localhost:9000 with your preset credentials.
If that doesn't work, try the default credentials:
user: minioadmin
password: minioadmin
If those work, it means the docker image wasn't run properly and the keys from the secret files never reached the server.
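A couple of quick checks from the shell (sketches; <container> is a placeholder for your container name):
# MinIO health endpoint; returns 200 when the server process is up, regardless of credentials
curl -i http://localhost:9000/minio/health/live
# verify that docker_secrets_env really exported the keys inside the container
docker exec <container> env | grep MINIO_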

Gitlab SAML Configuration - 404 on metadata

Question regarding SAML configuration.
I'm currently running GitLab 9.1 CE on CentOS 7. I have an Apache instance on the front end as a reverse proxy to GitLab, handling http(s).
My gitlab.rb has the following configured:
external_url 'http://external.apache.server/gitlab/'
gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'saml'
gitlab_rails['omniauth_block_auto_created_users'] = false
# gitlab_rails['omniauth_auto_link_ldap_user'] = false
gitlab_rails['omniauth_auto_link_saml_user'] = true
# gitlab_rails['omniauth_external_providers'] = ['twitter', 'google_oauth2']
# gitlab_rails['omniauth_providers'] = [
# {
# "name" => "google_oauth2",
# "app_id" => "YOUR APP ID",
# "app_secret" => "YOUR APP SECRET",
# "args" => { "access_type" => "offline", "approval_prompt" => "" }
# }
# ]
In order to set up SAML, my provider is asking for the information returned from http://external.apache.server/gitlab/users/auth/saml/metadata, which returns a 404.
In reading the SAML documentation, it mentions that GitLab needs to be configured for SSL; I'm not sure if this is why the URL mentioned above returns a 404.
The problem with enabling SSL is that my external URL already provides it, and if I use it as is (https://external.apache.server) then GitLab looks for a key/cert for that domain on the box, which doesn't seem correct. I don't want to change the external URL, as it should be fronted by Apache. I'm a bit confused about what the proper configuration should be.
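(For reference: the gitlab.rb above never defines a 'saml' entry under omniauth_providers; only the commented google_oauth2 template is there. A SAML entry follows the same shape; all values below are hypothetical placeholders:)
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "saml",
    "args" => {
      "assertion_consumer_service_url" => "http://external.apache.server/gitlab/users/auth/saml/callback",
      "idp_cert_fingerprint" => "YOUR IDP CERT FINGERPRINT",
      "idp_sso_target_url" => "https://your-idp.example/sso",
      "issuer" => "http://external.apache.server/gitlab",
      "name_identifier_format" => "urn:oasis:names:tc:SAML:2.0:nameid-format:transient"
    }
  }
]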
Thanks
