KVM with Terraform: SSH permission denied (Cloud-Init)

I have a KVM host. I'm using Terraform to create some virtual servers with the KVM (libvirt) provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
uri = "qemu+ssh://root#192.168.60.7"
}
resource "libvirt_volume" "ubuntu-qcow2" {
count = 1
name = "ubuntu-qcow2-${count.index+1}"
pool = "default"
source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
resource "libvirt_network" "vm_network" {
name = "vm_network"
mode = "bridge"
bridge = "br0"
addresses = ["192.168.60.224/27"]
dhcp {
enabled = true
}
}
# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = "data.template_file.user_data.rendered"
network_config = "data.template_file.network_config.rendered"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_config.yaml")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
The server is created successfully, and I can ping it from the host where I ran the Terraform script. However, I cannot log in over SSH, even though I pass my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.80.86
ubuntu@192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any idea why I cannot log in? Any way to debug? I cannot log in to the server to debug it. I tried setting the password in the cloud-config file, but that does not work either.
Additional information:
1) The rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']

I also faced the same problem, because I was missing the first line
#cloud-config
in the cloudinit.cfg file.
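Applied to the cloud_config.yaml from the question, that means the very first line of the file must be the #cloud-config marker, for example (the rest of the file stays as in the question):
#cloud-config
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
Without that marker, cloud-init does not treat the user data as cloud-config, so the ubuntu user and its authorized key are never created.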

You need the libvirt_cloudinit_disk resource to add the ssh-key to the VM. Code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
  count          = length(var.hostname)
  name           = "${var.hostname[count.index]}-commoninit.iso"
  #name          = "${var.hostname}-commoninit.iso"
  # pool         = "default"
  user_data      = data.template_file.user_data[count.index].rendered
  network_config = data.template_file.network_config.rendered
}

Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without the double quotes!
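Applied to the resource in the question, the corrected block would be (same names as the question; only the quoting changes):
resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  pool           = "default"
  # expressions, not string literals, so the rendered templates are passed through
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
}
With the quotes, the VM receives the literal text "data.template_file.user_data.rendered" as its user data instead of the rendered YAML, so cloud-init never sees the ssh key.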

Related

SPNEGO uses wrong KRBTGT principal name

I am trying to enable Kerberos authentication for our website. The idea is to have users logged into a Windows AD domain get automatic login (and initial account creation).
Before I tackle the Windows side of things, I wanted to get it working locally.
So I made a test KDC/KADMIN container using git@github.com:ist-dsi/docker-kerberos.git
The webserver is in a local docker container with nginx and the spnego module compiled in.
The KDC/KADMIN container is at 172.17.0.2 and accessible from my webserver container.
Here is my local krb.conf:
default_realm = SERVER.LOCAL

[realms]
SERVER.LOCAL = {
  kdc_ports = 88,750
  kadmind_port = 749
  kdc = 172.17.0.2:88
  admin_server = 172.17.0.2:749
}

[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
and the krb.conf on the webserver container
[libdefaults]
default_realm = SERVER.LOCAL
default_keytab_name = FILE:/etc/krb5.keytab
ticket_lifetime = 24h
kdc_timesync = 1
ccache_type = 4
forwardable = false
proxiable = false

[realms]
LOCALHOST.LOCAL = {
  kdc_ports = 88,750
  kadmind_port = 749
  kdc = 172.17.0.2:88
  admin_server = 172.17.0.2:749
}

[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
Here are the principals and the keytab setup (the keytab is copied to the web container as /etc/krb5.keytab):
rep ~/project * rep_krb_test $ kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2
Authenticating as principal kadmin/admin@SERVER.LOCAL with password.
kadmin: list_principals
K/M@SERVER.LOCAL
kadmin/99caf4af9dc5@SERVER.LOCAL
kadmin/admin@SERVER.LOCAL
kadmin/changepw@SERVER.LOCAL
krbtgt/SERVER.LOCAL@SERVER.LOCAL
noPermissions@SERVER.LOCAL
rep_movsd@SERVER.LOCAL
kadmin: q
rep ~/project * rep_krb_test $ ktutil
ktutil: addent -password -p rep_movsd@SERVER.LOCAL -k 1 -f
Password for rep_movsd@SERVER.LOCAL:
ktutil: wkt krb5.keytab
ktutil: q
rep ~/project * rep_krb_test $ kinit -C -p rep_movsd@SERVER.LOCAL
Password for rep_movsd@SERVER.LOCAL:
rep ~/project * rep_krb_test $ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: rep_movsd@SERVER.LOCAL
Valid starting       Expires              Service principal
02/07/20 04:27:44    03/07/20 04:27:38    krbtgt/SERVER.LOCAL@SERVER.LOCAL
The relevant nginx config:
server {
  location / {
    uwsgi_pass django;
    include /usr/lib/proj/lib/wsgi/uwsgi_params;
    auth_gss on;
    auth_gss_realm SERVER.LOCAL;
    auth_gss_service_name HTTP;
  }
}
Finally, /etc/hosts has:
# use alternate local IP address
127.0.0.2 server.local server
Now I try to access this with curl:
* Trying 127.0.0.2:80...
* Connected to server.local (127.0.0.2) port 80 (#0)
* gss_init_sec_context() failed: Server krbtgt/LOCAL@SERVER.LOCAL not found in Kerberos database.
* Server auth using Negotiate with user ''
> GET / HTTP/1.1
> Host: server.local
> User-Agent: curl/7.71.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
....
As you can see, it is trying to use the SPN "krbtgt/LOCAL@SERVER.LOCAL", whereas kinit has "krbtgt/SERVER.LOCAL@SERVER.LOCAL" as the SPN.
How do I get this to work?
Thanks in advance..
So it turns out that I needed
auth_gss_service_name HTTP/server.local;
Some other tips for issues encountered:
Make sure the keytab file is readable by the web server process (user www-data, or whatever user the server runs as)
Make sure the keytab principals are in the correct order
Use export KRB5_TRACE=/dev/stderr and curl to test - Kerberos gives a very detailed log of what it is doing and why it fails
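For reference, the corrected location block from the question would then look like this (only the service name line changes):
server {
  location / {
    uwsgi_pass django;
    include /usr/lib/proj/lib/wsgi/uwsgi_params;
    auth_gss on;
    auth_gss_realm SERVER.LOCAL;
    # fully qualified service principal: HTTP/server.local@SERVER.LOCAL
    auth_gss_service_name HTTP/server.local;
  }
}
A trace run for debugging can be as simple as export KRB5_TRACE=/dev/stderr followed by curl --negotiate -u : http://server.local/, so the Kerberos library prints every principal it looks up.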

Vagrant-Azure: Guest machine can't connect to host machine (Unable to copy SMB files)

I've been working with Vagrant only locally until now, and now I want to create a VM with Azure as the provider, but unfortunately I get the error that can be seen in the image accessible through the link. I understand what it says, but I have absolutely no idea how to fix it.
Error
I am also appending my Vagrantfile:
require 'vagrant-azure'

Vagrant.configure("2") do |config|
  config.vm.box = 'azure'
  config.vm.box_url = 'https://github.com/azure/vagrant-azure/raw/master/dummy.box'
  config.vm.network "private_network", guest: 80, host: 80
  config.ssh.username = 'vagrant'
  config.ssh.private_key_path = '~/.ssh/id_rsa'
  config.vm.synced_folder '.', '/vagrant', :disabled => true

  config.vm.provider :azure do |azure, override|
    azure.tenant_id = ****
    azure.client_id = ****
    azure.client_secret = ****
    azure.subscription_id = ****
    azure.tcp_endpoints = '80'
    azure.vm_name = 'grafmuvivm'
    azure.vm_size = 'Standard_B1s'
    azure.vm_image_urn = 'Canonical:UbuntuServer:18.04-LTS:latest'
    azure.resource_group_name = 'grafmuvirg'
    azure.location = 'westeurope'
    virtual_network_name = 'grafmuvivm-vagrantPublicIP'
  end

  # Declare where chef repository path
  chef_repo_path = "./chef"

  # Provisioning Chef-Zero
  config.vm.provision :chef_zero do |chef|
    # Added necessary chef attributes
    chef.cookbooks_path = 'chef/cookbooks'
    chef.nodes_path = 'chef/cookbooks'

    #### Adding recipes ####
    chef.add_recipe "api::ssh_user"
    chef.add_recipe "api::grafmuvi"

    # Running recipes
    chef.run_list = [
      'recipe[api::ssh_user]',
      'recipe[api::grafmuvi]'
    ]

    # Accept chef license
    chef.arguments = "--chef-license accept"
  end
end
If I run 'vagrant up --debug', I can see that the guest machine cannot ping any of the host machine's IPs.
Could someone please tell me how to properly setup networking on Vagrant? I've checked the GitHub issues related to this topic but I didn't find anything useful...
EDIT:
I have worked with Vagrant but not with vagrant-azure. But can you change the configuration in the following way and show the output:
azure.vm.network "private_network", ip: "192.168.50.10"

Terraform CLI : Error: Failed to read ssh private key: no key found

I have the variable private_key_path = "/users/arun/aws_keys/pk.pem" defined in the terraform.tfvars file,
and I am doing SSH in my Terraform template. See the configuration below:
connection {
  type        = "ssh"
  host        = self.public_ip
  user        = "ec2-user"
  private_key = file(var.private_key_path)
}
The private key file is definitely available at that path, but I still get the exception below from the Terraform CLI:
Error: Failed to read ssh private key: no key found
Is there anything else I am missing?
Generate the public and private key using Git Bash:
$ ssh-keygen.exe -f demo
Then point at the demo file, or copy the demo and demo.pub files to the required directory.
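A minimal sketch of the connection block from the question pointing at the regenerated key (the path is only an example; use wherever the new demo file was saved):
connection {
  type        = "ssh"
  host        = self.public_ip
  user        = "ec2-user"
  # freshly generated, unencrypted OpenSSH-format private key
  private_key = file("/users/arun/aws_keys/demo")
}
The "no key found" error generally means the file at private_key_path could not be parsed as a private key (for example a .ppk file, a public key, or an empty file), which is why regenerating the key in OpenSSH format helps.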

Vagrant : An AMI must be configured via "ami" (region: #{region})

I got the error below when running the vagrant command:
# vagrant up --provider=aws
There are errors in the configuration of this machine. Please fix
the following errors and try again:
AWS Provider:
* An AMI must be configured via "ami" (region: #{region})
I'm using Vagrant 2.0.1 with vagrant-aws 0.7.2
Vagrantfile:
Vagrant.configure("2") do |config|
require 'vagrant-aws'
Vagrant.configure('2') do |config|
config.vm.box = 'Vagarent'
config.vm.provider 'aws' do |aws, override|
aws.access_key_id = "xxxxxxxxxxxxxxxxxx"
aws.secret_access_key = "xxxxxxxxxxxxxxxxxxxxxxxx"
aws.keypair_name = 'ssh-keypair-name'
aws.instance_type = "t2.micro"
aws.region = 'us-west-2a'
aws.ami = 'ami-1122298f0'
aws.security_groups = ['default']
override.ssh.username = 'ubuntu'
override.ssh.private_key_path = '~/.ssh/ssh-keypair-file'
end
end
How to solve it?
us-west-2a is not a valid region name; see https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region for the full list of available regions and endpoints.
If your AMI is located in US West (Oregon), then you need to replace it with us-west-2 in your Vagrantfile.
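In the Vagrantfile above that is a one-line change; vagrant-aws also has a separate aws.availability_zone setting (visible commented out in the answer below) if a specific zone is really needed:
aws.region = "us-west-2" # region only; the trailing "a" in us-west-2a names an availability zone, not a region
Note that aws.ami must also be an AMI ID that actually exists in that region.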
Going through "vagrant-aws" documentation, following worked for me.
Installed "vagrant-aws" plugin with shell:
vagrant plugin install vagrant-aws
Added AWS compatible 'dummy-box' named "aws" added in config.vm.box = "aws":
vagrant box add aws https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
Created following Vagrant file:
# Require the AWS provider plugin
require 'vagrant-aws'

Vagrant.configure(2) do |config|
  config.vm.box = "aws"
  config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id = ENV['AWS_ACCESS_KEY']
    aws.secret_access_key = ENV['AWS_SECRET_KEY']
    aws.region = "us-east-1"
    #aws.availability_zone = "us-east-1c"

    # EC2 Instance AMI
    aws.ami = "ami-aa2ea6d0" # Ubuntu 16.04 in US-EAST
    aws.keypair_name = "awswindows" # change as per your key
    aws.instance_type = "t2.micro"
    aws.block_device_mapping = [{ 'DeviceName' => '/dev/sda1', 'Ebs.VolumeSize' => 10 }]
    aws.security_groups = ["YOUR_SG"]
    aws.tags = {
      'Name' => 'Vagrant EC2 Instance'
    }

    # Credentials to login to EC2 Instance
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = ENV['AWS_PRIVATE_KEY']
  end
end
Fired vagrant up --provider=aws.
Check it once and let me know if you face any issues.

No value from hiera on puppet manifests when installed foreman

If I try to get data for a module using calling_class, the data does not reach the Puppet manifests; if I put the variable into the common or osfamily YAML file, the value is available from the manifests.
My environment:
Puppet Master 3.7.4 + Foreman 1.7 + Hiera 1.3.4
Hiera configs:
---
:backends:
  - yaml
:hierarchy:
  - "%{::environment}/node/%{::fqdn}"            # node settings
  - "%{::environment}/profile/%{calling_class}"  # profile settings
  - "%{::environment}/%{::environment}"          # environment settings
  - "%{::environment}/%{::osfamily}"             # osfamily settings
  - common                                       # common settings
:yaml:
  :datadir: '/etc/puppet/hiera'
/etc/puppet/hiera/production/profile/common.yaml
profile::common::directory_hierarchy:
  - "C:\\SiteName"
  - "C:\\SiteName\\Config"
profile::common::system: "common"
And in the profile module manifest /etc/puppet/environments/production/modules/profile/manifests/common.pp:
class profile::common (
  $directory_hierarchy = undef,
  $system              = undef
) {
  notify { "Dir is- $directory_hierarchy my fqdn is $fqdn, system = $system": }
}
Puppet config /etc/puppet/puppet.conf:
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
privatekeydir = $ssldir/private_keys { group = service }
hostprivkey = $privatekeydir/$certname.pem { mode = 640 }
autosign = $confdir/autosign.conf { mode = 664 }
show_diff = false
hiera_config = $confdir/hiera.yaml
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
default_schedules = false
report = true
pluginsync = true
masterport = 8140
environment = production
certname = puppet024.novalocal
server = puppet024.novalocal
listen = false
splay = false
splaylimit = 1800
runinterval = 1800
noop = false
configtimeout = 120
usecacheonfailure = true
[master]
autosign = $confdir/autosign.conf { mode = 664 }
reports = foreman
external_nodes = /etc/puppet/node.rb
node_terminus = exec
ca = true
ssldir = /var/lib/puppet/ssl
certname = puppet024.novalocal
strict_variables = false
environmentpath = /etc/puppet/environments
basemodulepath = /etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
parser = future
And the more interesting thing is that if I deploy the same code without Foreman, it works.
Maybe I've missed some configuration or plugins?
You need to have an environment (production in your sample) folder structure as below:
/etc/puppet/hiera/environments/production/node/%{::fqdn}.yaml
/etc/puppet/hiera/environments/production/profile/%{calling_class}.yaml
/etc/puppet/hiera/environments/production/production/*.yaml
/etc/puppet/hiera/environments/production/%{::osfamily}.yaml
/etc/puppet/hiera/environments/common.yaml
So the environment path you pasted is also wrong:
/etc/puppet/hiera/production/profile/common.yaml
A hiera.yaml matching the layout above is sketched below.
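A hiera.yaml whose datadir matches that layout could look like this (a sketch; it assumes the data files are kept under /etc/puppet/hiera/environments, so adjust :datadir: to wherever they actually live):
---
:backends:
  - yaml
:hierarchy:
  - "%{::environment}/node/%{::fqdn}"
  - "%{::environment}/profile/%{calling_class}"
  - "%{::environment}/%{::environment}"
  - "%{::environment}/%{::osfamily}"
  - common
:yaml:
  :datadir: '/etc/puppet/hiera/environments'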
Side notes:
At first view, you shouldn't mix hieradata with the module path, so if you can, move the modules out of basemodulepath:
basemodulepath = /etc/puppet/environments/common
With the puppet.conf you pasted, the real profile module path is one of these three folders:
/etc/puppet/environments/common/modules/profile
/etc/puppet/modules/profile
/usr/share/puppet/modules/profile
