Vagrant: An AMI must be configured via "ami" (region: #{region}) - linux

I got the error below when running the vagrant command:
# vagrant up --provider=aws
There are errors in the configuration of this machine. Please fix
the following errors and try again:
AWS Provider:
* An AMI must be configured via "ami" (region: #{region})
I'm using Vagrant 2.0.1 with vagrant-aws 0.7.2.
Vagrantfile:
Vagrant.configure("2") do |config|
require 'vagrant-aws'
Vagrant.configure('2') do |config|
config.vm.box = 'Vagarent'
config.vm.provider 'aws' do |aws, override|
aws.access_key_id = "xxxxxxxxxxxxxxxxxx"
aws.secret_access_key = "xxxxxxxxxxxxxxxxxxxxxxxx"
aws.keypair_name = 'ssh-keypair-name'
aws.instance_type = "t2.micro"
aws.region = 'us-west-2a'
aws.ami = 'ami-1122298f0'
aws.security_groups = ['default']
override.ssh.username = 'ubuntu'
override.ssh.private_key_path = '~/.ssh/ssh-keypair-file'
end
end
How can I solve this?

us-west-2a is not a valid region name; see https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region for the full list of available regions and endpoints.
If your AMI is located in US West (Oregon), you need to replace it with us-west-2 in your Vagrantfile.
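For example, in the provider block from the question the fix would look like this (a minimal sketch; the AMI ID is the asker's placeholder and must exist in the chosen region):
aws.region = 'us-west-2'                # region name only
# aws.availability_zone = 'us-west-2a'  # optionally pin the zone via the separate availability_zone option
aws.ami = 'ami-1122298f0'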

Going through the "vagrant-aws" documentation, the following worked for me.
Installed the "vagrant-aws" plugin:
vagrant plugin install vagrant-aws
Added an AWS-compatible dummy box named "aws", referenced by config.vm.box = "aws":
vagrant box add aws https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
Created the following Vagrantfile:
# Require the AWS provider plugin
require 'vagrant-aws'
Vagrant.configure(2) do |config|
config.vm.box = "aws"
config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
config.vm.provider :aws do |aws, override|
aws.access_key_id = ENV['AWS_ACCESS_KEY']
aws.secret_access_key = ENV['AWS_SECRET_KEY']
aws.region = "us-east-1"
#aws.availability_zone = "us-east-1c"
# EC2 Instance AMI
aws.ami = "ami-aa2ea6d0" # Ubuntu 16.04 in US-EAST
aws.keypair_name = "awswindows" #change as per your key
aws.instance_type = "t2.micro"
aws.block_device_mapping = [{ 'DeviceName' => '/dev/sda1', 'Ebs.VolumeSize' => 10 }]
aws.security_groups = ["YOUR_SG"]
aws.tags = {
'Name' => 'Vagrant EC2 Instance'
}
# Credentials to login to EC2 Instance
override.ssh.username = "ubuntu"
override.ssh.private_key_path = ENV['AWS_PRIVATE_KEY']
end
end
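Exported the credentials and key path that the Vagrantfile reads from environment variables (shell sketch; the values are placeholders):
export AWS_ACCESS_KEY='AKIA...'
export AWS_SECRET_KEY='...'
export AWS_PRIVATE_KEY="$HOME/.ssh/awswindows.pem"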
Fired vagrant up --provider=aws.
Check it once and let me know if you face any issues.

Related

KVM with Terraform: SSH permission denied (Cloud-Init)

I have a KVM host. I'm using Terraform to create some virtual servers using KVM provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
uri = "qemu+ssh://root#192.168.60.7"
}
resource "libvirt_volume" "ubuntu-qcow2" {
count = 1
name = "ubuntu-qcow2-${count.index+1}"
pool = "default"
source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
format = "qcow2"
}
resource "libvirt_network" "vm_network" {
name = "vm_network"
mode = "bridge"
bridge = "br0"
addresses = ["192.168.60.224/27"]
dhcp {
enabled = true
}
}
# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = "data.template_file.user_data.rendered"
network_config = "data.template_file.network_config.rendered"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_config.yaml")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
- name: ubuntu
sudo: ALL=(ALL) NOPASSWD:ALL
groups: users, admin
home: /home/ubuntu
shell: /bin/bash
lock_passwd: false
ssh-authorized-keys:
- ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
list: |
ubuntu:linux
expire: False
package_update: true
packages:
- qemu-guest-agent
growpart:
mode: auto
devices: ['/']
The server gets created successfully and I can ping it from the host on which I ran the Terraform script. However, I cannot log in through SSH, despite passing my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.80.86
ubuntu@192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any reason why I cannot log in? Any way to debug? I cannot log in to the server to debug it. I tried setting the password in the cloud-config file, but that also does not work.
*** Additional information
1) The rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
- name: ubuntu
sudo: ALL=(ALL) NOPASSWD:ALL
groups: users, admin
home: /home/ubuntu
shell: /bin/bash
lock_passwd: false
ssh-authorized-keys:
- ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
list: |
ubuntu:linux
expire: False
package_update: true
packages:
- qemu-guest-agent
growpart:
mode: auto
devices: ['/']
I also faced the same problem, because I was missing the first line
#cloud-config
in the cloud-init config file.
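In the asker's case that means cloud_config.yaml should start like this (only the first lines shown, the rest unchanged):
#cloud-config
manage_etc_hosts: true
users:
- name: ubuntu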
You need to add a libvirt_cloudinit_disk resource to add the ssh key to the VM.
Code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
count = length(var.hostname)
name = "${var.hostname[count.index]}-commoninit.iso"
#name = "${var.hostname}-commoninit.iso"
# pool = "default"
user_data = data.template_file.user_data[count.index].rendered
network_config = data.template_file.network_config.rendered
}
Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without the double quotes!
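Applied to the resource from the question, that gives (a sketch, everything else unchanged):
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
pool = "default"
user_data = data.template_file.user_data.rendered
network_config = data.template_file.network_config.rendered
}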

Vagrant-Azure: Guest machine can't connect to host machine (Unable to copy SMB files)

I've been working with Vagrant only locally until now, and now I want to create a VM with Azure as the provider, but unfortunately I get the error that can be seen in the image accessible through the link. I understand what it says, but I have absolutely no idea how to fix it.
Error
I am also appending my Vagrantfile:
require 'vagrant-azure'
Vagrant.configure("2") do |config|
config.vm.box = 'azure'
config.vm.box_url = 'https://github.com/azure/vagrant-azure/raw/master/dummy.box'
config.vm.network "private_network", guest: 80, host: 80
config.ssh.username = 'vagrant'
config.ssh.private_key_path = '~/.ssh/id_rsa'
config.vm.synced_folder '.', '/vagrant', :disabled => true
config.vm.provider :azure do |azure, override|
azure.tenant_id = ****
azure.client_id = ****
azure.client_secret = ****
azure.subscription_id = ****
azure.tcp_endpoints = '80'
azure.vm_name = 'grafmuvivm'
azure.vm_size = 'Standard_B1s'
azure.vm_image_urn = 'Canonical:UbuntuServer:18.04-LTS:latest'
azure.resource_group_name = 'grafmuvirg'
azure.location = 'westeurope'
virtual_network_name = 'grafmuvivm-vagrantPublicIP'
end
# Declare the chef repository path
chef_repo_path = "./chef"
# Provisioning Chef-Zero
config.vm.provision :chef_zero do |chef|
# Added necessary chef attributes
chef.cookbooks_path = 'chef/cookbooks'
chef.nodes_path = 'chef/cookbooks'
#### Adding recipes ####
chef.add_recipe "api::ssh_user"
chef.add_recipe "api::grafmuvi"
# Running recipes
chef.run_list = [
'recipe[api::ssh_user]',
'recipe[api::grafmuvi]'
]
# Accept chef license
chef.arguments = "--chef-license accept"
end
end
If I run vagrant up --debug, I can see that the guest machine cannot ping any of the host machine's IPs.
Could someone please tell me how to properly set up networking in Vagrant? I've checked the GitHub issues related to this topic, but I didn't find anything useful...
EDIT:
I have worked with Vagrant but not with Vagrant-azure. But can you change the configuration in the following way and show the output:
azure.vm.network "private_network", ip: "192.168.50.10"
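For reference, in plain Vagrant a private network is declared on the config object rather than on the provider object, so the equivalent line would be (a sketch; whether vagrant-azure honours it is a separate question):
config.vm.network "private_network", ip: "192.168.50.10"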

Installing Cassandra on Vagrant Centos using Puppet missing dsc22

I'm new to Puppet. I know that Cassandra is missing from yum, so I figured a Puppet recipe would download and install it, but it seems like locp/cassandra is just trying to install it from yum. The recipe is supposed to work, but I don't see anything on https://github.com/locp/cassandra about why it's not working for me or anything I need to set up before it should work.
I used librarian-puppet to install the modules in puppet/modules.
Error
==> default: Notice: /Stage[main]/Cassandra/File[/var/lib/cassandra/data]: Dependency Package[dsc22] has failures: true
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.box = "puphpet/centos65-x64"
config.vm.provision "puppet" do |p|
p.module_path = "puppet/modules"
p.manifests_path = "puppet/manifests"
p.manifest_file = "site.pp"
end
end
puppet/manifests/site.pp
class { 'cassandra':
cluster_name => 'foobar',
listen_address => "${::ipaddress}",
}
puppet/Puppetfile
forge 'https://forgeapi.puppetlabs.com'
mod "locp/cassandra"
You could also use the cassandra::datastax_repo class. To incorporate that into the answer provided by @Frédéric-Henri, one could do the following:
class { 'cassandra::datastax_repo': } ->
class { 'cassandra':
cluster_name => 'foobar',
listen_address => "${::ipaddress}"
}
That's probably because the repo is not configured (see here).
Add the following to your site.pp and make sure to add a require on it in your cassandra class:
class repo {
yumrepo { "datastax":
descr => "DataStax Repo for Apache Cassandra",
baseurl => "http://rpm.datastax.com/community",
gpgcheck => "0",
enabled => "1";
}
}
class { 'cassandra':
cluster_name => 'foobar',
listen_address => "${::ipaddress}",
require => Yumrepo["datastax"],
}
include repo
include cassandra

No value from Hiera in Puppet manifests when Foreman is installed

If I try to get data for a module using calling_class, the data doesn't reach the Puppet manifests; if I put the variable into the common or osfamily YAML file, the value is available from the manifests.
My environment:
Puppet Master 3.7.4 + Foreman 1.7 + Hiera 1.3.4
Hiera configs:
---
:backends:
- yaml
:hierarchy:
- "%{::environment}/node/%{::fqdn}" #node settings
- "%{::environment}/profile/%{calling_class}" # profile settings
- "%{::environment}/%{::environment}" # environment settings
- "%{::environment}/%{::osfamily}" # osfamily settings
- common # common settings
:yaml:
:datadir: '/etc/puppet/hiera'
/etc/puppet/hiera/production/profile/common.yaml
profile::common::directory_hierarchy:
- "C:\\SiteName"
- "C:\\SiteName\\Config"
profile::common::system: "common"
And the profile module manifest /etc/puppet/environments/production/modules/profile/manifests/common.pp:
class profile::common (
$directory_hierarchy =undef,
$system =undef
)
{
notify { "Dir is- $directory_hierarchy my fqdn is $fqdn, system = $system": }
}
Puppet config /etc/puppet/puppet.conf:
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
privatekeydir = $ssldir/private_keys { group = service }
hostprivkey = $privatekeydir/$certname.pem { mode = 640 }
autosign = $confdir/autosign.conf { mode = 664 }
show_diff = false
hiera_config = $confdir/hiera.yaml
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
default_schedules = false
report = true
pluginsync = true
masterport = 8140
environment = production
certname = puppet024.novalocal
server = puppet024.novalocal
listen = false
splay = false
splaylimit = 1800
runinterval = 1800
noop = false
configtimeout = 120
usecacheonfailure = true
[master]
autosign = $confdir/autosign.conf { mode = 664 }
reports = foreman
external_nodes = /etc/puppet/node.rb
node_terminus = exec
ca = true
ssldir = /var/lib/puppet/ssl
certname = puppet024.novalocal
strict_variables = false
environmentpath = /etc/puppet/environments
basemodulepath = /etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
parser = future
And the more interesting thing is that if I deploy the same code without Foreman, it works.
Maybe I've missed some configuration or plugins?
You need to have an environment (production in your sample) folder structure as below:
/etc/puppet/hiera/environments/production/node/%{::fqdn}.yaml
/etc/puppet/hiera/environments/production/profile/%{calling_class}.yaml
/etc/puppet/hiera/environments/production/production/*.yaml
/etc/puppet/hiera/environments/production/%{::osfamily}.yaml
/etc/puppet/hiera/environments/common.yaml
So the environment path you pasted is also wrong:
/etc/puppet/hiera/production/profile/common.yaml
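A hiera.yaml matching the layout above would keep the same hierarchy but point datadir at the environments folder (a sketch of the suggested layout, not verified):
---
:backends:
- yaml
:hierarchy:
- "%{::environment}/node/%{::fqdn}"
- "%{::environment}/profile/%{calling_class}"
- "%{::environment}/%{::environment}"
- "%{::environment}/%{::osfamily}"
- common
:yaml:
:datadir: '/etc/puppet/hiera/environments'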
Side notes
At first view, you shouldn't mix hieradata with the module path, so if you can, move the modules out of basemodulepath:
basemodulepath = /etc/puppet/environments/common
With the puppet.conf you pasted, the real profile module path is at one of three folders:
/etc/puppet/environments/common/modules/profile
/etc/puppet/modules/profile
/usr/share/puppet/modules/profile
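To debug the lookup itself, you can also query Hiera directly from the command line, passing the scope variables the hierarchy uses (a sketch; adjust the fact values to your node):
hiera -c /etc/puppet/hiera.yaml profile::common::system ::environment=production calling_class=profile::common ::osfamily=RedHat ::fqdn=puppet024.novalocal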

Vagrant, setting up a machine for Node.js - Chef failure

My Vagrantfile:
Vagrant::Config.run do |config|
config.vm.box = "lucid32"
config.ssh.forward_agent = true
config.vm.forward_port 3000, 3000
# allow for symlinks in the app folder
config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/app", "1"]
config.vm.customize ["modifyvm", :id, "--memory", 512, "--cpus", 1]
config.vm.provision :chef_solo do |chef|
chef.cookbooks_path = "cookbooks"
chef.add_recipe "apt"
chef.add_recipe "build-essential"
chef.add_recipe "nodejs-cookbook"
chef.add_recipe "chef-hosts"
chef.add_recipe "git"
chef.json = {
"nodejs" => {
"version" => "0.8.12",
"install_method" => "source",
"npm" => "1.1.62"
},
"host_aliases" => [{
"name" => "awesomeapp",
"ip" => "127.0.0.1"
}]
}
end
end
When I run vagrant reload I get the following exception from Chef:
[2012-11-25T07:58:23+01:00] ERROR: Running exception handlers
[2012-11-25T07:58:23+01:00] ERROR: Exception handlers complete
[2012-11-25T07:58:24+01:00] FATAL: Stacktrace dumped to /tmp/vagrant-chef-1/chef-stacktrace.out
[2012-11-25T07:58:24+01:00] FATAL: Chef::Exceptions::CookbookNotFound: Cookbook nodejs not found. If you're loading nodejs from another cookbook, make
sure you configure the dependency in your metadata
Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
SOLVED: In the cookbooks folder I renamed nodejs-cookbook to nodejs and corrected the Vagrantfile, which previously had:
chef.add_recipe "nodejs-cookbook"
After that I ran
vagrant provision
and everything installed fine!
The argument to add_recipe should be the name of the recipe, e.g. with the file structure:
cookbooks/nodejs
And importantly:
cookbooks/nodejs/metadata.rb should contain: name "nodejs"
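Putting the two together, the working layout would roughly be (a sketch based on the SOLVED note above):
cookbooks/nodejs/metadata.rb:
name "nodejs"
Vagrantfile:
chef.add_recipe "nodejs"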
