terraform-provider-vsphere winrm config reset upon clone customization - terraform

Environment
vSphere 6
VM OS = Windows Server 2016
terraform version = 0.11.7
terraform-provider-vsphere version = 1.4.1
Issue / Question
I've noticed that using the customization block resets the WinRM config I had preconfigured on the template.
I've attempted to work around this by configuring WinRM on the fly with run_once_command_list, but that seems to operate as fire-and-forget: the provisioner is triggered before the command list has finished executing.
Any ideas?
Specific details can be found in the terraform-provider-vsphere GitHub issue.
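For context, a minimal sketch of that run_once_command_list workaround (Terraform 0.11 syntax; the template data source, the VM arguments, and the exact winrm commands are placeholders, and note that these commands only run at first logon after customization completes, which is why a provisioner can still fire before they finish):
resource "vsphere_virtual_machine" "vm" {
  # ... name, resource_pool_id, datastore_id, num_cpus, memory, disk, etc. ...

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"

    customize {
      windows_options {
        computer_name = "win2016-01"

        # Re-apply the WinRM settings that guest customization wipes out.
        run_once_command_list = [
          "winrm quickconfig -q",
          "winrm set winrm/config/service/auth @{Basic=\"true\"}",
          "winrm set winrm/config/service @{AllowUnencrypted=\"true\"}",
        ]
      }

      network_interface {}
    }
  }
}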

For Windows 10, you can install the built-in OpenSSH server and then transfer files or run provisioners over SSH:
provisioner "file" {
source = "BuildAgent1/buildAgent.properties"
destination = "f:\\BuildAgent\\conf\\buildAgent.properties"
connection {
type = "ssh"
user = "user"
password = "password"
timeout = "30m"
}
}
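If the template does not already have the OpenSSH server, these are Microsoft's documented steps for enabling the built-in one (Windows 10 1809+ / Windows Server 2019+; an older image such as Server 2016 would need the separate Win32-OpenSSH release instead):
# Run in an elevated PowerShell session on the template/VM.
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'

# Make sure inbound SSH is allowed through the Windows firewall.
New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' `
    -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22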

Related

ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I'm attempting to install Nginx on an EC2 instance using the Terraform remote-exec provisioner, but I keep running into this error:
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
This is what my code looks like:
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.aws-linux.id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(var.private_key_path)
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install nginx -y",
      "sudo service nginx start"
    ]
  }
}
Security group rules are set up to allow ssh from anywhere.
And I'm able to ssh into the box from my local machine.
Not sure if I'm missing something really obvious here. I've tried a newer version of Terraform, but it's the same issue.
If your EC2 instance is using an AMI for an operating system that uses cloud-init (the default images for most Linux distributions do) then you can avoid the need for Terraform to log in over SSH at all by using the user_data argument to pass a script to cloud-init:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
user_data = <<-EOT
yum install nginx -y
service nginx start
EOT
}
For an operating system that includes cloud-init, the system runs cloud-init as part of startup; it retrieves the value of user_data from the instance metadata API, executes the script, and writes any output to the cloud-init logs.
What I've described above is the officially recommended way to run commands to set up your compute instance. The documentation describes provisioners as a last resort, in part to avoid the extra complexity of correctly configuring SSH connectivity and authentication. That is exactly the complexity that caused you to ask this question, so following the documentation's advice is the best way to address it.
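If the user_data script does not seem to have run, the place to look is the cloud-init log on the instance itself; on Amazon Linux and Ubuntu the usual paths are:
sudo less /var/log/cloud-init-output.log   # stdout/stderr of the user_data script
sudo less /var/log/cloud-init.log          # cloud-init's own log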

Terragrunt + Terraform with modules + GitLab

I'm managing my infrastructure as code (IaC) on AWS with Terragrunt + Terraform.
I already added the SSH key and GPG key to GitLab and left the branch unprotected in the repository to run a test, but it didn't work.
This is the module call, which is the equivalent of Terraform's main.tf:
# ---------------------------------------------------------------------------------------------------------------------
# Terragrunt configuration
# ---------------------------------------------------------------------------------------------------------------------
terragrunt = {
  terraform {
    source = "git::ssh://git@gitlab.compamyx.com.br:2222/x/terraform-blueprints.git//route53?ref=0.3.12"
  }

  include = {
    path = "${find_in_parent_folders()}"
  }
}

# ---------------------------------------------------------------------------------------------------------------------
# Blueprint parameters
#
zone_id = "ZDU54ADSD8R7PIX"
name    = "k8s"
type    = "CNAME"
ttl     = "5"
records = ["tmp-elb.com"]
The point is that when I run terragrunt init, in one of the modules I get the following error:
ssh: connect to host gitlab.company.com.br port 2222: Connection timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[terragrunt] 2020/02/05 15:23:18 Hit multiple errors:
exit status 1
I also ran
ssh -vvvv -T gitlab.companyx.com.br -p 2222
and it timed out as well.
This doesn't appear to be a terragrunt or terraform issue at all, but rather, an issue with SSH access to the server.
If you are getting a timeout, it seems like it's most likely a connectivity issue (i.e., a firewall/network ACL is blocking access on that port from where you are attempting to access it).
If it were an SSH key issue, you'd get an "access denied" or similar error, but the timeout definitely leads me to believe it's connectivity.
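A quick way to confirm that theory is to test the TCP port directly from the machine where terragrunt runs (hostname as in the question; nc is netcat):
nc -zv gitlab.companyx.com.br 2222   # -z: just check the port, -v: verbose; "succeeded" means it is reachable
If that also times out while the same command works from another network (for example, from inside the company VPN), the firewall/ACL explanation above is confirmed.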

How to configure and install Nano Server using DSC PowerShell on Windows Server 2019

I have Windows Server 2019, where I want to set up a Nano Server installation and Docker using DSC PowerShell scripts.
This requirement is for an Azure VM using State Configuration from Azure Automation.
The Script
configuration Myconfig
{
    # PSModuleResource comes from the PowerShellModule DSC module
    Import-DscResource -ModuleName PowerShellModule

    node "localhost"
    {
        PSModuleResource DockerMsftProvider
        {
            Ensure      = 'Present'
            Module_Name = 'DockerMsftProvider'
            Repository  = 'PSGallery'
        }
    }
}
I know I am missing a few parameters here; please help me complete this script.
Similarly, I need it to set up Nano Server if possible.
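For the Docker part, a hedged sketch only (it assumes the DockerMsftProvider module installed by the resource above is already present and uses the built-in Script resource; Install-Package with the DockerMsftProvider provider is the documented way to install the Docker package on Windows Server):
configuration DockerInstall
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    node "localhost"
    {
        # Install the Docker package through the DockerMsftProvider package provider.
        Script InstallDocker
        {
            GetScript  = { @{ Result = (Get-Package -Name docker -ErrorAction SilentlyContinue).Version } }
            TestScript = { $null -ne (Get-Package -Name docker -ErrorAction SilentlyContinue) }
            SetScript  = { Install-Package -Name docker -ProviderName DockerMsftProvider -Force }
        }
    }
}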

Providing Terraform with credentials in terraform files instead of env variable

I have set up a Terraform project with a remote backend on GCP. Now when I want to deploy the infrastructure, I run into issues with credentials. I have a credentials file in
/home/mike/.config/gcloud/credentials.json
In my Terraform project I have the following data source referring to the remote state:
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
}
}
and I specify the cloud provider with the details of my credentials file:
provider "google" {
version = "~> 1.16"
project = "${data.terraform_remote_state.project_id.project_id}"
region = "${var.region}"
credentials = "${file(var.credentials)}"
}
However, this runs into:
data.terraform_remote_state.project_id: data.terraform_remote_state.project_id:
error initializing backend:
storage.NewClient() failed: dialing: google: could not find default credentials.
If I add
export GOOGLE_APPLICATION_CREDENTIALS=/home/mike/.config/gcloud/credentials.json
it does run as desired. My issue is that I would like to specify the credentials in the Terraform files, because I run the Terraform commands in an automated way from a Python script where I cannot set environment variables. How can I let Terraform know where the credentials are without setting the env variable?
I was facing the same error when trying to run terraform (version 1.1.5) commands in spite of having successfully authenticated via gcloud auth login.
Error message in my case:
Error: storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
It turned out that I had to also authenticate via gcloud auth application-default login and was able to run terraform commands thereafter.
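For reference, these are two different credentials: the first command only authenticates the gcloud CLI itself, while the second writes the Application Default Credentials file that client libraries (and therefore Terraform's gcs backend) look for:
gcloud auth login                      # credentials for the gcloud CLI itself
gcloud auth application-default login  # writes Application Default Credentials for client libraries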
I figured this out in the end.
The data source also needs to be given the credentials.
E.g.
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config = {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
credentials = "${var.credentials}" <- added
}
}

Update application using Azure Automation DSC

How do I update an application using Azure Automation DSC?
When I change the configuration, upload it, and compile it, the status of the VM node goes from Compliant to Pending.
Then I have to wait 30 minutes for the node to pick up the new configuration, which then updates the application. I changed the package version too. Is there a way to force-trigger the update?
Following is my code:
Configuration Deploy
{
    Import-DscResource -ModuleName cWebPackageDeploy
    Import-DscResource -ModuleName PowerShellModule

    node "localhost"
    {
        cWebPackageDeploy depwebpackage
        {
            Name           = "website.zip"
            StorageAccount = "testdeploy"
            StorageKey     = "xxxxxxxxxxxxxxxxxxxxxxx"
            Ensure         = "Present"
            PackageVersion = "1.0"
            DeployPath     = "C:\Temp\Testdeploy"
            DependsOn      = "[PSModuleResource]Azure.Storage"
        }

        PSModuleResource Azure.Storage
        {
            Ensure      = 'Present'
            Module_Name = 'Azure.Storage'
        }
    }
}

Deploy
There is no built-in way to do that in Azure Automation.
That being said, you can always work around it by telling the VM to pull the configuration immediately with Update-DscConfiguration.
You can create a script that uploads the configuration, compiles it, and then forces the VM to pull from the pull server.
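A minimal sketch of that forced pull, run on the node itself (or through a remote session) once the new configuration has been compiled; the cmdlet requires WMF 5+:
# Tell the Local Configuration Manager to pull and apply the latest
# configuration from its pull server now, instead of waiting for the
# next refresh interval.
Update-DscConfiguration -Wait -Verbose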
