How to create an AWS Cloud9 environment with Terraform and automatically install software on the Cloud9 environment?

How can I automatically create a Cloud9 environment and execute a script to install software on it? For instance:

resource "aws_cloud9_environment_ec2" "cloud9" {
  name          = "example-env"
  instance_type = "t2.small"
  # how to do something like the line below to install kubectl?
  setup-cloud9-script = "setup-cloud9.sh"
}
Thanks
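The aws_cloud9_environment_ec2 resource has no argument for running a setup script. One workaround, sketched below and untested, relies on the fact that Cloud9 tags the EC2 instance it creates with aws:cloud9:environment set to the environment id, so the instance can be looked up and the script pushed to it out of band (here via SSM; the null_resource, the AWS CLI call, and the setup-cloud9.json parameter file are all illustrative assumptions):

```hcl
resource "aws_cloud9_environment_ec2" "cloud9" {
  name          = "example-env"
  instance_type = "t2.small"
}

# Cloud9 tags the instance it creates with the environment id,
# which lets us find it once it exists.
data "aws_instance" "cloud9_host" {
  filter {
    name   = "tag:aws:cloud9:environment"
    values = [aws_cloud9_environment_ec2.cloud9.id]
  }
}

resource "null_resource" "setup" {
  # Hypothetical: run the setup commands on the instance through SSM;
  # adjust to however your environment reaches the instance.
  provisioner "local-exec" {
    command = "aws ssm send-command --instance-ids ${data.aws_instance.cloud9_host.id} --document-name AWS-RunShellScript --parameters file://setup-cloud9.json"
  }
}
```

This only works if the instance has the SSM agent and an instance profile allowing SSM, which is not something the Cloud9 resource sets up for you.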

Related

ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I'm attempting to install Nginx on an EC2 instance using the Terraform remote-exec provisioner, but I keep running into this error:
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
This is what my code looks like:
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.aws-linux.id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file(var.private_key_path)
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install nginx -y",
      "sudo service nginx start"
    ]
  }
}
Security group rules are set up to allow SSH from anywhere, and I'm able to SSH into the box from my local machine.
Not sure if I'm missing something really obvious here. I've tried a newer version of Terraform, but it's the same issue.
If your EC2 instance is using an AMI for an operating system that uses cloud-init (the default images for most Linux distributions do) then you can avoid the need for Terraform to log in over SSH at all by using the user_data argument to pass a script to cloud-init:
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.aws-linux.id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  user_data = <<-EOT
    #!/bin/bash
    # cloud-init needs the shebang above to recognize this as a script;
    # it runs as root, so sudo is not required.
    yum install nginx -y
    service nginx start
  EOT
}
For an operating system that includes cloud-init, the system will run cloud-init as part of the system startup and it will access the metadata and user data API to retrieve the value of user_data. It will then execute the contents of the script, writing any messages from that operation into the cloud-init logs.
What I've described above is the official recommendation for how to run commands to set up your compute instance. The documentation says that provisioners are a last resort, and one of the reasons given is to avoid the extra complexity of correctly configuring SSH connectivity and authentication, which is exactly the complexity that caused you to ask this question. Following the advice in the documentation is therefore the best way to address it.

terraform-provider-local on Windows and Linux (Terraform Cloud Worker VMs)

On my Windows machine I have istioctl.exe on my PATH.
When I run this local-exec it works.
provisioner "local-exec" {
  interpreter = ["bash", "-c"]
  working_dir = "${path.module}/tmp"
  command     = <<EOH
istioctl version --remote=false;
EOH
}
For Terraform Cloud I first download istioctl and place it in ${path.module}/tmp.
But then I need to change the local-exec above to ./istioctl version --remote=false;.
For TFC, is there a way to add istioctl to the PATH so I do not have to use the ./ prefix?
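There is no TFC-specific PATH setting, but the command itself can prefix PATH (the local-exec provisioner also accepts an environment argument for setting variables). A minimal local sketch of the PATH-prefix trick, using a hypothetical stand-in istioctl script in place of the real binary:

```shell
# Create a stand-in "istioctl" in ./tmp, mirroring the download step
mkdir -p ./tmp
printf '#!/bin/sh\necho "version 1.0-fake"\n' > ./tmp/istioctl
chmod +x ./tmp/istioctl

# Prefix PATH for just this invocation, so plain "istioctl" resolves
PATH="$PWD/tmp:$PATH" istioctl
```

In the provisioner this becomes command = "PATH=$PATH:${path.module}/tmp istioctl version --remote=false", leaving the rest of the configuration unchanged.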

How to configure and install nano server using DSC powershell on Windows server 2019

I have Windows Server 2019, where I want to set up a Nano Server installation and Docker using DSC PowerShell scripts.
This requirement is for an Azure VM using State Configuration from Azure Automation.
The Script
configuration Myconfig
{
    Import-DscResource -ModuleName DockerMsftProvider
    {
        Ensure      = 'present'
        Module_Name = 'DockerMsftProvider'
        Repository  = 'PSGallery'
    }
}
I know I am missing a few parameters here; please help me complete this script.
Similarly, I need it to set up Nano Server if possible.

How to fix simple http server written in python 3 using Terraform on AWS EC2 Centos

I'm creating a new AWS EC2 instance with a Terraform main.tf using a CentOS AMI. I'm able to create and connect to the instance, but I have the problems below.
When I start a simple Python 3 HTTP server that simply prints "hello world", I can't run the Python script from Terraform using the file function. Can anyone tell me how to execute it? Should I use a function, or use
resource "null_resource" "cluster" {
with an interpreter?
From the outside world, I can't connect to the public address on the exposed port (using curl http://publicip:8080), even though I have created a security group.
Can anyone help me out? Is there any way to check from Terraform that these resources were indeed created on the EC2 instance, like some kind of debugging log?
PS: My EC2 instance has Python 2.7 installed by default, so in main.tf I tried to install Python 3 to execute the Python script; the script works fine on my local machine.
Or is there a better approach to execute this?
I'm still learning AWS with Terraform.
simple-hello-world.py
from http.server import BaseHTTPRequestHandler, HTTPServer

# HTTPRequestHandler class
class testHTTPServer_RequestHandler(BaseHTTPRequestHandler):
    # GET
    def do_GET(self):
        # Send response status code
        self.send_response(200)
        # Send headers
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send message back to client
        message = "Hello world!"
        # Write content as utf-8 data
        self.wfile.write(bytes(message, "utf8"))
        return

def run():
    print('starting server...')
    # Server settings
    # Choose port 8080; for port 80, which is normally used for an http server, you need root access
    server_address = ('127.0.0.1', 8081)
    httpd = HTTPServer(server_address, testHTTPServer_RequestHandler)
    print('running server...')
    httpd.serve_forever()

run()
main.tf
provider "aws" {
  region  = "us-east-2"
  version = "~> 1.2.0"
}

resource "aws_instance" "hello-world" {
  ami           = "ami-ef92b08a"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = <<EOH
sudo yum -y update
sudo yum install -y python3.6
EOH
  }

  user_data = "${file("${path.module}/simple-hello-world.py")}"

  tags {
    Name = "my-aws-terraform-hello-world"
  }
}

resource "aws_security_group" "allow-tcp" {
  name = "my-aws-terraform-hello-world"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
1 - You are passing the script as user_data, but nothing ever executes it. Note also that local-exec runs on the machine where Terraform itself runs, not on the EC2 instance, so your yum commands never touch the instance either. Use user_data with a #!/bin/bash script (or a remote-exec provisioner) to install Python and launch the script on the instance.
2 - You opened port 8080 in the security group, but your application listens on 8081, and it binds to 127.0.0.1, which only accepts connections from the instance itself.
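A sketch of the server with the bind address and port corrected: the script in the question binds to 127.0.0.1:8081, which can never be reached via curl http://publicip:8080, so this version listens on all interfaces on the port the security group opens (the names HelloHandler and run's parameters are renamed for clarity, not from the original):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 with a small HTML body
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write(b"Hello world!")

def run(host='0.0.0.0', port=8080):
    # Bind to all interfaces so external clients can connect,
    # and use the same port the security group opens (8080).
    httpd = HTTPServer((host, port), HelloHandler)
    print('running server on %s:%d...' % (host, port))
    httpd.serve_forever()
```

run() blocks forever, so it should be the last step of whatever startup script launches it on the instance.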

Commands in user_data are not executed in terraform

Hi, the EC2 instance is created, but the commands in userdata.sh are not getting executed. When I manually connect to the EC2 instance via PuTTY, I find that nginx is not installed. To verify whether the script is executed, I added an echo message, but no output is displayed in the command prompt when I run terraform apply. How can I verify whether the user data is getting executed?
I have installed Terraform on my C drive, and the scripts are in the same folder: C:/Terraform/userdata.sh and C:/Terraform/main.tf. I tried giving the path as ${file("./userdata.sh")}" but it still does not work.
Please advise, as I am just learning Terraform. Thanks.
#!/bin/bash -v
echo "userdata-start"
sudo apt-get update -y
sudo apt-get install -y nginx > /tmp/nginx.log
sudo service nginx start
echo "userdata-end"
This is been called in my terraform program [main.tf] as below:
# resource "template_file" "user_data" {
#   template = "userdata.sh"
# }

data "template_file" "user_data" {
  template = "${file("userdata.sh")}"
}

resource "aws_instance" "web" {
  instance_type          = "t2.micro"
  ami                    = "ami-5e8bb23b"
  key_name               = "sptest"
  vpc_security_group_ids = ["${aws_security_group.default.id}"]
  subnet_id              = "${aws_subnet.tf_test_subnet.id}"
  user_data              = "${data.template_file.user_data.template}"
  #user_data = "${template_file.user_data.rendered}"
  #user_data = "${file("userdata.sh")}"
  #user_data = "${file("./userdata.sh")}"

  tags {
    Name = "tf-example-ec2"
  }
}
I can see one issue with the code you posted: the user_data argument should be

user_data = "${data.template_file.user_data.rendered}"

Moreover, as a suggestion, I recommend creating a log file in your script to check which steps have been executed. It will also tell you whether the script ran at all.
One sample from our code, which you can modify based on your standards:

logdir=/var/log
logfile=${logdir}/mongo_setup.log
exec >> $logfile 2>&1

Hope this helps.
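The redirection pattern above can be tried locally before baking it into user data; a minimal sketch with a hypothetical log path under /tmp:

```shell
logdir=/tmp/userdata-demo
mkdir -p "$logdir"
logfile=$logdir/setup.log

# From here on, everything written to stdout/stderr lands in the log file,
# so the script leaves a trace even though nothing appears on the console
exec >> "$logfile" 2>&1
echo "userdata-start"
echo "userdata-end"
```

After boot, a quick grep on the log file (e.g. grep userdata-start on the chosen path) confirms whether the script ran at all.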
Why so complicated?
user_data = file("user_data.sh")
This file must exist alongside the other .tf files of the project.
That will be enough.