What will happen with existing EC2 machines - Terraform

I have a few EC2 machines that were created manually from the AWS console. They are not managed by Terraform.
If I now use Terraform to create a new VPC and EC2 machine, will it delete my old machines?

No, Terraform will not delete the old machines in AWS (the ones created through the AWS console). Whenever you run Terraform to create something, it writes a state file that acts as its reference copy. So if you create new resources (such as EC2 instances) with Terraform, you will end up with both sets of machines: the ones you created manually and the new set created by Terraform.
Read more about Terraform here:
https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started
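As a minimal sketch (the region, AMI ID, and names below are placeholders, not values from the question), a configuration like this only ever tracks the VPC, subnet, and instance it creates itself; instances launched earlier from the console never enter its state file and are never planned for destruction:

provider "aws" {
  region = "eu-west-1"  # placeholder region
}

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "example" {
  vpc_id     = aws_vpc.example.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.example.id
}

Running terraform state list after an apply shows only these three resources; the manually created EC2 machines are simply unknown to Terraform.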

Related

Is Terraform Destroying Manually created resources?

I have created some resources in Azure using Terraform, such as VNets, VMs, NSGs, etc. Let's assume I manually create another VM in the same VNet that was created by Terraform. If I rerun the Terraform script, will the manually created VM be destroyed, since it is not in the state file?
No, Terraform does not interfere with resources that are created outside of Terraform. It only manages resources that are included in its state file.
However, if you make manual changes to resources that you created through Terraform (for example, the VNet in your case), Terraform will reset them to what is declared in the Terraform code on the next run/execution.
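As an illustration (the resource group, VNet name, and address space below are made up), a VNet declared like this is fully owned by Terraform: if someone edits its address space in the portal, the next terraform apply plans a change back to the declared value, while a VM created by hand inside this VNet is simply invisible to Terraform:

resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "westeurope"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location

  # A manual change to this range in the portal is reverted to the
  # declared value on the next terraform apply.
  address_space = ["10.0.0.0/16"]
}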

How to copy a file from local disk to AKS VMSS (All Azure Virtual Machine Scale Sets)

I need to copy a local file (ad hoc or during a release process) into a VMSS/node, or at least onto the VMSS's attached disk.
How can you copy a local file into a remote directory on the VMSS/node? Specifically from the command line, so that it can happen in a release pipeline (PowerShell etc.).
I've read examples of using SCP, but with no information on how to do this specifically with a VMSS in Azure. username@hostname doesn't really apply here, or am I missing something?
I imagine that every time it scales, the file previously copied will be available in every VM, so this does not need to happen on every scale event?
You can set up SSH to an AKS node as detailed here using a privileged container pod. Once you have SSH'd into the node (in the case of Linux nodes), you can copy files from your local machine to the AKS node from a different terminal shell using:
kubectl get pods
NAME                                                    READY   STATUS    RESTARTS   AGE
node-debugger-aks-nodepool1-xxxxxxxx-vmssxxxxxx-xxxxx   1/1     Running   0          21s
kubectl cp /source/path node-debugger-aks-nodepoolname-xxxxxxxx-vmssxxxxxx-xxxxx:/host/path/to/destination
[Note: In the destination please remember to prepend the desired destination path on the host with /host]
In the case of Windows nodes, once the SSH connection is set up as detailed here, you can copy files from your local machine to the Windows node using:
scp -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' /source/path azureuser@<node-private-IP>:/path/to/destination
Reference for kubectl cp command
I imagine every time it scales, the file previously copied will be available in every VM so this does not need to happen on every scale event?
On the contrary: when the AKS node pool scales out, the VMSS instances are created from the VMSS model, and this model is defined by the Microsoft.ContainerService resource provider. More on the VMSS model and instance view here.
When you make any changes to a node's file system, the changes are available only on that corresponding VMSS instance. Such manual changes are not persisted if the node undergoes a node image upgrade, a Kubernetes version upgrade, or a reconcile operation, and if the node gets scaled down by the AKS cluster, the changes are lost.
Instead, we recommend using DaemonSets with read-write hostPath volume mounts, which can add files to the host node. Since a DaemonSet is a Kubernetes construct and the DaemonSet controller creates one replica of the DaemonSet on each node (except virtual nodes; reference), a replica will consistently be available even if the node undergoes an update or reconcile operation. When the node pool is scaled up, each new node also gets a replica of the DaemonSet.
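For illustration, here is a minimal sketch of such a DaemonSet written with the Terraform kubernetes provider (the name, namespace, host directory, and file contents are hypothetical); each replica writes the file onto its node through a read-write hostPath mount:

resource "kubernetes_daemon_set_v1" "file_drop" {
  metadata {
    name      = "file-drop"    # hypothetical name
    namespace = "kube-system"
  }

  spec {
    selector {
      match_labels = {
        app = "file-drop"
      }
    }

    template {
      metadata {
        labels = {
          app = "file-drop"
        }
      }

      spec {
        container {
          name  = "writer"
          image = "busybox:1.36"
          # Write the desired file onto the node's file system, then stay alive
          # so the DaemonSet pod keeps running.
          command = ["/bin/sh", "-c", "echo 'hello from the DaemonSet' > /host-drop/myfile.txt && while true; do sleep 3600; done"]

          volume_mount {
            name       = "host-drop"
            mount_path = "/host-drop"
          }
        }

        volume {
          name = "host-drop"
          host_path {
            path = "/var/lib/file-drop"   # hypothetical directory on the node
            type = "DirectoryOrCreate"
          }
        }
      }
    }
  }
}

Because the DaemonSet controller recreates this pod on every node, the file reappears after node image upgrades, reconcile operations, and scale-out events, unlike a file copied manually onto a single VMSS instance.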
For Azure Virtual Machine Scale Sets in general, the easiest ways to copy files between your local machine and an Azure VMSS instance are:
SCP: possible if the VMSS was created with the --public-ip-per-vm parameter to the az vmss create command, or if the API version of the Microsoft.Compute/virtualMachineScaleSets resource is at least 2017-03-30 and a publicIpAddressConfiguration JSON property has been added to the scale set's ipConfigurations section. For example:
"publicIpAddressConfiguration": {
"name": "pub1",
"properties": {
"idleTimeoutInMinutes": 15
}
}
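If the scale set is managed with Terraform rather than with the az CLI or ARM templates, the azurerm provider exposes the same setting as a public_ip_address block inside the scale set's ip_configuration. The fragment below is an illustrative excerpt of an azurerm_linux_virtual_machine_scale_set resource (the names and the subnet reference are assumptions):

  network_interface {
    name    = "example-nic"
    primary = true

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.example.id   # assumed subnet resource

      # Gives each VMSS instance its own public IP, the equivalent
      # of az vmss create --public-ip-per-vm.
      public_ip_address {
        name                    = "pub1"
        idle_timeout_in_minutes = 15
      }
    }
  }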
Jumpbox: if the VMSS instances do not have a public IP of their own, or are assigned public IP addresses from an Azure Load Balancer (which has the VMSS as its backend pool), then create a jumpbox VM in the same virtual network as the VMSS. You can then SCP between your local machine and the jumpbox VM using the jumpbox VM's public IP, and between the jumpbox VM and the VMSS instances using their private IP addresses.

Passing custom data to the Operating System option of Azure VMSS - Terraform

While creating a Virtual Machine Scale Set in Azure, there is an option for passing custom data under Operating System, like below.
How can I pass the script there using Terraform? There is a custom_data option, which seems to be used for machines newly created by Terraform, but the script is not getting stored there. How do I fill this with the script I have, using Terraform? Any help on this would be appreciated.
From the official documentation, custom_data can only be passed to the Azure VM at provisioning time.
Custom data is only made available to the VM during first boot/initial setup; we call this 'provisioning'. Provisioning is the process where VM create parameters (for example, hostname, username, password, certificates, custom data, keys, etc.) are made available to the VM and a provisioning agent processes them, such as the Linux Agent and cloud-init.
Where the script is saved differs by OS.
Windows
Custom data is placed in %SYSTEMDRIVE%\AzureData\CustomData.bin as a binary file, but it is not processed.
Linux
Custom data is passed to the VM via the ovf-env.xml file, which is copied to the /var/lib/waagent directory during provisioning. Newer versions of the Microsoft Azure Linux Agent will also copy the base64-encoded data to /var/lib/waagent/CustomData as well for convenience.
To upload custom_data from your local path to your Azure VM with Terraform, you can use the filebase64 function.
For example, if there is a test.sh script or cloud-init.txt file under the path where your main.tf or terraform.exe file exists:
custom_data = filebase64("${path.module}/test.sh")
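Put in context, a sketch of a scale set resource using this looks roughly like the following (the SKU, image, SSH key path, and the azurerm_resource_group/azurerm_subnet references are assumptions, not values from the question):

resource "azurerm_linux_virtual_machine_scale_set" "example" {
  name                = "example-vmss"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  sku                 = "Standard_B1s"
  instances           = 2
  admin_username      = "azureuser"

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")   # assumed key path
  }

  # Script or cloud-init file next to the configuration,
  # base64-encoded at plan time and handed to the VM at provisioning.
  custom_data = filebase64("${path.module}/cloud-init.txt")

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  network_interface {
    name    = "example-nic"
    primary = true

    ip_configuration {
      name      = "internal"
      primary   = true
      subnet_id = azurerm_subnet.example.id   # assumed subnet resource
    }
  }
}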
If you are looking to execute scripts after the VMSS is created, you could look at the custom extension and this sample.

Can Terraform be run from inside an AWS WorkSpace?

I'm trying to figure out a way to run Terraform from inside an AWS WorkSpace. Is that even possible? Has anyone made this work?
AWS WorkSpaces doesn't have the instance profile concept that EC2 does, where an associated IAM role is attached to the machine.
I'm pretty confident, however, that you can run Terraform in an AWS WorkSpace just as you can from your personal computer. With internet access (or VPC endpoints), it can reach the AWS APIs and just requires a set of credentials, which in this case (without instance profiles) would be an AWS access key and secret access key, just like on your local computer.
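For example, a provider block like the sketch below (the region is a placeholder) needs nothing WorkSpace-specific; Terraform resolves credentials from the standard environment variables or the shared credentials file, the same way it does on a laptop:

provider "aws" {
  region = "us-east-1"  # placeholder region

  # Credentials are picked up from the AWS_ACCESS_KEY_ID /
  # AWS_SECRET_ACCESS_KEY environment variables or from
  # ~/.aws/credentials, exactly as on a local workstation,
  # since no instance profile is available in a WorkSpace.
}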

Import terraform workspaces from S3 remote state

I am using Terraform to deploy to multiple AWS accounts, each with its own set of environments. I'm using Terraform workspaces and S3 remote state. When I switch between these accounts, my terraform workspace list is now empty for one of the accounts. Is there a way to sync the workspace list from the S3 remote state?
Please advise.
Thanks,
I have tried creating the workspace, but when I run terraform plan it wants to create all the resources even though they already exist in the remote state.
I managed to fix it using the following steps:
I created the new workspaces manually using the terraform workspace command:
terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
I went to S3, where I have the remote state, and under the dev environment I now had duplicate states.
I copied the state from the old folder key to the new folder key (using copy/paste in the S3 console window).
In the DynamoDB lock table I had duplicate LockID entries for my environment, with different digests. I had to copy the digest of the old entry and use it to replace the digest of the new entry. After that, terraform plan ran smoothly, and I repeated the same process for all the environments.
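For reference, this is roughly how the S3 backend lays out workspace state (the bucket, key, region, and table names below are placeholders): non-default workspaces are stored under <workspace_key_prefix>/<workspace-name>/<key>, which is why copying the old state object into that location and fixing the matching DynamoDB LockID digest makes the workspace usable again:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"        # placeholder bucket
    key            = "network/terraform.tfstate" # placeholder key
    region         = "us-east-1"                 # placeholder region
    dynamodb_table = "terraform-locks"           # placeholder lock table

    # Default prefix; the "dev" workspace state ends up at
    # env:/dev/network/terraform.tfstate in the bucket.
    workspace_key_prefix = "env:"
  }
}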
I hope this helps anyone else having the same use case.
Thanks,
