How to set up Terraform "COMMAND: REMOTE CONFIG" with consul - file-sharing

I have a server I am using for self-healing and auto-scaling of a Consul cluster. It does this with Terraform scripts that are run by Consul watches and health checks.
I want to add an additional backup Terraform server for failover. To do this I must share the terraform.tfstate and terraform.tfstate.backup files between my servers so that they can run Terraform against the same resources. I would like to share these files using the Terraform "COMMAND: REMOTE CONFIG", but it is unclear to me how to set up the share.
Basically I want the terraform.tfstate and terraform.tfstate.backup files to be constantly in sync on both servers. Here is my attempt at setting this up. Note that both Terraform servers are running Consul clients connected to the rest of my Consul cluster:
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=/consul-scripts/terr/terraform.tfstate" \
-backend-config="backup=/consul-scripts/terr/terraform.tfstate.backup" \
-path="/consul-scripts/terr/terraform.tfstate" \
-backup="/consul-scripts/terr/terraform.tfstate.backup" \
-address="localhost:8500"
However this was obviously the wrong syntax. When trying to run the Consul example provided in the linked documentation I received the following output:
username#hostname:/consul-scripts/terr$ terraform remote config \
> -backend=consul \
> -backend-config="address=localhost:8500" \
> -backend-config="path=/consul-scripts/terr/terraform.tfstate"
Error writing backup state file: open terraform.tfstate.backup: permission denied
I would like to have my Terraform servers sync up through the Terraform "COMMAND: REMOTE CONFIG" rather than through an ordinary file-sharing system like GlusterFS.
How can I correctly sync my Terraform files in this way?

So yes, @Martin Atkins got it right: I just had to run the example in the documentation with sudo. Using terraform remote config stores the .tfstate files in a hidden .terraform/ directory inside the directory containing the Terraform scripts.
terraform remote config works by creating a key/value entry in Consul that holds the contents of the tfstate file.
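For example, once the remote configuration shown below is in place, the stored state can be inspected straight from Consul's KV store; a minimal check, assuming the key path "tfstate" used in the commands below and a local Consul agent listening on port 8500:
# fetch the raw tfstate JSON that Terraform wrote into Consul's KV store
curl -s http://localhost:8500/v1/kv/tfstate?raw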
The answer is very close to what is listed in the documentation. In practice, using terraform remote config is a three-step process.
Before running terraform the following should be run to pull the current tfstate file:
#this will pull the current tfstate file
#if none exists it will create the tfstate key value
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=tfstate" \
pull
Then run:
terraform apply
After this is finished, run the following to push the updated tfstate file out to Consul in order to update the key value:
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=tfstate" \
push
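To keep both servers in sync, the three steps can be wrapped in a small script that each server runs for its Terraform invocations; a minimal sketch based on the commands above (the script name is just a placeholder):
#!/bin/bash
# run-terraform.sh -- pull the shared state from Consul, apply, then push it back
set -e

# configure the Consul backend and pull the latest tfstate into .terraform/
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=tfstate" \
pull

# apply the changes locally
terraform apply

# push the updated tfstate back to the Consul key so the other server sees it
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=tfstate" \
push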

Related

How to replace default certificates on a cloud2edge instance?

I deployed a cloud2edge instance and now I want to replace the default certificates with other ones generated with the create_certs.sh script. According to the Hono documentation I can specify the configuration (including the certificates path) in the values.yaml, but I am not sure how to do it with the cloud2edge package.
Where should I take a look in order to achieve my goal?
Is there any possibility to set the certificates path without re-installing the package?
This is what I did in order to replace the keys/certificate for the MQTT adapter:
Create a secret containing the keys and the certificate
kubectl create secret generic mqtt-key-cert --from-file=certs/mqtt-adapter-cert.pem --from-file=mqtt-adapter-key.pem -n $NS
Mount the secret into the adapter's container filesystem
helm upgrade -n $NS --set hono.adapters.mqtt.extraSecretMounts.tls.secretName="mqtt-key-cert" --set hono.adapters.mqtt.extraSecretMounts.tls.mountPath="/etc/tls" --reuse-values $RELEASE eclipse-iot/cloud2edge
Set the corresponding environment variables in the MQTT adapter deployment
kubectl edit deployments c2e-adapter-mqtt-vertx -n $NS
YAML:
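The YAML to edit here is presumably the env section of the adapter container spec. As a rough alternative to hand-editing, the same variables can be set with kubectl set env; this is only a sketch, and it assumes the Hono MQTT adapter reads its key and certificate paths from HONO_MQTT_KEY_PATH and HONO_MQTT_CERT_PATH (check the Hono admin guide for the exact variable names):
# point the adapter at the key/cert mounted from the secret under /etc/tls
# (the variable names are an assumption -- verify them against the Hono admin guide)
kubectl set env deployment/c2e-adapter-mqtt-vertx -n $NS \
  HONO_MQTT_KEY_PATH=/etc/tls/mqtt-adapter-key.pem \
  HONO_MQTT_CERT_PATH=/etc/tls/mqtt-adapter-cert.pem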

python fabric3 executing boto3 functionality on remote ec2 instance

I am trying to execute boto3 functionality from my local machine via Python Fabric3 scripts.
Configuration on local machine:
installed python3.5 and fabric3
a script using fabric3 to create an AWS RDS snapshot
SSH auth stored via ssh-add ~/.ssh/ec2.pem
Configuration on aws EC2 instance:
created ~/.aws/config and ~/.aws/credentials and stored the required settings:
a. region and output in ~/.aws/config
b. aws_access_key_id and aws_secret_access_key in ~/.aws/credentials
RDS is open to the EC2 instance only.
Observation:
While executing the Fabric script from the local machine, it fails with botocore.exceptions.NoRegionError: You must specify a region.
If I provide the region name via boto3.client(region_name='')
it then fails with
botocore.exceptions.NoCredentialsError: Unable to locate credentials
which means boto3 under Fabric doesn't pick up the ~/.aws/config and ~/.aws/credentials files.
1. Does Python Fabric pick up the credentials and config from ~/.aws? I don't want to provide the credentials via the Fabric script.
2. What is the standard way to achieve a Fabric-based deployment on AWS EC2?
For the time being I passed the required aws_access_key_id and aws_secret_access_key while creating boto3.client(), but the question remains why the boto3 client does not pick up the ~/.aws/config and ~/.aws/credentials files when triggered via a Python Fabric script.
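Since boto3 simply reads $HOME/.aws/ on whatever account the remote commands run under, a quick way to narrow this down is to run a few shell commands through Fabric's run() and confirm which user, HOME, and files the remote session actually sees; a diagnostic sketch only, assuming the AWS CLI is installed on the instance:
# run these on the EC2 host through fabric's run() -- boto3 looks for
# $HOME/.aws/credentials, so a different user or HOME explains the error
whoami
echo "$HOME"
ls -l "$HOME/.aws/"
# shows whether the shared config/credentials files are being picked up
aws configure list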

Run ansible-playbook with a user-data script on an EC2 instance

I am using Packer with Ansible to create an AWS EC2 image (AMI). Ansible is used to install Java 8, install the database (Cassandra), install Ansible itself, and upload an Ansible playbook (I know that I should push the playbook to git and pull it from there, but I will do that once this is working). I am installing Ansible and uploading the playbook because I have to change some of the Cassandra properties when an instance is launched from the AMI (add the current instance IP to the Cassandra options, for example). In order to accomplish this I wrote a simple bash script that is added via the user_data_file property. This is the script:
#cloud-boothook
#!/bin/bash
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
ansible-playbook -i "localhost," -c local /usr/local/etc/replace_cassandra.yaml
As you can see, I am executing ansible-playbook in local mode.
The problem is that when I start the instance, I find an error in the /var/log/cloud-init.log file stating that ansible-playbook could not be found. So I added an ls line to the user-data script to check the contents of the /usr/bin/ folder (the folder where Ansible is installed), and Ansible was not in it; yet when I access the instance over ssh, Ansible is present in /usr/bin/ and ansible-playbook runs without any problem.
Has anyone encountered a similar problem? I think this should be quite a common use case for Ansible with EC2.
EDIT
After some logging I found out that not only is Ansible missing during the execution of the user data, but the database is missing as well.
Is it possible that some of the code (or all of it) in the Packer Ansible provisioner is executed when the instance is launched?
EDIT2
I have found out what is happening here. When I add the user data via Packer through the user_data_file property, the user data is executed when Packer launches an instance to build the AMI. The script runs before the Ansible provisioner is executed, and that is why Ansible is missing.
What I want is to automatically attach user data to the AMI, so that the user data is executed when an instance is launched from the AMI, not when Packer builds the AMI.
Any ideas on how to do this?
Just run multiple provisioners and don't try to run ansible via cloud-init.
I'm making an assumption here that your playbook and roles are stored locally, where you are starting the Packer run from. Instead of shoehorning the Ansible stuff into user data, run a shell provisioner to install Ansible, then run the ansible-local provisioner to run the playbook/role you want.
Below is a simplified example of what I'm talking about. It won't run without some more values in the builder config but I left those out for the sake of brevity.
In the example json, the install-prereqs.sh just adds the ansible ppa apt repo and runs apt-get update -y, then installs ansible.
#!/bin/bash
sudo apt-get install -y software-properties-common
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
The second provisioner will then copy the playbook and roles you specify to the target host and run them.
{
  "builders": [
    {
      "type": "amazon-ebs",
      "ssh_username": "ubuntu",
      "ami_name": "some-name",
      "source_ami": "some-ami-id",
      "ssh_pty": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/install-prereqs.sh"
    },
    {
      "type": "ansible-local",
      "playbook_file": "path/to/playbook.yml",
      "role_paths": ["path/to/roles"]
    }
  ]
}
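With the template saved locally (the file name below is just a placeholder), the image is then built the usual way:
packer build template.json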
This is possible! Please make sure of the following.
An Ansible server (install Ansible via CloudFormation user data if it is not baked into the AMI) and your target servers have SSH access to each other in the security groups you create in CloudFormation.
After you install Ansible on the Ansible server, your ansible.cfg file points to a private key on the Ansible server.
The matching public key for that private key is copied to the authorized_keys file in the root user's .ssh directory on the servers you wish to run playbooks on.
You have enabled root SSH access between the Ansible server and the target server(s); this can be done by editing the /etc/ssh/sshd_config file and making sure there is nothing preventing SSH access for the root user in root's authorized_keys file on the target server(s). A sketch of this last check follows below.
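A minimal sketch of that last item, assuming a Debian/Ubuntu target where root logins should be allowed with keys only (the directive value and the host name are assumptions; adjust them to your own policy):
# on each target server: allow key-based root logins (no passwords),
# then reload sshd so the change takes effect
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
sudo systemctl reload ssh
# from the Ansible server: confirm connectivity as root (host name is a placeholder)
ansible all -i 'target-host,' -u root -m ping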

Bash with AWS CLI - unable to locate credentials

I have a shell script which is supposed to download some files from S3 and mount an ebs drive. However, I always end up with "Unable to locate credentials".
I have specified my credentials with the aws configure command and the commands work outside the shell script. Could somebody, please, tell me (preferably in detail) how to make it work?
This is my script (short version):
#!/bin/bash
AWS_CONFIG_FILE="~/.aws/config"
echo $1
sudo mkfs -t ext4 $1
sudo mkdir /s3-backup-test
sudo chmod -R ugo+rw /s3-backup-test
sudo mount $1 /s3-backup-test
sudo aws s3 sync s3://backup-test-s3 /s3-backup/test
du -h /s3-backup-test
Thanks for any help!
sudo will change the $HOME directory (and therefore ~) to /root, and remove most bash variables like AWS_CONFIG_FILE from the environment. Make sure you do everything with aws either as root or as your user; don't mix.
Make sure you did sudo aws configure, for example. And try:
sudo bash -c 'AWS_CONFIG_FILE=/root/.aws/config aws s3 sync s3://backup-test-s3 /s3-backup/test'
You might prefer to remove all the sudo calls from inside the script, and just sudo the script itself.
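For example (the script name and device argument are only placeholders):
sudo ./s3-backup.sh /dev/xvdf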
While you might have your credentials and config file properly located in ~/.aws, they might not be getting picked up by your user account.
Run this command to see if your credentials have been set: aws configure list
To set the credentials, run this command: aws configure and then enter the credentials that are specified in your ~/.aws/credentials file.
The unable to locate credentials error usually occurs when working with different aws profiles and the current terminal can't identify the credentials for the current profile.
Notice that you don't need to fill in all the credentials via aws configure each time - you just need to reference the relevant profile that was configured once.
From the Named profiles section in AWS docs:
The AWS CLI supports using any of multiple named profiles that are stored in the config and credentials files. You can configure additional profiles by using aws configure with the --profile option, or by adding entries to the config and credentials files.
The following example shows a credentials file with two profiles. The first, [default], is used when you run a CLI command with no profile. The second is used when you run a CLI command with the --profile user1 parameter.
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows):
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
So, after setting up the specific named profile (user1 in the example above) via aws configure or directly in the ~/.aws/credentials file you can select the specific profile:
aws ec2 describe-instances --profile user1
Or export it to terminal:
$ export AWS_PROFILE=user1
Answering in case someone stumbles across this based on the question's title.
I had the same problem, where the AWS CLI was reporting unable to locate credentials.
I had removed the [default] set of credentials from my credentials file as I wasn't using them and didn't think they were needed. It seems that they are.
I then rebuilt my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
This isn't necessarily related to the original question, but I came across this when googling a related issue, so I'm going to write it up in case it may help anyone else. I set up aws on a specific user, and tested using sudo -H -u thatuser aws ..., but it didn't work with awscli 1.2.9 installed on Ubuntu 14.04:
% sudo -H -u thatuser aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region us-east-1 config_file ~/.aws/config
I had to upgrade it using pip install awscli, which brought in newer versions of awscli (1.11.93), boto, and a myriad of other stuff (awscli docutils botocore rsa s3transfer jmespath python-dateutil pyasn1 futures), but it resulted in things starting to work properly:
% sudo -H -u thatuser aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************WXYZ shared-credentials-file
secret_key ****************wxyz shared-credentials-file
region us-east-1 config-file ~/.aws/config
A foolish and cautionary tale of a rusty script slinger:
I had defined the variable HOME in my script as the place where the script should go to build the platform.
This variable overwrote the env var that defines the shell user's $HOME. So the AWS command could not find ~/.aws/credentials because ~ was referencing the wrong place.
I hate to admit it, but I hope it helps save someone some time.
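A minimal reproduction of the same mistake, just to show the mechanism (the path is arbitrary):
# overriding HOME makes the CLI look for credentials in the wrong place:
# it now checks /tmp/build/.aws/ instead of your real home directory
HOME=/tmp/build aws configure list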
Was hitting this error today when running the aws cli on EC2. My situation is that I could get credentials info when running aws configure list. However, I am running in a corporate environment where doing things like aws kms decrypt requires a proxy. As soon as I set the proxy, the aws credentials info was gone.
export HTTP_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
export HTTPS_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
Turns out I also had to set NO_PROXY and include the EC2 metadata address 169.254.169.254 in the list. Also, since you should be going via an S3 endpoint, you should normally have .amazonaws.com in NO_PROXY too.
export NO_PROXY=169.254.169.254,.amazonaws.com
If you are using a .aws/config file with roles, make sure your config file is correctly formatted. In my case I had forgotten to put role_arn = in front of the ARN. The default profile sits in the .aws/credentials file and contains the access key id and secret access key of the IAM identity.
The config file contains the role details:
[profile myrole]
role_arn = arn:aws:iam::123456789012:role/My-Role
source_profile = default
mfa_serial = arn:aws:iam::987654321098:mfa/my-iam-identity
region=ap-southeast-2
You can quickly test access by calling
aws sts get-caller-identity --profile myrole
If you have MFA enabled like I have you will need to enter it when prompted.
Enter MFA code for arn:aws:iam::987654321098:mfa/my-iam-identity:
{
"UserId": "ARABCDEFGHIJKLMNOPQRST:botocore-session-15441234567",
"Account": "123456789012",
"Arn": "arn:aws:sts::123456789012:assumed-role/My-Role/botocore-session-15441234567"
}
I ran into this trying to run an aws-cli command from root's crontab.
Since credentials are stored in $HOME/.aws/credentials and I had initialized aws-cli through sudo, which kept $HOME as /home/user/, that is where the credentials ended up. When running from cron, $HOME is /root/, so cron cannot find the file.
The fix was to change $HOME for the specific cron job. Example:
00 12 * * * HOME=/home/user aws s3 sync s3://...
(alternatives include moving, copying, or symlinking the .aws dir from /home/user/ to /root/)
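For example, the symlink option might look like this (using the same paths as above):
# make root's ~/.aws point at the regular user's existing configuration
sudo ln -s /home/user/.aws /root/.aws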
Try adding sudo to the aws command, e.g. sudo aws ec2 ..., and yes, as meuh mentioned, the awscli needs to be configured using sudo as well. To upgrade awscli itself:
pip install --upgrade awscli
or
pip3 install --upgrade awscli

Bash script to install AWS CLI tools

I am writing a bash script that will automatically install and configure the AWS CLI tools. I am able to install the AWS CLI tools but unable to configure them.
My script is something like this:
#!/bin/bash
wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
./awscli-bundle/install -b ~/bin/aws
./awscli-bundle/install -h
aws configure
AWS Access Key ID [None]: ABCDEFGHIJKLMNOP ## unable to provide this data
AWS Secret Access Key [None]: xbdwsdADDS/ssfsfa/afzfASADQASAd ## unable to provide this data
Default region name [None]: us-west-2 ## unable to provide this data
Default output format [None]: json ## unable to provide this data
I want to do the configuration from this script as well, providing these credentials non-interactively so that no manual entry is needed. How can this be done?
Use a configuration file rather than the aws configure command. Create a file called ~/.aws/config that looks like this:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region=us-west-2
output=json
More info in the docs.
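From a bash script this can be done with a heredoc; a minimal sketch using the same placeholder values as above:
# write the CLI configuration without any interactive prompts
mkdir -p ~/.aws
cat > ~/.aws/config <<'EOF'
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region=us-west-2
output=json
EOF
chmod 600 ~/.aws/config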
Best practice is to install the awscli utility from your bash script and then copy the two files below into place from a location you control.
Without running the aws configure command these files will not get created automatically, but you can copy them into place from your bash script and everything will work:
~/.aws/credentials
~/.aws/config
where credentials contains
[default]
aws_access_key_id=ABCDEFGHIJKLMNOP
aws_secret_access_key=xbdwsdADDS/ssfsfa/afzfASADQASAd
and config file contains
[default]
output=json
region=us-west-2
This will help you keep the keys in one place, and you can push the same files for use with any configuration-management tool, such as Ansible.
You can additionally configure this from the command line, which will create the configuration files:
aws configure set aws_access_key_id ABCDEFGHIJKLMNOP
aws configure set aws_secret_access_key xbdwsdADDS/ssfsfa/afzfASADQASAd
aws configure set default.region eu-west-1
