Bash script to install AWS CLI tools - linux

I am writing a bash script that will automatically install and configure the AWS CLI tools. I am able to install the AWS CLI tools but unable to configure them.
My script is something like this:
#!/bin/bash
wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
./awscli-bundle/install -b ~/bin/aws
./awscli-bundle/install -h
aws configure
AWS Access Key ID [None]: ABCDEFGHIJKLMNOP ## unable to provide this data
AWS Secret Access Key [None]: xbdwsdADDS/ssfsfa/afzfASADQASAd ## unable to provide this data
Default region name [None]: us-west-2 ## unable to provide this data
Default output format [None]: json ## unable to provide this data
I want to do the configuration from this script too, i.e. provide these credentials via the script so that no manual entry is needed. How can this be done?

Use a configuration file rather than the aws configure command. Create a file called ~/.aws/config that looks like this:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region=us-west-2
output=json
More info in the docs.
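If you want the script itself to write that file, here is a minimal sketch (the key values are the placeholders from above; ~/.aws is the CLI's default location):
mkdir -p ~/.aws
cat > ~/.aws/config <<'EOF'
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region=us-west-2
output=json
EOF
chmod 600 ~/.aws/config  # credentials should not be world-readable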

The best practice is to install the awscli utility from your bash script and then copy the two files below into place from your own specified location.
Without running the
aws configure
command these files will not get created; however, you can copy the files into place with your bash script and get everything done that way:
~/.aws/credentials
~/.aws/config
where credentials contains
[default]
aws_access_key_id=ABCDEFGHIJKLMNOP
aws_secret_access_key=xbdwsdADDS/ssfsfa/afzfASADQASAd
and config file contains
[default]
output=json
region=us-west-2
This will help you keep the keys in one place, and you can push the same files from any configuration management tool as well, like Ansible.
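A sketch of that copy step, assuming the two files were prepared ahead of time in /path/to/your/files (a placeholder):
mkdir -p ~/.aws
cp /path/to/your/files/credentials ~/.aws/credentials
cp /path/to/your/files/config ~/.aws/config
chmod 600 ~/.aws/credentials ~/.aws/config  # restrict access to the key material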

You can also configure this from the command line, which will create the configuration files for you:
aws configure set aws_access_key_id ABCDEFGHIJKLMNOP
aws configure set aws_secret_access_key xbdwsdADDS/ssfsfa/afzfASADQASAd
aws configure set default.region eu-west-1
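The same mechanism covers the output format, so a fully non-interactive setup can finish with:
aws configure set default.output json
aws configure list  # verify what the CLI actually resolved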

Related

SSH'ing to Linux Client using AWS command line in Jenkins

I need to SSH on to my Linux box from Jenkins using AWS cli. To do so, AWS documentation states I need to use my pem key:
ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
However, Jenkins does not have access to where I have the pem file stored and moving it is not an option.
I have generated a sshagent in Jenkins using my pem file, but cannot find any documentation or examples that show how replacing the path to pem file with my sshagent would work.
Does anyone have any idea what the syntax is, or could point me in the direction of some documentation on this?
You have mixed two things:
To ssh you certainly need the .pem key, but not to execute the aws cli. For ssh from Jenkins to the ec2 instance, instead of pointing at the .pem file you can update /home/ec2-user/.ssh/authorized_keys on the EC2 instance with the public key of the jenkins user, as sketched below.
For executing aws cli commands you need to use Access Credentials instead.
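A sketch of that authorized_keys approach (run from a machine that still has the .pem; the jenkins user's key path is a placeholder):
ssh-keygen -y -f ~/.ssh/jenkins_key > jenkins_key.pub  # derive the public half of the jenkins user's private key
ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com \
  "cat >> ~/.ssh/authorized_keys" < jenkins_key.pub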

python fabric3 executing boto3 functionality on remote ec2 instance

I am trying to execute boto3 functionality from my local machine via python fabric3 scripts.
Configuration on the local machine:
installed python3.5 and fabric3
a script using fabric3 to create an aws rds snapshot
ssh auth stored via ssh-add ~/.ssh/ec2.pem
Configuration on aws EC2 instance:
created ~/.aws/config and ~/.aws/credentials and stored the required config in them:
a. region= and output= in ~/.aws/config
b. aws_access_key_id= and aws_secret_access_key= in ~/.aws/credentials
rds is open to the ec2 instance only.
Observation:
while executing the fabric script from the local machine, it fails with botocore.exceptions.NoRegionError: You must specify a region.
if I provide the region name via boto3.client(region_name='')
it then fails with
botocore.exceptions.NoCredentialsError: Unable to locate credentials
meaning python fabric doesn't pick up the ~/.aws/config and ~/.aws/credentials files.
1. Does python fabric pick up the credentials & config from ~/.aws? I don't want to provide the credentials via the fabric script.
2. What is the standard way to achieve fabric-based deployment on aws-ec2?
For the time being, while creating boto3.client() I passed the required aws_access_key_id and aws_secret_access_key, but the problem remains: why does the boto3 client not pick up the ~/.aws/config and ~/.aws/credentials files when triggered via a Python Fabric script?
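For reference, boto3 also honors the standard AWS environment variables, so one hedged workaround is to export them in whichever shell actually executes the boto3 calls before the fabric script runs:
export AWS_DEFAULT_REGION=us-west-2  # avoids NoRegionError
export AWS_SHARED_CREDENTIALS_FILE=$HOME/.aws/credentials  # explicit path to the credentials file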

How to create database and user in influxdb programmatically?

In my use case I am using a single ec2 instance [not a cluster]. I want to create a database and a user with all privileges programmatically. Is there a config file which I can edit and copy to the right location after influxdb is installed?
Could someone help me with this?
There isn't any config option that you can use to do that with InfluxDB itself. After starting up an instance you can use the InfluxDB HTTP API to create the users. The curl command to do so would be the following:
curl "http://localhost:8086/query" --data-urlencode "q=CREATE USER myuser WITH PASSWORD 'mypass' WITH ALL PRIVILEGES"
Just run this command for each of the users you'd like to create. After that, you'll need to enable the auth value in the [http] section of the config.
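Enabling auth is then a config edit plus a restart; a sketch assuming the stock config at /etc/influxdb/influxdb.conf still contains the commented-out default:
sudo sed -i 's/# auth-enabled = false/auth-enabled = true/' /etc/influxdb/influxdb.conf
sudo systemctl restart influxdb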
You can use ansible to set up influxdb with your own recipe.
Here's the ansible module documentation that you can use, with an ad-hoc example below:
http://docs.ansible.com/ansible/influxdb_database_module.html
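For instance, a hedged ad-hoc invocation of that module (database name and host are placeholders; parameters per the linked docs; requires the python influxdb client on the target):
ansible localhost -m influxdb_database -a "hostname=localhost database_name=mydb"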
Or use any config/deploy manager that you prefer; I'd do this any day instead of some ssh script or who knows what.
https://forge.puppet.com/tags/influxdb
Chef:
https://github.com/bdangit/chef-influxdb
And you can also use any of the above config managers to provision/manipulate your ec2 instance(s).
Use the admin token and this command (InfluxDB 2.3 CLI):
.\influx.exe user create -n yourusername -p yourpassword -o "your org name" --token admintokengoeshere

How to set up Terraform "COMMAND: REMOTE CONFIG" with consul

I have a server I am using for self-healing and auto-scaling of a consul cluster. It does this with terraform scripts that are run by consul watches and health checks.
I want to add an additional backup terraform server for failover. To do this I must share the terraform.tfstate and terraform.tfstate.backup between my servers so that they can run terraform on the same resources. I would like to share these files using the Terraform "COMMAND: REMOTE CONFIG", but it is unclear to me how I would begin the share.
Basically I want the terraform.tfstate and terraform.tfstate.backup files to constantly be in sync on both servers. Here is my attempt at setting this up. Note that both terraform servers are running consul clients connected to the rest of my consul cluster:
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=/consul-scripts/terr/terraform.tfstate" \
-backend-config="backup=/consul-scripts/terr/terraform.tfstate.backup" \
-path="/consul-scripts/terr/terraform.tfstate" \
-backup="/consul-scripts/terr/terraform.tfstate.backup" \
-address="localhost:8500"
However this was obviously the wrong syntax. When trying to run the consul example provided on the linked documentation I received the following output:
username@hostname:/consul-scripts/terr$ terraform remote config \
> -backend=consul \
> -backend-config="address=localhost:8500" \
> -backend-config="path=/consul-scripts/terr/terraform.tfstate"
Error writing backup state file: open terraform.tfstate.backup: permission denied
I would like to have my terraform servers sync up through the Terraform "COMMAND: REMOTE CONFIG" instead of a normal file share system like glusterfs or something.
How can I correctly sync my terraform files in this way?
So yeah, @Martin Atkins got it right: I just had to run the example in the documentation with sudo. Using terraform remote config will store the .tfstate files in a hidden directory .terraform/ within the directory containing the terraform scripts.
The way terraform remote config works is that it creates a key/value entry in consul that contains the details of the tfstate file.
The answer is very close to what is listed in the documentation. In practice using terraform remote config is a 3 step process.
Before running terraform the following should be run to pull the current tfstate file:
#this will pull the current tfstate file
#if none exists it will create the tfstate key value
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=tfstate" \
pull
Then run:
terraform apply
After this is finished run the following to push the updated tfstate file out to consul in order to change the key value:
terraform remote config \
-backend=consul \
-backend-config="address=localhost:8500" \
-backend-config="path=tfstate" \
push

Bash with AWS CLI - unable to locate credentials

I have a shell script which is supposed to download some files from S3 and mount an ebs drive. However, I always end up with "Unable to locate credentials".
I have specified my credentials with the aws configure command and the commands work outside the shell script. Could somebody, please, tell me (preferably in detail) how to make it work?
This is my script (short version):
#!/bin/bash
AWS_CONFIG_FILE="~/.aws/config"
echo $1
sudo mkfs -t ext4 $1
sudo mkdir /s3-backup-test
sudo chmod -R ugo+rw /s3-backup-test
sudo mount $1 /s3-backup-test
sudo aws s3 sync s3://backup-test-s3 /s3-backup/test
du -h /s3-backup-test
Thanks for any help!
sudo will change the $HOME directory (and therefore ~) to /root, and remove most bash variables like AWS_CONFIG_FILE from the environment. Make sure you do everything with aws as root or as your user; don't mix.
Make sure you did sudo aws configure for example. And try
sudo bash -c 'AWS_CONFIG_FILE=/root/.aws/config aws s3 sync s3://backup-test-s3 /s3-backup/test'
You might prefer to remove all the sudo from inside the script, and just sudo the script itself.
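For instance (the script name is a placeholder):
sudo ./s3-backup.sh  # one sudo for the whole script instead of per-command sudo inside it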
While you might have your credentials and config file properly located in ~/.aws, it might not be getting picked up by your user account.
Run this command to see if your credentials have been set:
aws configure list
To set the credentials, run this command: aws configure and then enter the credentials that are specified in your ~/.aws/credentials file.
The unable to locate credentials error usually occurs when working with different aws profiles and the current terminal can't identify the credentials for the current profile.
Notice that you don't need to fill in all the credentials via aws configure each time - you just need to reference the relevant profile that was configured once.
From the Named profiles section in AWS docs:
The AWS CLI supports using any of multiple named profiles that are
stored in the config and credentials files. You can configure
additional profiles by using aws configure with the --profile option,
or by adding entries to the config and credentials files.
The following example shows a credentials file with two profiles. The
first [default] is used when you run a CLI command with no profile.
The second is used when you run a CLI command with the --profile user1
parameter.
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows):
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
So, after setting up the specific named profile (user1 in the example above) via aws configure or directly in the ~/.aws/credentials file you can select the specific profile:
aws ec2 describe-instances --profile user1
Or export it to terminal:
$ export AWS_PROFILE=user1
Answering in case someone stumbles across this based on the question's title.
I had the same problem, whereby the AWS CLI was reporting unable to locate credentials.
I had removed the [default] set of credentials from my credentials file as I wasn't using them and didn't think they were needed. It seems that they are.
I then reformed my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
This isn't necessarily related to the original question, but I came across this when googling a related issue, so I'm going to write it up in case it may help anyone else. I set up aws on a specific user, and tested using sudo -H -u thatuser aws ..., but it didn't work with awscli 1.2.9 installed on Ubuntu 14.04:
% sudo -H -u thatuser aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key                <not set>             None    None
secret_key                <not set>             None    None
    region                us-east-1      config-file    ~/.aws/config
I had to upgrade it using pip install awscli, which brought in newer versions of awscli (1.11.93), boto, and a myriad of other stuff (awscli docutils botocore rsa s3transfer jmespath python-dateutil pyasn1 futures), but it resulted in things starting to work properly:
% sudo -H -u thatuser aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************WXYZ shared-credentials-file
secret_key     ****************wxyz shared-credentials-file
    region                us-east-1      config-file    ~/.aws/config
A foolish and cautionary tale of a rusty script slinger:
I had defined the variable HOME in my script as the place where the script should go to build the platform.
This variable overwrote the env var that defines the shell user's $HOME. So the AWS command could not find ~/.aws/credentials because ~ was referencing the wrong place.
I hate to admit it, but I hope it helps save someone some time.
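A minimal sketch of that failure mode (the directory name is made up):
HOME=/opt/build-area  # script variable silently clobbers the shell's $HOME
aws s3 ls             # now looks for /opt/build-area/.aws/credentials -> Unable to locate credentials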
Was hitting this error today when running the aws cli on EC2. My situation is: I could get credentials info when running aws configure list. However, I am running in a corporate environment where doing things like aws kms decrypt requires a PROXY. As soon as I set the proxy, the aws credentials info was gone.
export HTTP_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
export HTTPS_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
Turns out I also had to set NO_PROXY with the ec2 metadata address 169.254.169.254 in the list. Also, since you should be going via an s3 endpoint, you should normally have .amazonaws.com in the no_proxy too.
export NO_PROXY=169.254.169.254,.amazonaws.com
If you are using a .aws/config file with roles, make sure your config file is correctly formatted. In my case I had forgotten to put the role_arn = in front of the arn. The default profile sits in the .aws/credentials file and contains the access key id and secret access key of the iam identity.
The config file contains the role details:
[profile myrole]
role_arn = arn:aws:iam::123456789012:role/My-Role
source_profile = default
mfa_serial = arn:aws:iam::987654321098:mfa/my-iam-identity
region=ap-southeast-2
You can quickly test access by calling
aws sts get-caller-identity --profile myrole
If you have MFA enabled like I have you will need to enter it when prompted.
Enter MFA code for arn:aws:iam::987654321098:mfa/my-iam-identity:
{
    "UserId": "ARABCDEFGHIJKLMNOPQRST:botocore-session-15441234567",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/My-Role/botocore-session-15441234567"
}
I ran into this trying to run an aws-cli command from root's cron.
Since credentials are stored in $HOME/.aws/credentials and I had initialized aws-cli through sudo, $HOME was still /home/user/. When running from cron, $HOME is /root/ and thus cron cannot find the file.
The fix was to change $HOME for the specific cron job. Example:
00 12 * * * HOME=/home/user aws s3 sync s3://...
(alternatives include moving, copying, or symlinking the .aws dir from /home/user/ to /root/, as sketched below)
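A sketch of the symlink alternative (paths as in this answer):
sudo ln -s /home/user/.aws /root/.aws  # point root's ~/.aws at the user's existing config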
Try adding sudo to the aws command, like sudo aws ec2 ..., and yes, as meuh mentioned, the awscli needs to be configured using sudo:
pip install --upgrade awscli
or
pip3 install --upgrade awscli
