I'm not able to assign a role to an EC2 cluster via the Spark script spark/ec2/spark-ec2. I use the following command to start the cluster:
./spark-ec2 -k <key name> -i <aws .pem file> -s 2 -r eu-west-1 launch mycluster --instance-type=m4.large --instance-profile-name=myprofile
where myprofile is a testing profile with sufficient permissions.
I can see the instances in the EC2 console, where they also have the correct role assigned.
I then proceed to ssh into the master instance with:
./spark-ec2 -k <key name> -i <aws .pem file> login mycluster
and with
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/myprofile
I can view my temporary access key, secret key, and security token. However, running
aws s3 list-buckets
returns
"Message": "The AWS Access Key Id you provided does not exist in our records."
Retrieving the keys via the curl command and passing them to boto does not work either; it gives a '403 permission denied'.
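For reference, the boto attempt looked roughly like this (a sketch; the field names match the JSON the curl command above returns):

import json
from urllib.request import urlopen
import boto

# Fetch the temporary credentials the instance profile provides
# (the same endpoint as the curl command above).
url = "http://169.254.169.254/latest/meta-data/iam/security-credentials/myprofile"
creds = json.loads(urlopen(url).read())

# Hand them to boto explicitly.
conn = boto.connect_s3(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    security_token=creds["Token"],
)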
Am I missing something?
Please see the very similar question linked below. As I am not allowed to comment there, and I don't have the answer to it either, I made a new question. Maybe someone could comment to that person with a link to my question. Thanks.
Running Spark EC2 scripts with IAM role
OK, I had this problem for 3 days and then solved it directly after posting the question... do:
sudo yum update
This will update the AWS CLI, and after that roles seem to work.
I can even do this in Python:
# boto 2.x: with the instance profile working, no explicit keys are needed
from boto.s3.connection import S3Connection

conn = S3Connection()                  # picks up the instance-profile credentials
bucket = conn.get_bucket('my_bucket')  # open an existing bucket
keys = bucket.list()                   # iterate over the keys in it
I'm following the AWS supply chain workshop. I created an EC2 instance and set up a VPC just like the workshop said. Now I'm connected to the EC2 instance using SSH, and I've already downloaded the required packages, set up Docker, and downloaded fabric-ca-client. My problem is configuring the fabric-ca-client.
When I run the command fabric-ca-client enroll with the required params/flags, it returns the following error: Error: Failed to create default configuration file: Failed to parse URL 'https://$USER:=9_phK63?@$CA_ENDPOINT': parse https://user:password@ca_endpoint: invalid port ":=9_phK63?" after host
Here's the complete command I'm trying to run: fabric-ca-client enroll -u https://$USER:$PASSWORD@$CA_ENDPOINT --tls.certfiles ~/managedblockchain-tls-chain.pem -M admin-msp -H $HOME
I'm wondering if the ? in the password is causing the problem. If so, where can I change it?
Workshop link for reference: https://catalog.us-east-1.prod.workshops.aws/workshops/ce1e960e-a811-475f-a221-2afcf57e386a/en-US/02-set-up-a-fabric-client/05-configure-client/06-create-fabric-admin
My name is Forrest and I am a Blockchain Specialist Solutions Architect at AWS. I'd be happy to help you with this.
When using passwords with special characters, they need to be URL-encoded; for example, $ encodes to %24. As the OP mentioned in the comments below, the JavaScript method encodeURIComponent() can serve this function: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent
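If you'd rather script the encoding, here is a minimal Python sketch of the same idea (the password value is hypothetical, taken from the error message above):

from urllib.parse import quote

password = "=9_phK63?"           # hypothetical; substitute your real password
print(quote(password, safe=""))  # prints %3D9_phK63%3F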
Please make sure your environment variables are all still correctly set as well:
echo $USER
echo $PASSWORD
echo $CA_ENDPOINT
Your CA endpoint should resolve to something like:
ca.m-XXXXXXXXXXXXX.n-XXXXXXXXXXXXXX.managedblockchain.<AWS_REGION>.amazonaws.com:30002
I am facing a login problem when accessing an instance. While logging in to the server console (it's a live server), it shows Permission denied (publickey). Accessing with sudo gives the same issue. It is an AWS instance; I rebooted it, but there's no change and the login issue persists.
As explained in the AWS docs, your key needs correct permissions:
If you are connecting from MacOS or Linux, run the following command to fix this error, substituting the path for your private key file.
chmod 0400 .ssh/my_private_key.pem
If you got a private key when you set up the server and you saved it (the .pem file), you first need to change its permissions. On Linux, cd to the directory holding the .pem file, then do this:
chmod 400 /path/to/your_private_key.pem for read-only permission.
Then, with your EC2 instance's public DNS (get it in the AWS EC2 console when you click on your instance ID), which looks similar to ec2-x-xxx-xx.us-east-3.compute.amazonaws.com, you can ssh into your server as follows. Assuming your user account name on the server is ubuntu, like in most of the Linux-based AMIs on AWS, do:
ssh -i your_private_key.pem ubuntu@ec2-x-xxx-xx.us-east-3.compute.amazonaws.com and if prompted for a password, provide it.
Good luck:)
I am working through the instructions outlined here to try and set up a Couchbase cluster on Azure Container Service (AKS). That tutorial is using terminal/Mac, and I'm using Powershell/Windows.
I'm getting an error before I even get to the Couchbase part. I successfully created a resource group (which I called "cb_aks_spike", and yes, it does appear on the Portal) from the command line, but then I try to create an AKS cluster:
az aks create --resource-group cb_aks_spike --name cbakscluster
I also tried:
az aks create --resource-group cb_aks_spike --name cbakscluster --generate-ssh-keys
In both cases, I get an error:
az aks create: error: Incorrect padding
I don't know what this error message means, and I can't seem to find any reference to it in the documentation or anywhere. What am I doing wrong?
I'm using azure-cli v2.0.31.
I am fairly confident that I have figured out why I'm getting this error, and I've updated issue 6142 on azure-cli. At this time I believe this is a bug, and it's not fixed, but there is a workaround.
First, it's important to note that --generate-ssh-keys generates a new SSH key in ~/.ssh.
I had a hunch that since ~ for me is "C:\Users\Matthew Groves" that the space in the path was causing the problem. Sure enough, I created a new account called "mgroves". ~ is now "C:\Users\mgroves" and voila, I don't get the "incorrect padding" error message anymore.
So, the workaround is either to use a new account (huge pain) or rename the folder (this is what I have done; it's also a huge pain, and I'm still finding little problems here and there all throughout my system because of it).
In addition to the now-accepted answer, there is a solution that doesn't require you to change any directory or account name and is easy to implement as well.
As correctly stated in the other answers, the Azure CLI cannot handle the location where the generated SSH keys will be stored if there is a space in the path, i.e. C:\Users\Admin Account\.ssh\.
When using the az aks create command you can either use --generate-ssh-keys to let the Azure CLI handle it, OR you can specify an already existing SSH key with --ssh-key-value.
I used Git Bash to generate a new SSH key pair in the C:\Users\Admin Account\.ssh\ directory:
ssh-keygen -f ~/.ssh/aks-ssh
Now create the Azure AKS cluster while pointing to this new SSH key with:
az aks create \
--resource-group YourResourceGroup \
--name YourClusterName \
--node-count 3 \
--kubernetes-version 1.16.8 \
--ssh-key-value ~/.ssh/aks-ssh.pub
And you are good to go!
Just verified today using the az CLI in PowerShell with version 2.0.31. You might need to first run the az group create command and then the az aks create command.
In my use case I am using a single EC2 instance, not a cluster. I want to create a database and a user with all privileges programmatically. Is there a config file which I can edit and copy to the right location after InfluxDB is installed?
Could someone help me with this?
There isn't any config option that you can use to do that with InfluxDB itself. After starting up an instance, you can use the InfluxDB HTTP API to create the users. The curl command to do so would be the following:
curl "http://localhost:8086/query" --data-urlencode "q=CREATE USER myuser WITH PASSWORD 'mypass' WITH ALL PRIVILEGES"
Just run this command for each of the users you'd like to create. After that, you'll need to enable the auth value in the [http] section of the config.
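If you'd rather script this than call curl by hand, here is a minimal Python sketch against the same HTTP API (assuming the requests library and the default localhost:8086 endpoint):

import requests

QUERY_URL = "http://localhost:8086/query"

# Create an admin user first (needed before enabling auth in the config).
requests.post(QUERY_URL, params={
    "q": "CREATE USER myuser WITH PASSWORD 'mypass' WITH ALL PRIVILEGES",
})

# Then create a database for it to use.
requests.post(QUERY_URL, params={"q": "CREATE DATABASE mydb"})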
You can use Ansible to set up InfluxDB with your own recipe.
Here's the Ansible module documentation that you can use:
http://docs.ansible.com/ansible/influxdb_database_module.html
Or use any config/deploy manager that you prefer; I'd do this any day instead of some SSH script or who knows what. Puppet:
https://forge.puppet.com/tags/influxdb
Chef:
https://github.com/bdangit/chef-influxdb
You can also use any of the above config managers to provision/manipulate your EC2 instance(s).
Use the admin token and this command (InfluxDB 2.3 CLI):
.\influx.exe user create -n yourusername -p yourpassword -o "your org name" --token admintokengoeshere
I have a shell script which is supposed to download some files from S3 and mount an ebs drive. However, I always end up with "Unable to locate credentials".
I have specified my credentials with the aws configure command, and the commands work outside the shell script. Could somebody please tell me (preferably in detail) how to make it work?
This is my script
#!/bin/bash
AWS_CONFIG_FILE="~/.aws/config"
echo $1
sudo mkfs -t ext4 $1
sudo mkdir /s3-backup-test
sudo chmod -R ugo+rw /s3-backup-test
sudo mount $1 /s3-backup-test
sudo aws s3 sync s3://backup-test-s3 /s3-backup/test
du -h /s3-backup-test
Thanks for any help!
sudo will change the $HOME directory (and therefore ~) to /root, and remove most bash variables like AWS_CONFIG_FILE from the environment. Make sure you do everything with aws as root or as your user; don't mix.
Make sure you did sudo aws configure, for example. And try:
sudo bash -c 'AWS_CONFIG_FILE=/root/.aws/config aws s3 sync s3://backup-test-s3 /s3-backup/test'
You might prefer to remove all the sudo from inside the script, and just sudo the script itself.
While you might have your credentials and config file properly located in ~/.aws, it might not be getting picked up by your user account.
Run this command to see if your credentials have been set: aws configure list
To set the credentials, run aws configure and then enter the credentials that are specified in your ~/.aws/credentials file.
The unable to locate credentials error usually occurs when working with different aws profiles and the current terminal can't identify the credentials for the current profile.
Notice that you don't need to fill all the credentials via aws configure each time - you just need to reference to the relevant profile that was configured once.
From the Named profiles section in AWS docs:
The AWS CLI supports using any of multiple named profiles that are stored in the config and credentials files. You can configure additional profiles by using aws configure with the --profile option, or by adding entries to the config and credentials files.
The following example shows a credentials file with two profiles. The first, [default], is used when you run a CLI command with no profile. The second is used when you run a CLI command with the --profile user1 parameter.
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows):
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
So, after setting up the specific named profile (user1 in the example above) via aws configure or directly in the ~/.aws/credentials file, you can select the specific profile:
aws ec2 describe-instances --profile user1
Or export it in the terminal:
$ export AWS_PROFILE=user1
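The same named profiles work from code as well; for example, a minimal boto3 sketch (assuming boto3 is installed and the user1 profile from above exists):

import boto3

# boto3 reads the same ~/.aws/credentials profiles as the CLI.
session = boto3.Session(profile_name="user1")
s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])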
Answering in case someone stumbles across this based on the question's title.
I had the same problem, whereby the AWS CLI was reporting unable to locate credentials.
I had removed the [default] set of credentials from my credentials file as I wasn't using them and didn't think they were needed. It seems that they are.
I then reformed my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
This isn't necessarily related to the original question, but I came across this when googling a related issue, so I'm going to write it up in case it may help anyone else. I set up aws on a specific user, and tested using sudo -H -u thatuser aws ..., but it didn't work with awscli 1.2.9 installed on Ubuntu 14.04:
% sudo -H -u thatuser aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region us-east-1 config_file ~/.aws/config
I had to upgrade it using pip install awscli, which brought in newer versions of awscli (1.11.93), boto, and a myriad of other packages (docutils, botocore, rsa, s3transfer, jmespath, python-dateutil, pyasn1, futures), and it resulted in things starting to work properly:
% sudo -H -u thatuser aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************WXYZ shared-credentials-file
secret_key ****************wxyz shared-credentials-file
region us-east-1 config-file ~/.aws/config
A foolish and cautionary tale from a rusty script slinger:
I had defined the variable HOME in my script as the place where the script should go to build the platform.
This variable overwrote the env var that defines the shell user's $HOME. So the AWS command could not find ~/.aws/credentials, because ~ was referencing the wrong place.
I hate to admit it, but I hope it helps save someone some time.
Was hitting this error today when running the AWS CLI on EC2. My situation was that I could get credentials info when running aws configure list. However, I am running in a corporate environment where doing things like aws kms decrypt requires a proxy. As soon as I set the proxy, the AWS credentials info was gone.
export HTTP_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
export HTTPS_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
It turns out I also have to set NO_PROXY and include the EC2 metadata address, 169.254.169.254, in the list. Also, since you will be going via an S3 endpoint, you should normally have .amazonaws.com in the no_proxy too:
export NO_PROXY=169.254.169.254,.amazonaws.com
If you are using a .aws/config file with roles, make sure your config file is correctly formatted. In my case I had forgotten to put role_arn = in front of the ARN. The default profile sits in the .aws/credentials file and contains the access key ID and secret access key of the IAM identity.
The config file contains the role details:
[profile myrole]
role_arn = arn:aws:iam::123456789012:role/My-Role
source_profile = default
mfa_serial = arn:aws:iam::987654321098:mfa/my-iam-identity
region = ap-southeast-2
You can quickly test access by calling
aws sts get-caller-identity --profile myrole
If you have MFA enabled, like I have, you will need to enter the code when prompted:
Enter MFA code for arn:aws:iam::987654321098:mfa/my-iam-identity:
{
"UserId": "ARABCDEFGHIJKLMNOPQRST:botocore-session-15441234567",
"Account": "123456789012",
"Arn": "arn:aws:sts::123456789012:assumed-role/My-Role/botocore-session-15441234567"
}
I ran into this trying to run an aws-cli command from root's cron.
Since credentials are stored in $HOME/.aws/credentials, and I had initialized aws-cli through plain sudo (which keeps $HOME as /home/user/), the credentials live under /home/user/. When running from cron, $HOME is /root/, and thus cron cannot find the file.
The fix was to change $HOME for the specific cron job. Example:
00 12 * * * HOME=/home/user aws s3 sync s3://...
(alternatives include moving, copying, or symlinking the .aws dir from /home/user/ to /root/)
Try adding sudo to the aws command, like sudo aws ec2 .... And yes, as meuh mentioned, the awscli needs to be configured using sudo. To upgrade the awscli:
pip install --upgrade awscli
or
pip3 install --upgrade awscli