If you use the AWS console or even the command line, you have no trouble setting a default keypair for your Elastic Beanstalk environment.
But you do if you use boto3.
Surprisingly, there is not a single mention of setting a keypair in the official boto3 documentation for Elastic Beanstalk: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html.
I also tried creating a zip file containing the most basic files needed to make a simple website work. Supposedly, I can set a keypair name in .elasticbeanstalk/config.yml, which I did this way:
branch-defaults:
  default:
    environment: app10-env
    group_suffix: null
global:
  application_name: app10
  branch: null
  default_ec2_keyname: main4
  default_platform: PHP 7.4 running on 64bit Amazon Linux 2
  default_region: us-east-1
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  repository: null
  sc: null
  workspace_type: null
Yes, the "main4" exists in my AWS account. But creating an environment to my application with a zip containing it, it seems that it have no effect at all. After my environment has sucessfully deployed, I can check afterwards through console and see that have no keypair setted to environment. I need to go to a further step on console to set the keypair and await a new environment deployiment to perform the update.
Is there a real issue with the boto3 elasticbeanstalk when dealing with environment keypairs or I am doing something wrong?
I would set the OptionSettings when calling create_environment, or include the key name in .ebextensions. My guess is that boto3 is not reading the EB CLI default config you are using.
Refs
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html#ElasticBeanstalk.Client.create_environment
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-methods-before.html
Option to set
Namespace: aws:autoscaling:launchconfiguration
Option Names: IamInstanceProfile, EC2KeyName, InstanceType
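For example, a minimal boto3 sketch of what this answer suggests (the application, environment, and keypair names are taken from the question; the solution stack string is a placeholder that must match one returned by list_available_solution_stacks):

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.create_environment(
    ApplicationName="app10",
    EnvironmentName="app10-env",
    # Placeholder: use an exact name from eb.list_available_solution_stacks()
    SolutionStackName="<exact solution stack name for PHP 7.4 on 64bit Amazon Linux 2>",
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "EC2KeyName",
            "Value": "main4",
        },
    ],
)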
The response of #f7o is not accurate, but it helped to solve the problem.
There is no option for setting a keypair with the boto3 client's "create_environment" command. I tried passing "EC2KeyName", but it returned an invalid-value exception.
Using ".ebextensions" does the work, though. If someone else is interested in doing the same thing I am, all that is needed is to create a folder called ".ebextensions" containing a file called "customkey.config" (the file name can be anything, but it must end in .config), with the following content:
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: EC2KeyName
    value: <your_keypair_name>
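For anyone deploying with boto3 rather than the EB CLI, a rough sketch of shipping a zip bundle that contains this .ebextensions folder might look like the following (the bucket name, version label, and solution stack string are placeholders, not values from the question):

import boto3

s3 = boto3.client("s3")
eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Placeholder bucket/key; app.zip already contains .ebextensions/customkey.config
s3.upload_file("app.zip", "my-deploy-bucket", "app-v1.zip")

eb.create_application_version(
    ApplicationName="app10",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app-v1.zip"},
)

eb.create_environment(
    ApplicationName="app10",
    EnvironmentName="app10-env",
    VersionLabel="v1",
    # Placeholder: use an exact name from eb.list_available_solution_stacks()
    SolutionStackName="<exact solution stack name for PHP 7.4 on 64bit Amazon Linux 2>",
)

The keypair then comes from the .ebextensions file inside the bundle rather than from any create_environment argument.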
Related
We have a small collection of Kubernetes pods which run react/next.js UIs in a node 16 alpine container (node:16.18.1-alpine3.15 to be precise). All of this runs in AWS EKS 1.23. We make use of annotations on these pods in order to inject secrets from Hashicorp Vault at startup. The annotations pull the desired secrets from Vault and write them to a file on the pod. An example of said annotations is below:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-pre-populate-only: "true"
vault.hashicorp.com/role: "onejourney-ui"
vault.hashicorp.com/agent-inject-secret-config: "secret/data/onejourney-ui"
vault.hashicorp.com/agent-inject-template-config: |
  {{- with secret "secret/data/onejourney-ui" -}}
  export AUTH0_CLIENT_ID="{{ .Data.data.auth0_client_id }}"
  export SENTRY_DSN="{{ .Data.data.sentry_admin_dsn }}"
  {{- end }}
When the pod starts up, we source this file (which is created by default at /vault/secrets/config) to set environment variables and then delete the file. We do that with the following pod arguments in our helm chart:
node:
  args:
    - /bin/sh
    - -c
    - source /vault/secrets/config; rm -rf /vault/secrets/config; yarn start-admin;
We recently upgraded some of our AWS EKS clusters from 1.23 to 1.24. After doing so, we noted that our node applications were failing to start and entering a crash loop. Looking at the logs of these containers, the problem seemed to be that the pod was unable to locate the secrets file anymore.
Interestingly, the Vault init container completed successfully and shows that the file was successfully created...
Out of curiosity, I removed the node args that source the file, which allowed the container to start successfully. When I exec'd into the pod, though, I found the file WAS in fact present and had the content I was expecting. The file also had the correct owner and permissions, just as in a working instance on EKS 1.23.
We have other containers (php-fpm) which consume secrets in the same manner; however, these were not affected on 1.24, only node containers were. I did not see any namespace, pod, or deployment annotations added that could be a possible cause. After rolling the cluster back down to EKS 1.23, the deployment worked as expected.
I'm left scratching my head as to why the pod is unable to source that file on 1.24. Any suggestions on what to check or a possible cause would be greatly appreciated.
I wanted to learn AWS and found the tutorial Build a Serverless Web Application. In my research, the closest Q&A I could find for my issue was Unable to locate credentials aws cli.
My process has been:
Created a repo in Github
Navigated to IAM and created a user trainer. The tutorial didn't specify policies, so I chose AdministratorAccess. Per the instructions, I went to Security credentials and Create access key, and downloaded the file locally.
Went to Configuration basics and followed Importing a key pair via .CSV file with the command:
aws configure import --csv file:///Users/path/to/file/aws-training.csv
params:
User name: trainer
Access key ID: ****57
Secret access key: *****1b
but then found that the file didn't contain a region or output format, so I did:
aws configure --profile trainer
and re-did all values based on the CSV (Quick Setup):
AWS Access Key ID: ****57
AWS Secret Access Key: *****1b
Default region name: us-east-1
Default output format: json
I made sure to restart my terminal, and then locally in a directory I ran the command:
aws s3 cp s3://wildrydes-us-east-1/WebApplication/1_StaticWebHosting/website ./ --recursive
The terminal has a delay then throws:
fatal error: Unable to locate credentials
Research
Q&As I've read through to try and see if I could diagnose the problem:
aws cli with shell script: upload failed: Unable to locate credentials
Bash with AWS CLI - unable to locate credentials
Unable to locate credentials aws cli
Unable to locate credentials in boto3 AWS
Get "fatal error: Unable to locate credentials" when I'm copying file from S3 to EC2 using aws cli
Unable to locate credentials when trying to copy files from s3-bucket to my ec2-instance
How can I resolve my error of Unable to locate credentials and what am I doing wrong or misunderstanding?
Per the comment:
Check the content of ~/.aws/credentials and ~/.aws/config
credentials
command:
nano ~/.aws/credentials
renders:
[training]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
[trainer]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
config
command:
nano ~/.aws/config
renders:
[profile training]
region = us-east-1
output = json
[profile trainer]
region = us-east-1
output = json
You've configured the profile with the name trainer. You didn't create a default profile, you created a named profile. You're getting the current error because the CLI tool is looking for a default profile, and you don't have one configured.
In order to use the trainer profile you either have to add --profile trainer to every aws command you run in the command line, or you need to set the AWS_PROFILE environment variable inside your command line environment:
export AWS_PROFILE=trainer
It looks like you also tagged this with nodejs, so I recommend going the environment variable route, which will also work with the nodeJS AWS SDK.
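The SDKs resolve named profiles the same way the CLI does. As a quick illustration with boto3 (Python is used here only because it is what the rest of this page leans on; the profile name trainer comes from the question), you can also select the profile explicitly instead of relying on a [default] entry:

import boto3

# Pick the named profile explicitly rather than depending on a default profile.
session = boto3.Session(profile_name="trainer")

# Sanity check: prints the ARN of the identity the profile resolves to.
print(session.client("sts").get_caller_identity()["Arn"])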
I am attempting to launch a NodeJS app on AWS; direct link to the guide here:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_express.html
When running the git commit -m "First express app" command, I always get this error (everything else works fine up until that point):
ERROR: This branch does not have a default environment. You must
either specify an environment by typing "eb deploy my-env-name" or set
a default environment by typing "eb use my-env-name".
If you have overcome a similar issue or can shed some light on it, that would be most welcome.
Here is my solution.
I had the environment in my config, but I had to call it explicitly.
Inside .elasticbeanstalk/config.yml is the following:
branch-defaults:
  default:
    environment: node-express-env
    group_suffix: null
global:
  application_name: my_app_name
  branch: null
  default_ec2_keyname: null
  default_platform: node.js
  default_region: us-east-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: eb-cli
  repository: null
  sc: git
  workspace_type: Application
As such when I modified my command from
eb deploy
to
eb deploy node-express-env
it worked.
There is more information on the AWS docs:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli-troubleshooting.html
Solution: Run eb list to see a list of available environments. Then run eb use env-name to use one of the available environments.
I'm deploying a Django based project on AWS Elastic Beanstalk.
I have been following the Amazon example, where I add my credentials (ACCESS_KEY/SECRET) to my app.config under the .ebextensions directory.
The same config file has:
container_commands:
  01_syncdb:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
    leader_only: true
The problem is that this forces me to store my credentials under version control, and I would like to avoid that.
I tried to remove the credentials and then add them with eb setenv, but the problem is that the two Django commands require these settings to be set on the environment.
I'm using the v3 cli:
eb create -db -c foo bar --profile foobar
where foobar is the name of the profile under ~/.aws/credentials, and where I want to keep my secret credentials.
What are the best security practices for AWS credentials when using EB?
One solution is to keep the AWS credentials, but create a policy that ONLY allows them to POST objects on the one bucket used for /static.
I ended up removing the collectstatic step from the config file and simply taking care of uploading static files on the build side.
After that, all credentials can be removed and all other boto commands will grab the credentials from the security role on the EC2 instance.
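As an illustration of that last point (the bucket name here is hypothetical), boto3 running on the instance picks up temporary credentials from the attached instance profile automatically, so nothing needs to be hard-coded or committed:

import boto3

# No access key or secret in code or config: on an EC2 instance with an
# instance profile attached, boto3 resolves temporary credentials from the
# instance metadata service.
s3 = boto3.client("s3")
s3.upload_file("static/style.css", "my-static-bucket", "static/style.css")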
I added a scripts.config file to .ebextensions at the root of my Node app deployed in Beanstalk. I did not see the tags on the EC2 instances in the console, nor did I see any mention of 01_add_tags in the Beanstalk logs. What did I do wrong, and how do I find out whether the commands in scripts.config were called at all?
The config file in .ebextensions is as follows ....
01_add_tags:
  command: ec2-create-tags $(ec2-metadata -i | cut -d ' ' -f2) --tag Environment=Production --tag Name=Proxy-Server --tag Application=something
  env:
    EC2_HOME: /opt/aws/apitools/ec2
    EC2_URL: https://ec2.ap-southeast-2.ama...
    JAVA_HOME: /usr/lib/jvm/jre
    PATH: /bin:/usr/bin:/opt/aws/bin/
Cheers,
Prabin
Amazon's answer to the problem (this worked for me):
You can utilise the ebextensions to execute certain commands on instance boot.
Supposing that you want to implement this on Linux-based containers, I have formulated a sample config file for you and attached it to this case.
Please follow the guidelines below:
In the AWS Management console, check the IAM Role/Instance profile used by beanstalk. By default it uses "aws-elasticbeanstalk-ec2-role". Add permissions for this role to create new tags (ec2:CreateTags).
If you do not have ".ebextensions" folder at the root of your application or the "WEB-INF" folder, then create the folder.
Modify the key value pairs in the config file. Multiple pairs are separated by a space.
A sample snippet is as below:
{
  "container_commands": {
    "01_add_tags": {
      "command": "aws ec2 create-tags --resources $(GET http://169.254.169.254/latest/meta-data/instance-id) --tags Key=ClientName,Value=testClient Key=NewTag,Value=new-value --region us-east-1"
    }
  }
}
Add the modified config file in the ".ebextensions" folder.
Upload this version to beanstalk. It should launch new instances and execute the config file.
Please give it some time, preferably until the instances pass the EC2 instance status checks. Refresh the page for the additional tags to be displayed.
Please note that we are using "container_commands" instead of the "commands" used in the blog.
Container Commands run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed. This is important as these commands have access to environment variables such as your AWS security credentials set by the instance-profile.
I would recommend going through the restrictions for tagging AWS resources mentioned at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions.
I would like to highlight that the maximum number of tags per resource is 10.
Also check the table of tagging support for each resource type. For example, tagging is currently not supported for ELB.
I had a similar problem when I tried to install libjpeg using the .ebextensions/foo.config file. I tried everything but was never able to find a good solution.
I was able to solve it, though, by setting up a completely new Elastic Beanstalk application and then deploying the same version to the new instance instead. When I did this, everything was installed perfectly and worked fine.
Check out my answers here:
https://stackoverflow.com/a/23109410/2335675
https://stackoverflow.com/a/23131959/2335675
Hope this fixes your issues as well.