Error trying to update Route53 with awscli and PowerShell - aws-cli

I am very new to awscli and programming in general. I am trying to update a Route53 record when I start up an instance, using cmd and PowerShell. However, I keep getting this error when running it:
Error parsing parameter '--change-batch': Expected: '=', received: 'ÿ' for input:
ÿþ{
The command I am running is:
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch=file://C:\temp\config.json
I have tried just about all combinations like:
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch='C:\temp\config.json'
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch=file://C:\temp\config.json
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch=C:\temp\config.json
But nothing seems to work. If I put the config.json file in my home directory on an Ubuntu VM and run the same command, it works, so I am pretty sure my problem is with the --change-batch part.
Any help would be very much appreciated, as I have been working on this for a couple of days.

$ aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch file://C:\temp\config.json
The parameter --change-batch doesn't accept a value given with =. The above command should work for you.
Ref: Route53 - Change Resource Record Set
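As an aside, the ÿþ prefix in the error message strongly suggests a file-encoding issue: those are the bytes FF FE, the UTF-16 little-endian byte order mark, which Windows PowerShell's Out-File and > redirection write by default. A minimal Python sketch (not part of the fix itself, just to illustrate what the CLI is seeing):

```python
# Windows PowerShell's Out-File and ">" redirection default to UTF-16 LE
# ("Unicode"), which prefixes the file with the byte order mark FF FE.
data = "{".encode("utf-16")   # simulate a config.json written by Out-File
print(data[:2])               # b'\xff\xfe' -- the BOM

# Read back as a single-byte encoding, those two bytes render as 'ÿþ' --
# exactly the characters in the CLI error message.
garbled = data.decode("latin-1")
print(garbled)                # 'ÿþ{' plus a stray NUL character

# Re-saving the file as UTF-8 without a BOM (e.g. in PowerShell:
# Set-Content -Encoding utf8, or creating it as plain ASCII) avoids this.
```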

Related

Why am I having AWS credential errors in my AWS terminal setup?

Wanted to learn AWS and found the tutorial Build a Serverless Web Application. In my research the closest Q&A I could find for my issue was Unable to locate credentials aws cli.
My process has been:
Created a repo in Github
Navigated to IAM and created a user trainer. The tutorial didn't specify policies, so I chose AdministratorAccess. Per the instructions, went to Security credentials and Create access key. Downloaded the file locally.
Went to Configuration basics and did Importing a key pair via .CSV file with the command:
aws configure import --csv file:///Users/path/to/file/aws-training.csv
params:
User name: trainer
Access key ID: ****57
Secret access key: *****1b
but then found that the file didn't contain a region or output format, so I did:
aws configure --profile trainer
and re-did all values based on the CSV (Quick Setup):
AWS Access Key ID: ****57
AWS Secret Access Key: *****1b
Default region name: us-east-1
Default output format: json
Made sure to reboot my terminal, and locally in a directory I ran the command:
aws s3 cp s3://wildrydes-us-east-1/WebApplication/1_StaticWebHosting/website ./ --recursive
The terminal has a delay then throws:
fatal error: Unable to locate credentials
Research
Q&As I've read through to try and see if I could diagnose the problem:
aws cli with shell script: upload failed: Unable to locate credentials
Bash with AWS CLI - unable to locate credentials
Unable to locate credentials aws cli
Unable to locate credentials in boto3 AWS
Get "fatal error: Unable to locate credentials" when I'm copying file from S3 to EC2 using aws cli
Unable to locate credentials when trying to copy files from s3-bucket to my ec2-instance
How can I resolve my error of Unable to locate credentials and what am I doing wrong or misunderstanding?
Per the comment:
Check the content of ~/.aws/credentials and ~/.aws/config
credentials
command:
nano ~/.aws/credentials
renders:
[training]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
[trainer]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
config
command:
nano ~/.aws/config
renders:
[profile training]
region = us-east-1
output = json
[profile trainer]
region = us-east-1
output = json
You've configured the profile with the name trainer. You didn't create a default profile, you created a named profile. You're getting the current error because the CLI tool is looking for a default profile, and you don't have one configured.
In order to use the trainer profile you either have to add --profile trainer to every aws command you run in the command line, or you need to set the AWS_PROFILE environment variable inside your command line environment:
export AWS_PROFILE=trainer
It looks like you also tagged this with nodejs, so I recommend going the environment variable route, which will also work with the nodeJS AWS SDK.
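A third option, if you want the CLI to work without any flags or environment variables, is to add a [default] section alongside the named profile. A sketch of what the two files could then contain (the key values here are placeholders for your own):

```
# ~/.aws/credentials
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>

[trainer]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>

# ~/.aws/config
[default]
region = us-east-1
output = json

[profile trainer]
region = us-east-1
output = json
```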

How do I solve "ResourceInitializationError" error for Task execution in ECS on Fargate?

What I want to do
I want to create Node.js (built with Nest.js) server in the infrastructure as follows:
infra-structure-image
GitHub repo is here.
Notice:
ECS is settled in private subnet.
I want to use private link to connect with AWS services (ECR and S3 in my case) rather than NAT gateway in public subnet.
Infrastructure is built from CloudFormation stack in AWS CDK Toolkit.
Node.js server is a simple app that responds with 'Hello World!'.
Current behavior
When I deploy the AWS CloudFormation stack with cdk deploy, it gets stuck during ECS service creation in the CREATE_IN_PROGRESS state. I can see ECS task execution error logs in the ECS management console as follows:
STOPPED (ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr.ap-northeast-1.amazonaws.com/: dial tcp 99.77.62.61:443: i/o timeout)
If I don't delete the stack or set the minimum number of tasks to 0, the ECS service continuously tries to execute tasks for hours and finally gets a timeout error.
I have already checked some points based on this official article.
Create VPC endpoints (com.amazonaws.region.ecr.dkr, com.amazonaws.region.ecr.api, S3)
Configure VPC endpoints (security group, subnets to settle in, IAM policy)
Add permissions to ECS task execution role so that ECS can pull image from ECR
Check if the image exists in ECR
And I have checked the 'hello world' response with this Docker image on my local machine.
Reproduction Steps
A minimal GitHub repo is here.
$ git clone https://github.com/Fanta335/cdk-ecs-nest-app
$ cd cdk-ecs-nest-app
$ npm install
The AWS CDK Toolkit is used in this project, so you need to run npm install -g aws-cdk if you have not installed it on your local machine.
And if you have not set a default IAM user configuration in the AWS CLI, you need to run aws configure in order to pass environment variables to the CloudFormation stack.
$ cdk deploy
Then the deployment should be stuck.
Versions
MacOS Monterey 12.6
AWS CDK cli 2.43.1 (build c1ebb85)
AWS cli aws-cli/2.7.28 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off
Docker version 20.10.17, build 100c701
Nest cli 9.1.3
The problem was that DNS resolution had not been enabled on the ECR VPC endpoints. I should have set privateDnsEnabled: true manually on the InterfaceVpcEndpoint instances in the cdk-ecs-nest-app-stack.ts file as follows:
const ECSPrivateLinkAPI = new ec2.InterfaceVpcEndpoint(this, "ECSPrivateLinkAPI", {
  vpc,
  service: new ec2.InterfaceVpcEndpointService(`com.amazonaws.${REGION}.ecr.api`),
  securityGroups: [securityGroupPrivateLink],
  privateDnsEnabled: true, // HERE
});
const ECSPrivateLinkDKR = new ec2.InterfaceVpcEndpoint(this, "ECSPrivateLinkDKR", {
  vpc,
  service: new ec2.InterfaceVpcEndpointService(`com.amazonaws.${REGION}.ecr.dkr`),
  securityGroups: [securityGroupPrivateLink],
  privateDnsEnabled: true, // HERE
});
According to the CDK docs, the default value of privateDnsEnabled is defined by the service which uses this VPC endpoint.
privateDnsEnabled?
Type: boolean (optional, default: set by the instance of IInterfaceVpcEndpointService, or true if not defined by the instance of IInterfaceVpcEndpointService)
I didn't check the default privateDnsEnabled values of com.amazonaws.${REGION}.ecr.api and com.amazonaws.${REGION}.ecr.dkr, but we have to set them to true manually in the CDK Toolkit.

Starting an AWS EC2 instance via Python

I have been trying to start an already-launched EC2 instance via Python. I have configured the AWS CLI from the command prompt using the command below:
aws configure
aws_access_key_id = MY_ACCESS_KEY
aws_secret_access_key = MY_SECRET_KEY
region=us-west-2b
output=Table
Now I used the following code in the Spyder IDE from Anaconda:
import boto3
instanceID = 'i-XXXXXXXXXXad'
ec2 = boto3.client('ec2', region_name='us-west-2b')
ec2.start_instances(InstanceIds=['i-XXXXXXXXXad'])
This gives the following error
EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-west-2b.amazonaws.com/"
I have been trying to debug the error for hours now; any kind of help will be useful. Also, I have a .pem as well as a .ppk file created to start the instance via PuTTY, and the .ppk file also has a passphrase; do I need to do any additional steps for this?
region=us-west-2b
is not a region, it is an availability zone. Try:
region=us-west-2
You can test by:
$ host ec2.us-west-2b.amazonaws.com
Host ec2.us-west-2b.amazonaws.com not found: 3(NXDOMAIN)
$ host ec2.us-west-2.amazonaws.com
ec2.us-west-2.amazonaws.com has address 54.240.251.131
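The fix generalizes: a standard availability zone name is just the region name plus a trailing zone letter. A small helper (my own naming, not part of boto3) that normalizes a zone name before creating the client:

```python
import re

def az_to_region(az: str) -> str:
    """Strip the trailing zone letter from an availability zone name.

    'us-west-2b' -> 'us-west-2'. This is a convenience sketch: it covers
    standard zones, but not special names such as Local Zones
    (e.g. 'us-west-2-lax-1a').
    """
    return re.sub(r"[a-z]$", "", az)

print(az_to_region("us-west-2b"))  # us-west-2

# boto3 expects the region, not the zone:
#   ec2 = boto3.client("ec2", region_name=az_to_region("us-west-2b"))
```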

Create DNS in lightsail entry using aws cli

Does anyone have an example of how to create a dns entry, for a lightsail hosted domain, using the aws cli?
I haven't been able to find an example of the format for the --domain-entry parameter of the create-domain-entry sub-command.
I made use of Mike's syntax to create a TXT record for DMARC. (Thank you Mike!)
I'd been trying to create it in the UI. I kept getting this error: Input error: Target should be enclosed in quotation marks: ""v=DMARC1; p=none; rua="mailto:dmarc@YOURDOMAINNAME.com"".
After trying several times with different recommended quote configurations, I bailed on the UI, and used Mike's syntax in a bash script. In my case, I also removed the extra quotes I had around the email address inside the rua portion. This may have been the source of my errors in the UI.
Here's what successfully created the DMARC record for me:
#!/usr/bin/bash
aws lightsail --region us-east-1 \
    create-domain-entry \
    --domain-name 'YOURDOMAINNAME.com' \
    --domain-entry '{"name":"_dmarc.YOURDOMAINNAME.com","target":"\"v=DMARC1; p=none; rua=mailto:dmarcreports@YOURDOMAINNAME.com\"","isAlias":false,"type":"TXT"}'
Of course, replace YOURDOMAINNAME with your domain name, and the mailto name with the email address at which you want to receive DMARC reports.
The command below will create an A record using the CLI
aws lightsail create-domain-entry \
    --domain-name mikegcoleman.com \
    --region us-east-1 \
    --domain-entry name=blog.mikegcoleman.com,target=52.40.235.176,isAlias=false,type=A
Note that you need to specify the region as all domain actions with the Lightsail CLI need to be performed against us-east-1
For a TXT record the following should work. I think there is some funkiness with the CLI in that it doesn't like the inline domain entry and needs the JSON to do the TXT record, so it's formatted differently from above:
aws lightsail --region us-east-1 \
    create-domain-entry \
    --domain-name 'mikegcoleman.com' \
    --domain-entry '{"name":"test.mikegcoleman.com","target":"\"response\"","isAlias":false,"type":"TXT"}'
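If the inline escaping gets hard to reason about, one option is to build the --domain-entry JSON programmatically and pass the result to the CLI. A Python sketch (record values taken from the example above) showing why the doubled quotes appear: the TXT target itself must be wrapped in quotes inside the JSON string:

```python
import json

# A TXT record's target must itself be wrapped in double quotes, so the
# resulting JSON string contains escaped quotes around the value.
entry = {
    "name": "test.mikegcoleman.com",
    "target": '"response"',   # inner quotes are part of the target value
    "isAlias": False,
    "type": "TXT",
}
payload = json.dumps(entry)
print(payload)
# {"name": "test.mikegcoleman.com", "target": "\"response\"", "isAlias": false, "type": "TXT"}

# The shell invocation would then be (sketch):
#   aws lightsail create-domain-entry --region us-east-1 \
#       --domain-name mikegcoleman.com --domain-entry "$payload"
```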
Yes!
The answer from @binarybelle to create a BASH script and add the command as the JSON version worked for me too, in order to add a TXT entry for DKIM.
The extra trick with a long DKIM entry is to split the text key into two parts, hence lots of escaping of the extra double quotes :-)
#!/bin/bash
/usr/local/bin/aws lightsail --region us-east-1 \
    create-domain-entry --domain-name 'mydomain.co.uk' \
    --domain-entry '{"name":"default._domainkey.mydomain.co.uk","target":"\"v=DKIM1; h=sha256; k=rsa; \" \"p=MIIBIjxxxxxxxxxxxiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAurVgfLc8xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx9cRHBTEOIR4lmIgatpit\" \"t+v7oQzngmfKpBNoTeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxQIDAQAB\"","isAlias":false,"type":"TXT"}'

AWS EC2 spot instance --block-duration-minutes not working

I was trying to request a spot instance using the CLI. I used the command below to request a spot instance:
aws ec2 request-spot-instances --spot-price "0.050" --instance-count 1 --block-duration-minutes 120 --type "one-time" --launch-specification file://Spot_P2_request.json --query 'SpotInstanceRequests[*].SpotInstanceRequestId' --output text
I get the error below:
Unknown options: --block-duration-minutes, 120
Is --block-duration-minutes not supported by the CLI?
