Create DNS entry in Lightsail using the AWS CLI

Does anyone have an example of how to create a DNS entry for a Lightsail-hosted domain using the AWS CLI?
I haven't been able to find an example of the format for the --domain-entry parameter of the create-domain-entry subcommand.

I made use of Mike's syntax to create a TXT record for DMARC. (Thank you Mike!)
I'd been trying to create it in the UI. I kept getting this error: Input error: Target should be enclosed in quotation marks: ""v=DMARC1; p=none; rua="mailto:dmarc@YOURDOMAINNAME.com"".
After trying several times with different recommended quote configurations, I bailed on the UI and used Mike's syntax in a bash script. In my case, I also removed the extra quotes I had around the email address inside the rua portion; this may have been the source of my errors in the UI.
Here's what successfully created the DMARC record for me:
#!/usr/bin/bash
aws lightsail --region us-east-1 \
create-domain-entry \
--domain-name 'YOURDOMAINNAME.com' \
--domain-entry '{"name":"_dmarc.YOURDOMAINNAME.com","target":"\"v=DMARC1; p=none; rua=mailto:dmarcreports#YOURDOMAINNAME.com\"","isAlias":false,"type":"TXT"}'
Of course, replace YOURDOMAINNAME with your domain name, and the mailto name with the email address at which you want to receive DMARC reports.
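As a quick sanity check, you can list the domain's TXT entries afterwards (a minimal sketch, assuming the same region and domain as above; the --query filter is just one way to narrow the output):
#!/usr/bin/bash
# Sanity check: list the TXT entries for the domain and confirm the
# _dmarc record is present (YOURDOMAINNAME is a placeholder as above).
aws lightsail get-domain --region us-east-1 \
--domain-name 'YOURDOMAINNAME.com' \
--query "domain.domainEntries[?type=='TXT']"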

The command below will create an A record using the CLI:
aws lightsail create-domain-entry \
--domain-name mikegcoleman.com \
--region us-east-1 --domain-entry \
name=blog.mikegcoleman.com,target=52.40.235.176,isAlias=false,type=A
Note that you need to specify the region, as all domain actions with the Lightsail CLI need to be performed against us-east-1.
For a TXT record, the following should work. I think there is some funkiness with the CLI where it doesn't like the inline domain entry and needs the JSON form for a TXT record, so it's formatted differently from above:
aws lightsail --region us-east-1 \
create-domain-entry \
--domain-name 'mikegcoleman.com' \
--domain-entry '{"name":"test.mikegcoleman.com","target":"\"response\"","isAlias":false,"type":"TXT"}'

Yes!
The answer from @binarybelle, creating a bash script with the command in its JSON form, worked for me too for adding a TXT entry for DKIM.
The extra trick with a long DKIM entry is to split the text key into two parts, hence lots of escaping of the extra double quotes :-)
#!/bin/bash
/usr/local/bin/aws lightsail --region us-east-1 \
create-domain-entry --domain-name 'mydomain.co.uk' \
--domain-entry '{"name":"default._domainkey.mydomain.co.uk","target":"\"v=DKIM1; h=sha256; k=rsa; \" \"p=MIIBIjxxxxxxxxxxxiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAurVgfLc8xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx9cRHBTEOIR4lmIgatpit\" \"t+v7oQzngmfKpBNoTeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxQIDAQAB\"","isAlias":false,"type":"TXT"}'

Related

Trouble creating Velero storage location with storage access key

I'm trying to use Velero to back up an AKS cluster but for some reason I'm unable to set the backup location in Velero.
I'm getting the error below.
I can confirm the credentials-velero file I have contains the correct storage access key, and the secret (cloud-credentials) reflects it as well.
Kind of at a loss as to why it's throwing this error. I've never used Velero before.
EDIT:
So I used the following commands to get the credential file:
Obtain the Azure Storage account access key:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name storsmaxdv --query "[?keyName == 'key1'].value" -o tsv`
Then I create the credential file:
cat << EOF > ./credentials-velero
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZURE_STORAGE_ACCOUNT_ACCESS_KEY}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
Then my install command is:
./velero install \
--provider azure \
--plugins velero/velero-plugin-for-microsoft-azure:v1.3.0 \
--bucket velero \
--secret-file ./credentials-velero \
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
--use-volume-snapshots=false
I can verify Velero created a secret called cloud-credentials, and when I decode it with base64 I'm able to see what looks like the contents of my credentials-velero file, for example:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
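For reference, the decode step can be done like this (assuming Velero's default namespace and its usual "cloud" secret key; both are assumptions on my part):
# Decode the secret Velero created; the credentials normally live
# under the "cloud" key in the "velero" namespace.
kubectl -n velero get secret cloud-credentials \
-o jsonpath='{.data.cloud}' | base64 --decode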
Turns out it was the brackets in the install command that were causing the issue:
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
Removing the brackets gives this:
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY,subscriptionId=numbersandlettersandstuff \
and now it works.
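A quick way to confirm the fix (a sketch, assuming the velero CLI is on PATH and the default namespace):
# The backup location should report phase "Available" once the
# storage account config is accepted.
velero backup-location get
kubectl -n velero get backupstoragelocations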
Not sure how your cred file is formatted or which command you are running.
Please try the file below and update the command as needed.
Example command:
./velero install --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.0.1 --bucket velero-cluster-backups --backup-location-config resourceGroup=STORAGE-ACCOUNT-RESOURCEGROUP,storageAccount=STORAGEACCOUNT --use-volume-snapshots=false --secret-file ./credentials-velero
Cred file:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
I would suggest checking out the secret that gets created in the K8s cluster and checking the formatting of that secret and its data.
Read more here: https://github.com/vmware-tanzu/velero/issues/2272
Check this plugin: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure
1: Create a service principal for Velero in Azure AD.
You can create the credential file in the format below:
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_SUBSCRIPTION_ID=*************
AZURE_TENANT_ID=**************
AZURE_CLIENT_ID=********
AZURE_CLIENT_SECRET=**********
AZURE_RESOURCE_GROUP=(name of your cluster resource group where your PVCs reside)
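For step 1, a hedged sketch of creating that service principal with the az CLI (the display name, role, and scope are placeholders; adjust to your subscription):
# Create a service principal for Velero and capture the values the
# credential file above needs.
AZURE_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
AZURE_TENANT_ID=$(az account show --query tenantId -o tsv)
AZURE_CLIENT_SECRET=$(az ad sp create-for-rbac --name "velero" \
--role "Contributor" \
--scopes "/subscriptions/$AZURE_SUBSCRIPTION_ID" \
--query password -o tsv)
AZURE_CLIENT_ID=$(az ad sp list --display-name "velero" \
--query '[0].appId' -o tsv)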

Setting EC2 Environment Variables with CodeDeploy, Parameter Store and PM2

I am deploying a Node.js app to EC2 using CodeDeploy. I am storing credentials in AWS Systems Manager Parameter Store, but I cannot find a method to expose these to my application.
I am using PM2 for process management. I can successfully retrieve the parameter from the Parameter Store on the target machine, so there are no permission issues. For example:
aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value
...successfully returns the correct string. I attempt to use this in my applicationStart.sh CodeDeploy file and start the app:
#!/bin/bash
export LOCAL_CACHE_PATH=$(aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value)
pm2 start ecosystem.config.js --env production
LOCAL_CACHE_PATH returns undefined in my app when accessing process.env.LOCAL_CACHE_PATH.
So the environment variable is available within the applicationStart.sh script and yet undefined when the app starts from that script.
I am looking for a recommended approach to use environment variables from the Parameter Store with CodeDeploy.
I have read literally dozens of posts on similar topics but cannot resolve it. Very much appreciate any guidance.
The solution I am using is to write the environment variables to a .env file and use that in my app:
afterInstall.sh:
echo LOCAL_CACHE_PATH=$(aws ssm get-parameters --output text --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value) >> /home/ubuntu/foo/.env
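A variation on the same idea (my sketch; paths and file names are taken from the question): source the .env file in applicationStart.sh so every variable is exported before pm2 launches, and ask pm2 to refresh its environment in case the daemon is already running with stale values:
#!/bin/bash
# applicationStart.sh (sketch): auto-export everything assigned while
# sourcing the .env file, then start the app under pm2. --update-env
# asks pm2 to refresh the environment if the process already exists.
set -a
source /home/ubuntu/foo/.env
set +a
pm2 start ecosystem.config.js --env production --update-env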

Azure VM extension update failure

I tried to add a custom script to a VM through extensions. I have observed that when the VM is created, a Microsoft.Azure.Extensions.CustomScript extension is created with the name "cse-agent" by default. So I tried to update the extension, encoding the file into the script property:
az vm extension set \
--resource-group test_RG \
--vm-name aks-agentpool \
--name CustomScript \
--subscription ${SUBSCRIPTION_ID} \
--publisher Microsoft.Azure.Extensions \
--settings '{"script": "'"$value"'"}'
$value represents the script file encoded in base64.
Doing that gives me an error:
Deployment failed. Correlation ID: xxxx-xxxx-xxx-xxxxx.
VM has reported a failure when processing extension 'cse-agent'.
Error message: "Enable failed: failed to get configuration: invalid configuration:
'commandToExecute' and 'script' were both specified, but only one is validate at a time"
From the documentation, when the script attribute is present,
there is no need for commandToExecute. As you can see above, I haven't specified commandToExecute; it's somehow taking it from the previous extension. Is there a way to update it without deleting it? Also, it would be interesting to know what impact deleting the cse-agent extension will have.
FYI: I have tried deleting the 'cse-agent' extension from the VM and adding my extension. It worked.
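For reference, the delete-then-recreate route looks like this (a sketch using the names from the question; see the warning in the answer below before removing cse-agent on AKS nodes):
# Remove the conflicting default extension, then re-run the
# `az vm extension set` command above.
az vm extension delete \
--resource-group test_RG \
--vm-name aks-agentpool \
--name cse-agent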
The cse-agent VM extension is crucial: it manages all of the post-install configuration needed for the nodes to be considered valid Kubernetes nodes. Removing this CSE will break the VMs and render your cluster inoperable.
If you are interested in applying changes to nodes in an existing cluster, you could, while not officially supported, leverage the following project:
https://github.com/juan-lee/knode
This allows you to configure the nodes using a DaemonSet, which helps when your node pools have the auto-scaling feature enabled.
For simple node alterations of the filesystem, a privileged pod with a host path will also work:
https://dev.to/dannypsnl/privileged-pod-debug-kubernetes-node-5129
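As a related quick option (a sketch; requires a reasonably recent kubectl, and the node name below is a placeholder), kubectl debug can drop you into a pod with the node's filesystem mounted:
# Launch a debugging pod on the node; the host filesystem is mounted
# at /host inside the pod.
kubectl debug node/aks-agentpool-12345678-0 -it --image=ubuntu
# then, inside the pod:
chroot /host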

How to provide file content as an AWS CLI option value

I am trying to create an SFTP user with the help of the AWS CLI on my Linux box.
Below is the AWS CLI command I am running from my bash script (my SSH public key is in a file; through a variable I pass it into the CLI options section):
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
Below is the ERROR I am facing while executing the above command:
[developer@dev-lin demo]$ aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: developer@dev-lin.domain.com, XXXXXXXXXXAB3NzaC1yc2EAAAADAQABAAABAQCm2hI3Y33K1GVbdQV0lfkm/klZRJS7Kcz8+53e/BoIbVMFH0jqm1aejELDFgPnN7HvIZ/csYGzF/ssTx5lXVaHQh/qkYwfqQBg8WvXVB0Jmogj1hr6z5M8Qy/3oCx0fSmh6e/Ekfk8vHhiHQlGZV3o8a2AW5SkP8IH/OgT6Bq+SMuB+xtSciVBZqSLI0OgYtOZ0MyxBzfLau1Tyegu5lVFevZDVjecnIaS4l+v2VIQ/OgaZ40oAI3NuRZ2EdnLqEqFyLjasx4kcuwNzD5oaXAU6T9UsqKN2rVLMKrXXXXXXXXXXX
Am I missing some bash syntax while passing the option value?
UPDATE 30-March-2020
As per suggestions in the comments below, I have added the AWS role ARN to the command (and quoted the public-key variable so bash doesn't word-split it); now facing a different issue than the previous one:
CODE:
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body "$customer_name_pub_value" --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user@domain.com",Key=Service,Value="sftp"' --role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role"
ERROR:
An error occurred (ValidationException) when calling the CreateUser operation: 1 validation error detected: Value 'script-test/power-archive-ireland/demo/' at 'homeDirectory' failed to satisfy constraint: Member must satisfy regular expression pattern: ^$|/.*
Yes, for the final bug, you should feed it as a list of objects:
--tags [{Key="Product",Value="demo"},{Key="Environment",Value="dev"},{Key="Contact",Value="dev.user@domain.com"},{Key="Service",Value="sftp"}]
You may need to put "Key" and "Value" in quotes, or perhaps try key:value pairs (i.e. {"Product": "demo"}), but this should be the general syntax.
Below is the final working CLI command.
Changes:
Added the role ARN (thanks @user1394 for the suggestion)
Biggest issue resolved by placing a leading / on the --home-directory value (bad AWS documentation (https://docs.aws.amazon.com/cli/latest/reference/transfer/create-user.html) and its out-dated regex ^$|/.*)
Transformed the broken CLI call into its JSON-based form to fix the final bug (not all the tags were being attached with the old command)
#!/bin/bash
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user \
--user-name $customer_name \
--server-id s-aaabbbccc \
--role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role" \
--ssh-public-key-body "$customer_name_pub_value" \
--home-directory /script-test/power-archive-ireland/$customer_name \
--tags '[
{"Key": "Product", "Value": "demo"},
{"Key": "Environment", "Value": "dev"},
{"Key": "Contact", "Value": "dev.user#domain.com"},
{"Key": "Service", "Value": "sftp"}
]'
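To double-check what actually got created (a sketch, reusing the question's placeholder server ID and user name):
# Confirm the home directory and tags on the new SFTP user.
aws transfer describe-user \
--server-id s-aaabbbccc \
--user-name demo \
--query 'User.{HomeDirectory: HomeDirectory, Tags: Tags}'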

AWS CloudSearch CLI with --query-options throwing error

I have a query which I am passing via the command line:
aws cloudsearchdomain --endpoint-url http://myendpt search --search-query value --return _all_fields --cursor initial --size 100 --query-options {"defaultOperator":"or","fields":["id"],"operators":["and","escape","fuzzy","near","not","or","phrase","precedence","prefix","whitespace"]} --query-parser simple --profile myname
It responds with:
Unknown options: operators:[and, escape, fuzzy, near, not, or, phrase, precedence, prefix, whitespace], fields:[id]
I assure you that the id field exists in AWS CloudSearch. I reverse-engineered the query from the online CloudSearch query tester into the AWS CLI.
Please help.
Update:
This problem has been resolved in the updated aws-cli/1.8.4. If you are an Ubuntu/Linux user like me, please do:
sudo pip uninstall awscli
sudo pip install awscli
aws --version
The solution for my Ruby implementation of the aws-sdk, ver > 2:
client = Aws::CloudSearchDomain::Client.new(endpoint:'http://yoururl')
resp = client.search({
cursor:"initial",
facet:"{\"facet_name_!\":{},\"mentions\":{}}",
query:"#{place_a_value_here}",
query_options:"{\"defaultOperator\":\"or\",\"fields\":[\"yourfield\"],\"operators\":[\"and\",\"escape\",\"fuzzy\",\"near\",\"not\",\"or\",\"phrase\",\"precedence\",\"prefix\",\"whitespace\"]}",
query_parser:"simple",
return:"_all_fields",
size:1000,
highlight:"{\"text\":{}}",
})
Summarizing the asker's solution from the comments: the issue is that you have to double-quote your JSON param, and then either single-quote (') or escape the double quotes (\") around the JSON keys/values within your param.
For example, both of these are valid:
--query-options "{'defaultOperator':'and','fields':['name']}"
or
--query-options "{\"defaultOperator\":\"and\",\"fields\":[\"name\"]}"
