awscli --api-name flag and usage for kinesisvideo - aws-cli

I am trying to use the AWS CLI to work with Kinesis Video Streams.
According to the documentation:
aws kinesisvideo get-data-endpoint --stream-name mytestStream
should return the data endpoint for my stream but I get:
usage: aws [options] <command> <subcommand> [<subcommand> ...]
[parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --api-name is required
When I search for --api-name, I can't seem to find any mention of this flag being required for kinesisvideo.

From the get-data-endpoint Command Reference:
get-data-endpoint
[--stream-name <value>]
[--stream-arn <value>]
--api-name <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
--api-name (string)
The name of the API action for which to get an endpoint.
Possible values:
PUT_MEDIA
GET_MEDIA
LIST_FRAGMENTS
GET_MEDIA_FOR_FRAGMENT_LIST
GET_HLS_STREAMING_SESSION_URL
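So --api-name is required even though it is missing from the documentation example. Using the stream from the question and one of the values listed above, the command becomes, for example:
aws kinesisvideo get-data-endpoint --stream-name mytestStream --api-name GET_MEDIA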

Related

Error trying to update Route53 with awscli and PowerShell

I am very new to awscli and programming in general. I am trying to update a Route53 record when I start up an instance using cmd and PowerShell. However, I keep getting this error when running it:
Error parsing parameter '--change-batch': Expected: '=', received: 'ÿ' for input:
ÿþ{
The command I am running is:
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch=file://C:\temp\config.json
I have tried just about all combinations like:
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch='C:\temp\config.json'
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch=file://C:\temp\config.json
aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch=C:\temp\config.json
But nothing seems to work. If I put the config.json file in my home directory on an Ubuntu VM and run the same command, it works, so I am pretty sure my problem is with the --change-batch part.
Any help would be very much appreciated, as I have been working on this for a couple of days.
I keep getting this when I try to run it:
Error parsing parameter '--change-batch': Expected: '=', received: 'ÿ' for input:
ÿþ{
$ aws route53 change-resource-record-sets --hosted-zone-id Z337IOSIXTUZ2M --change-batch file://C:\temp\config.json
The --change-batch parameter should take its value after a space rather than with =; the command above should work for you. Note that ÿþ at the start of the error is a UTF-16 byte order mark, so if the error persists, re-save config.json as UTF-8 or ASCII (Windows PowerShell's > redirection writes UTF-16 by default).
Ref: Route53 - Change Resource Record Set
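For reference, a minimal config.json for change-resource-record-sets looks something like the following (the record name, type, and value here are placeholders); make sure the file is saved as plain UTF-8 without a byte order mark:
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "host.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    }
  ]
}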

Setting EC2 Environment Variables with CodeDeploy, Parameter Store and PM2

I am deploying a Node.js app to EC2 using CodeDeploy. I am storing credentials in AWS Systems Manager Parameter Store, but cannot find a way to expose them to my application.
I am using PM2 for process management. I can successfully retrieve the parameter from the Parameter Store on the target machine, so there are no permission issues. For example:
aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value
...successfully returns the correct string. I attempt to use this in my applicationStart.sh CodeDeploy file and start the app:
#!/bin/bash
export LOCAL_CACHE_PATH=$(aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value)
pm2 start ecosystem.config.js --env production
LOCAL_CACHE_PATH returns undefined in my app when accessing process.env.LOCAL_CACHE_PATH.
So the environment variable is available within the applicationStart.sh script and yet undefined when the app starts from that script.
I am looking for a recommended approach to use environment variables from the Parameter Store with CodeDeploy.
I have read literally dozens of posts on similar topics but cannot resolve it. Very much appreciate any guidance.
The solution I am using is to write the environment variables to a .env file and use that in my app:
afterInstall.sh:
echo LOCAL_CACHE_PATH=$(aws ssm get-parameters --output text --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value) >> /home/ubuntu/foo/.env
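The same idea extends to several parameters; here is a rough sketch of an afterInstall.sh that rebuilds the .env file on every deploy (the parameter name and .env path are taken from the snippets above and are just examples):
#!/bin/bash
# Rebuild the app's .env file from Parameter Store on each deploy.
ENV_FILE=/home/ubuntu/foo/.env
: > "$ENV_FILE"   # truncate so values from previous deploys don't accumulate
for name in LOCAL_CACHE_PATH; do   # add further parameter names here
  value=$(aws ssm get-parameters --output text --region us-east-1 \
    --names "$name" --with-decryption --query 'Parameters[0].Value')
  echo "$name=$value" >> "$ENV_FILE"
done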

How to provide file content as an AWS CLI option value

I am trying to create an SFTP user with the help of AWS CLI in my Linux Box.
Below is the AWS CLI command I am running from my bash script (my SSH public key is in a file, and I am passing its contents into the CLI options via a variable):
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user#domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
Below is the ERROR which I am facing while executing above commands:
[developer#dev-lin demo]$ aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user#domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: developer#dev-lin.domain.com, XXXXXXXXXXAB3NzaC1yc2EAAAADAQABAAABAQCm2hI3Y33K1GVbdQV0lfkm/klZRJS7Kcz8+53e/BoIbVMFH0jqm1aejELDFgPnN7HvIZ/csYGzF/ssTx5lXVaHQh/qkYwfqQBg8WvXVB0Jmogj1hr6z5M8Qy/3oCx0fSmh6e/Ekfk8vHhiHQlGZV3o8a2AW5SkP8IH/OgT6Bq+SMuB+xtSciVBZqSLI0OgYtOZ0MyxBzfLau1Tyegu5lVFevZDVjecnIaS4l+v2VIQ/OgaZ40oAI3NuRZ2EdnLqEqFyLjasx4kcuwNzD5oaXAU6T9UsqKN2rVLMKrXXXXXXXXXXX
Am I missing some bash syntax while passing the option value?
UPDATE 30-March-2020
As per the suggestions in the comments below, I have added the IAM role ARN to the command, and am now facing a different issue than before.
CODE:
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body "$customer_name_pub_value" --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user#domain.com",Key=Service,Value="sftp"' --role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role"
ERROR
An error occurred (ValidationException) when calling the CreateUser operation: 1 validation error detected: Value 'script-test/power-archive-ireland/demo/' at 'homeDirectory' failed to satisfy constraint: Member must satisfy regular expression pattern: ^$|/.*
Yes, for the final bug, you should feed it as a list of objects:
--tags [{Key="Product", Value="demo"}, {Key="Environment", Value="dev"}, {Key="Contact", Value="dev.user#domain.com"}, {Key="Service", Value="sftp"}]
You may need to put "Key" and "Value" in quotes, or perhaps try key:value pairs (i.e. {"Product": "demo"}), but this should be the general syntax.
Below is the final working CLI command:
Changes:
Added the role ARN (thanks @user1394 for the suggestion)
Quoted "$customer_name_pub_value" so that the multi-word public key is passed as a single argument (this is what caused the original "Unknown options" error)
Resolved the biggest issue by putting a leading / on the --home-directory value (bad AWS documentation at https://docs.aws.amazon.com/cli/latest/reference/transfer/create-user.html and its out-dated regex ^$|/.*)
Switched the tags to JSON syntax to fix the final bug (not all of the tags were attached by the old shorthand command)
#!/bin/bash
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user \
--user-name $customer_name \
--server-id s-aaabbbccc \
--role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role" \
--ssh-public-key-body "$customer_name_pub_value" \
--home-directory /script-test/power-archive-ireland/$customer_name \
--tags '[
{"Key": "Product", "Value": "demo"},
{"Key": "Environment", "Value": "dev"},
{"Key": "Contact", "Value": "dev.user#domain.com"},
{"Key": "Service", "Value": "sftp"}
]'
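If you want to double-check the result (same server ID and user name as above), describe-user shows the home directory and the tags that actually got attached:
aws transfer describe-user --server-id s-aaabbbccc --user-name "$customer_name"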

aws cli get media command

I am trying to use the AWS CLI to get media from an active Kinesis video stream.
The command I am trying is:
aws kinesis-video-media get-media --stream-name testStream --start-selector '{ "StartSelectorType":"NOW" }' --endpoint-url 'https://<code>.kinesisvideo.ap-northeast-1.amazonaws.com'
but I get:
usage: aws [options] <command> <subcommand> [<subcommand> ...]
[parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: too few arguments
According to the documentation, StartSelectorType is the only parameter I really need, isn't it?
Thanks
The get-media Command Reference says that you also need to provide an outfile:
get-media
[--stream-name <value>]
[--stream-arn <value>]
--start-selector <value>
outfile <value>
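So, for example, the command from the question with an output file added as the trailing positional argument (the filename is just a placeholder):
aws kinesis-video-media get-media --stream-name testStream --start-selector '{ "StartSelectorType":"NOW" }' --endpoint-url 'https://<code>.kinesisvideo.ap-northeast-1.amazonaws.com' output.mkv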
See also: Boto3 kinesis video stream: Error when calling the GetMedia operation

AWS Cloudsearch CLI with --query-options throwing error

I have a query which I am passing via the command line:
aws cloudsearchdomain --endpoint-url http://myendpt search --search-query value --return _all_fields --cursor initial --size 100 --query-options {"defaultOperator":"or","fields":["id"],"operators":["and","escape","fuzzy","near","not","or","phrase","precedence","prefix","whitespace"]} --query-parser simple --query-parser simple --profile myname
It responds with:
Unknown options: operators:[and, escape, fuzzy, near, not, or, phrase, precedence, prefix, whitespace], fields:[id]
I assure you that the id field exists in AWS CloudSearch. I reverse-engineered the query from the online CloudSearch query tester into the AWS CLI.
Please help.
Update:
This problem has been resolved in the updated aws-cli/1.8.4. If you are an Ubuntu/Linux user like me, run:
sudo pip uninstall awscli
sudo pip install awscli
aws --version
The solution for my Ruby implementation of the aws-sdk (version > 2):
client = Aws::CloudSearchDomain::Client.new(endpoint: 'http://yoururl')
resp = client.search({
  cursor: "initial",
  facet: "{\"facet_name_!\":{},\"mentions\":{}}",
  query: "#{place_a_value_here}",
  query_options: "{\"defaultOperator\":\"or\",\"fields\":[\"yourfield\"],\"operators\":[\"and\",\"escape\",\"fuzzy\",\"near\",\"not\",\"or\",\"phrase\",\"precedence\",\"prefix\",\"whitespace\"]}",
  query_parser: "simple",
  return: "_all_fields",
  size: 1000,
  highlight: "{\"text\":{}}",
})
Summarizing the asker's solution from the comments: you have to double-quote your JSON param, and then either single-quote (') or escape the double quotes (\") around the JSON keys/values within the param.
For example, both of these are valid:
--query-options "{'defaultOperator':'and','fields':['name']}"
or
--query-options "{\"defaultOperator\":\"and\",\"fields\":[\"name\"]}"
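Applied to the original command, that would look something like this (with the --query-options value quoted and the duplicated --query-parser removed):
aws cloudsearchdomain --endpoint-url http://myendpt search --search-query value --return _all_fields --cursor initial --size 100 --query-options "{\"defaultOperator\":\"or\",\"fields\":[\"id\"],\"operators\":[\"and\",\"escape\",\"fuzzy\",\"near\",\"not\",\"or\",\"phrase\",\"precedence\",\"prefix\",\"whitespace\"]}" --query-parser simple --profile myname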
