I am trying to use the AWS CLI to get media from an active Kinesis video stream.
The command I am trying is:
aws kinesis-video-media get-media --stream-name testStream --start-selector '{ "StartSelectorType":"NOW" }' --endpoint-url 'https://<code>.kinesisvideo.ap-northeast-1.amazonaws.com'
but I get:
usage: aws [options] <command> <subcommand> [<subcommand> ...]
[parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: too few arguments
According to the docs, StartSelectorType is the only flag I really need?
Thanks
The get-media Command Reference says that you also need to provide an outfile:
get-media
[--stream-name <value>]
[--stream-arn <value>]
--start-selector <value>
outfile <value>
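So the command just needs an output file as the final positional argument. For example (stream name and endpoint placeholders taken from the question; the output filename here is arbitrary):
aws kinesis-video-media get-media \
    --stream-name testStream \
    --start-selector '{"StartSelectorType":"NOW"}' \
    --endpoint-url 'https://<code>.kinesisvideo.ap-northeast-1.amazonaws.com' \
    output.mkv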
See also: Boto3 kinesis video stream: Error when calling the GetMedia operation
I'm doing pre-processing tasks using EC2.
I execute shell commands using the userdata variable. The last line of my userdata has sudo shutdown now -h, so the instance terminates automatically once the pre-processing task completes.
This is what my code looks like:
import boto3
userdata = '''#!/bin/bash
pip3 install boto3 pandas scikit-learn
aws s3 cp s3://.../main.py .
python3 main.py
sudo shutdown now -h
'''
def launch_ec2():
    ec2 = boto3.resource('ec2',
                         aws_access_key_id="",
                         aws_secret_access_key="",
                         region_name='us-east-1')
    instances = ec2.create_instances(
        ImageId='ami-0c02fb55956c7d316',
        MinCount=1,
        MaxCount=1,
        KeyName='',
        InstanceInitiatedShutdownBehavior='terminate',
        IamInstanceProfile={'Name': 'S3fullaccess'},
        InstanceType='m6i.4xlarge',
        UserData=userdata,
        InstanceMarketOptions={
            'MarketType': 'spot',
            'SpotOptions': {
                'SpotInstanceType': 'one-time',
            }
        }
    )
    print(instances)

launch_ec2()
The problem is, sometimes when there is an error in my Python script, the script dies and the instance gets terminated.
Is there a way I can collect error/info logs and send them to CloudWatch before the instance gets terminated? This way, I would know what went wrong.
You can achieve the desired behavior by leveraging bash functionality.
You can in fact capture a log file for the entire execution of the UserData, and use trap to make sure the log file is copied over to S3 before the instance terminates if an error occurs.
Here's how it could look:
#!/bin/bash -xe
# Redirect all stdout/stderr from this point on into a log file
exec &>> /tmp/userdata_execution.log

upload_log() {
  aws s3 cp /tmp/userdata_execution.log s3://... # use a bucket of your choosing here
}

# -e aborts on the first failing command, which fires the ERR trap
trap 'upload_log' ERR

pip3 install boto3 pandas scikit-learn
aws s3 cp s3://.../main.py .
python3 main.py
sudo shutdown now -h
A log file (/tmp/userdata_execution.log) containing stdout and stderr will be generated for the UserData; if there is an error during the execution of the UserData, the log file will be uploaded to the S3 bucket.
If you wanted to, you could of course also stream the log file to CloudWatch; however, to do so you would have to install the CloudWatch agent on the instance and configure it accordingly. I believe that for your use case uploading the log file to S3 is the best solution.
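If you do want the log to end up in CloudWatch Logs without installing the agent, here is a minimal sketch that replaces the upload_log function above with one that pushes the log through the AWS CLI. The log group name /ec2/preprocessing and stream name userdata-run are hypothetical placeholders, the log group is assumed to already exist, and the instance role would additionally need logs:CreateLogStream and logs:PutLogEvents permissions:
upload_log() {
  # Build the put-log-events payload (a JSON list of one event) with python3,
  # which this script uses anyway
  python3 - <<'EOF'
import json, time
message = open('/tmp/userdata_execution.log').read()[-25000:]  # stay well under the event size limit
json.dump([{'timestamp': int(time.time() * 1000), 'message': message}],
          open('/tmp/log_events.json', 'w'))
EOF
  # create-log-stream fails if the stream already exists; ignore that with || true
  aws logs create-log-stream --log-group-name /ec2/preprocessing \
      --log-stream-name userdata-run || true
  aws logs put-log-events --log-group-name /ec2/preprocessing \
      --log-stream-name userdata-run --log-events file:///tmp/log_events.json
}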
I am deploying a Node.js app to EC2 using CodeDeploy. I am storing credentials in AWS Systems Manager Parameter Store, but cannot find a way to expose them to my application.
I am using PM2 for process management. I can successfully retrieve the parameter from the Parameter Store on the target machine, so there are no permission issues. For example:
aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value
...successfully returns the correct string. I attempt to use this in my applicationStart.sh CodeDeploy hook and start the app:
#!/bin/bash
export LOCAL_CACHE_PATH=$(aws ssm get-parameters --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value)
pm2 start ecosystem.config.js --env production
LOCAL_CACHE_PATH returns undefined in my app when accessing process.env.LOCAL_CACHE_PATH.
So the environment variable is available within the applicationStart.sh script and yet undefined when the app starts from that script.
I am looking for a recommended approach to use environment variables from the Parameter Store with CodeDeploy.
I have read literally dozens of posts on similar topics but cannot resolve it. Very much appreciate any guidance.
The solution I am using is to write the environment variables to a .env file and use that in my app:
afterInstall.sh:
echo LOCAL_CACHE_PATH=$(aws ssm get-parameters --output text --region us-east-1 --names LOCAL_CACHE_PATH --with-decryption --query Parameters[0].Value) >> /home/ubuntu/foo/.env
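The Node app can then load that file at startup (e.g. with a package like dotenv). As a variant, and assuming the same file path as above, applicationStart.sh could instead export everything in the .env file into the shell before starting pm2. Note that an already-running pm2 daemon keeps the environment it was started with, so the daemon may need to be restarted for new variables to be picked up:
#!/bin/bash
set -a                          # auto-export every variable the sourced file defines
source /home/ubuntu/foo/.env    # path written by the afterInstall.sh step above
set +a
pm2 start ecosystem.config.js --env production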
I am trying to create an SFTP user with the help of the AWS CLI in my Linux box.
Below is the AWS CLI command I am running in my bash script (my SSH public key is in a file, and I pass its contents into the CLI options via a variable):
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user#domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
Below is the error I am facing while executing the above command:
[developer#dev-lin demo]$ aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body $customer_name_pub_value --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user#domain.com",Key=Service,Value="sftp"' --role customer-sftp-role
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: developer#dev-lin.domain.com, XXXXXXXXXXAB3NzaC1yc2EAAAADAQABAAABAQCm2hI3Y33K1GVbdQV0lfkm/klZRJS7Kcz8+53e/BoIbVMFH0jqm1aejELDFgPnN7HvIZ/csYGzF/ssTx5lXVaHQh/qkYwfqQBg8WvXVB0Jmogj1hr6z5M8Qy/3oCx0fSmh6e/Ekfk8vHhiHQlGZV3o8a2AW5SkP8IH/OgT6Bq+SMuB+xtSciVBZqSLI0OgYtOZ0MyxBzfLau1Tyegu5lVFevZDVjecnIaS4l+v2VIQ/OgaZ40oAI3NuRZ2EdnLqEqFyLjasx4kcuwNzD5oaXAU6T9UsqKN2rVLMKrXXXXXXXXXXX
Am I missing some bash syntax while passing the option value?
UPDATE 30-March-2020
As per the suggestions in the comments below, I have added the IAM role ARN to the command (and quoted the public-key variable so the shell no longer splits it into separate arguments). Now I am facing a different issue than before.
CODE:
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user --user-name $customer_name --home-directory script-test/power-archive-ireland/$customer_name/ --server-id s-aaabbbccc --ssh-public-key-body "$customer_name_pub_value" --tags 'Key=Product,Value="demo",Key=Environment,Value=dev,Key=Contact,Value="dev.user#domain.com",Key=Service,Value="sftp"' --role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role"
ERROR
An error occurred (ValidationException) when calling the CreateUser operation: 1 validation error detected: Value 'script-test/power-archive-ireland/demo/' at 'homeDirectory' failed to satisfy constraint: Member must satisfy regular expression pattern: ^$|/.*
Yes, for the final bug, you should feed it as a list of objects:
--tags [{Key="Product", Value="demo"}, {Key="Environment", Value="dev"}, {Key="Contact", Value="dev.user#domain.com"}, {Key="Service", Value="sftp"}]
You may need to put "Key" and "Value" in quotes, or perhaps try key:value pairs (e.g. {"Product": "demo"}), but this should be the general syntax.
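For reference, the AWS CLI's documented shorthand for a list of tag structures is space-separated Key=,Value= pairs, so this form may also work here (values taken from the question):
--tags Key=Product,Value=demo Key=Environment,Value=dev Key=Contact,Value=dev.user#domain.com Key=Service,Value=sftp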
Below is the final working CLI command.
Changes
Added the role ARN (thanks #user1394 for the suggestion)
Biggest issue resolved by placing a leading / in the --home-directory value (bad AWS documentation (https://docs.aws.amazon.com/cli/latest/reference/transfer/create-user.html) and its outdated regex ^$|/.*)
Switched the broken shorthand --tags syntax to JSON, which fixed the final bug (not all of the tags were attached with the old command)
#!/bin/bash
customer_name='demo'
customer_name_pub_value=$(cat /home/developer/naman/dir/$customer_name.pub)
aws transfer create-user \
--user-name $customer_name \
--server-id s-aaabbbccc \
--role "arn:aws:iam::8XXXXXXXXX2:role/customer-sftp-role" \
--ssh-public-key-body "$customer_name_pub_value" \
--home-directory /script-test/power-archive-ireland/$customer_name \
--tags '[
{"Key": "Product", "Value": "demo"},
{"Key": "Environment", "Value": "dev"},
{"Key": "Contact", "Value": "dev.user#domain.com"},
{"Key": "Service", "Value": "sftp"}
]'
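To confirm the user was created as expected, you can describe it afterwards (same placeholder server ID as above):
aws transfer describe-user --server-id s-aaabbbccc --user-name demo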
I am trying to use the AWS CLI to work with Kinesis video streams.
According to the documentation:
aws kinesisvideo get-data-endpoint --stream-name mytestStream
should return the data endpoint for my stream, but I get:
usage: aws [options] <command> <subcommand> [<subcommand> ...]
[parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --api-name is required
When I search for --api-name, I can't seem to find any mention of this flag being required for kinesisvideo.
From get-data-endpoint Command Reference:
get-data-endpoint
[--stream-name <value>]
[--stream-arn <value>]
--api-name <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
--api-name (string)
The name of the API action for which to get an endpoint.
Possible values:
PUT_MEDIA
GET_MEDIA
LIST_FRAGMENTS
GET_MEDIA_FOR_FRAGMENT_LIST
GET_HLS_STREAMING_SESSION_URL
I have a query which I am passing via the command line:
aws cloudsearchdomain --endpoint-url http://myendpt search --search-query value --return _all_fields --cursor initial --size 100 --query-options {"defaultOperator":"or","fields":["id"],"operators":["and","escape","fuzzy","near","not","or","phrase","precedence","prefix","whitespace"]} --query-parser simple --query-parser simple --profile myname
It responds with:
Unknown options: operators:[and, escape, fuzzy, near, not, or, phrase, precedence, prefix, whitespace], fields:[id]
I assure you that the id field exists in AWS CloudSearch. I reverse-engineered the query from the online CloudSearch query tester into the AWS CLI.
Please help.
Update:
This problem has been resolved in the updated aws-cli/1.8.4. If you are an Ubuntu/Linux user like me, run:
sudo pip uninstall awscli
sudo pip install awscli
aws --version
The solution for my Ruby implementation of the aws-sdk (version > 2):
client = Aws::CloudSearchDomain::Client.new(endpoint: 'http://yoururl')
resp = client.search({
  cursor: "initial",
  facet: "{\"facet_name_!\":{},\"mentions\":{}}",
  query: "#{place_a_value_here}",
  query_options: "{\"defaultOperator\":\"or\",\"fields\":[\"yourfield\"],\"operators\":[\"and\",\"escape\",\"fuzzy\",\"near\",\"not\",\"or\",\"phrase\",\"precedence\",\"prefix\",\"whitespace\"]}",
  query_parser: "simple",
  return: "_all_fields",
  size: 1000,
  highlight: "{\"text\":{}}",
})
Summarizing the asker's solution from the comments: the issue is that you have to double-quote your JSON param, and then either single-quote (') or escape-double-quote (\") the JSON keys/values within it.
For example, both of these are valid:
--query-options "{'defaultOperator':'and','fields':['name']}"
or
--query-options "{\"defaultOperator\":\"and\",\"fields\":[\"name\"]}"