I'm new to the aws-cli.
I want to get instance information from the terminal, but I can't filter by an instance name that contains brackets.
Here is the command:
aws ec2 describe-instances --filters 'Name=tag:Name,Values=[hoge]*'
The instance name looks like:
[hoge]instance-1
Can someone fix it?
Brackets are special characters and need to be escaped; see the "To add tags with special characters" section at http://docs.aws.amazon.com/cli/latest/reference/ec2/create-tags.html
Example:
aws ec2 describe-instances --filters 'Name=tag:Name,Values="[hoge]*"'
By the way, I would also think about altering my naming convention if I were using brackets in tag values; those characters work when tagging EC2 instances, but not necessarily when tagging resources of other AWS services; see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions
Because of Glacier Deep's expensive support for small objects, I am writing an archiver. It would be most helpful to me to be able to ask boto3 to give me a list of objects in the bucket which are not already in the desired storage class. Thanks to this answer, I know I can do this in a shell:
aws s3api list-objects --bucket $BUCKETNAME --query 'Contents[?StorageClass!=`DEEP_ARCHIVE`]'
Is there a way to pass that query parameter into boto3? I haven't dug into the source yet; I thought it was essentially a wrapper around the command-line tools, but I can't find docs or examples anywhere using this technique.
Is there a way to pass that query parameter into boto3?
Sadly, you can't do this, as the --query option is specific to the AWS CLI. But boto3 is the Python AWS SDK, so you can very easily post-process its output to obtain the same results as from the CLI.
The --query option is based on JMESPath, so if you really want to use JMESPath in your Python code, you can use the jmespath package.
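For illustration, here is a minimal sketch of that approach (the bucket name is a placeholder, and it uses the list_objects_v2 paginator so buckets with more than 1000 keys are handled):
import boto3
import jmespath

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Collect every object that is not already in DEEP_ARCHIVE, page by page
not_archived = []
for page in paginator.paginate(Bucket="my-bucket"):  # placeholder bucket name
    # An empty page has no "Contents" key, so search() can return None
    not_archived.extend(jmespath.search("Contents[?StorageClass!=`DEEP_ARCHIVE`]", page) or [])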
Query S3 Inventory size column with Athena.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-inventory-athena-query.html
Is there a way with the AWS CLI to tell programmatically that you are running your Lambda locally? I'm trying to avoid adding extra data to the request.
I have some functionality that I don't want kicked off when I'm running locally, but I do once it's up in the AWS cloud.
Thanks
A first option is to use one of the environment variables that are available when a Lambda function is executed. The AWS_EXECUTION_ENV - like you stated in your comment - can be a good pick for this.
A second option is using the context object, which is passed in as the second parameter to your handler function. This contains very specific information about the request, such as the awsRequestId, which could also help you determine whether your code is running in the cloud or locally.
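As a rough sketch of the first option (shown in Python here, but AWS_EXECUTION_ENV is set for every runtime), assuming all you need is a boolean flag:
import os

def lambda_handler(event, context):
    # The Lambda runtime sets AWS_EXECUTION_ENV (e.g. "AWS_Lambda_python3.9");
    # when the handler is invoked locally it is normally absent, unless your
    # local tooling sets it for you.
    running_on_aws = os.environ.get("AWS_EXECUTION_ENV", "").startswith("AWS_Lambda_")
    if running_on_aws:
        # kick off the cloud-only functionality here
        pass
    return {"runningOnAws": running_on_aws}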
I'm working on a service that I want to use to monitor tags and enforce tagging policies.
One planned feature is to detect resources that are tagged with a value that is not allowed for the respective key.
I can already list the ARNs of resources that have a certain tag key, and I am now looking to filter this list of resources according to invalid values. To do that, I want to query a list of each resource's tags using its ARN and then filter by those that have invalid values in their tags.
I have
[{
"ResourceArn":"arn:aws:ec2:eu-central-1:123:xyz",
"ResourceType":"AWS::Service::Something
}, ...]
and I want to do something like
queryTags("arn:aws:ec2:eu-central-1:123:xyz")
to get the tags of the specified resource.
I'm using nodejs, but I'm happy to use a solution based on the AWS cli or anything else that can be used in a script.
You can do that through the awscli.
For example, EC2 has the describe-tags command for listing the tags of resources, and I think other services have similar commands. It also has filter options that meet your need.
https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html
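The same DescribeTags API is also exposed by the SDKs, so it can go straight into a script. Below is a rough sketch with boto3 (the query_tags helper, the region and the ARN parsing are assumptions for illustration); the AWS SDK for JavaScript has an equivalent describeTags call if you want to stay in Node.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # region assumed from the example ARN

def query_tags(resource_arn):
    # DescribeTags filters on the resource ID, so take it from the end of the ARN,
    # e.g. "arn:aws:ec2:eu-central-1:123:instance/i-0abc123" -> "i-0abc123"
    resource_id = resource_arn.split("/")[-1]
    response = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [resource_id]}]
    )
    # Each returned entry carries Key, Value, ResourceId and ResourceType
    return {tag["Key"]: tag["Value"] for tag in response["Tags"]}

print(query_tags("arn:aws:ec2:eu-central-1:123:instance/i-0abc123"))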
I can't go into details unfortunately, but I'll try to be as thorough as possible. My company is using AWS Beanstalk to deploy one of our node services. We have an environment property set through the AWS configuration dashboard, with the key ENV_NAME pointing to the value, in this case one of our domains.
According to the documentation, and another resource I found, once you plug your variables in you should be able to access them through process.env.ENV_NAME. However, nothing is coming out. The names are correct, and even process.env is logging out an empty object.
The documentation seems straightforward enough, and the other guide as well. Is anyone aware of any extra steps between setting the key-value pair in the dashboard and console-logging the value once the application is running in the browser?
Turns out I'm an idiot. We were referencing the environment variable in the JavaScript that was being sent to the client. So, we were never looking for the env variable until after it was off the server. We've added a new route to fetch this in response.
I configured a stack for a NodeJS application server using Amazon OpsWorks.
I need to access some environment variables which define Google API credentials. How can I achieve this? I have already spent more than two days on this.
I ended up with the following Chef recipe:
magic_shell_environment "GOOGLE_CLIENT_ID" do
owner 'root'
group 'root'
value "********"
mode '0600'
end
I use the root account because it seems Node.js is run under that account. If I remove the owner and group attributes, I can read those environment variables fine (as the default ubuntu user). However, when I ssh to my instance and type echo $GOOGLE_CLIENT_ID as root, I get an empty string.
Also, where is the output of console.xxxx(...) logged?
OpsWorks now lets you specify up to 20 custom environment variables in the app settings page. In the case of a node.js app these will be available in the process.env object.
This should be fairly easy to do. Just add the following line to the top of your recipe.
ENV['GOOGLE_CLIENT_ID']="YOUR_CLIENT_ID"
Use the OpsWorksEnvy cookbook. It hooks nicely into the default nodejs cookbooks and lets you set the environment variables in your stack attributes.