Using the SSM send_command in Boto3 - python-3.x

I'm trying to create a Lambda function that will shut down systemd services running on an EC2 instance. I think the SSM client from the boto3 module is probably the best choice, and the specific method I was considering is send_command(). Ideally I would like to use Ansible to shut down the systemd service, so I'm trying to use the "AWS-ApplyAnsiblePlaybooks" document. This is where I get stuck: the boto3 SSM client wants some parameters, and I've tried following the boto3 documentation here, but it really isn't clear on how it wants me to present them. I found the parameters it's looking for inside the "AWS-ApplyAnsiblePlaybooks" document, but when I include them in my code, it tells me the parameters are invalid. I also tried AWS' GitHub repository, because I know they sometimes have code examples, but they didn't have anything for send_command(). I've uploaded a gist in case people are interested in what I've written so far. I would definitely be interested in understanding how others have gotten their Ansible playbooks to run via SSM from boto3 Python scripts.

As far as I can see from the documentation for that SSM document and the code you shared in the gist, you need to add "SourceType":["S3"] and you need to have a path in the SourceInfo, like:
{
  "path": "https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
}
So you need to adjust your global variable S3_DEVOPS_ANSIBLE_PLAYBOOKS accordingly.
Take a look at the CLI example from the doc link; it should give you an idea of how to restructure your Parameters:
aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
--targets Key=tag:TagKey,Values=TagValue \
--parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
--association-name "name" --schedule-expression "cron_or_rate_expression"
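Translated to boto3, the same structure goes into the Parameters argument of send_command() as a dict mapping each parameter name to a list of strings. A rough sketch (region, targeting tag, bucket path and playbook name are placeholders, not taken from your gist):

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is an assumption

response = ssm.send_command(
    Targets=[{"Key": "tag:Name", "Values": ["my-app-server"]}],  # or InstanceIds=[...]
    DocumentName="AWS-ApplyAnsiblePlaybooks",
    Parameters={
        "SourceType": ["S3"],
        # SourceInfo is a JSON string wrapped in a one-element list
        "SourceInfo": ['{"path": "https://s3.amazonaws.com/my-bucket/playbooks/"}'],
        "InstallDependencies": ["True"],
        "PlaybookFile": ["stop-service.yml"],
        "ExtraVariables": ["SSM=True"],
        "Check": ["False"],
        "Verbose": ["-v"],
    },
)
print(response["Command"]["CommandId"])

Every value is wrapped in a list because that is how SSM document parameters are modeled; passing a bare string or a nested dict is a common cause of the "invalid parameters" error you are seeing.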

Related

terraform cdk equivalent for grantReadWriteData (available in aws-cdk)

I am new to Terraform and also CDKTF. I have worked with “regular” AWS CDK.
In AWS CDK you have methods like grantReadWriteData (IAM principal example). E.g. if you have a DynamoDB table where you want to give a Lambda function read/write permissions, you can call something like this:
table.grantReadWriteData(postFunction);
Does anything like this exist in CDKTF, or do we have to write those policy statements ourselves and add them to a Lambda function role?
I can't find much documentation in Terraform for this.
There isn't anything like that in terms of a fluent interface for libraries generated from a provider or module, but I would definitely recommend looking into iam-floyd for a similar kind of fluent interface.
A function like table.grantReadWriteData(postFunction); uses an AWS CDK L2 construct library method to generate the IAM policy and attach it to the Lambda function's execution role.
The L2 construct library for CDKTF is not widespread yet, so for now you need to define the permissions yourself, as sketched below.
And if you want to use CDKTF to deploy and manage AWS resources, you can also take a look at https://www.terraform.io/cdktf/create-and-deploy/aws-adapter.
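For illustration, here is a rough sketch of what the manual equivalent could look like in CDKTF Python. It assumes the cdktf-cdktf-provider-aws bindings (import paths and class names differ between provider versions), a DynamoDB table construct called table, and an IamRole construct called post_function_role used as the Lambda's execution role; all of these names are made up for the example:

import json

# Import path is an assumption; it follows the per-resource module layout of
# recent cdktf-cdktf-provider-aws releases and may differ in your version.
from cdktf_cdktf_provider_aws.iam_role_policy import IamRolePolicy

# Roughly the permissions table.grantReadWriteData(postFunction) would generate
read_write_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGetItem",
                "dynamodb:GetItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:BatchWriteItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
            ],
            "Resource": [table.arn],  # assumed DynamodbTable construct
        }
    ],
}

# Attach the statements as an inline policy on the Lambda's execution role
IamRolePolicy(
    self,
    "post-function-table-access",
    name="post-function-table-access",
    role=post_function_role.name,  # assumed IamRole construct
    policy=json.dumps(read_write_policy),
)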

Is there AWS CDK code available to enable WAF logging for a Kinesis Firehose delivery stream?

Does anyone have Python CDK code to enable logging to an Amazon Kinesis Data Firehose delivery stream in WAF? CDK code in any language is fine for my reference, as I didn't find any proper syntax or examples for enabling this in the official Python CDK/API documentation or in any blog.
From the existing documentation (as of CDK version 1.101 and by extension Cloudformation) there seems to be no way of doing this out of the box.
But there is an API call which can be used with boto3, for example: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/wafv2.html#WAFV2.Client.put_logging_configuration
What you need to have in order to invoke the call:
ResourceArn of the web ACL
List of Kinesis Data Firehose ARN(s) which should receive the logs
This means that you can try using a Custom Resource to implement this behavior. Given that you have created the Firehose and the web ACL in the stack previously, use something like this to create the Custom Resource:
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.custom_resources/README.html
from aws_cdk import core, custom_resources, aws_logs

# Provider wrapping the Lambda handler that makes the actual PutLoggingConfiguration call
crd_provider = custom_resources.Provider(
    self,
    "Custom resource provider",
    on_event_handler=on_event_handler,
    log_retention=aws_logs.RetentionDays.ONE_DAY,
)

# Custom resource that passes the web ACL and Firehose ARNs to the handler
custom_resource = core.CustomResource(
    self,
    "WAF logging configurator",
    service_token=crd_provider.service_token,
    properties={
        "ResourceArn": waf_rule.attr_arn,
        "FirehoseARN": firehose.attr_arn,
    },
)
on_event_handler in this case is a Lambda function which you need to implement yourself; a sketch follows.
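For example, a minimal sketch of such a handler, assuming the properties passed above and the boto3 wafv2 client (untested, names are illustrative):

import boto3

wafv2 = boto3.client("wafv2")

def handler(event, context):
    """on_event handler invoked by the custom resource provider framework."""
    props = event["ResourceProperties"]
    resource_arn = props["ResourceArn"]
    firehose_arn = props["FirehoseARN"]

    if event["RequestType"] in ("Create", "Update"):
        # Attach the Firehose delivery stream as the web ACL's logging destination
        wafv2.put_logging_configuration(
            LoggingConfiguration={
                "ResourceArn": resource_arn,
                "LogDestinationConfigs": [firehose_arn],
            }
        )
    elif event["RequestType"] == "Delete":
        # Clean up the logging configuration when the custom resource is removed
        wafv2.delete_logging_configuration(ResourceArn=resource_arn)

    return {"PhysicalResourceId": "waf-logging-" + resource_arn}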
It should be possible to simplify this further by using AwsSdkCall:
on_event_handler = custom_resources.AwsSdkCall(
    action='PutLoggingConfiguration',
    service='waf',
    parameters={
        'ResourceArn': waf_rule.attr_arn,
        'LogDestinationConfigs': [
            firehose.attr_arn,
        ],
    },
)
This way you don't need to write your own lambda. But your use case might change and you might want to add some extra functionality to this logging configurator, so I'm showing both approaches.
Disclaimer: I haven't tested this exact code; rather, it is an excerpt of similar code I wrote to work around a similar gap in CloudFormation coverage.
I don't have a Python CDK example, but I had it working in TypeScript using CfnDeliveryStream and CfnLoggingConfiguration. I would imagine you can find the matching classes in the Python CDK.
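For reference, a rough Python CDK equivalent of that TypeScript approach (untested; it assumes a CfnWebACL named web_acl and a CfnDeliveryStream named firehose already defined in the same stack):

from aws_cdk import aws_wafv2 as wafv2

# L1 construct mapping directly to AWS::WAFv2::LoggingConfiguration
wafv2.CfnLoggingConfiguration(
    self,
    "WafLoggingConfiguration",
    resource_arn=web_acl.attr_arn,
    log_destination_configs=[firehose.attr_arn],
)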

How to delete GKE (Google Kubernetes Engine) cluster using python?

I'm new to GKE-Python. I would like to delete my GKE (Google Kubernetes Engine) cluster using a Python script.
I found an API delete_cluster() from the google-cloud-container python library to delete the GKE cluster.
https://googleapis.dev/python/container/latest/index.html
But I'm not sure how to use that API by passing the required parameters in Python. Can anyone explain it to me with an example?
Or is there any other way to delete the GKE cluster in Python?
Thanks in advance.
First you'd need to configure the Python Client for Google Kubernetes Engine as explained in this section of the link you shared. Basically, set up a virtual environment and install the library with pip install google-cloud-container.
If you are running the script within an environment such as the Cloud Shell, with a user that has enough access to manage the GKE resources (with at least the Kubernetes Engine Cluster Admin permission assigned), the client library will handle the necessary authentication from the script automatically and the following script will most likely work:
from google.cloud import container_v1
project_id = "YOUR-PROJECT-NAME" #Change me.
zone = "ZONE-OF-THE-CLUSTER" #Change me.
cluster_id = "NAME-OF-THE-CLUSTER" #Change me.
name = "projects/"+project_id+"/locations/"+zone+"/clusters/"+cluster_id
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name=name)
print(response)
Notice that, as per the delete_cluster method documentation, you only need to pass the name parameter. If for some reason you are just provided the credentials (generally in the form of a JSON file) of a service account that has enough permissions to delete the cluster, you'd need to modify the client in the script and use the credentials parameter to get the client correctly authenticated, in a similar fashion to:
...
client = container_v1.ClusterManagerClient(credentials=credentials)
...
Where the credentials variable holds the credentials loaded from the provided service account key file (the JSON file of the service account with enough permissions, including its path if it's not located in the folder where the script is running).
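For example, one common way to build that credentials object from the provided key file is with the google-auth library (the filename here is just a placeholder):

from google.cloud import container_v1
from google.oauth2 import service_account

# Load the service account key file that was provided to you
credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json"
)

client = container_v1.ClusterManagerClient(credentials=credentials)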
Finally, notice that the response variable returned by the delete_cluster method is of the Operation class, which can be used to monitor the long-running operation in a similar fashion to how it is explained here, with the self_link attribute corresponding to the long-running operation.
After running the script you could use a curl command in a similar fashion to:
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://container.googleapis.com/v1/projects/[PROJECT-NUMBER]/zones/[ZONE-WHERE-THE-CLUSTER-WAS-LOCATED]/operations/operation-[OPERATION-NUMBER]
by checking the status field of the response to that curl command (it could be in the RUNNING state while the deletion is happening). Or you could also use the requests library or any equivalent to automate this checking of the long-running operation within your script.
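For instance, a small sketch of that check using google-auth together with its requests transport (the project number, zone and operation ID are placeholders):

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Uses application default credentials, just like the curl example above
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

operation_url = (
    "https://container.googleapis.com/v1/projects/PROJECT-NUMBER"
    "/zones/ZONE-WHERE-THE-CLUSTER-WAS-LOCATED/operations/operation-OPERATION-NUMBER"
)

status = session.get(operation_url).json()["status"]
print(status)  # e.g. RUNNING or DONE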
This page contains an example for the command you are trying to perform.
To give some more details that are required for the command to succeed -
Your environment needs to contain the authentication environment variables; this page contains instructions on how to set that up.
Once your environment is successfully authenticated, you can run the delete cluster command like so -
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
response = client.delete_cluster(
    name="projects/<project>/locations/<location>/clusters/<cluster>"
)

How to use query option with aws eks get-token

I am trying to get token data only from aws eks get-token without any additional tools, like jq.
aws eks get-token --cluster-name myclustername --query status.token
still returns the complete response.
What is wrong with my --query? Or does this option not work with this subcommand?
aws --version
aws-cli/1.16.218 Python/3.6.8 Linux/4.15.0-1047-aws botocore/1.12.208
Thank you!
At this stage it looks like it is not possible to use the --query parameter with this subcommand, even with the latest version of awscli (1.16.230). jq is probably the best tool for parsing the JSON output.
However, you may consider using grep and/or sed to extract the token value.
I personally think it might just be a bug and it will get fixed in a later version of awscli.
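If your surrounding automation is in Python anyway, one way to avoid jq entirely is to parse the CLI output yourself; a rough sketch (the cluster name is a placeholder):

import json
import subprocess

# Run the awscli command and pull only the token out of its JSON output
result = subprocess.run(
    ["aws", "eks", "get-token", "--cluster-name", "myclustername"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
    check=True,
)
token = json.loads(result.stdout)["status"]["token"]
print(token)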

AWS Lambda spin up EC2 and triggers user data script

I need to spin up an EC2 instance from Lambda on an S3 trigger. Lambda has to spin up the EC2 instance and run a user data script.
I have an AWS CLI command, something like aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
Looking for a Python boto3 implementation. Since Lambda is new to me, can someone share if it's doable?
One way is for Lambda to run a CloudFormation template with the UserData as part of it, but I think there should be an easier way to achieve this.
Just include the UserData parameter in your boto3 call.
You should use code like this:
import boto3

ec2 = boto3.resource('ec2')  # create_instances is on the EC2 resource
ec2.create_instances(
    ImageId='<ami-image-id>',
    InstanceType='t1.micro',
    UserData='string',
    ....
If you are using the low-level EC2 client rather than the resource, you should use run_instances instead:
ec2_client = boto3.client('ec2')
ec2_client.run_instances(
    ...
    UserData='string',
    ...
You can see all the arguments that create_instances and run_instances support at:
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
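Putting it together for the original question, a rough sketch of a Lambda handler that launches an instance and runs the S3 copy from user data (the AMI ID, instance profile and region are placeholders you would need to fill in, and the instance profile must allow reading the bucket):

import boto3

ec2_client = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Shell script passed as UserData; it runs once at first boot as root.
USER_DATA = """#!/bin/bash
aws --region us-east-1 s3 cp s3://mybucket/test.txt /file/
"""

def lambda_handler(event, context):
    """Triggered by S3; spins up an EC2 instance that runs the user data script."""
    response = ec2_client.run_instances(
        ImageId="ami-xxxxxxxx",          # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
        IamInstanceProfile={"Name": "my-s3-read-profile"},  # placeholder profile
    )
    instance_id = response["Instances"][0]["InstanceId"]
    return {"instance_id": instance_id}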
