aws boto3 python - get all resources running - python-3.x

I am trying to connect to an AWS region and want to find all resources running in it. The resources can be anything from the list of services provided by AWS (EC2, RDS, ...). Right now I am writing Python code that creates a client for every service and gets the list; if I have to write that code for all services, it will be huge. Please suggest the best approach to grab these details with Python. I can't use AWS Config or Resource Manager, as these are not whitelisted yet.
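One approach that avoids writing a client per service is the Resource Groups Tagging API, which can enumerate ARNs across many services with a single paginated call. A minimal sketch, assuming that API is whitelisted for your account (the region below is a placeholder):

```python
import boto3

# Sketch: the Resource Groups Tagging API returns ARNs for many services
# in one call, avoiding a separate client per service.
session = boto3.Session(region_name="us-east-1")  # region is an assumption
tagging = session.client("resourcegroupstaggingapi")

paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"])
```

Note that this only covers the resource types the Tagging API supports, so it is an approximation of "all resources" rather than a guarantee.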

Related

Is there anything similar to AWS CloudWatch integration in GCP

I want to get cloud metrics of a service deployed in GCP for my Python project, but I can't find anything like the boto3 library for AWS. Is there any Google API to fetch metrics like CPU utilization, memory used, etc.?
There is the Google Cloud SDK for Python, which is the equivalent of boto3. The difference is that instead of one single library, Google splits it into multiple libraries based on service. For example, for logging, the library is google-cloud-logging.
And of course they have APIs. This article from their docs explains the Cloud Logging API and even mentions which Cloud Client Library to use.
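For metrics specifically (CPU, memory, and the like), the corresponding library is google-cloud-monitoring. A minimal sketch of fetching CPU utilization, with a hypothetical project ID:

```python
import time
from google.cloud import monitoring_v3

# Sketch using google-cloud-monitoring; the project ID is hypothetical.
client = monitoring_v3.MetricServiceClient()
project = "projects/my-gcp-project"

now = time.time()
interval = monitoring_v3.TimeInterval(
    end_time={"seconds": int(now)},
    start_time={"seconds": int(now - 3600)},  # last hour
)

# Fetch CPU utilization time series for Compute Engine instances.
results = client.list_time_series(
    request={
        "name": project,
        "filter": 'metric.type="compute.googleapis.com/instance/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in results:
    # One series per instance; points are ordered newest-first.
    print(ts.resource.labels["instance_id"], ts.points[0].value.double_value)
```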

Using Terraform to create RDS (Amazon Aurora PostgreSQL) with features (IAM auth, rotating master pw)

I have been given a task at work to create an RDS cluster module using Terraform that will allow consumers to spin up their own clusters/dbs etc. This is all fairly straightforward, but it is the second lot of requirements that has me pulling my hair out. The DBAs want to know how to do the following:
Store and rotate the master password in secrets manager.
Create additional dbs, users etc via automation (nothing is to be clickops'd).
Utilise IAM authentication so that users do not have to be created/auth'd.
I have looked at a number of different ways of doing this and, as I'm fairly new to this, nothing stands out as "the best solution". Would anyone be able to give me a rundown of how they may have approached a similar task? Did you store and rotate the password using a Lambda function, or did you assign the master user to an IAM role? Are you using the TF PostgreSQL provider to create roles, or did you write your own code to automate it?
I really appreciate any guidance.
Thanks heaps
The problem described is rather generic, but in my view you could keep almost everything under the direct control of Terraform.
Store and rotate the master password in secrets manager.
Secrets Manager is the way to go. However, the password rotation will be an issue. When you enable rotation in the AWS console, AWS magically provisions a Lambda for you. If you don't use the console, the command-line steps are a bit more involved, as they require the use of the AWS Serverless Application Repository (SAR). Sadly, official support for SAR is not yet available in Terraform, so you would have to use a local-exec provisioner to run the AWS CLI and create the rotation Lambda via SAR, as in the linked documentation.
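Whether SAR provisions it or you write it yourself, the rotation Lambda has to implement Secrets Manager's documented four-step rotation protocol. A minimal skeleton of that handler (the setSecret/testSecret bodies are stubs, and the database-specific logic is omitted):

```python
import boto3

def lambda_handler(event, context):
    # Secrets Manager invokes the handler once per rotation step.
    secret_id = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]
    client = boto3.client("secretsmanager")

    if step == "createSecret":
        # Generate a new password and stage it as AWSPENDING.
        new_password = client.get_random_password(ExcludeCharacters='/@"')["RandomPassword"]
        client.put_secret_value(
            SecretId=secret_id,
            ClientRequestToken=token,
            SecretString=new_password,
            VersionStages=["AWSPENDING"],
        )
    elif step == "setSecret":
        pass  # Apply the AWSPENDING password to the database master user.
    elif step == "testSecret":
        pass  # Verify the AWSPENDING password can actually log in.
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT.
        metadata = client.describe_secret(SecretId=secret_id)
        for version, stages in metadata["VersionIdsToStages"].items():
            if "AWSCURRENT" in stages:
                client.update_secret_version_stage(
                    SecretId=secret_id,
                    VersionStage="AWSCURRENT",
                    MoveToVersionId=token,
                    RemoveFromVersionId=version,
                )
                break
```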
Create additional dbs, users etc via automation (nothing is to be clickops'd).
As you already pointed out, the TF PostgreSQL provider would be the first thing to consider.
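If the Terraform provider doesn't cover a case, the same automation can be done in plain Python with psycopg2 (an alternative to what is recommended above, not a substitute for it; the connection details are hypothetical, and in practice the master credentials would come from Secrets Manager):

```python
import psycopg2

# Hypothetical connection details for the cluster's master user.
conn = psycopg2.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="master",
    password="from-secrets-manager",
)
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
with conn.cursor() as cur:
    cur.execute("CREATE ROLE app_user WITH LOGIN PASSWORD %s", ("app-password",))
    cur.execute("CREATE DATABASE app_db OWNER app_user")
conn.close()
```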
Utilize IAM authentication so that users do not have to be created/auth'd.
This can be enabled using iam_database_authentication_enabled. But you should know that there are some limitations when using IAM auth. Most notably, only PostgreSQL versions 9.6.9 and 10.4 or higher are supported, and your number of connections per second may suffer.
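On the client side, IAM auth means connecting with a short-lived token instead of a password, generated via boto3's generate_db_auth_token. A sketch, with a hypothetical hostname and user:

```python
import boto3

# Sketch: generate a short-lived IAM auth token for an RDS connection.
# Hostname, user, and region are assumptions.
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="iam_user",
)
# Pass `token` as the password when connecting (SSL is required for IAM auth).
```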
A follow-up on point 1 for anyone in the future who wants to do a similar thing.
I ended up using a cloudformation_stack Terraform resource to create the secret attachment and secret rotation, passing them parameter values from my Terraform resources.
It works perfectly and can easily be switched out when/if Terraform introduces these resources.

Terraform Import All Cloud Infrastructure Services to Statefile

I'm using many services in Alibaba Cloud, like Container Service, VPC, RDS, DNS, OSS and many more.
Importing the Alibaba Cloud services in use one by one would take a long time.
Is there an elegant and fast way to import all of the cloud infrastructure into a state file?
Yes, you can make a resource list and then run terraform but make sure you can have
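One way to cut down the manual work, sketched below (not from the answer above; the resource mapping is hypothetical, and building that list is still up to you), is to script the terraform import calls from a list of addresses and IDs:

```python
import subprocess

# Hypothetical mapping of Terraform resource addresses to cloud resource IDs.
resources = {
    "alicloud_vpc.main": "vpc-abc123",
    "alicloud_db_instance.rds": "rm-abc123",
}

# Run `terraform import ADDRESS ID` for each entry, stopping on failure.
for address, resource_id in resources.items():
    subprocess.run(["terraform", "import", address, resource_id], check=True)
```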

Terraform circular dependency between services

I'm just starting to learn Terraform and am trying to set up an Elasticsearch cluster with an API Gateway in front of it. I've successfully built the service such that the Elasticsearch cluster is built and the output endpoint is passed into the API Gateway for the integration request via output variables.
In my initial trials I was using wide-open access in aws_elasticsearch_domain.my_name.access_policies for testing my template code. This worked fine for testing purposes, but for real-world use I want to use the ARN of the API Gateway in aws_elasticsearch_domain.my_name.access_policies. This is problematic, seemingly because aws_api_gateway_integration.my_name.uri needs aws_elasticsearch_domain.my_name.endpoint to set up the integration, but aws_elasticsearch_domain.my_name.access_policies needs aws_api_gateway_deployment.my_name.execution_arn.
I'm guessing this is fairly common but I can't figure out how to achieve this through outputs or variables.
Thanks for any help.
One way to get around this is to create a Route 53 record for your Elasticsearch domain and hard-code that Route 53 endpoint in API Gateway, thereby breaking the circular dependency.
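The record itself is just a CNAME pointing at the Elasticsearch endpoint, giving API Gateway a stable name to reference. A sketch of creating it with boto3 (the hosted zone ID and domain names are hypothetical):

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical endpoint and zone; the stable CNAME is what API Gateway uses.
es_endpoint = "search-mydomain-abc123.us-east-1.es.amazonaws.com"
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "search.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": es_endpoint}],
            },
        }]
    },
)
```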

Is there any alternative for WebJobs in AWS (like in Azure)?

I need to implement scheduled tasks, so that every X amount of time a job starts running and launches an .exe file.
I did these tasks in Azure very easily, but can't find anything appropriate in Amazon Web Services.
Can you tell me if there is something similar in AWS to Azure WebJobs?
The AWS service that most closely fits your needs is AWS Lambda, but as your comment states, you do not want to write code.
When comparing AWS to other cloud providers, it stands out that AWS focuses on very primitive services that can be connected to build complex systems. This is an advantage, as one can tailor the cloud to one's needs; however, it can be more complex to set up compared to a PaaS.
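If some code is acceptable after all, the usual pattern is a scheduled CloudWatch Events (EventBridge) rule targeting a Lambda function. A sketch with hypothetical names and ARNs (running an .exe itself would additionally require a custom runtime or container image):

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical rule and function: fire every hour and invoke the Lambda.
rule = events.put_rule(Name="run-my-job", ScheduleExpression="rate(1 hour)")
events.put_targets(
    Rule="run-my-job",
    Targets=[{
        "Id": "my-job-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-job",
    }],
)
# Grant the rule permission to invoke the function.
lambda_client.add_permission(
    FunctionName="my-job",
    StatementId="allow-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```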
