How do I get an available EC2 volume using Python? - python-3.x

I am trying to create a Python script to monitor my multiple AWS EC2 instances, and I am using the Boto3 library.
I have gotten stuck when it comes to finding the available space on a volume: the method described in Boto3 get EC2 instance's volume only returns the volume ID and total size.

There is no direct way of checking the available space on an EBS volume. AWS provides a way to send custom metrics to CloudWatch using the CloudWatch unified agent:
Run the CloudWatch unified agent on all your EC2 instances.
Send the custom metric (disk utilization) to CloudWatch using the agent.
Use boto3 to read the CloudWatch metric, as sketched below.
Check this link for more information.
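A minimal sketch of the last step, assuming the agent publishes its default disk_used_percent metric to the CWAgent namespace; the region, instance ID, path, device, and fstype values are placeholders:

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

now = datetime.datetime.utcnow()
response = cloudwatch.get_metric_statistics(
    Namespace="CWAgent",              # default namespace used by the unified agent
    MetricName="disk_used_percent",   # default disk utilization metric on Linux
    Dimensions=[
        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},  # placeholder instance
        {"Name": "path", "Value": "/"},
        {"Name": "device", "Value": "xvda1"},
        {"Name": "fstype", "Value": "ext4"},
    ],
    StartTime=now - datetime.timedelta(minutes=10),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

# Print the recent utilization datapoints in time order
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}% used')
```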

Related

aws boto3 python - get all resources running

I am trying to connect to an AWS region and want to find all resources running in it. The resources can be anything from the list of services provided by AWS (EC2, RDS, ...). Right now I am writing Python code that creates a client for every service and gets the list; if I have to write that code for all services, it will be huge. Please suggest the best approach to grab these details with Python. I can't use AWS Config or Resource Manager, as these are not whitelisted yet.
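For illustration, a minimal sketch of the per-service approach described above, assuming an example region; each service needs its own client and its own describe/list call, which is why the code grows quickly:

```python
import boto3

region = "us-east-1"  # assumed region

# EC2 requires its own client and call
ec2 = boto3.client("ec2", region_name=region)
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print("EC2:", instance["InstanceId"], instance["State"]["Name"])

# RDS requires another client and a different call
rds = boto3.client("rds", region_name=region)
for db in rds.describe_db_instances()["DBInstances"]:
    print("RDS:", db["DBInstanceIdentifier"], db["DBInstanceStatus"])

# ...and so on for every other service of interest
```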

Payload logging on AWS SageMaker

How can I enable monitoring and payload logging for a model deployed on AWS SageMaker? I am using a classification model that outputs the predicted class and confidence. How should I configure this in the UI or the SDK?
The configuration process in the UI:
Click the second tab on the left and select AWS SageMaker.
Provide the access key information and the region of the AWS SageMaker deployment.
Select the deployment(s) you want to monitor.
Use the code snippet provided in a Watson Studio notebook to set up the payload schema.
Configure the fairness and accuracy monitors in the UI. This step should be the same as configuring deployments from any other environment (e.g. WML, SPSS).
SageMaker sends all logs produced by your model container to CloudWatch, under a log group named /aws/sagemaker/Endpoints/[EndpointName]. That said, you can simply configure your model to log inference payloads and outputs, and they will show up in the CloudWatch logs.
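A minimal sketch, assuming a Python-based model container; the handler and predict names are placeholders, and the point is only that anything written via standard logging from the container ends up in the CloudWatch log group above:

```python
import json
import logging

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

def predict(payload):
    # Placeholder model call; replace with the real classifier
    return {"predicted_class": "positive", "confidence": 0.92}

def handler(request_body: str) -> str:
    payload = json.loads(request_body)
    result = predict(payload)
    # Log both the inference payload and the output so they appear in CloudWatch
    logger.info("inference payload=%s output=%s",
                json.dumps(payload), json.dumps(result))
    return json.dumps(result)
```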

How to calculate how much data is being transferred over cloud each month in AWS

We are using AWS for our infra requirements, and for billing and costing purposes we need to know the exact amount of data transferred to our EC2 instances for a particular client. Is there any such utility available in AWS, or how should I approach this problem?
Our architecture is simple: we have an API server, which is a Node.js® server on one of the EC2 instances; this talks to the DB server, which is MongoDB® on another EC2 instance. Apart from this, we also have a web application server, which runs an Angular web application in Node.js®.
Currently we don't use an ELB, and we identify the client by their login information, i.e. the organisation ID in the JWT token.
Given your current architecture, you will need to create some form of Node middleware that extracts the client ID and content-length from the request (and/or response) and writes them to persistent storage. Within the AWS ecosystem, you could write to DynamoDB, or Kinesis, or even SQS. Outside the AWS ecosystem you could write to a relational DB, or perhaps the console log with some form of log agent to move the information to persistent store.
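As a rough illustration of the "write them to persistent storage" step, here is a sketch using Python/boto3 (the real middleware would live in the Node.js API, but the shape of the write is the same); the table name, key names, and values are hypothetical:

```python
import time
import boto3

# Hypothetical DynamoDB table keyed by client_id and timestamp
table = boto3.resource("dynamodb", region_name="us-east-1").Table("client-transfer-log")

def record_transfer(client_id: str, content_length: int) -> None:
    """Persist one request's transfer size for later per-client billing rollups."""
    table.put_item(
        Item={
            "client_id": client_id,             # organisation id taken from the JWT
            "timestamp": int(time.time() * 1000),
            "bytes": content_length,            # from the Content-Length header
        }
    )

record_transfer("org-123", 48213)  # placeholder client and byte count
```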
However, capturing the data here has a few issues:
Other than logging to the console, it adds time to each request.
If logging to the console, there will be a time delay between the actual request and the time that the log is shipped to persistent storage. If the machine crashes in that interval you've lost data.
When using AWS services you must be prepared for rate limiting (this is one area where SQS is better than Kinesis or DynamoDB).
Regardless of the approach you use, you will have to write additional code to process the logs.
A better approach, IMO, would be to add the client ID to the URL and put an ELB in front for load distribution. Then turn on request logging and do after-the-fact analysis of the logs using AWS Athena or some other tool.
If you run these EC2 instances in VPC, you can use VPC Flow Logs to get insight into how much data each of the instances transfers.
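A hedged sketch of enabling VPC Flow Logs with boto3; the region, VPC ID, log group name, and IAM role ARN are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],   # placeholder VPC
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",            # placeholder log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder role
)
print(response["FlowLogIds"])
```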

Custom Cloudwatch Metrics

I am using AWS RDS SQL Server and I need to do enhanced-level monitoring via CloudWatch. By default some basic monitoring is available, but I want to use custom metrics as well.
In my scenario I need to create an alarm whenever we get a higher number of deadlocks in SQL Server. We are able to fetch the details of deadlocks via a script, and I need to prepare a custom metric for the same.
Can anyone help with this or kindly suggest an alternative solution?
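A minimal sketch of publishing such a value as a custom CloudWatch metric with boto3; the namespace, metric name, dimension, and deadlock count are hypothetical, and the count would come from the SQL Server script mentioned above:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

deadlock_count = 3  # placeholder; in practice, the value returned by your SQL Server script

cloudwatch.put_metric_data(
    Namespace="Custom/RDS",                      # hypothetical namespace
    MetricData=[
        {
            "MetricName": "SqlServerDeadlocks",  # hypothetical metric name
            "Dimensions": [
                {"Name": "DBInstanceIdentifier", "Value": "my-sqlserver-instance"},  # placeholder
            ],
            "Value": deadlock_count,
            "Unit": "Count",
        }
    ],
)
```

A CloudWatch alarm can then be created on this custom metric in the same way as on the built-in RDS metrics.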

best practice to create cloudwatch alarm to monitor amazon rds

I know Amazon provides awesome metrics for monitoring an RDS box, but my question is: if I only want to monitor whether it's reachable or not, like a Zabbix ping, what metric shall I use when creating an alarm?
On the RDS console you can create event subscriptions, select events (like availability and failure), and assign notification groups.
I didn't find an option on CloudWatch to do this.
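The same subscription can also be created with boto3; a sketch, assuming an existing SNS topic and an example instance identifier (both placeholders):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

rds.create_event_subscription(
    SubscriptionName="rds-availability-alerts",                   # hypothetical name
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-alerts",  # placeholder SNS topic
    SourceType="db-instance",
    SourceIds=["my-rds-instance"],                                # placeholder instance id
    EventCategories=["availability", "failure"],
    Enabled=True,
)
```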
