Accessing CloudFront metrics from another account

I have 2 AWS accounts:
QA account,
monitoring account.
The first has a CloudFront distribution, and I can see its metrics in CloudWatch within that account.
In the same CloudWatch, I have enabled cross-account cross-region data sharing with the second (monitoring) account.
Unfortunately, from CloudWatch in the monitoring account, I cannot see any CloudFront metrics. I tried the us-east-1 (N. Virginia) region, where CloudFront is supposed to expose its metrics, but I cannot see them there either. Ultimately, I would like to access CloudFront metrics from the first account while working in the Ohio region of the monitoring account.
Could you please guide me on how to access those CloudFront metrics from the perspective of the second account?
Thanks in advance!
Using the AWS Management Console in the second account, I explored CloudWatch metrics across regions (including N. Virginia) looking for CloudFront data, but could not find any.

OK, fixed. Now, while adding widgets to a CloudWatch dashboard in the monitoring account, I can see two drop-down boxes for account and region. When I select the QA account and the N. Virginia region, I can see all the required CloudFront metrics to choose from.
I am not 100% sure why it helped, but here is what I did: I re-configured CloudWatch -> Settings -> View cross-account cross-region again, but this time selected Custom account selector instead of Account Id Input.
Initially, I first enabled viewing data in the monitoring account and then enabled sharing data from the QA account. Maybe it has to be done in the reverse order to work from the get-go.
Anyway, I hope this helps someone struggling like me.
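Once cross-account sharing works, the account/region pair chosen in those drop-downs ends up in the dashboard body JSON. Here is a minimal sketch of such a widget, built in Python; the account ID and distribution ID are placeholders, and the field names follow the dashboard body format as I understand it, so verify against your own dashboard's source view:

```python
import json

# Hypothetical identifiers - substitute your own.
QA_ACCOUNT_ID = "111111111111"
DISTRIBUTION_ID = "E2EXAMPLE"

widget = {
    "type": "metric",
    "properties": {
        # CloudFront metrics always live in us-east-1, regardless of
        # which region the monitoring dashboard itself is viewed from.
        "region": "us-east-1",
        "accountId": QA_ACCOUNT_ID,  # the cross-account source account
        "metrics": [
            ["AWS/CloudFront", "Requests",
             "DistributionId", DISTRIBUTION_ID,
             "Region", "Global"],
        ],
        "stat": "Sum",
        "period": 300,
    },
}

dashboard_body = json.dumps({"widgets": [widget]})
print(dashboard_body)
```

The key point is that the widget pins `region` to us-east-1 while `accountId` points at the QA account, so the same dashboard can be viewed from Ohio in the monitoring account.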

Related

How to get a summary of resources/cost used per IAM user in AWS?

I would like to see all costs and resources used by each IAM user.
Unfortunately, I can only see the cost for my master account. I know that I can create an Organization, set up OUs, and give each user their own AWS account to record and itemize every event, but the resources used by these 'users' are only used for my application; I don't need real accounts, and I can't automate all deployments if I must set passwords and credentials manually.
One solution is to use CloudTrail and CloudWatch to record every service event, but I find this too heavy, and I would need to calculate the costs myself because it only captures the data that is used.
I would like to know whether other approaches exist for this, preferably with boto3.
Thank you for your responses.
Have a nice day/night.
Costs are not easily associated back to "users".
AWS resources are associated with an AWS Account. When a user creates a resource (e.g. an Amazon EC2 instance), IAM will confirm that they have permission to launch the resource, but the resource itself is not associated with the user who created it.
You would need to add tags to resources to have more fine-grained association of costs to people/projects/departments. These tags can be used in billing reports to provide cost breakdowns.
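To make the tagging idea concrete, here is a small offline sketch that aggregates per-owner costs from rows shaped like a Cost and Usage Report export. The column names and the `Owner` cost-allocation tag are illustrative assumptions, not the exact CUR headers:

```python
import csv
import io
from collections import defaultdict

# Toy rows in the rough shape of a Cost and Usage Report, assuming a
# cost-allocation tag named "Owner" has been activated. Headers are
# illustrative only.
report_csv = """\
line_item_resource_id,resource_tags_user_owner,line_item_unblended_cost
i-0aaa,alice,12.40
i-0bbb,bob,3.10
vol-0ccc,alice,0.95
i-0ddd,,7.25
"""

cost_by_owner = defaultdict(float)
for row in csv.DictReader(io.StringIO(report_csv)):
    owner = row["resource_tags_user_owner"] or "(untagged)"
    cost_by_owner[owner] += float(row["line_item_unblended_cost"])

for owner, cost in sorted(cost_by_owner.items()):
    print(f"{owner}: ${cost:.2f}")
```

In practice you would feed real CUR data (or Cost Explorer results grouped by the tag key via boto3) into the same aggregation; untagged spend showing up in its own bucket is usually the first thing this surfaces.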

How to monitor IOPS for an Azure Storage Account

Having used Azure for some time now, I'm well aware of the default 20,000 IOPS limit of an Azure Storage Account. What I've yet to find however is up to date documentation on how to monitor an account's IOPS in order to determine whether or not it's being throttled. This is important when debugging performance issues for applications, VMs, and ASR replication - to name but three possible uses.
If anyone knows the correct way to keep track of an account's total IOPS and/or whether it's being throttled at any point in time, I'd appreciate it - if there's a simple solution for monitoring this over time, all the better, otherwise if all that exists is an API/PowerShell cmdlet, I guess I'll have to write something to save the data periodically over time.
You can monitor your storage account for throttling using Azure Monitor | Metrics. There are 3 metrics relevant to your question:
AnonymousThrottlingError
SASThrottlingError
ThrottlingError
These metrics exist for each of the 4 storage account abstractions (blob, file, table, queue). If you're unsure how your storage account is being used, monitor these metrics for all 4 services. Things like ASR, Backup and VMs are going to be using the blob service.
To configure this, go to the Azure Monitor | Metrics blade in the portal and select the storage account(s) you want to monitor. Then check off the metrics you're interested in. The image below shows the chart with these 3 metrics configured for the blob service.
You can also configure an alert based on these metrics to alert you when any of these throttling events occur.
As for measuring the IOPS for the storage account, you could monitor the Transactions metric for the storage account. This is not really measuring the IOPS, but it does give you some visibility into the number of transactions (which sort of relates to IOPS) across the storage account. You can configure this from the storage account blade and clicking Metrics in the Monitoring section as shown below.
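Since Transactions is reported as a count per interval, dividing by the interval length gives a rough operations-per-second figure to compare against the account limit. A minimal sketch, assuming one-minute metric granularity and the default 20,000 IOPS limit mentioned in the question (the sample counts are made up):

```python
ACCOUNT_IOPS_LIMIT = 20_000   # default classic storage account limit
INTERVAL_SECONDS = 60         # one-minute metric granularity


def approx_iops(transaction_count, interval_seconds=INTERVAL_SECONDS):
    """Convert a per-interval Transactions sample to approximate ops/sec."""
    return transaction_count / interval_seconds


# Hypothetical per-minute Transactions samples pulled from Azure Monitor.
transactions_per_minute = [540_000, 1_250_000, 980_000]

for count in transactions_per_minute:
    iops = approx_iops(count)
    flag = "possible throttling" if iops > ACCOUNT_IOPS_LIMIT else "ok"
    print(f"{iops:,.0f} ops/s -> {flag}")
```

This is only an approximation (transactions are not all equal in size or cost), but it is a quick way to spot intervals where the account is flirting with the limit.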

Does AWS cloud provides an option to cap the billing amount?

We had a bill shock scare in our corporate account when someone got access to the secure keys and started a lot of m3.large spot instances (50+) on the aws account.
The servers ran overnight before it was found and the bill went over $7000 for the day.
We have several security practices set up on the account after the incident including
key rotation
password minimum length
password expiry
Billing alerts
Cloudwatch
Git precommit hooks to look for AWS keys
I have yet to find a way to cap the bill at a desired threshold.
Does AWS provide a method of setting a cap on the bill (daily/monthly)? Are there any best practices on this front that can be added to the measures listed above to prevent unauthorized use?
Amazon does not have a mechanism to "take action" when bills skyrocket. You can do what you've already done:
Set up billing alerts to monitor for a skyrocketing bill
Set up good security practices to ensure that people cannot mess with your AWS account
But also, you can:
Set up internal company policies so that employees don't accidentally cause unnecessary charges
Ensure you're using IAM roles and policies appropriately so that no one can do the wrong thing
There's a good reason why AWS won't do anything active: what exactly would you expect them to do? Doing anything that isn't in line with your business practices could seriously damage your company.
For example, you have an autoscaling group managing a small fleet of EC2 instances. One day, your company gets some unexpected good press and your website activity goes through the roof, launching new EC2 instances to meet the demand, and blasts past your billing alert. If AWS were to terminate or stop EC2 instances to prevent your bill from going nuts, then your customers wouldn't be able to access your website. This could cause damage to your company reputation, or worse.
If you want to take action, you can set up a trigger on the billing alert and handle it yourself according to your business needs. That's how AWS is built: it gives you the tools; you need to use those tools in the way that best suits your business.
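Handling it yourself usually means a small function subscribed to the billing alarm's SNS topic. Here is a hedged, offline sketch of such a handler: it only parses the alarm payload and returns a decision, with the place where you would take real action (stopping instances, paging someone) left as a commented-out placeholder, since that policy is yours to choose. The alarm name and event shape below are illustrative:

```python
import json


def handle_billing_alarm(event):
    """Lambda-style handler for an SNS-delivered CloudWatch billing alarm."""
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    if message.get("NewStateValue") != "ALARM":
        return {"action": "none"}
    # This is where your business policy goes, e.g. (placeholder):
    # import boto3
    # ec2 = boto3.client("ec2")
    # ec2.stop_instances(InstanceIds=[...])  # your call, your risk
    return {"action": "notify-oncall", "alarm": message["AlarmName"]}


# Hypothetical SNS event carrying an alarm notification.
sample_event = {
    "Records": [{"Sns": {"Message": json.dumps({
        "AlarmName": "EstimatedChargesHigh",
        "NewStateValue": "ALARM",
    })}}],
}
print(handle_billing_alarm(sample_event))
```

Whether the right response is paging a human or aggressively stopping instances depends entirely on the tradeoff described above.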
You can definitely set up Billing Alerts to receive a notification when this kind of thing happens:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/monitor-charges.html
Also take a look at:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/checklistforunwantedcharges.html
Although AWS does not support a cap on billing, it does support caps on services, including a cap on the number of EC2 instances - see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html. By default, new accounts are capped at 20 EC2 instances, but this limit can be changed.

Is there any way to get Azure status update only for some services and regions I am using?

Is there any way to get Azure status update only for some services and regions I am using? For example, I am using Cloud Services in West US. When this service in West US is down, I want to get an alert for it. I don't care about other services and other regions.
If you set up alert notifications for your application, you'll get notified when any of the underlying services you're using are not functioning properly. An alert will ensure that your service is available and working.
https://azure.microsoft.com/en-us/documentation/articles/insights-receive-alert-notifications/
If you get an alert about a service issue, that's when I would first take a look at the Azure status dashboard, and then take a look at your application logs to troubleshoot.
Another trick is to create simple URLs in your application that do a quick service test. For example, let's say you're using blob storage in the west datacenter. You could set up a page that does a test write/read to ensure that the service is working. This gives you a 100% accurate indication of whether there is a problem. Since the cloud is highly distributed and service statuses don't update immediately, I find this method highly preferable.
You would then point your alert monitoring at URLs like this:
http://yourapp.com/
http://yourapp.com/blobtest
http://yourapp.com/redistest
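The probe behind a URL like /blobtest is just a write/read round-trip. A minimal sketch, with an in-memory stand-in for the real blob client so the example runs offline (in production you would pass the actual storage SDK client with equivalent upload/download methods):

```python
import uuid


class InMemoryBlobs:
    """Offline stand-in for a blob storage client (illustrative only)."""

    def __init__(self):
        self._data = {}

    def upload(self, name, payload):
        self._data[name] = payload

    def download(self, name):
        return self._data[name]


def blob_health_check(client):
    """Write a unique marker blob, read it back, and report pass/fail."""
    name = f"healthcheck-{uuid.uuid4()}"
    payload = b"ping"
    try:
        client.upload(name, payload)
        return client.download(name) == payload
    except Exception:
        return False


print(blob_health_check(InMemoryBlobs()))  # True when the round-trip works
```

The unique blob name per probe avoids false positives from cached or stale reads, which is exactly the staleness concern raised above.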
The Azure Status website has the information you need for all Azure regions.
https://azure.microsoft.com/en-us/status/

Windows Azure - how do you change the region of a Table Storage account?

I've created a Hosted Service that talks to a Storage Account in Azure. Both have their regions set to Anywhere US but looking at the bills for the last couple of months I've found that I'm being charged for communication between the two as one is in North-Central US and the other South-Central US.
Am I correct in thinking there would be no charge if they were both hosted in the same sub-region?
If so, is it possible to move one of them and how do I go about doing it? I can't see anywhere in the Management Portal that allows me to do this.
Thanks in advance.
Adding to what astaykov said: My advice is to always select a specific region, even if you don't use affinity groups. You'll now be assured that your storage and services are in the same data center and you won't incur outbound bandwidth charges.
There isn't a way to move a storage account; you'll need to either transfer your data (and incur bandwidth costs), or re-deploy your hosted service to the region currently hosting your data (no bandwidth costs). To minimize downtime if your site is live, you can push your new hosted service up (to a new .cloudapp.net name), then change your DNS information to point to the new hosted service.
EDIT 5/23/2012 - If you re-visit the portal and create a new storage account or hosted service, you'll notice that the Anywhere options are no longer available. This doesn't impact existing accounts (although they'll now be shown at their current subregion).
In order to avoid such charges the best guideline is to use Affinity Groups. You define affinity group once, and then choose it when creating new storage account or hosted service. You can still have the Affinity Group in "Anywhere US", but as long as both the storage account and the hosted service are in the same affinity group, they will be placed in one DataCenter.
As for moving account from one region to another - I don't think it is possible. You might have to create a new account and migrate the data if required. You can use some 3rd party tool as Cerebrata's Cloud Storage Studio to first export your data and then import it into the new account.
Don't forget - use affinity groups! This is the way to make 100% sure there will be no traffic charges between Compute, Storage, and SQL Azure.
