AWS Amplify did not create Lambda@Edge replicas in all regions

I have an AWS Amplify project (with Next.js), which works fine.
Nevertheless, most of my users are close to Paris (region eu-west-3), yet CloudFront and Lambda@Edge did not deploy any replicas in that region; replicas were deployed to many other regions, such as
London (eu-west-2), which handles most of my users instead of Paris.
https://eu-west-3.console.aws.amazon.com/lambda/home?region=eu-west-3#/replicas
"There is no data to display."
https://eu-west-2.console.aws.amazon.com/lambda/home?region=eu-west-2#/replicas
we see all the functions created by Amplify with the description:
"Replica created by Lambda@Edge."
How can I force CloudFront and Lambda@Edge to deploy a replica in eu-west-3, ideally via AWS Amplify?

I have this same problem (also with Next.js), and after some searching I think the issue is due to the CloudFront architecture.
Having a nearby CloudFront edge location does not mean it can execute Lambda@Edge functions; those are only executed at "Regional Edge Caches". The list of locations is here.
So in your situation the closest Regional Edge Caches are Dublin, Ireland; Frankfurt, Germany; and London, England.
My users are in South Africa, which makes this very interesting because those are the closest Regional Edge Caches to me as well. It means that I should not have put my S3 bucket in af-south-1, because Lambda@Edge will execute in Ireland anyway: the network round trip will be South Africa -> CloudFront edge location -> Lambda@Edge in Ireland -> S3 bucket in South Africa -> Ireland -> South Africa.
That is a horrible user experience.
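You can confirm per region where replicas actually exist with the AWS CLI (a quick sketch; the region names are examples, and Lambda@Edge master functions always live in us-east-1):

# List Lambda@Edge replicas present in a given region; repeat per region.
aws lambda list-functions --function-version ALL --master-region us-east-1 --region eu-west-2
aws lambda list-functions --function-version ALL --master-region us-east-1 --region eu-west-3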

Related

CosmosDB: Will the call go to the write replica if the provided read replica region doesn't exist?

I have a CosmosDB instance on Azure, with 1 write replica and multiple read replicas. Normally we call SetCurrentLocation to direct calls to a read replica. My understanding is that this automatically creates PreferredLocations for us, but I'm not sure how the preferred locations work.
Now let's say the location passed to the SetCurrentLocation method is improper. That is, there's no replica in that single location we passed, but the location is a valid Azure region. In that case, will the call go to the write replica, or to a nearby read replica?
SetCurrentLocation will order Azure regions based on their geographical distance from the indicated region, and the SDK client will then take this ordered list and map it against your account's available regions. So it ends up being your account's available regions ordered by distance to the region you indicated in SetCurrentLocation.
For an account with a single write region, all write operations always go to that region, the Preferred Locations affect read operations. More information at: https://learn.microsoft.com/azure/cosmos-db/troubleshoot-sdk-availability
Further adding to Matias's answer, from https://learn.microsoft.com/en-us/azure/cosmos-db/sql/troubleshoot-sdk-availability:
Primary region refers to the first region in the Azure Cosmos account region list. If the values specified as regional preference do not match with any existing Azure regions, they will be ignored. If they match an existing region but the account is not replicated to it, then the client will connect to the next preferred region that matches or to the primary region.
So if the specified location is bad, or there's no read replica there, the client will try to connect to the next location, eventually falling back to the primary region (in this case, the single write replica).
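For illustration, here is a minimal sketch of the equivalent behavior in the JavaScript SDK (assuming the @azure/cosmos package; the endpoint, key, and region names are placeholders), where preferredLocations plays the role of SetCurrentLocation's ordered list:

// Regions missing from the account are skipped; if none match,
// reads fall back to the primary (write) region.
const { CosmosClient } = require("@azure/cosmos");
const client = new CosmosClient({
  endpoint: "https://my-account.documents.azure.com:443/", // placeholder
  key: "<account-key>",                                    // placeholder
  connectionPolicy: {
    preferredLocations: ["France Central", "West Europe", "North Europe"],
  },
});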

Reuse the same S3 bucket between stages in Serverless

Our clients are already registered in our development environment, and management is asking us to create the production environment without losing any of the already registered user data.
We are trying to deploy the production environment on ap-southeast-2 and our development environment is already on eu-west-1.
I have made the necessary changes for the deployment to happen in these two regions, but the problem is that we are creating Cognito resources and S3 buckets using a CloudFormation template.
We want to share the same S3 buckets and Cognito resources between these two regions, but when I deploy to ap-southeast-2 (production) the stack creation fails because the S3 bucket already exists.
Is it possible to reuse the same S3 bucket and Cognito resources between regions and stages? I want the Serverless Framework to check whether these resources exist in the region I choose (in this case eu-west-1). We can't create new buckets because we are at the 100-bucket limit!
Here is the code showing how we are creating the S3 buckets. We are using the Serverless Framework with Node.js.
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000

# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value:
      Ref: AttachmentsBucket
"I want the serverless framework to check if these resources exists at the region I choose"
This is not how Infrastructure as Code (IaC) works. Neither CloudFormation nor Terraform, for that matter, has any built-in tool to "check" whether a resource exists. The IaC perspective is: if it's in a template, then only that template/stack manages it. There is nothing in between, like "it may or may not exist."
Having said that, there are ways to re-architect and go around that. The most common ways are:
Since the bucket is a common resource, it should be deployed separately from the rest of your stacks, and its name should be passed as an input to the dependent stacks (see the sketch after this list).
Develop a custom resource in the form of a Lambda function. The function would use the AWS SDK to check for the existence of your buckets and return that information to your stack for further use.
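A minimal sketch of the first option in serverless.yml, assuming the shared bucket was created once outside the stack and its name (a placeholder here) is passed in rather than declared as a resource:

custom:
  # Created once, outside any stage's stack; the name below is hypothetical.
  attachmentsBucket: my-shared-attachments-bucket

provider:
  environment:
    ATTACHMENTS_BUCKET: ${self:custom.attachmentsBucket}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:PutObject
          Resource: arn:aws:s3:::${self:custom.attachmentsBucket}/*

Because the bucket no longer appears under Resources, deploying a second stage or region never tries to create it again.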

Is it possible to create an RDS instance in a different region using Terraform (using a snapshot of the original DB)?

I have my production site in the us-east-1 region, whereas the DR site is in us-east-2.
We are using Terraform to configure the environment but are now stuck on the DB part.
We want to copy the snapshots of our DB from us-east-1 to us-east-2 using Terraform.
We then want to create an RDS instance from this copied snapshot in us-east-2.
Ultimate goal: to create a database in us-east-2 from a snapshot that is available in us-east-1, with all of it done using Terraform.
I have not used the following for cross-region copying myself, but you could use:
aws_backup_plan (link1): sets up backups and can be used for cross-region copies.
aws_db_instance (link2): can be used to create the RDS instance.
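Alternatively, here is a hedged sketch of a copy-then-restore flow using the provider's aws_db_snapshot_copy resource (assuming a recent AWS provider that includes it; all identifiers below are placeholders):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "dr"
  region = "us-east-2"
}

# Find the most recent snapshot of the production instance in us-east-1.
data "aws_db_snapshot" "latest" {
  db_instance_identifier = "prod-db" # hypothetical instance name
  most_recent            = true
}

# Copy the snapshot into us-east-2 (the copy runs in the destination region).
resource "aws_db_snapshot_copy" "dr" {
  provider                      = aws.dr
  source_db_snapshot_identifier = data.aws_db_snapshot.latest.db_snapshot_arn
  target_db_snapshot_identifier = "prod-db-dr-copy"
}

# Restore a new instance in us-east-2 from the copied snapshot.
resource "aws_db_instance" "dr" {
  provider            = aws.dr
  identifier          = "prod-db-dr"
  instance_class      = "db.t3.medium" # placeholder size
  snapshot_identifier = aws_db_snapshot_copy.dr.target_db_snapshot_identifier
  skip_final_snapshot = true
}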

The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'eu-central-1'

Using Node.js with the following config file:
{
  "accessKeyId": "XXX",
  "secretAccessKey": "XXXX",
  "region": "eu-central-1",
  "signatureVersion": "v4"
}
I still receive this error message, as if the AWS SDK were trying to access the us-east-1 region. Any idea?
According to AWS, there are three situations in which this can happen:
When you are creating a bucket with a name that is already being used as a bucket name in your AWS account or in any other AWS account (please note that S3 bucket names are globally unique).
When you are doing an operation on your S3 bucket and you have set the region variable (either when configuring the SDK or via environment variables, etc.) to a region other than the one in which the bucket is actually present.
When you have recently deleted an S3 bucket in a particular region (say us-east-1) and you are trying to create a bucket (with the same name as the deleted bucket) in another region right after deleting it.
For point 3, allow up to two days and retry.
If a bucket which is present in a certain region (say us-east-1) is deleted, you can always create a bucket with the same name in another region. There is no restriction in S3 that states you cannot do this. However, you will be able to do this only after allowing some time after deleting the bucket. This is because S3 buckets follow the eventual consistency model for DELETE operations.
It means that after you delete a bucket, it takes a few hours, generally up to 24 to 48 hours, for the DELETE operation to be replicated across all our data centres. Once this change has propagated, you can go ahead and create the bucket again in the desired region.
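For point 2, you can also pin the region per client rather than relying on the global config file, and ask S3 where the bucket actually lives. A minimal Node.js sketch (assuming the aws-sdk v2 package; the bucket name is a placeholder):

const AWS = require("aws-sdk");

// Pin the client to the bucket's region explicitly.
const s3 = new AWS.S3({ region: "eu-central-1", signatureVersion: "v4" });

// getBucketLocation reveals the region a bucket actually lives in.
s3.getBucketLocation({ Bucket: "my-bucket" }, (err, data) => {
  if (err) return console.error(err);
  // An empty LocationConstraint means us-east-1.
  console.log(data.LocationConstraint || "us-east-1");
});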

Can Windows Azure roles detect which datacenter the roles are in?

Our Windows Azure roles need to know programmatically which datacenter the roles are in.
I know a REST API (http://msdn.microsoft.com/en-us/library/ee460806.aspx) returns a location property.
And if I deploy to, let's say, West US, the REST API returns West US as the location property.
However, if I deploy to Anywhere US, the property is Anywhere US.
I would like to know which datacenter our roles are in even if the location property is Anywhere US.
The reason I ask is that our roles need to download some data from Windows Azure storage.
Downloading data from within the same datacenter is free, but downloading data from a different datacenter is not.
It is a great question, and I have talked about it a couple of times with different partners. Based on my understanding, when you create your service and provide the location where you want it to run ("South Central US" or "Anywhere US"), this information is stored on the RDFE server for your service in your Azure subscription. So when you choose a fixed location such as "South Central US" or "North Central US", the service is actually deployed to that location and the exact location is stored; however, when you choose "Anywhere US", the choice between "South Central" and "North Central" is made internally, and the selected location is never written back to the RDFE record for your service.
That's why, whenever you try to get location info for your service in your subscription via the Service Management API, PowerShell, or any other way (even on the Azure Portal it is listed as "Anywhere US", because the information comes from the same storage), you will get exactly what you selected in the first place during service creation.
@Sandrino's idea will work as long as you can validate your service's IP address against the IP address ranges published by the Windows Azure team.
Two days ago Microsoft published an XML file containing a list of IP ranges per region. Based on this file and the IP address of your role, you can find out in which datacenter the role has been deployed.
<?xml version="1.0" encoding="UTF-8"?>
<regions>
  ...
  <region name="USA">
    <subregion name="South Central US">
      <network>65.55.80.0/20</network>
      <network>65.54.48.0/21</network>
      ...
    </subregion>
    <subregion name="North Central US">
      <network>207.46.192.0/20</network>
      <network>65.52.0.0/19</network>
      ...
    </subregion>
  </region>
</regions>
Note: It looks like the new datacenters (West and East US) are not covered by this XML file, which might make it useless in your case.
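To map a role's own address to a subregion, the role only needs an IPv4-in-CIDR test against those <network> entries. A small self-contained Node.js sketch (the sample IP is hypothetical):

// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

// True if ip falls inside the given CIDR block.
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

// 65.55.85.12 falls inside South Central US's 65.55.80.0/20 range above.
console.log(inCidr("65.55.85.12", "65.55.80.0/20")); // true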
I had some services deployed in Anywhere US, and in order to find out which datacenter they were deployed in I had to log a support call with Microsoft. They were helpful and got me the information, but even they had to go away and look it up. As a result, I would never use the "Anywhere ..." locations. Knowing which datacenter your services are running in is very important for knowing where to deploy things like SQL Azure and Service Bus, which don't support affinity groups and don't have an Anywhere option.
