Multiple CIDR blocks per VPC - Terraform

Here's my situation...
I'm trying to deploy an application to several customers. Each customer will have their own VPC (for security and isolation purposes). The IP addresses for the application cannot be changed. I want to use VPC Peering in order to manage all customers.
So, on one hand I need a subnet that's identical across all VPCs, and on the other I need a subnet in each VPC that's unique (otherwise peering falls apart).
In the AWS Console I'm able to build this scenario and it works.
Looking at the output of a describe_vpcs call, there appears to be a CidrBlockAssociationSet, which is a list of CIDR blocks.
Does Terraform support this? Workarounds?
Thx
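
For reference, recent versions of the Terraform AWS provider do support this via the aws_vpc_ipv4_cidr_block_association resource. A minimal sketch of the shared/unique split described above (all resource names and CIDR ranges below are placeholders, not values from the question):

```hcl
# Primary CIDR: unique per customer VPC, so peering to the hub keeps working.
resource "aws_vpc" "customer" {
  cidr_block = "10.1.0.0/16" # placeholder: unique per customer
}

# Secondary CIDR: identical in every customer VPC; holds the application
# subnet whose IP addresses cannot change.
resource "aws_vpc_ipv4_cidr_block_association" "shared" {
  vpc_id     = aws_vpc.customer.id
  cidr_block = "10.250.0.0/16" # placeholder: same in all VPCs
}

# Subnet in the unique (peerable) range.
resource "aws_subnet" "unique" {
  vpc_id     = aws_vpc.customer.id
  cidr_block = "10.1.1.0/24"
}

# Subnet in the shared range; referencing the association's vpc_id makes
# Terraform wait until the secondary CIDR is attached before creating it.
resource "aws_subnet" "shared_app" {
  vpc_id     = aws_vpc_ipv4_cidr_block_association.shared.vpc_id
  cidr_block = "10.250.1.0/24"
}
```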

Related

Subnet to subnet peering in Azure

We have our application and database in different VNETs in different subscriptions.
We also have different environments (Pre-Production and Production).
Currently the PROD and PRE-PROD databases are in different subnets but the same VNET.
I see that we can have peering at the VNET level.
We want the peering between the application and the database at the subnet level, so that the PRE-PROD application cannot connect to the PROD database and vice versa.
From Microsoft documentation:
Azure routes traffic between all subnets within a virtual network, by default. You can create your own routes to override Azure's default routing.
https://learn.microsoft.com/en-us/azure/virtual-network/tutorial-create-route-table-portal
You would want to look at network segmentation patterns. You can achieve basic microsegmentation by using Network Security Groups in Azure.
Based on your question, I assume you want to allow traffic only from one subnet into another subnet (over a peered network, though that does not matter much), maybe even allow only one database port to be reached from your application subnet, and lock everything else down.
This approach is described here: https://learn.microsoft.com/en-us/azure/architecture/framework/security/design-network-segmentation
You would want to create one NSG per subnet, make a rule to block all inbound traffic, and then allow only the traffic that is required. Note that NSGs are stateful, so you do not need to specify outbound rules for the return traffic. Also make sure you order the rules correctly: the rule with the lowest priority number is evaluated first, and the first match wins.
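
A Terraform-style sketch of that pattern, assuming an application subnet at 10.0.1.0/24, a database subnet at 10.0.2.0/24, SQL on port 1433, and placeholder resource group, location, and subnet names:

```hcl
resource "azurerm_network_security_group" "db" {
  name                = "nsg-db"
  location            = "westeurope"  # placeholder
  resource_group_name = "rg-database" # placeholder
}

# Allow only the application subnet to reach the database port.
resource "azurerm_network_security_rule" "allow_app_to_db" {
  name                        = "allow-app-subnet-sql"
  priority                    = 100           # lowest number is evaluated first
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "1433"        # placeholder database port
  source_address_prefix       = "10.0.1.0/24" # application subnet
  destination_address_prefix  = "10.0.2.0/24" # database subnet
  resource_group_name         = "rg-database"
  network_security_group_name = azurerm_network_security_group.db.name
}

# Block everything else inbound, overriding the default
# VirtualNetwork-to-VirtualNetwork allow rule.
resource "azurerm_network_security_rule" "deny_all_inbound" {
  name                        = "deny-all-inbound"
  priority                    = 4096
  direction                   = "Inbound"
  access                      = "Deny"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = "rg-database"
  network_security_group_name = azurerm_network_security_group.db.name
}

# Attach the NSG to the database subnet (the subnet resource itself is
# assumed to be defined elsewhere in your configuration).
resource "azurerm_subnet_network_security_group_association" "db" {
  subnet_id                 = azurerm_subnet.database.id
  network_security_group_id = azurerm_network_security_group.db.id
}
```

A mirrored NSG on the application subnet completes the "one NSG per subnet" layout.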

Azure Vnet Networking Design: Hyperscale and more than 10.0.0.0/8 hosts with Public CIDRs

I hope this belongs here. It's a cloud infra question.
I'm designing a hyper-scale network setup in Azure where I am testing the limits of what can be done in Azure. It's not by any standard a typical use case.
So my problem is the following: what happens if you need more than a 10.0.0.0/8 for your entire setup? Some things I am aware of before asking this question.
I know a /8 means 16777214 hosts, but I am aiming for N private hosts and in turn N private IPs to be available to the system.
I'm not planning on dumping everything into one VNet, but in Azure you cannot have overlapping CIDRs if you plan to peer them. So essentially I've only got the 10.0.0.0/8 in total, even if I carve it up with VLSM at two levels to properly segregate domains. Further explanation: I'm planning on using 10.0.0.0/21-sized VNets with varied VLSM subnets depending on the needs.
I do want a central management layer that has access to all networks (that might be the issue). So again, no overlapping CIDRs if I need to peer everything together.
I come from an AWS background, where even if it is hard you can peer overlapping CIDRs through the Transit Gateway with NAT and some clever logic. No such luck in Azure from my current research (please correct me if I'm wrong).
So that led me to ask myself: Azure VNets (as well as AWS VPCs) accept almost any CIDR range, including public ones, and that range is only routed inside the VNet, so those are not real public IP addresses. What are the implications of using a public CIDR for my VNets?
The first thing I can think of is that those subnets wouldn't be able to reach the actual public IP range they were assigned, because the local VNet route takes precedence. So they might only be useful for VNets that are isolated from the internet?
And my question, in tl;dr form:
Should you use public IP CIDR ranges for your VNet pool in Azure? Yes? No? When? I'd love to hear opinions.
Author's Comment: We still can't get rid of ipv4 problems in 2022 :joy:
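
For reference, this is the kind of configuration the question is about: Azure accepts a publicly routable range as a VNet address space, and that range is then only routed inside the VNet. A Terraform-style sketch with purely illustrative names and ranges:

```hcl
# A "normal" /21-sized VNet carved out of 10.0.0.0/8.
resource "azurerm_virtual_network" "private_pool" {
  name                = "vnet-private-0001" # placeholder
  location            = "westeurope"        # placeholder
  resource_group_name = "rg-hyperscale"     # placeholder
  address_space       = ["10.0.8.0/21"]
}

# A VNet using a public range once 10.0.0.0/8 is exhausted. Hosts in this
# VNet can no longer reach the real owners of that range on the internet,
# since the VNet's own route takes precedence.
resource "azurerm_virtual_network" "public_range_pool" {
  name                = "vnet-public-range-0001" # placeholder
  location            = "westeurope"
  resource_group_name = "rg-hyperscale"
  address_space       = ["203.0.113.0/24"] # documentation range, standing in
                                           # for any publicly routable CIDR
}
```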

Azure Subnet-to-Subnet Security Rules without Application Security Groups

I'm trying to understand Network Security Groups and Application Security Groups. What I'm trying to achieve: I have a basic setup as below.
In my VNet, I have 2 subnets, front-end and back-end, and 2 NSGs, one assigned to each subnet.
Let's say I decide to allow RDP requests to my "back-end" subnet only when they come from the "front-end" subnet, and deny any other RDP requests coming from other subnets.
I know that if I create ASGs and assign an application security group to the front-end VM and the back-end VM, I can then create an NSG rule allowing RDP from one ASG to the other to achieve this. But if you have dozens of VMs in a subnet, you wouldn't want to waste time assigning an ASG to every VM.
Is there a way to define a rule on a subnet that allows specific requests coming from other subnets?
Create a rule and set the source to VirtualNetwork; that will allow anyone inside the virtual network (and peered ones) to send that type of traffic. If you want subnet granularity, you'd have to use subnet IP address ranges to allow/deny specific traffic patterns. You might also want to override the default rule that allows anything inside the virtual network.
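
A sketch of those two rules in Terraform, assuming a front-end subnet at 10.0.1.0/24, a back-end subnet at 10.0.2.0/24, and an NSG named nsg-backend already attached to the back-end subnet (all of these names and ranges are placeholders):

```hcl
# Allow RDP only when the source is the front-end subnet's address range.
resource "azurerm_network_security_rule" "allow_rdp_from_frontend" {
  name                        = "allow-rdp-from-frontend-subnet"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  source_address_prefix       = "10.0.1.0/24" # front-end subnet range
  destination_address_prefix  = "10.0.2.0/24" # back-end subnet range
  resource_group_name         = "rg-network"  # placeholder
  network_security_group_name = "nsg-backend" # placeholder
}

# Deny RDP from everywhere else, overriding the default
# "allow VirtualNetwork to VirtualNetwork" rule.
resource "azurerm_network_security_rule" "deny_rdp_other_sources" {
  name                        = "deny-rdp-other-sources"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Deny"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3389"
  source_address_prefix       = "*"
  destination_address_prefix  = "10.0.2.0/24"
  resource_group_name         = "rg-network"
  network_security_group_name = "nsg-backend"
}
```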

The address space '10.0.0.0/16' overlaps with '10.0.0.0/16' in virtual network 'ABC'

I have created two Virtual Networks in the same region but in different resource groups. Still, I'm getting a warning like The address space '10.0.0.0/16' overlaps with '10.0.0.0/16' in virtual network 'XYZ-VNET'. I referred to this link https://stackoverflow.com/a/41827643/6339597, but I still have some doubts, because one of the resource groups is in production and the other is in a testing stage. Will it create any issues in the future?
Also, how can I change the address space of a VNet? While deploying the MongoDB sharding cluster it didn't ask me about the address space for the VNet.
"but I still have some doubts, because one of the resource groups is in production and the other is in a testing stage. Will it create any issue in the future?"
As juunas said, it is only a warning. If you don't connect the two VNets, it will not create any issue in the future. However, if you want to create a VNet-to-VNet VPN between the two VNets, or peer them, that is not possible, because both require that:
the address spaces of the VNets do not overlap.
If possible, I suggest you use different VNet address ranges, such as 10.1.0.0/16.
Update from comment:
You could edit the template when you deploy it.
Note: if you want to change the address ranges, make sure you modify all of the IP addresses in the template.
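
As a sketch of that suggestion (the answer above edits the deployment template; here is the same idea expressed in Terraform, with placeholder names and resource groups): give the second VNet a non-overlapping range such as 10.1.0.0/16, after which peering becomes possible:

```hcl
resource "azurerm_virtual_network" "prod" {
  name                = "prod-vnet" # placeholder
  location            = "eastus"    # placeholder
  resource_group_name = "rg-prod"   # placeholder
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_virtual_network" "test" {
  name                = "test-vnet" # placeholder
  location            = "eastus"
  resource_group_name = "rg-test"   # placeholder
  address_space       = ["10.1.0.0/16"] # no longer overlaps with prod
}

# Peering only works once the address spaces do not overlap.
resource "azurerm_virtual_network_peering" "prod_to_test" {
  name                      = "prod-to-test"
  resource_group_name       = "rg-prod"
  virtual_network_name      = azurerm_virtual_network.prod.name
  remote_virtual_network_id = azurerm_virtual_network.test.id
}

resource "azurerm_virtual_network_peering" "test_to_prod" {
  name                      = "test-to-prod"
  resource_group_name       = "rg-test"
  virtual_network_name      = azurerm_virtual_network.test.name
  remote_virtual_network_id = azurerm_virtual_network.prod.id
}
```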

How to restore values of default VPC Security group within a given availability zone?

I am operating in the US-West region.
I was trying to solve a problem with an ELB, and I courageously (or stupidly) changed the source IP of my default VPC security group to fix it. It did not fix the original issue, and now I've run into another one.
Now I am trying to restore the default VPC security group settings in my Amazon Web Services account.
As far as I know, the default VPC security group is very restrictive.
I don't quite remember what the value for the inbound source IPs was.
The issue is that I changed the inbound rule's source IP (from its original value, which I do not remember) to Anywhere (0.0.0.0/0) in the default VPC security group in the AWS Console.
So how do I bring back the original inbound source IP setting of the default VPC security group in my region?
What are the implications of this? As a precaution I am not using the default VPC security group on any of my EC2 instances or ELBs.
The default VPC security group cannot be deleted,
but you can make changes to it.
The default inbound rule of the default VPC security group simply points to its own group ID, i.e. it allows all traffic whose source is the security group itself.
That is how you restore the default rule if you mess it up.
You can also search for information in the AWS forums: https://forums.aws.amazon.com/forum.jspa?forumID=58
Many people have opened threads about similar problems.
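
The stock rules of the default security group are one inbound rule allowing all traffic whose source is the group itself, and one outbound rule allowing all traffic to 0.0.0.0/0. If the VPC is managed with Terraform, the aws_default_security_group resource can pin the group back to exactly that; a sketch, where aws_vpc.main is a placeholder for however you reference your VPC:

```hcl
# Restore the default security group to its stock rules:
# inbound: allow everything whose source is this security group itself,
# outbound: allow everything to 0.0.0.0/0.
resource "aws_default_security_group" "default" {
  vpc_id = aws_vpc.main.id # placeholder reference to your VPC

  ingress {
    protocol  = -1
    self      = true
    from_port = 0
    to_port   = 0
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

The same rules can be re-created by hand in the console: set the inbound rule to All traffic with the security group's own ID as the source.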
