Sharable "count" meta-argument between Terraform modules? - terraform

I have a case in which I have two modules: one that creates a resource from OS1, and another that uses OS2. A common network configuration is used for these two resources, so that they attach to the same network when provisioned. What I'm hoping to achieve is that, when specifying the IP addresses of these machines, machines attaching to a particular network get consecutive IPs. For example, given
module "one" {
  source         = "/path/to/OS1_module"
  count          = 2
  network_config = var.networkConfig1
}
module "two" {
  source         = "/path/to/OS2_module"
  count          = 2
  network_config = var.networkConfig1
}
is there some way to share a variable between the two modules so that the two VMs provisioned in module two continue the numbering of those in module one? For example, if I wanted to hold a variable defining the third octet of a 192.168.x.0/24 network, is it possible to create such a variable using count or some other method and share it between the two modules, such that
VM1-OS1 = 192.168.1.10
VM2-OS1 = 192.168.1.11
VM3-OS2 = 192.168.1.12
VM4-OS2 = 192.168.1.13

I'm not totally sure I understand your requirement, but it sounds like you want module "two" to start its numbering at the next available address after module "one".
One way to achieve that would be to add a new input variable to each of these modules to specify the host number, and then set that variable for each instance in a way that takes into account the "stacking" of the modules, like this:
module "one" {
source = "./modules/OS1"
count = 2
network_config = var.networkConfig1
host_number = count.index
}
module "two" {
source = "./modules/OS1"
count = 2
network_config = var.networkConfig1
host_number = length(module.one) + count.index
}
This makes use of the fact that a module block with count appears in expressions as a list of objects representing all of the instances, and therefore length(module.one) will always equal the count value for that module block.
This approach does have the big caveat that if you change the count of module "one" in future then you'll also renumber all of the module "two" instances, because length(module.one) will change. Unfortunately that's a common problem with IP address range planning. I typically advise addressing it by centralizing your address assignments in a single location and passing the assigned addresses into the separate modules that need them. That way you can maintain the IP address table in one place in future, taking care not to inadvertently renumber any existing networks, rather than having to carefully coordinate changes across several different modules.

Related

IO Mapping in Codesys

I am using an ifm AC1421 PLC with an ifm AC2218 D/A module to operate the actuators, i.e. proportional valves. An ifm AC2517 A/D module is used to get the data from the pressure sensors.
I wanted to get an idea of how the I/O mapping is done in Codesys, i.e. at what addresses I need to define them.
I have attached an image which shows the currently assigned variables.
Say, for example, I have assigned my PV1 at %QW47 and PV2 at %QW50.
Can I not assign PV2 at %QW48 or %QW49?
If I assign them there, PV2 doesn't get operated.
The same goes for the sensors, which I have assigned at %IW32, 33 and 34. Can I not assign them at %IW37, 38 or 39?
[Images: Actuators, Sensors]
AFAIK, unless you need to know the exact memory address of those inputs/variables elsewhere in your code, you shouldn't really pay attention to the address. What you should care about are the channels: where you should assign/map your variables depends on which channel your input is physically wired to.

Combine two sets of IPs to form a single output Terraform

I have two Terraform resources, each creating several EC2 instances (one Linux, the other Windows), and each has an output for its IPs (this one is for Linux; the Windows resource is named target_windows instead of target):
output "target_ips" {
value = aws_instance.target.*.private_ip
}
I want to combine the two outputs into a single output for easier access later down the line. Does anyone know how to do this?
Terraform's setunion function can combine two sets into a new set containing all of the values from both sets:
output "target_ips" {
value = setunion(
aws_instance.target.*.private_ip,
aws_instance.target_windows.*.private_ip,
)
}
In this case there can't be common values in both of these inputs by definition, because the IP addresses are unique per instance, but for completeness I'll note that due to the rules of sets the setunion result will only include each unique value once, even if it appears in both of the inputs.

How to remove network addresses from a list if they are a subnet of another network

After creating a list of ipaddress/CIDR entries from a CSV file, converting the IP addresses to network addresses, and then eliminating duplicates by creating a set from the list (Python 3.7), I iterate and eliminate all subnets that are subnet_of() another subnet, keeping the summarized or supernet address. I use the ipaddress module to do this work. The problem is that if a subnet is compared to itself, it still counts as a subnet. For example,
a = ipaddress.ip_network('192.168.0.0/24')
b = ipaddress.ip_network('192.168.0.0/24')
b.subnet_of(a)
True
So even if there is a 192.168.0.0/23 in my list, the /24 is still added, because every address is compared to every address, including itself. Is there a better way to handle this type of situation?
I've tried removing the subnet from my working list so it won't be iterated over again, with no luck.
There are no error messages. I just get a subnet included that fits within a larger subnet in my list, which leaves an unnecessary entry.
Have you tried just removing everything after the /?
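A more direct fix would be to skip the self-comparison explicitly, since equal ipaddress network objects compare equal with ==. Below is a minimal sketch along those lines; the example networks are made up for illustration.
import ipaddress

# Hypothetical input: the deduplicated set of networks built from the CSV.
networks = {
    ipaddress.ip_network("192.168.0.0/23"),
    ipaddress.ip_network("192.168.0.0/24"),  # subnet of the /23 above
    ipaddress.ip_network("10.1.0.0/16"),
}

# Keep a network only if it is not a subnet of some *other* network.
# The "net != other" test avoids the "subnet of itself" problem.
supernets_only = [
    net for net in networks
    if not any(net != other and net.subnet_of(other) for other in networks)
]

print(sorted(supernets_only, key=str))
# [IPv4Network('10.1.0.0/16'), IPv4Network('192.168.0.0/23')]
The standard library can also do most of this for you: ipaddress.collapse_addresses() drops subnets that are covered by another network (and additionally merges adjacent networks into a larger one, which may or may not be what you want here).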

Getting an intersection between 2 CIDR spaces when you have huge data sets

Basically, I have a list of IP subnets (supernets) which contains around 100 elements. At the same time, I have another list (ips) which contains around 300k IP addresses/networks.
Example:
supernets = ['10.10.0.0/16', '12.0.0.0/8']
ips = ['10.10.10.1', '10.10.10.8/30', '12.1.1.0/24']
The end goal is to classify the IP addresses based on which supernet they fall into.
So what I did is compare every IP address/network element in the second list to the first element in the supernets list, and so on.
Basically, I do this:
for i in range(len(supernets)):
    for x in ips:
        if IPNetwork(x) in IPNetwork(sorted(supernets)[i]):
            print(i, x, sorted(supernets)[i])
            lod[i][sorted(supernets)[i]].append(x)
This works fine, but it takes ages and the CPU goes crazy, so my question is: is there any methodology or cleaner code that can achieve this and save time?
UPDATE
I have sorted the lists and used a list comprehension instead, and the script took around 11 minutes to run, which is a good optimization in terms of speed. But the CPU is still at 100% during the whole 11 minutes.
[lod[i][public[i]].append(x) for i in range(len(public)) for x in ips if IPNetwork(x) in IPNetwork((public)[i])]
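Much of that cost likely comes from the loop itself: sorted(supernets) and IPNetwork(...) are re-evaluated on every iteration. Here is a rough sketch of the same classification using the standard library ipaddress module, parsing each string exactly once up front (it assumes everything is IPv4 and that each address belongs to at most one supernet; the example data is from the question):
import ipaddress
from collections import defaultdict

supernets = ['10.10.0.0/16', '12.0.0.0/8']
ips = ['10.10.10.1', '10.10.10.8/30', '12.1.1.0/24']

# Parse each string exactly once, outside the loops.
supernet_objs = [ipaddress.ip_network(s) for s in supernets]
ip_objs = [ipaddress.ip_network(i) for i in ips]

classified = defaultdict(list)
for ip in ip_objs:
    for net in supernet_objs:
        if ip.subnet_of(net):
            classified[net].append(ip)
            break  # stop at the first matching supernet

for net, members in classified.items():
    print(net, [str(m) for m in members])
If the supernets never overlap, the break avoids scanning the rest of the list. For millions of lookups a longest-prefix-match structure (for example the pytricia package, or netaddr's IPSet) scales better than a linear scan, but for around 100 supernets the scan itself is cheap once the parsing is hoisted out of the loops.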

Optimal check if IP is in subnet

I want to check if an IP address belongs to a subnet. The pain comes when I must check against 300,000 CIDR blocks, with subnets ranging from /3 to /31, several million times per second.
Take https://github.com/indutny/node-ip for example:
I could call ip.cidrSubnet('ip/subnet') for each of the 300,000 blocks and check whether the IP I'm looking for is inside the first-last address range, but this is very costly.
How can I optimally check whether an IP address belongs to one of these blocks, without looping through all of them every time?
Store the information in a binary tree that is optimized for range checks.
One naive way to do it is to turn each CIDR block into a pair of events, one when you enter the block, one when you exit the block. Then sort the list of events by IP address. Run through it and create a sorted array of IP addresses and how many blocks you are in. For 300,000 CIDR blocks there will be 600,000 events, and your search will be 19-20 lookups.
Now you can do a binary search of that file to find the last transition before your current IP address, and return true/false depending on whether that was in one or more blocks versus in none.
The lookup will be faster if instead of searching a file, you are searching a dedicated index of some sort. (The number of lookups in the search is the same or slightly higher, but you make better use of CPU caches.) I personally have used BerkeleyDB's BTree data structure for this kind of thing in other languages, and have been very happy.
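As an illustration of the event-list idea described above, here is a minimal Python sketch; it assumes IPv4-only blocks and keeps the index in memory rather than in a file or a BTree:
import bisect
import ipaddress

def build_index(cidr_blocks):
    # Turn each block into two events: +1 at its first address,
    # -1 just past its last address, then sort by address.
    events = []
    for block in cidr_blocks:
        net = ipaddress.ip_network(block)
        events.append((int(net.network_address), 1))
        events.append((int(net.broadcast_address) + 1, -1))
    events.sort()

    # Collapse the events into a sorted list of boundaries and the
    # number of blocks you are inside after crossing each boundary.
    boundaries, depths, depth = [], [], 0
    for ip_int, delta in events:
        depth += delta
        if boundaries and boundaries[-1] == ip_int:
            depths[-1] = depth
        else:
            boundaries.append(ip_int)
            depths.append(depth)
    return boundaries, depths

def ip_in_any_block(ip, boundaries, depths):
    # Binary-search for the last boundary at or before the address.
    ip_int = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(boundaries, ip_int) - 1
    return i >= 0 and depths[i] > 0

boundaries, depths = build_index(['10.0.0.0/8', '192.168.1.0/24'])
print(ip_in_any_block('192.168.1.42', boundaries, depths))  # True
print(ip_in_any_block('192.168.2.1', boundaries, depths))   # False
Building the index is O(n log n) once; each query is then a single bisect over at most 2n boundaries, which matches the 19-20 comparisons mentioned above for 300,000 blocks.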
