I'm reviewing firewall rules. The rules appear to be attached by tag; is there a way to identify firewall rules for which no corresponding resource (i.e. no instance carrying the tag) exists?
Objects "VM instance" and "Firewall rule" have a "Network tag" attribute, that logically binds them:
CloudShell:$ gcloud compute instances describe lamp-1-vm --zone=us-central1-f
...
tags:
  items:
  - lamp-1-deployment
CloudShell:$ gcloud compute firewall-rules describe my-http-enable
...
targetTags:
- lamp-1-deployment
You may use gcloud and some shell scripting to build a list of firewall rules with network tags and a list of instances with tags, and then use a loop to search for firewall rules whose tags are not in use.
Here you'll find some useful examples:
Filtering and formatting fun with gcloud, GCP’s command line interface
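For instance, a rough, untested sketch along those lines (assuming the rules are targeted via network tags only, not service accounts or "all instances") could be:
# List every firewall rule, look up its target tags, and report rules
# for which no instance carries any of those tags.
for rule in $(gcloud compute firewall-rules list --format="value(name)"); do
  tags=$(gcloud compute firewall-rules describe "$rule" \
           --format="value(targetTags.list())" | tr ',;' '  ')
  [ -z "$tags" ] && continue   # no target tags: rule applies to all instances
  used=false
  for tag in $tags; do
    if gcloud compute instances list --filter="tags.items=${tag}" \
         --format="value(name)" | grep -q .; then
      used=true
    fi
  done
  $used || echo "No instances match firewall rule: $rule"
done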
I had a go at building a solution to this puzzle which can be found in the public repo here:
https://github.com/kolban-google/firewall-instances
The docs for it are:
Within a GCP project we can define firewall rules. These rules can be associated with Compute Engine instances through the use of tags. In a firewall rule, we can specify a set of one or more named tags, and the rule will be applied only if a tag in the firewall rule matches a tag associated with a Compute Engine instance. As our project grows, we may end up with lots of firewall rules and we may find ourselves asking the question "Are there any firewall rules which have no matching Compute Engine instances?". We can manually examine each firewall rule and then look to see if there are any matching instances, but this is laborious and error-prone. In this project we describe a sample tool that dynamically retrieves the current firewall rules and then automatically searches for matching Compute Engine instances that have the corresponding tag.
To run the tool, download it and then:
npm install
node index.js --projectNum [projectNum]
where projectNum is the numeric id of a project. The result is a JSON string of the format:
[
  {
    "name": "[FIREWALL_RULE_NAME]",
    "instances": [
      "INSTANCE_NAME",
      ...
    ]
  },
  ...
]
If a firewall rule has no matching instances, the instances field will not be populated.
From an algorithm perspective:
Get the list of all firewall rules;
For each of the firewall rules {
  Get the networkTags for that rule;
  Search for all compute instances that have one or more of those tags;
  List the rule and the associated compute instances that have the tags;
}
This project is provided as-is as an example.
What is the proper way to create a resource group in Terraform for Azure that can be shared across different modules? I've been banging my head against this for a while and it's not working. As you can see in this image, I have a resource group in a separate folder. In my main.tf file I load the modules appservice and cosmosdb. I can't seem to figure out how to make the appservice and cosmosdb .tf files reference the resource group that is in this location. How is this done? Any suggestions would be greatly appreciated. Thank you.
In general, it is not recommended to have a module with a single resource like you have organized your code. However, in this situation, you would need to provide the exported resource attributes as an output for that module. In your resource_group module:
output "my_env_rg" {
value = azurerm_resource_group.rg
description = "The my-env-rg Azure resource group."
}
Then, the output containing the map of exported resource attributes becomes accessible in the calling module where you have declared the module block. For example, in your root module configuration (presumably containing the main.tf referenced in the question):
module "azure_resource_group" {
source = "resource-group"
}
would make the output accessible with the namespace module.<MODULE NAME>.<OUTPUT NAME>. In this case, that would be:
module.azure_resource_group.my_env_rg
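If you just want to confirm what the module exposes, one quick way (assuming the configuration has already been applied; the values shown are purely illustrative) is to evaluate the expression in terraform console:
$ terraform console
> module.azure_resource_group.my_env_rg.name
"my-env-rg"
> module.azure_resource_group.my_env_rg.location
"westeurope"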
There are two different kinds of sharing that require different solutions. You need to decide which kind of sharing you're looking for, because your example isn't very illustrative.
The first is where you want to make a pattern for creating things that you want to use more than once. The goal is to create many different things, each with different parameters. The canonical example is an RDS instance or an EC2 instance. Think of the Terraform module as a function that you execute with different inputs in different places, using the different results independently. This is exactly what Terraform modules are for.
The second is where you want to make a thing and reference it in multiple places. The canonical example is a VPC. You don't want to make a new VPC for every autoscaling group - you want to reuse it.
Terraform doesn't have a good way of stitching the outputs from one set of resources into another set as inputs. Atlas does, and CloudFormation does as well. If you're not using those, then you have to stitch them together yourself. I have always written a wrapper around Terraform which enables me to do this (and other things, like validation and authentication). Save the outputs to a known place and then reference them as inputs later.
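As a rough sketch of that "save the outputs, feed them back in" pattern (the directory names and the vpc_id variable are made up for illustration), a wrapper script might do something like:
# Apply the shared configuration and save its outputs to a known location.
cd shared-vpc
terraform apply
terraform output -json > ../outputs/shared-vpc.json

# Later, read a saved output and pass it into another configuration as a variable.
cd ../app
vpc_id=$(jq -r '.vpc_id.value' ../outputs/shared-vpc.json)
terraform apply -var "vpc_id=${vpc_id}"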
Suppose I have a website that is served by an Azure CDN endpoint (via files that have been uploaded to blob storage).
I want the minified website content to be available to everyone -- that part is easy, since that's what the CDN does by default.
Ideally, I would also have the sourcemaps available on that same CDN (so that the default behavior of //# sourceMappingURL=0-8d1d0e3cc4594b2c2758.js.map within my JS files would "just work"). However, I'd like for those sourcemaps to only be served to a subset of users.
Is there a way of accomplishing this scenario? I'm happy to define "subset" in any way that would make this scenario work (e.g., being connected to a certain VPN or being in a certain IP-address range; or using Fiddler to set a secret header; etc.)
Thanks!
I assume that what you need is to build a system that, in production, allows you to offer sourcemaps to a certain group of users, for instance a team of developers, but not to everyone; the sourcemaps should not be publicly accessible.
There are different alternatives that can help achieve this goal.
On the one hand, we can try to use a rules engine that analyzes the received HTTP traffic and offers one response or another depending on the criteria deemed appropriate.
These rules engines allow you to customize how HTTP requests are handled, by defining a set of possible match conditions on the incoming requests, and actions to be performed if the match conditions apply.
Azure CDN provides two types of rules engines: a standard rules engine for Azure CDN from Microsoft, and a premium one from Verizon, which provides more advanced features.
How you use these rules engines depends largely on how you need to identify your user group and on what you want to do to condition the response your application offers to a sourcemap request.
For instance, one of the standard rules engine's match conditions - also available in the premium rules engine - is the remote IP address the request comes from: maybe it could be a good criterion to discriminate between your different subsets of users.
Or, as you suggested with the use of Fiddler, you can analyze the incoming request headers in search of a custom one.
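For example, once such a rule is configured you could verify it from the command line by toggling the custom header (the header name, secret value and CDN hostname below are only placeholders):
# Without the header: the rule should block or redirect the request.
curl -s -o /dev/null -w "%{http_code}\n" \
  https://mycdn.azureedge.net/js/0-8d1d0e3cc4594b2c2758.js.map

# With the agreed secret header: the sourcemap should be served normally.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "X-Allow-Sourcemaps: my-secret-value" \
  https://mycdn.azureedge.net/js/0-8d1d0e3cc4594b2c2758.js.map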
The Azure CDN from Verizon Premium rules engine provides more advanced match conditions based on browser, device type, etcetera.
Once the users have been identified, the system must consider the action to take depending on whether they belong to one or another group.
Both the standard and the Verizon rules engines provide actions that could be relevant for this purpose.
I think the best option, if you can use the Verizon rules engine, will be to deny access to the HTTP requests sent by users who do not belong to the group allowed to access the sourcemaps.
Other options, although I think they are more difficult to implement if you are working with webpack and an SPA, would be to redirect the requests received from one subset of users to certain files which contain the sourcemaps - or to different index.html pages if you are using an SPA in your frontend, each with different js and css resources, with or without sourcemaps - or to rewrite the URL to directly deliver a different set of files.
Another possible action could be to not include the inline sourcemap location in your minified files and to take advantage of the capability to modify response headers, appending a SourceMap header that points to the actual sourcemaps instead. This header would only be sent for the desired user group. Again, depending on how you are building your frontend, it may not be an easy task.
Finally, if you are using webpack and the SourceMapDevToolPlugin to build your frontend, you can use the publicPath option to point your sourcemaps, in production, to a non-public, more developer-oriented URL. This is the approach followed in this article. I think this approach is also worth looking into.
I'm working on a service that I want to use to monitor tags and enforce tagging policies.
One planned feature is to detect resources that are tagged with a value that is not allowed for the respective key.
I can already list the ARNs of resources that have a certain tag key, and I am now looking to filter this list of resources according to invalid values. To do that, I want to query a list of each resource's tags using its ARN and then filter by those that have invalid values in their tags.
I have
[{
  "ResourceArn": "arn:aws:ec2:eu-central-1:123:xyz",
  "ResourceType": "AWS::Service::Something"
}, ...]
and I want to do something like
queryTags("arn:aws:ec2:eu-central-1:123:xyz")
to get the tags of the specified resource.
I'm using nodejs, but I'm happy to use a solution based on the AWS cli or anything else that can be used in a script.
You can do that through the AWS CLI.
For example, EC2 has the command describe-tags for listing the tags of resources, and I think other services have similar commands. It also has options that meet your need.
https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-tags.html
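For instance, a sketch of two approaches from the CLI (the IDs and ARNs below are placeholders; the second command requires a CLI version that supports --resource-arn-list):
# EC2-specific: list the tags attached to one resource by its ID.
aws ec2 describe-tags \
  --filters "Name=resource-id,Values=i-0123456789abcdef0"

# Service-agnostic: the Resource Groups Tagging API can return the tags
# for arbitrary ARNs, which is handy when your list mixes resource types.
aws resourcegroupstaggingapi get-resources \
  --resource-arn-list "arn:aws:ec2:eu-central-1:123456789012:instance/i-0123456789abcdef0"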
I am trying to create a virtual machine from a template in Azure. There are two fields which are blocking me (CIDR fields); can someone please have a look into it? I am unable to get past the notation.
You must specify your address as a string, not an array:
"10.0.0.0/24"
Not sure about the second one; it's probably the same issue.
CIDR: https://es.wikipedia.org/wiki/Classless_Inter-Domain_Routing
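If you deploy the template from the CLI rather than the portal, the same rule applies; the CIDR values are passed as plain strings (the parameter names here are just examples):
# Hypothetical parameter names; the point is that each CIDR is a plain string.
az deployment group create \
  --resource-group my-rg \
  --template-file azuredeploy.json \
  --parameters addressPrefix="10.0.0.0/16" subnetPrefix="10.0.0.0/24"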
Well, looking at the question, the comment, and the answer that was provided:
First, the reason your template errored out was the values that you put into the template. If you do not specify a value, you should be able to just enter something like 10.0.0.0/24 when filling out the template, and that will work.
Note that the subnets section is only asking for CIDR notation for the subnet range.
I'm currently working on a Terraform project to automate infrastructure in AWS. Since we are using a pretty consistent pattern, my idea was to create custom Terraform resources which are composed of multiple AWS resources to DRY things up.
Is there a way to define custom Terraform resources in Go which are simply composed of multiple AWS resources under the hood? I'd like to have a resource named something like app_stack which is composed of an auto-scaling group, an elastic load balancer, and a Route 53 name. I'd like my module to accept only a bare minimum of parameters so that it shields end users from the implementation details.
Is this possible in Terraform?
I think you want to use a Terraform module. A module is a collection of resources that is managed as a group.
You can expose whatever variables are necessary for the module to work, so in your case the DNS name, how many instances you want in the ASG, etc. Then, when you include it in your Terraform config, you can specify the block, e.g.:
module "myapp" {
source = "./app_stack"
dns_name = "myapp.example.com"
instances = 5
}
module "meteordemo" {
source = "./app_stack"
dns_name = "meteor.example.com"
instances = 1
}
The docs include a much more comprehensive explanation. Here are some example modules on github for reference.
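After adding the module blocks, remember to fetch the module sources before planning (terraform init on current versions, terraform get on very old ones):
terraform init    # downloads/links the ./app_stack module
terraform plan
terraform apply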