BUG: Azure Resource Group Validation Rules NOT WORKING

[First of all, it is very sad that the BizSpark subscription does not include any technical support, not even to report an error like this :-((( ]
OK, well, here is the error. It occurred twice while creating a Virtual Machine, so it is reproducible:
{
  "error": {
    "code": "InvalidParameter",
    "target": "resourceGroupName",
    "message": "The entity name 'resourceGroupName' is invalid according to its validation rule: ^[^_\\W][\\w-._]{0,79}(?<![-.])$."
  }
}
The reason is that my resource group is called _ReGr_MyName,
but IT ALREADY EXISTS !!!
(indeed, the rest of the resources, like the Public IP, Storage Accounts, etc., are already under that resource group)
so it seems the validation rules are inconsistent across different resources.
I can provide the Operation-Id or the Tracking-ID if necessary.
But please, solve this sort of issue; Azure should be a stable system.

In general, avoid having any special characters (- or _) as the first or last character in any name; these characters will cause most validation rules to fail. In your case, the leading underscore in _ReGr_MyName is exactly what the ^[^_\W] part of the regex in the error message rejects.
For detailed information, please check this article.
Since you can't rename an existing resource group, you may need to create a new resource group and move the resources into it.
Here is a good answer from Zain Rizvi.

Related

Is it possible to use high-level Azure Policy allowed resource types?

I am attempting to create a policy to lock down resource groups, and I would like to just use a list of high-level resource types instead of trying to granularly assign types, as there are hundreds. Is this possible? Using a PowerShell script I pulled the list of types and was trying to just use top-level ones such as:
["Microsoft.KeyVault","Microsoft.AzureData","Microsoft.Billing","Microsoft.Cache","Microsoft.Consumption","Microsoft.ContainerInstance","Microsoft.ContainerRegistry","Microsoft.ContainerService","Microsoft.DBforPostgreSQL","Microsoft.DevOps","Microsoft.MachineLearning","Microsoft.ServiceBus","Microsoft.Sql","Microsoft.SqlVirtualMachine","Microsoft.Storage","Microsoft.Web"]
but it didn't validate. Is there a syntax error in my array, am I doing something else wrong, or is it just straight up not possible?
Maybe you can use "notContains" to check the resource type:
{
  "field": "type",
  "notContains": "Microsoft.KeyVault"
}

Terraform 0.13.5 resources overwrite each other on consecutive calls

I am using Terraform 0.13.5 to create aws_iam resources.
I have two Terraform resources, as follows:
module "calls_aws_iam_policy_attachment" {
# This calls an external module to
# which among other things creates a policy attachment
# resource attaching the roles to the policy
source = ""
name = "xoyo"
roles = ["rolex", "roley"]
policy_arn = "POLICY_NAME"
}
resource "aws_iam_policy_attachment" "policies_attached" {
# This creates a policy attachment resource attaching the roles to the policy
# The roles here are a superset of the roles in the above module
roles = ["role1", "role2", "rolex", "roley"]
policy_arn = "POLICY_NAME"
name = "NAME"
# I was hoping that adding the depends on block here would mean this
# resource is always created after the above module
depends_on = [ module.calls_aws_iam_policy_attachment ]
}
The first module creates a policy and attaches some roles. I cannot edit this module.
The second resource attaches more roles to the same policy, along with other policies.
The second resource depends_on the first, so I would expect the policy attachments of the second resource to always overwrite those of the first.
In reality, the policy attachments in each resource overwrite each other on every consecutive build: on the first build, the second resource's attachments are applied; on the second build, the first resource's attachments are applied; and so on.
Can someone tell me why this is happening? Does depends_on not work for resources that overwrite each other?
Is there an easy fix without combining both my resources together into the same resource?
As to why this is happening:
During the first run, Terraform deploys the first resources and then the second ones; this order comes from the depends_on relation (the later steps happen regardless of any depends_on). The second ones overwrite the first ones.
During the second deploy, Terraform looks at what needs to be done:
the first ones are missing (they were overwritten), so they need to be created;
the second ones look fine, so Terraform ignores them for this update;
now only the first ones are created, and they overwrite the second ones.
During the third run the same happens, only the other way around: the second ones are missing, the first ones are ignored, and the second ones overwrite the first.
Repeat as often as you want; you will never end up with a stable deployment.
Solution: do not specify conflicting things in Terraform. Terraform is supposed to be a description of what the infrastructure should look like, and saying "this resource should only have property A" and "this resource should only have property B" is contradictory; Terraform will not be able to handle this gracefully.
What you should do specifically: do not use aws_iam_policy_attachment, basically ever; look at the big red box in its docs. Use multiple aws_iam_role_policy_attachment resources instead, as sketched below: they are additive and will not overwrite each other.
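A rough sketch of that approach (the role names come from the question; the policy ARN is a placeholder), using for_each to create one attachment per role:

resource "aws_iam_role_policy_attachment" "policies_attached" {
  # One attachment per role. Each attachment is its own additive resource,
  # so other configurations can attach further roles to the same policy
  # without overwriting these.
  for_each   = toset(["role1", "role2", "rolex", "roley"])
  role       = each.value
  policy_arn = "arn:aws:iam::123456789012:policy/POLICY_NAME" # placeholder ARN
}

Each aws_iam_role_policy_attachment manages only its own role-to-policy pairing, which is what makes it safe to spread attachments across modules and configurations.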

Terraform Data Source Meaning

I am new to Terraform and trying to understand data sources. I have read the documentation and this StackOverflow post, but I'm still unclear about the use cases of data sources.
I have the following block of code:
resource "azurerm_resource_group" "rg" {
name = "example-resource-group"
location = "West US 2"
}
data "azurerm_resource_group" "test" {
name = "example-resource-group"
}
But I get a 404 error:
data.azurerm_resource_group.test: data.azurerm_resource_group.test: resources.GroupsClient#Get: Failure responding to request:
StatusCode=404 -- Original Error: autorest/azure: Service returned an
error. Status=404 Code="ResourceGroupNotFound" Message="Resource group
'example-resource-group' could not be found."
I don't understand why the resource group is not found. Also, I am unclear about the difference between data and variable, and when I should use which.
Thanks
I have provided a detailed explanation of what a data source is in this SO answer. To summarize:
Data sources provide dynamic information about entities that are not managed by the current Terraform configuration
Variables provide static information
Your block of code doesn't work because the resource your data source is referencing hasn't been created yet. During the planning phase, Terraform will try to find a resource group named example-resource-group, but it won't find it, and so it aborts the whole run. The ordering of the blocks makes no difference to the order in which they are applied.
If you remove the data block, run terraform apply, and then add the data block back in, it should work. However, data sources are meant to retrieve data about entities that are not managed by your Terraform configuration. In your case, you don't need the data.azurerm_resource_group.test data source at all; you can simply use the exported attributes of the resource, which for azurerm_resource_group is a single id attribute.
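For example, a minimal sketch of referencing the managed resource directly (the storage account below is only an illustration and is not part of the original question):

resource "azurerm_resource_group" "rg" {
  name     = "example-resource-group"
  location = "West US 2"
}

# No data block needed: reference the managed resource's attributes directly.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct" # illustrative name
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}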
Think of a data source as a value you want to read from somewhere else.
A variable is something you define when you run the code.
When you use the data source for azurerm_resource_group, Terraform will search for an existing resource group that has the name you defined in your data source block.
Example:
data "azurerm_resource_group" "test" {
name = "example-resource-group"
}
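For contrast, a rough sketch of a variable feeding the same lookup (the variable name and its default are made up for illustration):

variable "resource_group_name" {
  type        = string
  description = "Name of an existing resource group to look up"
  default     = "example-resource-group"
}

# The variable's value is supplied by you at plan/apply time (or via the
# default); the data source then asks Azure for a group with that name.
data "azurerm_resource_group" "test" {
  name = var.resource_group_name
}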
Quoting #ydaetskcoR from the comment below about the 404 error:
It's 404ing because the data source is running before the resource
creates the thing you are looking for. You would use a data source
when the resource has already been created previously, not in the same
run as the resource you are creating.
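In other words, a data source on its own is fine when the resource group was created outside this configuration (by hand, by another configuration, and so on). A rough sketch of that intended pattern, assuming the group already exists in Azure:

data "azurerm_resource_group" "existing" {
  # Must already exist before terraform plan runs, since the lookup
  # happens during planning.
  name = "example-resource-group"
}

output "existing_rg_id" {
  value = data.azurerm_resource_group.existing.id
}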

Check whether or not a query parameter is set

In Azure API Management, I need to check whether or not a query parameter is set. To achieve this, I'm trying to use context.Request.Url.Query.GetValueOrDefault(queryParameterName: string, defaultValue: string).
According to the documentation, this expression works as follows -
Returns comma separated query parameter values or defaultValue if the parameter is not found.
With that in mind, I used the example from the MS blog Policy Expressions in Azure API Management, to create the following <inbound> policy -
<set-variable name="uniqueId" value="#(context.Request.Url.Query.GetValueOrDefault("uniqueId", ""))" />
However, whenever I include this policy, execution fails with 404 Resource Not Found. Upon inspection of the trace, I can see that the execution was aborted without error before a single policy was evaluated (no matter where within <inbound> the above policy is placed).
This behaviour results in the following <backend> trace, which explains the 404:
"backend": [
{
"source": "configuration",
"timestamp": "2017-09-07T12:42:13.8974772Z",
"elapsed": "00:00:00.0003536",
"data": {
"message": "Unable to identify Api or Operation for this request. Responding to the caller with 404 Resource Not Found."
}
}
],
Given that the MS documentation seems to be inaccurate, how can I check whether or not a query parameter is set?
So the answer here is that there is (another) MS bug.
When the API operation was originally created, the uniqueId query parameter was set as required. I changed this so that it was not required before adding the policy described in my question; however, a bug within the new Azure portal means that when you uncheck the Required box adjacent to the query parameter and then save your changes, they are ignored.
I was able to work around this behaviour by editing the YAML template in the OpenAPI specification view, removing the declaration required: true for the query parameter in question. The expression within my policy now works as expected.
Please note that this workaround sheds light on yet another bug, where saving the template results in your policies being deleted, so make sure you take a copy first.

Exception of type 'Microsoft.WindowsAzure.StorageClient.StorageClientException' was thrown

Exception of type 'Microsoft.WindowsAzure.StorageClient.StorageClientException' was thrown.
Sometimes, even if we have the fabric running and the role manager is up, we get an exception of this sort.
The code breaks at the line:
emailAddressClient.CreateTableIfNotExist("EmailAddress");
public EmailAddressDataContext(CloudStorageAccount account) :
    base(account.TableEndpoint.AbsoluteUri, account.Credentials)
{
    this.storageAccount = account;
    CloudTableClient emailAddressClient =
        new CloudTableClient(storageAccount.TableEndpoint.AbsoluteUri,
                             storageAccount.Credentials);
    emailAddressClient.CreateTableIfNotExist("EmailAddress");
}
I give Windows Azure tables camel-cased names all the time without issues.
I wonder if by chance you already used this table name and recently deleted it? For a time after deletion (when the table is still being deleted asynchronously), you won't be able to recreate it. I believe 409 Conflict is the error code to expect in that case.
I agree with Steve Marx; casing does not seem to affect this issue. In fact, Microsoft's Azure diagnostics tables are created with unusual casing, e.g. WADPerformanceCounters. I get the problem even in the dev environment, so in my opinion it is something else entirely.
Error fixed in my case: the problem was an error with the connection string as defined (or the lack thereof) in the web role or worker role project properties.
Fix:
Right-click on the web role under the "Roles" folder in your cloud application and select "Properties" from the context menu.
Select the "Settings" tab.
Verify or add a setting for the connection string that you will use to initialize table storage.
Mine was a simple error: there was no setting for my connection string.
An easy fix is to change "EmailAddress" to "Emailaddress". For some reason it would not allow CamelCasing, so please make sure you have just one capital letter in the table name, and only at the beginning. Since table names are case-insensitive, you can also name it 'emailaddress'.
