CloudFormation without snapshot - amazon-ami

CloudFormation generated a template for us that specifies both the AMI to launch from and the snapshot ID backing that AMI.
We build our base AMI with Packer, which reports the ID of the AMI it creates but not the associated snapshot ID - we have to look that up in the Amazon UI.
Can the CloudFormation template be modified so it does not specify the snapshot ID? Can you give an example of the stanza?

Sure you can! For example, something like this would work:
"Resources": {
"someEC2": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "...valid_ami_id...",
"InstanceType": "m3.medium",
"KeyName": "...",
"Monitoring": "false",
"NetworkInterfaces": [
{
...
}
],
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda",
"Ebs": {
"VolumeSize": 10
}
}
]
}
}
}
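Because the Ebs block omits SnapshotId, the root volume is still created from the AMI's own backing snapshot; only the properties you set (here, the size) are overridden. And if you ever do need the snapshot ID that Packer doesn't report, you can query it from the AMI instead of hunting through the Amazon UI; a sketch, with a placeholder AMI ID:

aws ec2 describe-images \
  --image-ids ami-0123456789abcdef0 \
  --query 'Images[0].BlockDeviceMappings[0].Ebs.SnapshotId' \
  --output text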

Related

Bicep - Parameter file variable assignment

I was following the approach of a separate parameter file per environment, as described in https://github.com/Azure/bicep/discussions/4586
I tried separate parameter files for dev, stage, and prod, but the value assignment in the main module is still flagged by IntelliSense even though the same parameter exists in the respective parameter file.
The other approach I tried is a loadJsonContent() variable, but that does not show autocompletion for items under the subnet block; it stops right after value.
Maybe I am overthinking this and not applying the correct approach. Perhaps I should ignore IntelliSense, deploy with the parameter file applied, and hope it picks the correct values during the deployment parameter check.
Here is my parameter file; the same values apply to each environment's param JSON.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "department": {
      "value": "finance"
    },
    "saAccountCount": {
      "value": 1
    },
    "vmCount": {
      "value": 1
    },
    "locationIndex": { // index 1 = westus2, 2 = westus, 3 = eastus, 4 = centralus, 5 = westus3
      "value": 1
    },
    "appRoleIndex": { // index 1 = app server, 2 = AD, 3 = tool server, 4 = dhcp server
      "value": 1
    },
    "appRole": {
      "value": {
        "Application Server": "ap",
        "Active Directory": "dc",
        "Tool server": "tool",
        "DHCP server": "dhcp"
      }
    },
    "environment": {
      "value": "dev"
    },
    "addressPrefixes": {
      "value": [
        "172.16.0.0/20"
      ]
    },
    "dnsServers": {
      "value": [
        "1.1.1.1",
        "4.4.4.4"
      ]
    },
    "locationList": {
      "value": {
        "westus2": "azw2",
        "westus": "azw",
        "Eastus": "aze",
        "CentralUS": "azc",
        "westus3": "azw3"
      }
    },
    "subnets": {
      "value": [
        {
          "name": "frontend",
          "subnetPrefix": "172.16.2.0/24",
          "delegation": "Microsoft.Web/serverfarms",
          "privateEndpointNetworkPolicies": "disabled",
          "serviceEndpoints": [
            {
              "service": "Microsoft.KeyVault",
              "locations": [ "*" ]
            },
            {
              "service": "Microsoft.Web",
              "locations": [ "*" ]
            }
          ]
        },
        {
          "name": "backend",
          "subnetPrefix": "172.16.3.0/24",
          "delegation": "Microsoft.Web/serverfarms",
          "privateEndpointNetworkPolicies": "enabled",
          "serviceEndpoints": [
            {
              "service": "Microsoft.KeyVault",
              "locations": [ "*" ]
            },
            {
              "service": "Microsoft.Web",
              "locations": [ "*" ]
            },
            {
              "service": "Microsoft.AzureCosmosDB",
              "locations": [ "*" ]
            }
          ]
        }
      ]
    }
  }
}
You appear to be attempting to deploy an Azure Resource Manager (ARM) template using a parameter file.
The parameter file passes values to the ARM template during deployment. It must use the same types as the ARM template and can only include values for parameters the template actually declares; you will receive an error if the parameter file contains extra parameters that do not match the template's.
In the same deployment, you can combine inline parameters with a local parameter file. If a parameter's value is given both in the file and inline, the inline value takes priority.
Refer to "Create a parameter file for an ARM template" in the Microsoft documentation.
About the different parameter files for dev, stage, and prod: it is likely that the parameter file is not correctly linked to the ARM template. You can deploy the template together with the parameter file to see whether it picks up the proper values during the deployment parameter check.
Regarding the loadJsonContent() variable, it is possible the loaded JSON is not formatted the way the type system expects; double-check its format.
As a workaround, I created a sample parameter.json file for a web app deployed to a production environment, and that worked for me.
Note: alternatively, you can use az deployment group create with a parameters file and deploy into Azure to avoid these conflicts.
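As a sketch, such a deployment could look like this (the resource group, template, and parameter file names are placeholders); the second --parameters argument shows an inline value taking priority over the file's vmCount:

az deployment group create \
  --resource-group rg-dev \
  --template-file main.bicep \
  --parameters @dev.parameters.json \
  --parameters vmCount=2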

applicationGatewayBackendAddressPools configuration does not apply in virtual machine scale set

I have a VMSS which I deployed using ARM templates. This is the networkProfile block under the VMSS resource section.
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "[variables('nicName')]",
"properties": {
"primary": true,
"ipConfigurations": [
{
"name": "[concat(variables('VMSSName'), '-ipconfig')]",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"applicationGatewayBackendAddressPools": "[variables('AppGatewayBackendAddressPool')]"
}
}
]
}
}
]
},
In the variables section, if I use the resourceId() function with values from parameters, the configuration is not applied to the VMSS. For example:
"AppGatewayBackendAddressPool": "[resourceId(parameters('VirtualNetworkResourceGroup'),'Microsoft.Network/applicationGateways/backendAddressPools', parameters('ApplicationGatewayName'), parameters('BackendAddressPool'))]",
I've also tried adding parameters('SubscriptionName'), but the result is the same.
"AppGatewayBackendAddressPool": "[resourceId(parameters('SubscriptionName') ,parameters('VirtualNetworkResourceGroup'),'Microsoft.Network/applicationGateways/backendAddressPools', parameters('ApplicationGatewayName'), parameters('BackendAddressPool'))]",
When I declare the variable as below, the backendAddressPool configuration is applied under Networking -> Load Balancing.
"AppGatewayBackendAddressPool": [
{ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/applicationGateways/<applicationGatewayName>/backendAddressPools/<backendAddressPool>" }
],
I'm doing something similar with subnetRef, as below, and that works fine.
"subnetRef": "[resourceId(parameters('VirtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks/subnets', parameters('VirtualNetworkName'), parameters('SubnetName'))]",
I want to parameterize the deployment with a separate parameters.json file per environment so I can attach applicationGatewayBackendAddressPools to different virtual machine scale sets.
This is how I achieved it, following Ked Mardemootoo's answer.
IP configuration section under the networkProfile of the VMSS resource:
"ipConfigurations": [
{
"name": "[concat(variables('VMSSName'), '-ipconfig')]",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"applicationGatewayBackendAddressPools": [
{ "id": "[concat(parameters('AapplicationGatewayExternalid'), '/backendAddressPools/', parameters('BackendAddressPool'))]" }
]
}
}
]
Template file parameters:
"BackendAddressPool": {
"type": "string",
"metadata": {
"description": "Backend pool to host blue/green vmss."
}
},
"AapplicationGatewayExternalid": {
"type": "string",
"metadata": {
"description": "Application Gateway Id."
}
}
Now the ARM template references the applicationGatewayBackendAddressPools attribute dynamically under the VMSS resource section.
I have these two parameters in the parameters.json file, where I can define values per environment.
"BackendAddressPool": {
"value": "<backendPoolName>"
},
"AapplicationGatewayExternalid": {
"value": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/applicationGateways/<ApplicationGatewayName>"
}
The template variables are overridden in release pipeline variables and defined in pipeline variables (screenshots of the pipeline configuration omitted).
You seem to be missing the concat in the variables. Looking at the raw JSON on my end, this is how it's configured. See if you can do something similar, and convert the subnet name and backend address pool to variables.
"ipConfigurations": [
{
"name": "ip-vmss-name",
"properties": {
"primary": true,
"subnet": {
"id": "[concat(parameters('virtualNetworks_vnet_externalid'), '/subnets/snet-vm')]"
},
"privateIPAddressVersion": "IPv4",
"applicationGatewayBackendAddressPools": [
{
"id": "[concat(parameters('applicationGateways_agw_1_externalid'), '/backendAddressPools/be-addr-pool-vmss-1')]"
}
]
}
}
]
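The working variants share one shape: applicationGatewayBackendAddressPools must be an array of objects, each carrying an id key; a bare string, even one produced by resourceId(), is not applied. If you prefer resourceId() over concat, here is a sketch of an equivalent variable (assuming the application gateway lives in the resource group named by VirtualNetworkResourceGroup):

"AppGatewayBackendAddressPool": [
  {
    "id": "[resourceId(parameters('VirtualNetworkResourceGroup'), 'Microsoft.Network/applicationGateways/backendAddressPools', parameters('ApplicationGatewayName'), parameters('BackendAddressPool'))]"
  }
]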
Nothing seems wrong with your variables/parameters call, but applicationGatewayBackendAddressPools is not a valid attribute for either VMSS or Application Gateway.
You can check the AKS and Application Gateway documentation. I achieved the same goal by setting backendAddressPools, which is in the Application Gateway section, in different parameters.json files.

Deployment of CosmosDB with shared autoscaling throughput fails

Trying to deploy an ARM template for a Database Account and SQL Database with two collections, where the autoscale throughput setting is set at the database level (shared across the collections).
I created this setup in the Azure UI and then exported the template.
When importing the template from PowerShell using New-AzResourceGroupDeployment, it fails with the message:
Status Message: Entity with the specified id does not exist in the system.
ActivityId: <redacted>, Microsoft.Azure.Documents.Common/2.11.0 (Code:NotFound)
This is ridiculous because I exported the template, did not modify it, and then imported it. Doesn't Azure recognize its own format?
I think the problem has to do with this fragment of the template:
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings",
  "apiVersion": "2020-04-01",
  "name": "[concat(parameters('databaseAccounts_an_test_name'), '/', parameters('databaseAccounts_an_test_name'), '-db-2/default')]",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases', parameters('databaseAccounts_an_test_name'), concat(parameters('databaseAccounts_an_test_name'), '-db-2'))]",
    "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('databaseAccounts_an_test_name'))]"
  ],
  "properties": {
    "resource": {
      "throughput": 400,
      "autoscaleSettings": {
        "maxThroughput": 4000
      }
    }
  }
}
Any ideas?
Based on Mark Brown's hints, this is the exact solution.
{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases",
  "name": ...,
  "apiVersion": "2020-04-01",
  "dependsOn": ...,
  "properties": {
    "resource": {
      "id": ...
    },
    "options": {
      "autoscaleSettings": {
        "maxThroughput": 4000
      }
    }
  }
}
Don't use the Microsoft.DocumentDB/databaseAccounts/sqlDatabases/throughputSettings part of the exported template. I'm not sure why Azure exports it and then doesn't accept it on import.
If you are creating a new database or container resource, you need to pass the throughput in the options for the resource. You can only use the throughputSettings resource directly when updating the throughput of an existing resource.
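Here is an example; a minimal sketch of a database resource that passes shared autoscale throughput via options (the account and database parameter names are placeholders):

{
  "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases",
  "apiVersion": "2020-04-01",
  "name": "[concat(parameters('accountName'), '/', parameters('databaseName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]"
  ],
  "properties": {
    "resource": {
      "id": "[parameters('databaseName')]"
    },
    "options": {
      "autoscaleSettings": {
        "maxThroughput": 4000
      }
    }
  }
}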

Is there a way to specify target log files for the Microsoft Monitoring Agent to listen to and pick up logs from, via code?

I am considering using the Microsoft Monitoring Agent to collect log records from log files on the system and send them to a Log Analytics workspace.
Is there a way to specify the target files (custom log files) the agent should listen to and stream logs from directly to the Azure workspace?
I know this is possible through the Azure portal by adding an additional data source in the workspace (as described at https://learn.microsoft.com/en-us/azure/azure-monitor/platform/data-sources-custom-logs).
I am looking for a way to configure these data sources from C# code or a PowerShell script (possibly an API or SDK that I am not aware of).
To add custom logs, use New-AzOperationalInsightsCustomLogDataSource.
Here are the other PowerShell cmdlets that can be handy for querying and creating Log Analytics data sources:
Get-AzOperationalInsightsDataSource
New-AzOperationalInsightsApplicationInsightsDataSource
New-AzOperationalInsightsAzureActivityLogDataSource
New-AzOperationalInsightsComputerGroup
New-AzOperationalInsightsCustomLogDataSource
New-AzOperationalInsightsLinuxPerformanceObjectDataSource
New-AzOperationalInsightsLinuxSyslogDataSource
New-AzOperationalInsightsSavedSearch
New-AzOperationalInsightsStorageInsight
New-AzOperationalInsightsWindowsEventDataSource
New-AzOperationalInsightsWindowsPerformanceCounterDataSource
https://learn.microsoft.com/en-us/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource?view=azps-2.7.0
Also find the links for the Log Analytics REST APIs, which can be used easily from C# code:
https://learn.microsoft.com/en-us/rest/api/loganalytics/
https://learn.microsoft.com/en-us/rest/api/loganalytics/datasources/createorupdate
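For instance, the DataSources - Create Or Update operation is a PUT against the workspace; a rough sketch of the request shape (the subscription, resource group, workspace, and data source names are placeholders, and check the documentation for the current api-version):

PUT https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/dataSources/<dataSourceName>?api-version=2020-08-01

{
  "kind": "CustomLog",
  "properties": {
    "customLogName": "sampleCustomLog1",
    "description": "Example custom log datasource",
    "inputs": [ ... ],
    "extractions": [ ... ]
  }
}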
PowerShell
Custom log to collect.
Link: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/powershell-workspace-configuration
$CustomLog = #"
{
"customLogName": "sampleCustomLog1",
"description": "Example custom log datasource",
"inputs": [
{
"location": {
"fileSystemLocations": {
"windowsFileTypeLogPaths": [ "e:\\iis5\\*.log" ],
"linuxFileTypeLogPaths": [ "/var/logs" ]
}
},
"recordDelimiter": {
"regexDelimiter": {
"pattern": "\\n",
"matchIndex": 0,
"matchIndexSpecified": true,
"numberedGroup": null
}
}
}
],
"extractions": [
{
"extractionName": "TimeGenerated",
"extractionType": "DateTime",
"extractionProperties": {
"dateTimeExtraction": {
"regex": null,
"joinStringRegex": null
}
}
}
]
}
"#
# Custom Logs
New-AzOperationalInsightsCustomLogDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -CustomLogRawJson "$CustomLog" -Name "Example Custom Log Collection"
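To verify that the data source was created, you can query the workspace afterwards:

Get-AzOperationalInsightsDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -Kind CustomLog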
ARM Template
For the ARM template, the format for custom logs is as below. See the detailed link: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/template-workspace-configuration
{
  "apiVersion": "2015-11-01-preview",
  "type": "dataSources",
  "name": "[concat(parameters('workspaceName'), parameters('customlogName'))]",
  "dependsOn": [
    "[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
  ],
  "kind": "CustomLog",
  "properties": {
    "customLogName": "[parameters('customlogName')]",
    "description": "this is a description",
    "extractions": [
      {
        "extractionName": "TimeGenerated",
        "extractionProperties": {
          "dateTimeExtraction": {
            "regex": [
              {
                "matchIndex": 0,
                "numberedGroup": null,
                "pattern": "((\\d{2})|(\\d{4}))-([0-1]\\d)-(([0-3]\\d)|(\\d))\\s((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9]"
              }
            ]
          }
        },
        "extractionType": "DateTime"
      }
    ],
    "inputs": [
      {
        "location": {
          "fileSystemLocations": {
            "linuxFileTypeLogPaths": null,
            "windowsFileTypeLogPaths": [
              "[concat('c:\\Windows\\Logs\\',parameters('customlogName'))]"
            ]
          }
        },
        "recordDelimiter": {
          "regexDelimiter": {
            "matchIndex": 0,
            "numberedGroup": null,
            "pattern": "(^.*((\\d{2})|(\\d{4}))-([0-1]\\d)-(([0-3]\\d)|(\\d))\\s((\\d)|([0-1]\\d)|(2[0-4])):[0-5][0-9]:[0-5][0-9].*$)"
          }
        }
      }
    ]
  }
}

ECS CLI - You cannot specify an IAM role for services that require a service linked role

I'm trying to deploy a container to ECS (Fargate) via the AWS CLI. I can create the task definition successfully; the problem comes when I want to add a new service to my Fargate cluster.
This is the command I execute:
aws ecs create-service --cli-input-json file://aws_manual_cfn/ecs-service.json
This is the error that I'm getting:
An error occurred (InvalidParameterException) when calling the CreateService operation: You cannot specify an IAM role for services that require a service linked role.
ecs-service.json
{
  "cluster": "my-fargate-cluster",
  "role": "AWSServiceRoleForECS",
  "serviceName": "dropinfun-spots",
  "desiredCount": 1,
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "assignPublicIp": "ENABLED",
      "securityGroups": ["sg-06d506f7e444f2faa"],
      "subnets": ["subnet-c8ffcbf7", "subnet-1c7b6078", "subnet-d47f7efb", "subnet-e704cfad", "subnet-deeb43d1", "subnet-b59097e8"]
    }
  },
  "taskDefinition": "dropinfun-spots-task",
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:************:targetgroup/dropinfun-spots-target-group/c21992d4a411010f",
      "containerName": "dropinfun-spots-service",
      "containerPort": 80
    }
  ]
}
task-definition.json
{
  "family": "dropinfun-spots-task",
  "executionRoleArn": "arn:aws:iam::************:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS",
  "memory": "0.5GB",
  "cpu": "256",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "containerDefinitions": [
    {
      "name": "dropinfun-spots-service",
      "image": "************.dkr.ecr.us-east-1.amazonaws.com/dropinfun-spots-service:latest",
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 80
        }
      ],
      "essential": true
    }
  ]
}
Any idea on how to manage this linked-role error?
Since you are creating Fargate launch type tasks, you set the network mode to awsvpc in the task definition (Fargate only supports awsvpc mode).
In your ecs-service.json, I can see "role": "AWSServiceRoleForECS". It seems you are trying to assign a service role to this service, but AWS does not allow you to specify an IAM role for services that require a service-linked role.
If you assigned the service IAM role because you want to use a load balancer, you can remove it: services whose task definitions use the awsvpc network mode use the service-linked role, which is created for you automatically [1].
[1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using-service-linked-roles.html#create-service-linked-role
Instead of specifying "role": "AWSServiceRoleForECS", you can specify taskRoleArn in addition to executionRoleArn if you want to assign a specific role to your tasks (containers). This is useful if you want your containers to access other AWS services on your behalf.
task-definition.json
{
  "family": "dropinfun-spots-task",
  "executionRoleArn": "arn:aws:iam::************:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS",
  "taskRoleArn": "here_you_can_define_arn_of_a_specific_iam_role",
  "memory": "0.5GB",
  "cpu": "256",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "containerDefinitions": [
    {
      "name": "dropinfun-spots-service",
      "image": "************.dkr.ecr.us-east-1.amazonaws.com/dropinfun-spots-service:latest",
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 80
        }
      ],
      "essential": true
    }
  ]
}
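With the role line dropped from ecs-service.json, the create-service input reduces to the sketch below (same placeholder IDs as in the question); the service-linked role is then picked up automatically:

{
  "cluster": "my-fargate-cluster",
  "serviceName": "dropinfun-spots",
  "desiredCount": 1,
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "assignPublicIp": "ENABLED",
      "securityGroups": ["sg-06d506f7e444f2faa"],
      "subnets": ["subnet-c8ffcbf7", "subnet-1c7b6078", "subnet-d47f7efb", "subnet-e704cfad", "subnet-deeb43d1", "subnet-b59097e8"]
    }
  },
  "taskDefinition": "dropinfun-spots-task",
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:************:targetgroup/dropinfun-spots-target-group/c21992d4a411010f",
      "containerName": "dropinfun-spots-service",
      "containerPort": 80
    }
  ]
}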
Off-note: it is very bad practice to post your AWS account ID.
