Google Deployment Manager API - updating YAML properties dynamically while creating the VM - Node.js

We are using the Deployment Manager API to create VMs in our Node.js application:

config.deploymentConfiguration.target.config.content = fs.readFileSync(yamlFile, config.encoding);

var request = {
  project: config.projectId,
  resource: config.deploymentConfiguration
};

deploymentManager.deployments.insert(request, function(err, response){..});

Here, I want to dynamically update the YAML properties before calling the create-VM code (the insert call above). Please suggest the best way to do this.
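One common approach (a minimal sketch, assuming the js-yaml package, which is not mentioned in the original post) is to parse the YAML into an object, mutate the properties you need, and serialize it back before assigning it to content:

const fs = require('fs');
const yaml = require('js-yaml'); // assumed dependency: npm install js-yaml

// Parse the template, override properties, and write the result back
// into the deployment configuration.
const doc = yaml.load(fs.readFileSync(yamlFile, config.encoding));
// Hypothetical property paths, for illustration only:
doc.resources[0].properties.machineType = 'n1-standard-2';
doc.resources[0].properties.zone = config.zone;
config.deploymentConfiguration.target.config.content = yaml.dump(doc);

The insert call can then be made unchanged, since the request still references config.deploymentConfiguration.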

Related

How to get Azure App Configuration feature flag value list in Bicep template

I would like to get a list of already-created feature flags from Azure App Configuration in a Bicep template. I want to pass it to a separate Bicep file that will use the union function on existing and new feature flags, so as not to override the already existing ones.
A similar thing I'm already doing for the Web App, where the list() function gets the existing app settings:
module appConfig './webappsettings.bicep' = {
  name: '${deployment().name}-appSettings'
  params: {
    webAppName: webapp.name
    currentAppSettings: list('${webapp.id}/config/appsettings', '2021-03-01').properties
    appSettings: allSettings
  }
}
How can I achieve something similar for Azure App Configuration to get the key-values of feature flags?
I tried the solution below, but I only got the access keys of the App Configuration store:
resource configurationStore 'Microsoft.AppConfiguration/configurationStores@2021-10-01-preview' existing = {
  name: 'appcfg'
}

module configStoreKeyValues 'inner.bicep' = {
  name: 'config-store'
  params: {
    existingKeyValues: configurationStore.listKeys().value
    keyValues: keyValues
    contentType: contentType
  }
}
Using the same list() function or listKeys():
list('${configurationStore.id}/keyValues','2021-10-01-preview').properties
I'm getting an error:
Status Message: The resource namespace 'subscriptions' is invalid. (Code:InvalidResourceNamespace)
The "List" operation of key-values is not supported by the control-plane REST API in App Configuration. The listKeys API you used above returns the "Access keys", not the key-value configuration data you are looking for. You can create/update/read individual key-value, feature flag, Key Vault reference as KeyValues resource using Bicep. Feature flag is a special key-value with certain key prefix and content type. Below is an example of feature flag using the ARM template, but it should give you an idea of how to do the same in Bicep.
https://azure.microsoft.com/resources/templates/app-configuration-store-ff/
Note that the "List" operation on key-values is supported in the data-plane REST API of App Configuration. Besides the REST API, it's also accessible programmatically via the Azure CLI, the Azure portal, and the App Configuration SDKs.
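For illustration, a minimal Node.js sketch of that data-plane route, assuming the @azure/app-configuration SDK and a connection string (neither appears in the original question). Feature flags are stored under the .appconfig.featureflag/ key prefix, so a key filter retrieves just them:

const { AppConfigurationClient } = require('@azure/app-configuration');

async function listFeatureFlags(connectionString) {
  const client = new AppConfigurationClient(connectionString);
  // Feature flags are key-values whose keys start with this prefix.
  const settings = client.listConfigurationSettings({ keyFilter: '.appconfig.featureflag/*' });
  const flags = [];
  for await (const setting of settings) {
    flags.push({ key: setting.key, value: setting.value });
  }
  return flags;
}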

Azure Bicep - Connect Azure API Management (API) to Azure Function App

I can see within the Azure portal, specifically within the Azure API Management service, that via the GUI you are able to use Azure Functions to form an API.
I am trying to implement the same via Azure Bicep, but I do not see any options in the Bicep documentation for API Management - API Service.
In the GUI, there is a creation dialog that allows me to specify my Function App.
However, within the Bicep documentation I don't see anything where I would expect to, under Microsoft.ApiManagement service/apis.
I have instead tried using Microsoft.ApiManagement service/backends, but that doesn't give the same experience and I haven't managed to get it to work.
So my question is: how do I connect my Azure API Management service to an Azure site (app) which is set up as a suite of Azure Functions?
You need to create the backend and all API definitions manually. The portal gives you a nice creator and does all those REST calls for you. With Bicep (and ARM), which operates directly on the REST endpoints of each resource provider, you need to build your own solution.
Perhaps there are some existing templates somewhere that can do this, but personally I haven't seen any yet.
I added OpenAPI specifications to my function apps to produce the Swagger/OpenAPI link (or file), then leveraged the OpenAPI file to build the APIs.
// Create APIM Service
resource apimServiceRes 'Microsoft.ApiManagement/service@2021-08-01' = {
  name: 'apim service name'
  location: resourceGroup().location
  sku: {
    capacity: 0
    name: 'select a sku'
  }
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    publisherName: 'your info'
    publisherEmail: 'your info'
  }
}

// Create the API Operations with:
resource apimApisRes 'Microsoft.ApiManagement/service/apis@2021-08-01' = {
  name: '${apimServiceRes.name}/name-to-represent-your-api-set'
  properties: {
    format: 'openapi-link'
    value: 'https://link to your swagger file'
    path: ''
  }
}
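If you prefer to drive the same REST operations from code rather than Bicep, here is a hedged Node.js sketch using the @azure/arm-apimanagement management SDK (an assumption; the package, client, and operation names below are not from the original answer and should be checked against the SDK version you use):

const { ApiManagementClient } = require('@azure/arm-apimanagement');
const { DefaultAzureCredential } = require('@azure/identity');

async function importApi(subscriptionId, resourceGroupName) {
  const client = new ApiManagementClient(new DefaultAzureCredential(), subscriptionId);
  // Mirrors the Bicep above: import an API definition from an OpenAPI link.
  await client.api.beginCreateOrUpdateAndWait(
    resourceGroupName,
    'apim-service-name',              // hypothetical APIM instance name
    'name-to-represent-your-api-set', // hypothetical API id
    {
      format: 'openapi-link',
      value: 'https://link-to-your-swagger-file',
      path: 'my-api',
    }
  );
}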

Create Azure DevOps environment from script

I would like to create an Azure DevOps Pipeline environment from PowerShell, but using the Azure CLI or the Azure REST API I cannot find any information on this.
There are some notions about environments in releases, but that's not what I need.
When using the portal, the URL "/_apis/distributedtask/environments" is called, but I can't find any information about this REST API endpoint.
Does anyone know how to automate this?
You're right. If I check the network section when I create a new environment, I can see it uses this API:
https://dev.azure.com/{org}/{project}/_apis/distributedtask/environments
with this JSON body:
{
  "description": "",
  "name": "test"
}
I don't see it documented, but it should work :)
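A minimal Node.js sketch of that call (assumptions not confirmed by the original answer: Node 18+ for the global fetch, a PAT with the appropriate scope, and the api-version value, which may need adjusting):

const org = 'my-org';         // hypothetical
const project = 'my-project'; // hypothetical
const pat = process.env.AZDO_PAT;

async function createEnvironment(name, description = '') {
  const res = await fetch(
    `https://dev.azure.com/${org}/${project}/_apis/distributedtask/environments?api-version=6.0-preview.1`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // PAT goes in the password slot of Basic auth, username left empty.
        Authorization: 'Basic ' + Buffer.from(':' + pat).toString('base64'),
      },
      body: JSON.stringify({ name, description }),
    }
  );
  if (!res.ok) throw new Error(`Environment creation failed: ${res.status}`);
  return res.json();
}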

How to create an Azure Batch pool based on a custom VM image using the Java SDK

I want to use a custom Ubuntu VM image that I created for my batch job. I can create a new pool by selecting the custom image in the Azure portal itself, but I wanted to write a build script to do the same using the Azure Batch Java SDK. This is what I was able to come up with:
List<NodeAgentSku> skus = client.accountOperations().listNodeAgentSkus().findAll({ it.osType() == OSType.LINUX })
String skuId = null
ImageReference imageRef = new ImageReference().withVirtualMachineImageId('/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Compute/images/$CUSTOM_VM_IMAGE_NAME')
for (NodeAgentSku sku : skus) {
    for (ImageReference imgRef : sku.verifiedImageReferences()) {
        if (imgRef.publisher().equalsIgnoreCase(osPublisher) && imgRef.offer().equalsIgnoreCase(osOffer) && imgRef.sku() == '18.04-LTS') {
            skuId = sku.id()
            break
        }
    }
}
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration()
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef)
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount)
But I am getting exception:
Caused by: com.microsoft.azure.batch.protocol.models.BatchErrorException: Status code 403, {
"odata.metadata":"https://analyticsbatch.eastus.batch.azure.com/$metadata#Microsoft.Azure.Batch.Protocol.Entities.Container.errors/#Element","code":"AuthenticationFailed","message":{
"lang":"en-US","value":"Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:bf9bf7fd-2ef5-497b-867c-858d081137e6\nTime:2019-04-17T23:08:17.7144177Z"
},"values":[
{
"key":"AuthenticationErrorDetail","value":"The specified type of authentication SharedKey is not allowed when external resources of type Compute are linked."
}
]
}
I definitely think the way I am getting the skuId is wrong. Since client.accountOperations().listNodeAgentSkus() does not list the custom image, I just thought of giving a skuId based on the Ubuntu version that I had used to create the custom image.
So what is the correct way to create pool using custom VM image for azure batch account using java sdk?
You must use Azure Active Directory credentials in order to create a pool with a custom image. It is in the prerequisites section of the Batch Custom Image doc.
This is a frequently asked question:
Custom Image under AzureBatch ImageReference class not working
Azure Batch Pool: How do I use a custom VM Image via Python?
As the error shows, you need to authenticate to Azure first, and then you can create the pool with a custom image as you want.
First, you need an Azure Batch account. You can create it in the Azure portal or using the Azure CLI, or you can also create the Batch account through Java; see Manage the Azure Batch Account through Java.
Then you also need to authenticate to your Batch account. There are two ways:
Use the account name, key, and URL to create a BatchSharedKeyCredentials instance for authentication with the Azure Batch service. The BatchClient class is the simplest entry point for creating and interacting with Azure Batch objects. (Note that, per the error above, shared-key authentication is not allowed when a custom image is linked.)
BatchSharedKeyCredentials cred = new BatchSharedKeyCredentials(batchUri, batchAccount, batchKey);
BatchClient client = BatchClient.open(cred);
The other way is using AAD (Azure Active Directory) authentication to create the client. See this document for details.
BatchApplicationTokenCredentials cred = new BatchApplicationTokenCredentials(batchEndpoint, clientId, applicationSecret, applicationDomain, null, null);
BatchClient client = BatchClient.open(cred);
Then you can create the pool with the custom image as you want, just like this:
System.out.println("Creating a pool using a custom image.");
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration();
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef);
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount);
System.out.println("Created a Pool: " + poolId);
For more details, see Azure Batch Libraries for Java.

How to use Datadog agent in Azure App Service?

I'm running web apps as Docker containers in Azure App Service. I'd like to add Datadog agent to each container to, e.g., read the log files in the background and post them to Datadog log management. This is what I have tried:
1) Installing Datadog agent as extension as described in this post. This option does not seem to be available for App Service apps, only on VMs.
2) Using multi-container apps as described in this post. However, we have not found a simple way to integrate this with Azure DevOps release pipelines. I guess it might be possible to create a custom deployment task wrapping Azure CLI commands?
3) Including the Datadog agent in our Dockerfiles by following how Datadog Dockerfiles are built. The process seems quite complicated and adds lots of extra dependencies to our Dockerfile. We'd also prefer not to base our Dockerfiles on the Datadog Dockerfile with FROM datadog/agent.
I'd assume this must be a pretty standard problem for Azure+Datadog users. Any ideas on what the cleanest option is?
I doubt the Datadog agent will ever work on an App Service web app, as you do not have access to the running host; it was designed for VMs.
Have you tried this: https://www.datadoghq.com/blog/azure-monitoring-enhancements/ ? They say they support App Services.
I have written an App Service extension for sending Datadog APM metrics with .NET Core and provided instructions for how to set it up here: https://github.com/payscale/datadog-app-service-extension
Let me know if you have any questions or if this doesn't apply to your situation.
Logs from App Services can also be sent to Blob storage and forwarded from there via an Azure Function. Unlike traces and custom metrics from App Services, this does not require a VM running the agent. Docs and code for the Function are available here:
https://github.com/DataDog/datadog-serverless-functions/tree/master/azure/blobs_logs_monitoring
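As a rough illustration of that pattern (this is not the actual function from the linked repo), a blob-triggered Azure Function in Node.js could forward log lines to Datadog's HTTP logs intake. The endpoint and header come from Datadog's public logs API; the binding and payload shape are assumptions, and the global fetch requires Node 18+:

// function.json would bind `blob` to a Blob trigger (binding config omitted).
module.exports = async function (context, blob) {
  // Split the blob into individual log lines, dropping empties.
  const lines = blob.toString('utf8').split('\n').filter(Boolean);
  const res = await fetch('https://http-intake.logs.datadoghq.com/api/v2/logs', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DD_API_KEY,
    },
    body: JSON.stringify(lines.map((message) => ({ message, ddsource: 'azure' }))),
  });
  if (!res.ok) context.log.error(`Datadog intake returned ${res.status}`);
};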
If you want to use Datadog for logging from an Azure Function or App Service, you can use Serilog with the Datadog sink to ship the logs:
services
    .AddLogging(loggingBuilder =>
        loggingBuilder.AddSerilog(
            new LoggerConfiguration()
                .WriteTo.DatadogLogs(
                    apiKey: "REPLACE - DataDog API Key",
                    host: Environment.MachineName,
                    source: "REPLACE - Log-Source",
                    service: GetServiceName(),
                    configuration: new DatadogConfiguration(),
                    logLevel: LogEventLevel.Information
                )
                .CreateLogger())
    );
Full source code and required NuGet packages are here:
To respond to your comment about wanting custom metrics: this is still possible without the agent at the same location. After installing Datadog's StatsD client NuGet package, you can configure it to send the custom metrics to an agent located elsewhere. Example below:
using StatsdClient;

var dogstatsdConfig = new StatsdConfig
{
    StatsdServerName = "127.0.0.1", // Optional if the DD_AGENT_HOST environment variable is set
    StatsdPort = 8125,              // Optional; defaults to the DD_DOGSTATSD_PORT environment variable value, else 8125
    Prefix = "myTestApp",           // Optional; by default no prefix is prepended
    ConstantTags = new string[1] { "myTag:myTestAppje" } // Optional
};

StatsdClient.DogStatsd.Configure(dogstatsdConfig);
StatsdClient.DogStatsd.Increment("fakeVisitorCountByTwo", 2); // The custom metric itself
