Azure Connection String with Spring profile - azure

I am new to Azure with Spring Boot. I have a simple working POC that uses Spring Boot and Azure App Configuration with the connection string set as an environment property (https://learn.microsoft.com/en-us/azure/azure-app-configuration/quickstart-java-spring-app).
But I have the following requirements:
1) I don't want to set the connection string in an environment property; instead I want to put the connection string directly in the bootstrap.yaml file.
Example (this does not work):
spring:
  application:
    name: azurespringappconfig
  cloud:
    azure:
      appconfiguration:
        stores:
          - connection-string: 'Endpoint=https://azurespringconfig:azconfig:xxxxxxxxxxx'
#  cloud:
#    azure:
#      appconfiguration:
#        stores:
#          - connection-string: ${CONNECTIONSTRING}  # this is working using the environment
2) I need to apply the profile concept: different profiles have different connection strings, so whichever profile is active should pick up that profile's connection string (see the sketch below).
Is there any way to solve both of these problems?
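For the profile requirement, what I have in mind is something like Spring Cloud's profile-specific bootstrap files. Below is a sketch with placeholder store endpoints, which I have not verified against the Azure starter:

# bootstrap-dev.yml -- loaded when the 'dev' profile is active
spring:
  cloud:
    azure:
      appconfiguration:
        stores:
          - connection-string: 'Endpoint=https://dev-store.azconfig.io;Id=xxx;Secret=xxx'

# bootstrap-prod.yml -- same structure, loaded when the 'prod' profile is
# active, pointing at the prod store's connection string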
Thanks,
Amar

Related

How to include a health-check endpoint in an Azure Machine Learning Model deployed as an AKS Webservice?

When we deploy a model using the Azure Machine Learning service, we need a scoring script with at least two functions: init, which is run once when a model is first deployed, and run, which is run against any data we want scored.
The endpoint for this is: http://<IP-Address>/api/v1/service/<Webservice-Name>/score
Is there a way to include a health-check function in the deployed web service? Maybe adding another function healthcheck() and a corresponding endpoint http://<IP-Address>/api/v1/service/<Webservice-Name>/health which, when pinged, sends us a custom JSON containing information about the health of our service. This might even include information from the Azure health check too.
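For context, here is a minimal sketch of a scoring script with the two required functions described above (the model name and input handling are placeholder assumptions; note that the scoring runtime only invokes init and run, so an extra healthcheck function would not automatically get its own endpoint):

import json
import joblib
from azureml.core.model import Model

def init():
    # Runs once when the service starts: load the model into memory.
    global model
    model_path = Model.get_model_path('my-model')  # placeholder model name
    model = joblib.load(model_path)

def run(raw_data):
    # Runs for every request to the /score endpoint.
    data = json.loads(raw_data)['data']
    result = model.predict(data)
    return json.dumps({'result': result.tolist()})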
Apparently, the Azure Machine Learning SDK is supposed to expose the properties of the Webservice class, but I haven't found any mention of this issue online.
To fully reproduce the experiment, you have to install the AML SDK:
pip install azureml-core
And get the workspace:
# get the workspace
from azureml.core import Workspace

subscription_id = '<your_subscription_id>'
resource_group = '<your_resource_group>'
workspace_name = '<your_workspace_name>'
ws = Workspace(subscription_id, resource_group, workspace_name)
Using the Webservice class, this should work. I have 3 services, but I can't see their status:
# this should work, but it isn't returning the state
from azureml.core.webservice import Webservice

for service in Webservice.list(workspace=ws):
    print('Service Name: {} - {}'.format(service.name, service.state))
>> Service Name: akstest - None
>> Service Name: regression-model-1 - None
>> Service Name: xgboost-classification-1 - None
Using an intermediate step with the private _get() method, it works:
# this works
for service in Webservice.list(workspace=ws):
    service_properties = Webservice._get(workspace=ws, name=service.name)
    print('Service Name: {} - {}'.format(service_properties['name'], service_properties['state']))
>> Service Name: akstest - Failed
>> Service Name: regression-model-1 - Healthy
>> Service Name: xgboost-classification-1 - Healthy
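If your SDK version supports it, the public update_deployment_state() method refreshes each object's cached state in place and may avoid the private _get() call. A sketch (behavior may vary across azureml-core versions):

from azureml.core.webservice import Webservice

for service in Webservice.list(workspace=ws):
    # Refresh the in-memory object from the cloud resource before reading state.
    service.update_deployment_state()
    print('Service Name: {} - {}'.format(service.name, service.state))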

How to run a Node app (NextJS) on gcloud from github?

I have followed these steps:
I installed the Google Cloud Build app on GitHub, linked it to Cloud Build and configured it to use a certain repository (a private one)
I set up a trigger in Cloud Build: Push to any branch
The project has no app instances after deploying (App Engine -> Dashboard)
My cloudbuild.yaml looks like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--project=project-name', '--version=$SHORT_SHA']
If I try to run the trigger manually, I get this error in Google Cloud:
unable to get credentials for cloud build robot
I have also tried to set IAM roles based on this article, but using @cloudbuild.gserviceaccount.com doesn't seem to be a valid "member" (perhaps I need two projects, one for running and one for building the app?).
How do I fill the gaps / fixes the errors mentioned?
It seems the error message indicates that the build is looking for a credential with the required permission. In step #4 of the article you are following, don't add the Service Account for Cloud Build manually. Check whether the Cloud Build API is enabled in your project; if the API is disabled, enable it. This will automatically create the Cloud Build service account, which looks like this:
[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com
Once the service account is created, go to the Cloud Build > Settings page and enable the required roles for your application (a CLI sketch of the same steps follows).
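For reference, a sketch of the CLI equivalent. The project ID is a placeholder, and the role shown assumes the build deploys to App Engine as in the question:

# Enable the Cloud Build API; this creates the service account automatically
gcloud services enable cloudbuild.googleapis.com --project=project-name

# Grant the Cloud Build service account permission to deploy to App Engine
gcloud projects add-iam-policy-binding project-name \
    --member=serviceAccount:[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com \
    --role=roles/appengine.appAdmin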

How to create an Azure Batch pool based on a custom VM image using the Java SDK

I want to use a custom Ubuntu VM image that I created for my batch job. I can create a new pool by selecting the custom image in the Azure portal itself, but I wanted to write a build script to do the same using the Azure Batch Java SDK. This is what I was able to come up with:
List<NodeAgentSku> skus = client.accountOperations().listNodeAgentSkus().findAll({ it.osType() == OSType.LINUX })
String skuId = null
ImageReference imageRef = new ImageReference().withVirtualMachineImageId('/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Compute/images/$CUSTOM_VM_IMAGE_NAME')
for (NodeAgentSku sku : skus) {
    for (ImageReference imgRef : sku.verifiedImageReferences()) {
        if (imgRef.publisher().equalsIgnoreCase(osPublisher) && imgRef.offer().equalsIgnoreCase(osOffer) && imgRef.sku() == '18.04-LTS') {
            skuId = sku.id()
            break
        }
    }
}
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration()
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef)
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount)
But I am getting exception:
Caused by: com.microsoft.azure.batch.protocol.models.BatchErrorException: Status code 403, {
"odata.metadata":"https://analyticsbatch.eastus.batch.azure.com/$metadata#Microsoft.Azure.Batch.Protocol.Entities.Container.errors/#Element","code":"AuthenticationFailed","message":{
"lang":"en-US","value":"Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:bf9bf7fd-2ef5-497b-867c-858d081137e6\nTime:2019-04-17T23:08:17.7144177Z"
},"values":[
{
"key":"AuthenticationErrorDetail","value":"The specified type of authentication SharedKey is not allowed when external resources of type Compute are linked."
}
]
}
I definitely think the way I am getting the skuId is wrong. Since client.accountOperations().listNodeAgentSkus() does not list the custom image, I just thought of picking a skuId based on the Ubuntu version that I used to create the custom image.
So what is the correct way to create a pool using a custom VM image for an Azure Batch account with the Java SDK?
You must use Azure Active Directory credentials in order to create a pool with a custom image. This is noted in the prerequisites section of the Batch custom image doc.
This is a frequently asked question:
Custom Image under AzureBatch ImageReference class not working
Azure Batch Pool: How do I use a custom VM Image via Python?
As the error shows, you need to authenticate to Azure first, and then you can create the pool with a custom image as you want.
First, you need an Azure Batch account; you can create it in the Azure portal or with the Azure CLI. You can also create the Batch account through Java; see Manage the Azure Batch Account through Java.
Then you also need to authenticate to your Batch account. There are two ways, shown below:
Use the account name, key, and URL to create a BatchSharedKeyCredentials instance for authentication with the Azure Batch service. The BatchClient class is the simplest entry point for creating and interacting with Azure Batch objects.
BatchSharedKeyCredentials cred = new BatchSharedKeyCredentials(batchUri, batchAccount, batchKey);
BatchClient client = BatchClient.open(cred);
The other way is using AAD (Azure Active Directory) authentication to create the client. See this document for details.
BatchApplicationTokenCredentials cred = new BatchApplicationTokenCredentials(batchEndpoint, clientId, applicationSecret, applicationDomain, null, null);
BatchClient client = BatchClient.open(cred);
Then you can create the pool with the custom image as you want, just like this:
System.out.println("Created a pool using an Azure Marketplace image.");
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration();
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef);
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount);
System.out.println("Created a Pool: " + poolId);
For more details, see Azure Batch Libraries for Java.

How to use Datadog agent in Azure App Service?

I'm running web apps as Docker containers in Azure App Service. I'd like to add the Datadog agent to each container to, e.g., read the log files in the background and post them to Datadog log management. This is what I have tried:
1) Installing the Datadog agent as an extension, as described in this post. This option does not seem to be available for App Service apps, only for VMs.
2) Using multi-container apps, as described in this post. However, we have not found a simple way to integrate this with Azure DevOps release pipelines. I guess it might be possible to create a custom deployment task wrapping Azure CLI commands?
3) Including the Datadog agent in our Dockerfiles by following how the Datadog Dockerfiles are built. The process seems quite complicated and adds lots of extra dependencies to our Dockerfile. We'd also prefer not to base our Dockerfiles on the Datadog Dockerfile with FROM datadog/agent.
I'd assume this must be a pretty standard problem for Azure+Datadog users. Any ideas what the cleanest option is?
I doubt the Datadog agent will ever work in an App Service web app, as you do not have access to the running host; it was designed for VMs.
Have you tried this: https://www.datadoghq.com/blog/azure-monitoring-enhancements/ ? They say they support App Services.
I have written an App Service extension for sending Datadog APM metrics with .NET Core and provided instructions for how to set it up here: https://github.com/payscale/datadog-app-service-extension
Let me know if you have any questions or if this doesn't apply to your situation.
Logs from App Services can also be sent to Blob storage and forwarded from there via an Azure Function. Unlike traces and custom metrics from App Services, this does not require a VM running the agent. Docs and code for the Function are available here:
https://github.com/DataDog/datadog-serverless-functions/tree/master/azure/blobs_logs_monitoring
If you want to use Datadog for logging from an Azure Function or App Service, you can use Serilog with the Datadog sink to ship the logs:
services
    .AddLogging(loggingBuilder =>
        loggingBuilder.AddSerilog(
            new LoggerConfiguration()
                .WriteTo.DatadogLogs(
                    apiKey: "REPLACE - DataDog API Key",
                    host: Environment.MachineName,
                    source: "REPLACE - Log-Source",
                    service: GetServiceName(),
                    configuration: new DatadogConfiguration(),
                    logLevel: LogEventLevel.Information)
                .CreateLogger()));
Full source code and required NuGet packages are here:
To respond to your comment about wanting custom metrics: this is still possible without the agent in the same location. After installing Datadog's StatsdClient NuGet package, you can configure it to send custom metrics to an agent located elsewhere. Example below:
using StatsdClient;

var dogstatsdConfig = new StatsdConfig
{
    StatsdServerName = "127.0.0.1", // Optional if the DD_AGENT_HOST environment variable is set
    StatsdPort = 8125,              // Optional; if not present, takes the DD_DOGSTATSD_PORT environment variable value, else defaults to 8125
    Prefix = "myTestApp",           // Optional; by default no prefix is prepended
    ConstantTags = new string[1] { "myTag:myTestAppje" } // Optional
};

StatsdClient.DogStatsd.Configure(dogstatsdConfig);
StatsdClient.DogStatsd.Increment("fakeVisitorCountByTwo", 2); // the custom metric itself

Create VMs and deploy an application onto Azure VMs via Ansible

I am new to the Ansible Azure modules. What I need is the ability to create n VMs and deploy an application on all of them. From what I read online, azure_rm_virtualmachine can be used to create the VMs (assuming the vnet, subnet and other networking jazz is in place). I am trying to figure out how to deploy my application (copy the bits and run my app installer) onto the newly created VMs. Can my app deployment be part of the VM creation process? If so, what are the options? I looked into other modules but couldn't find any relevant ones, and didn't find anything in the Azure documentation either. Thanks.
Use the azure_rm_deployment module to create the VMs.
Because you know what you are deploying, use azure_rm_networkinterface_facts to get the IP address of each VM.
Use add_host to create an ad-hoc inventory. All of this starts with:
---
- name: Create VM
  hosts: localhost
  gather_facts: true
Once you have your inventory, use:
- name: Run updates
  hosts: myazureinventory
  gather_facts: true
From here, you can install your software. Hope it helps.
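A minimal end-to-end sketch of the flow described above, assuming the VM and its NIC already exist. The resource group, NIC name, installer, and the exact fact path returned by azure_rm_networkinterface_facts are placeholder assumptions that may vary by Ansible version:

---
- name: Create VM and register it in an ad-hoc inventory
  hosts: localhost
  gather_facts: true
  tasks:
    - name: Look up the VM's network interface
      azure_rm_networkinterface_facts:
        resource_group: myResourceGroup   # placeholder
        name: myVmNic                     # placeholder
      register: nic

    - name: Add the VM's IP to an ad-hoc group
      add_host:
        # fact path may differ by module version
        name: "{{ nic.ansible_facts.azure_networkinterfaces[0].properties.ipConfigurations[0].properties.privateIPAddress }}"
        groups: myazureinventory

- name: Deploy the application
  hosts: myazureinventory
  gather_facts: true
  tasks:
    - name: Copy the installer to the VM
      copy:
        src: ./app-installer.sh   # placeholder installer
        dest: /tmp/app-installer.sh
        mode: '0755'

    - name: Run the installer
      command: /tmp/app-installer.sh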
The Ansible role to deploy a VM is now here: azure-iaas. I couldn't find a module called azure_rm_deployment in the Ansible repo.
