I'm currently attempting to create a release pipeline to a service fabric cluster.
The goal of the pipeline is to take a built artefact and publish it to a Service Fabric cluster, which it does successfully.
I am looking to add in a manual intervention step which will notify the user of the name of the SF cluster they are attempting to deploy to.
How can I do this? There does not seem to be a way to access the name of the cluster. Using the predefined variable
$(Parameters.serviceConnectionName)
will print the ID of the connection, rather than its actual name.
I do not see any predefined variable named "Parameters.serviceConnectionName" in the following documentation about Azure DevOps. Where did you find this variable?
Use predefined variables
Classic release and artifacts variables
To get the name of the Service Fabric cluster, you could check whether Service Fabric offers a specific command-line tool or API for this. If so, run the relevant command or API call and read the cluster name from its output.
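If the connection ID is all the release exposes, one possible workaround (a rough sketch, not something documented for this exact scenario) is to resolve it through the Azure DevOps Service Endpoints REST API, whose response includes the connection's display name and the cluster URL. The organisation, project, PAT and api-version below are assumptions:

    # Sketch: resolve a service connection ID to its display name / cluster URL
    # via the Azure DevOps Service Endpoints REST API. ORG, PROJECT, PAT and the
    # connection ID are placeholders - substitute your own values.
    import base64
    import requests

    ORG = "my-org"                           # assumption: your Azure DevOps organisation
    PROJECT = "my-project"                   # assumption: your team project
    PAT = "<personal-access-token>"          # assumption: PAT with Service Connections (read) scope
    endpoint_id = "<service-connection-id>"  # e.g. the value printed by $(Parameters.serviceConnectionName)

    # api-version may differ by organisation; 6.0-preview.4 is an assumption
    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/"
           f"serviceendpoint/endpoints/{endpoint_id}?api-version=6.0-preview.4")
    auth = base64.b64encode(f":{PAT}".encode()).decode()

    resp = requests.get(url, headers={"Authorization": f"Basic {auth}"})
    resp.raise_for_status()
    endpoint = resp.json()

    # 'name' is the friendly name shown in Project Settings; 'url' is the cluster endpoint.
    print(endpoint.get("name"), endpoint.get("url"))

The printed name/URL could then be surfaced in the manual intervention instructions via a pipeline variable set from a script step.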
I am relatively new to ADF. While creating a linked service for the 'blob' data store with the default settings
('using connection string' as the authentication type), at the end of the creation step I got the following recommendation:
Linked service will be published immediately
As Data Factory cannot store credentials in a Git repository, this change will be published immediately.
This may cause issues on the master branch and on published resources that depend on this linked service. To avoid the immediate publishing of linked services, we recommend using Azure Key Vault.
I have attached a screenshot of the recommendation to this post.
My concern is: what would be the ideal approach?
Further, if I publish the created linked service directly with the connection string as the authentication type, how do I use it to run and test the pipeline? As of now, I haven't run a pipeline yet; everything I have created so far has been in the Git repository mode of ADF.
Would anyone please help guide me through the process and best practices?
Thank you for giving your valuable time and support.
In Azure it is best to leverage managed identity wherever possible rather than storing credentials in Key Vault, as the vault adds another security and maintenance layer to look after.
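In ADF itself that means choosing the managed identity authentication type on the linked service and granting the factory's identity an appropriate role (e.g. Storage Blob Data Reader) on the storage account. As a rough illustration of the same pattern in code (not ADF-specific; the account URL and container name are placeholders):

    # Sketch of the managed-identity pattern in general: no connection string
    # or Key Vault secret is needed, so there is nothing sensitive to publish.
    # Requires: pip install azure-identity azure-storage-blob
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    ACCOUNT_URL = "https://mystorageaccount.blob.core.windows.net"  # assumption: your storage account
    credential = DefaultAzureCredential()  # picks up the managed identity when running in Azure

    client = BlobServiceClient(account_url=ACCOUNT_URL, credential=credential)
    for blob in client.get_container_client("my-container").list_blobs():  # "my-container" is a placeholder
        print(blob.name)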
I'm using the Azure DevOps REST API to retrieve pipeline runs (aka "builds"). The build response has a bunch of good data, but it seems that the pool it reports only applies if the overall pipeline has a top-level pool defined.
For example, I have a pipeline that runs several parallel jobs, each one in a different self-hosted agent pool. But when I retrieve a build of this pipeline using the REST API, the only data available is for the pipeline's pool, which is the normal Hosted Ubuntu 1604 response you get for Microsoft-hosted builds - there's no mention of any of the self-hosted agent pools that did all the work.
I've tried drilling down into different sections (including the stage and task queries). The task level will eventually show the name of the agent used, but it's just a string, so it's not easy to infer the agent pool used unless you happen to name your agents in a specific way.
Is there any way to drill down into the individual "jobs" that ran as part of a pipeline and see what agent pools they were run on, using the REST API?
The Build response contains a timeline link (under the _links field).
You can form it like this: https://dev.azure.com/<org>/<project>/_apis/build/builds/<buildId>/Timeline
The JSON response has useful info for each job/task record, including the queueId.
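For example, a quick sketch of walking that timeline and printing the agent (workerName) for each Job record; ORG, PROJECT, BUILD_ID and the PAT are placeholders, and fields are read defensively since not every record carries every property:

    # Sketch: list the Job records in a build's timeline and show which agent
    # (workerName) ran each one.
    import base64
    import requests

    ORG, PROJECT, BUILD_ID = "my-org", "my-project", 1234   # assumptions
    PAT = "<personal-access-token>"

    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/"
           f"build/builds/{BUILD_ID}/Timeline?api-version=6.0")
    auth = base64.b64encode(f":{PAT}".encode()).decode()

    records = requests.get(url, headers={"Authorization": f"Basic {auth}"}).json()["records"]

    for rec in records:
        if rec.get("type") == "Job":
            # workerName is the agent that ran the job; mapping that agent back to
            # its pool may still need a separate lookup against the pool/queue APIs.
            print(rec.get("name"), "->", rec.get("workerName"))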
At present, the REST API cannot directly drill down into the individual "jobs" that ran as part of a pipeline and show which agent pools they were run on; that API only returns the default settings of your pipeline.
As a workaround, we can use the Builds - Get Build Log API and find the agent's name, like this:
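Something along these lines (a rough sketch; the log ID and the exact wording of the "Agent name" line printed by the "Initialize job" step can vary between agent versions, and ORG, PROJECT, BUILD_ID, LOG_ID and the PAT are placeholders):

    # Sketch: pull a job's log via Builds - Get Build Log and grep for the
    # agent name, which the job initialisation output usually prints near the top.
    import base64
    import requests

    ORG, PROJECT, BUILD_ID, LOG_ID = "my-org", "my-project", 1234, 7   # assumptions
    PAT = "<personal-access-token>"

    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/"
           f"build/builds/{BUILD_ID}/logs/{LOG_ID}?api-version=6.0")
    auth = base64.b64encode(f":{PAT}".encode()).decode()

    log_text = requests.get(url, headers={"Authorization": f"Basic {auth}"}).text
    for line in log_text.splitlines():
        if "Agent name" in line:
            print(line)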
I'm using a PHP app, BoxBilling. It takes orders from final users, and these orders need to be processed into actual nodes and containers.
I was planning on using Terraform as the provisioner for both: containers whenever there is room available in existing nodes, and new nodes whenever the existing ones are full.
Terraform would interface with my provider for creating new nodes and with Vagrant for configuring containers.
Vagrant would interface with Kubernetes to provision the pods/containers.
Question is: is there an inbound Terraform API that I can use to send orders to Terraform from the BoxBilling app?
I've searched the documentation, examples and case studies but it's eluding me...
Thank you!
You could orchestrate the provisioning of infrastructure and/or configuration of nodes using an orchestration/CI tool such as Jenkins.
Jenkins has a Remote Access API which could be called to trigger a set of steps, which could include terraform plan, terraform apply, the creation of new workspaces, etc., and then hand off downstream to configuration, testing, and anything else in your toolchain.
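For example, a minimal sketch of triggering a parameterised Jenkins job over the Remote Access API (shown in Python for brevity; the Jenkins URL, job name, credentials and parameters are all placeholders):

    # Sketch: trigger a parameterised Jenkins job (which in turn runs
    # terraform plan/apply) from the billing application.
    import requests

    JENKINS_URL = "https://jenkins.example.com"          # assumption: your Jenkins instance
    JOB_NAME = "provision-node"                          # assumption: your Terraform job
    USER, API_TOKEN = "boxbilling-bot", "<api-token>"    # assumption: Jenkins user + API token

    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        auth=(USER, API_TOKEN),
        params={"ORDER_ID": "12345", "NODE_SIZE": "small"},  # hypothetical job parameters
    )
    resp.raise_for_status()
    # Jenkins answers with 201 and a Location header pointing at the queued item.
    print(resp.status_code, resp.headers.get("Location"))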
Is there a way to shorten the process? Should I have two Service Fabric clusters if we want to implement a continuous delivery process?
If the Service Fabric cluster deployment (i.e. the creation of a Service Fabric cluster) is stuck, open a support issue in the Azure Portal to help get it resolved.
For application deployment you don't need a separate cluster to do CD. Depending on your CD strategy (e.g. rolling upgrades, rip and replace, blue/green), there are various ways of doing that in Service Fabric. Take a look here for some of the conceptual documentation on this topic: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade
What is the difference between updating a deployment and deleting and then creating a new deployment for a cloud service?
We have a cloud service set up which, during deployment, first deletes the existing deployment in staging and then creates a new one. Because of this, the VIP for staging keeps changing. We have a requirement that both the PROD and Staging VIPs always remain the same.
Before changing the deployment option, I would like to know what the real difference is between these two options and why both exist.
I tried to search but found nothing on this.
EDIT: In the Azure Pub XML, we have a node named 'AzureDeploymentReplacementMethod', and the different options for this field are 'createanddelete', 'automaticupgrade' and 'blastupgrade'.
Right now we are using 'createanddelete' and we are interested in using 'blastupgrade'.
Any help would be much appreciated.
Thanks,
Javed
When you use the Create&Delete deployment method, the process simply deletes the existing deployment and then creates a new one.
The other two options perform an upgrade deployment. The difference between automaticupgrade and blastupgrade lies in the value of the Mode element of the Upgrade Deployment operation. As their names suggest, automaticupgrade sends Auto for that element, while blastupgrade sends Simultaneous. As per the documentation:
Mode: Required. Specifies the type of update to initiate. Role instances are allocated to update domains when the service is deployed. Updates can be initiated manually in each update domain or initiated automatically in all update domains. Possible values are:
Auto
Manual
Simultaneous
If not specified, the default value is Auto. If set to Manual, WalkUpgradeDomain must be called to apply the update. If set to Auto, the update is automatically applied to each update domain in sequence. The Simultaneous setting is only available in version 2012-12-01 or higher.
You can read more on Update Cloud Service here.
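For reference, a blast (simultaneous) upgrade through the classic Service Management API looks roughly like the sketch below. The subscription ID, service name, slot, package URL, configuration file and certificate paths are all placeholders, and the element names/order are from the Upgrade Deployment documentation as I remember it, so verify them before use:

    # Rough sketch of the classic (ASM) Upgrade Deployment call with
    # Mode=Simultaneous (what 'blastupgrade' maps to). All identifiers,
    # URLs and files here are placeholders.
    import base64
    import requests

    SUBSCRIPTION_ID = "<subscription-id>"
    SERVICE_NAME = "<cloud-service-name>"
    SLOT = "staging"   # or "production"

    cscfg = open("ServiceConfiguration.cscfg", "rb").read()
    body = f"""<?xml version="1.0" encoding="utf-8"?>
    <UpgradeDeployment xmlns="http://schemas.microsoft.com/windowsazure">
      <Mode>Simultaneous</Mode>
      <PackageUrl>https://mystorage.blob.core.windows.net/packages/app.cspkg</PackageUrl>
      <Configuration>{base64.b64encode(cscfg).decode()}</Configuration>
      <Label>{base64.b64encode(b"blast upgrade").decode()}</Label>
      <Force>true</Force>
    </UpgradeDeployment>"""

    resp = requests.post(
        f"https://management.core.windows.net/{SUBSCRIPTION_ID}/services/"
        f"hostedservices/{SERVICE_NAME}/deploymentslots/{SLOT}/?comp=upgrade",
        data=body.encode(),
        headers={"x-ms-version": "2012-12-01", "Content-Type": "application/xml"},
        cert=("management-cert.pem", "management-key.pem"),  # management certificate auth
    )
    print(resp.status_code)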
However, if you really want the VIP to persist in all situations, I would suggest that you:
Do not use staging for cloud services at all - just use two separate cloud services (one for production and one for staging), and
use the Reserved IP Address feature of the Azure platform.