How do I set tags using rackspace-novaclient?

How do you set tags for Rackspace servers using novaclient?
I see how to add tags in the GUI. However, I don't see a way to set them when launching a server with nova. How do I set tags? If not via nova, is there API access to the tags?

Tags are not implemented in the API itself, so they're not accessible from a nova CLI command.
Metadata is available to set key=value identifiers on your servers, using the nova CLI. These won't appear as tags in your control panel though. Here's some info about setting metadata on your servers with the nova CLI.
nova meta <server> <action> <key=value> [<key=value> ...]
Set or Delete metadata on a server.
Positional arguments:
<server> Name or ID of server
<action> Actions: 'set' or 'delete'
<key=value> Metadata to set or delete (only key is necessary on delete)
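For example, with an authenticated nova client and an existing server (here named web01, a hypothetical name; env and owner are example keys with no special meaning):

```shell
# Set two example metadata keys on the server "web01":
nova meta web01 set env=prod owner=ops

# Delete one of them again (only the key is needed on delete):
nova meta web01 delete owner
```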


Get Azure Resource Details based on the Tag using Rest API

I am able to fetch the resource details using an Azure CLI command. However, is there a way to fetch specific resource details using the Azure Management API URL?
I have listed the Azure Management API link I use to get the tag details. I need to get the resource details via the API as well. Please help me.
CLI command
Get-AzResource -Tag @{ "ApplicationID" = "XXX" }
Output
Name : sample-dev-func
ResourceGroupName : example-rg
ResourceType : Microsoft.Web/sites
Location : eastus
ResourceId : /subscriptions/xxxxxxx/resourceGroups/xxxxx/providers/Microsoft.Web/sites/resourceName
etc..,
Azure Management API
https://management.azure.com/subscriptions/{subscriptionId}/tagNames?api-version=2021-04-01
However, is there a way to fetch the specific resource details using
the Azure Management API URL?
Yes. You can use the Resources - List REST API to get the list of resources. This API supports filtering by tag name/value.
From the same link:
Resources can be filtered by tag names and values. For example, to
filter for a tag name and value, use $filter=tagName eq 'tag1' and
tagValue eq 'Value1'. Note that when resources are filtered by tag
name and value, the original tags for each resource will not be
returned in the results. Any list of additional properties queried via
$expand may also not be compatible when filtering by tag names/values.
For tag names only, resources can be filtered by prefix using the
following syntax: $filter=startswith(tagName, 'depart'). This query
will return all resources with a tag name prefixed by the phrase
depart (i.e. department, departureDate, departureTime, etc.)
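As a sketch, here is that filter applied to the tag from the question using curl (assumes $TOKEN holds a valid bearer token; {subscriptionId} is a placeholder, and -G/--data-urlencode takes care of the spaces in the $filter expression):

```shell
# List resources where the tag ApplicationID equals XXX:
curl -s -G -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/{subscriptionId}/resources" \
  --data-urlencode "\$filter=tagName eq 'ApplicationID' and tagValue eq 'XXX'" \
  --data-urlencode "api-version=2021-04-01"
```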
Tip: the Azure CLI is essentially a wrapper around the Azure REST API. If you want to see the actual request sent by a CLI command and the response received, simply execute the command with the --debug switch. You will get all the details.

How to hide Terraform "artifactory" backend credentials?

Terraform 1.0.x
It's my first time using an artifactory backend to store my state files. In my case it's a Nexus repository, and I followed this article to set up the repository.
I have the following configuration:
terraform {
  backend "artifactory" {
    # URL of Nexus-OSS repository
    url = "http://x.x.x:8081/repository/"
    # Repository name (must be terraform)
    repo = "terraform"
    # Unique path for this particular plan
    subpath = "exa-v30-01"
    # Nexus-OSS creds (must have r/w privs)
    username = "user"
    password = "password"
  }
}
Since the backend configuration does not accept variables for the username and password key/value pairs, how can I hide the credentials so they're not in plain sight when I store my files in our Git repo?
Check out the "Partial Configuration" section of the Backend Configuration documentation. You have three options:
Specify the credentials in a backend config file (that isn't kept in version control) and specify the -backend-config=PATH option when you run terraform init.
Specify the credentials in the command line using the -backend-config="KEY=VALUE" option when you run terraform init (in this case, you would run terraform init -backend-config="username=user" -backend-config="password=password").
Specify them interactively. If you just don't include them in the backend config block, and don't provide a file or CLI option for them, then Terraform should ask you to type them in on the command line.
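A minimal sketch of the first option, assuming a git-ignored file named backend.hcl (the file name is arbitrary):

```shell
# Keep the credentials in a file that never reaches version control:
cat > backend.hcl <<'EOF'
username = "user"
password = "password"
EOF

# Make sure the file is ignored by git:
echo "backend.hcl" >> .gitignore

# Then initialize pointing at it:
# terraform init -backend-config=backend.hcl
```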
For settings related to authentication or identifying the current user running Terraform, it's typically best to leave those unconfigured in the Terraform configuration and use the relevant system's normal out-of-band mechanisms for passing credentials.
For example, the s3 backend supports all of the same credential sources that the AWS CLI does, so typically you would just configure the AWS CLI with suitable credentials and let Terraform's backend pick up the same settings.
For systems that don't have a standard way to configure credentials out of band, the backends usually support environment variables as a Terraform-specific replacement. In the case of the artifactory backend it seems that it supports ARTIFACTORY_USERNAME and ARTIFACTORY_PASSWORD environment variables as the out-of-band credentials source, and so I would suggest setting those environment variables and then omitting username and password altogether in your backend configuration.
Note that this out-of-band credentials strategy is subtly different than using partial backend configuration. Anything you set as part of the backend configuration -- whether in a backend block in configuration or on the command line -- will be saved by Terraform into a local cache of your backend settings and into every plan file Terraform saves.
Partial backend configuration is therefore better suited to situations where the location of the state is configured systematically by some automation wrapper, and thus it's easier to set it on the command line than to generate a configuration file. In that case, it's beneficial to write out the location to the cached backend configuration so that you can be sure all future Terraform commands in that directory will use the same settings. It's not good for credentials and other sensitive information, because those can sometimes vary over time during your session and should ideally only be known temporarily in memory rather than saved as part of artifacts like the plan file.
Out-of-band mechanisms like environment variables and credentials files are handled directly by the backend itself and are not recorded directly by Terraform, and so they are a good fit for anything which is describing who is currently running Terraform, as opposed to where state snapshots will be saved.
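A minimal sketch of that approach (the values are placeholders; with these set, you can drop username and password from the backend block entirely):

```shell
# Out-of-band credentials for the artifactory backend; the backend reads
# these environment variables itself, so they are never written into
# Terraform's backend cache or plan files.
export ARTIFACTORY_USERNAME="user"
export ARTIFACTORY_PASSWORD="password"

# terraform init   # no -backend-config needed for the credentials
```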

Tag Azure DevOps Agents when deploying to an Environment VM Resource

I want to install an Azure agent onto my VM and have it appear as an Environment resource as described here: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments-virtual-machines?view=azure-devops.
This works if you run the script interactively; however, when I use --unattended (as described here: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#unattended-config) there is no longer a way to specify tags. The --addDeploymentGroupTags option doesn't work with Environment agents.
How do I automate the adding of a VM as an environment resource with tags?
I had a look at the source code and figured out there is an undocumented way to do this. Just pass the following options:
--addvirtualmachineresourcetags --virtualmachineresourcetags "<tag>"
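Combined with the documented unattended flags, a full registration command might look like this (a sketch for Windows; the organization, project, and PAT are placeholders, and the two tag flags are the undocumented ones found in the agent source):

```shell
.\config.cmd --unattended --environment --environmentname "Production" ^
  --url https://dev.azure.com/myOrg --projectname "MyProject" ^
  --auth PAT --token <PAT> --agent %COMPUTERNAME% --runasservice ^
  --addvirtualmachineresourcetags --virtualmachineresourcetags "web,db"
```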
According to the official doc, the “interactive PS registration script” supports adding environment tags. The document doesn't mention adding tags in “Unattended config” mode.
You can add tags to the VM as part of the interactive PS registration script. You can also add or remove tags from the resource view by clicking on ... at the end of each VM resource on the Resources tab.
We can simply run .\config.cmd --help in PowerShell to check the help info for this command.
It only mentions how to add a “deployment group tag” through an option, and nothing related to tagging a VM resource in an environment.
I'm afraid it is not possible to add tags to an Environment VM resource in “Unattended config” mode.

Azure API Management - how to enable HTTP/2 via PowerShell command?

I'm looking for the correct PowerShell syntax to enable the HTTP/2 setting for an Azure API Management instance.
I assume it's somehow done with this one (New-AzApiManagementSslSetting), but
a) what is the exact syntax, and
b) can I also enable it for an existing instance (like you can do in the portal)?
For some reason the PowerShell command did not work for me, so I did this with bash (and jq). First modify the customProperties and store it in a variable:
customPropertiesNew=$(az apim show -n $APIM_NAME -g $APIM_RG --query customProperties | jq '."Microsoft.WindowsAzure.ApiManagement.Gateway.Protocols.Server.Http2" = "True"')
Then apply your modified customProperties (this step can take 7-14 minutes!):
az apim update -n $APIM_NAME -g $APIM_RG --set customProperties="$customPropertiesNew"
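To see what that jq filter actually does, here it is applied to a minimal sample object (a sketch; the real customProperties object returned by az apim show contains more keys, which the filter preserves unchanged):

```shell
# jq's assignment adds the Http2 key if it is missing, or overwrites it if
# present; all other keys in the object pass through untouched.
echo '{"Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TripleDes168":"False"}' \
  | jq '."Microsoft.WindowsAzure.ApiManagement.Gateway.Protocols.Server.Http2" = "True"'
```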
So, you want to assign Http2 to the frontend/backend via the -ServerProtocol parameter as shown in the docs? --- Why are you not just doing...
$enableHttp2 = @{'Http2' = 'True'}
Along with what Mohammad already pointed you to, which shows the Http2 setting on the -ServerProtocol parameter...
-ServerProtocol Server protocol settings like Http2. This parameter is optional.
Type: Hashtable
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
The settings docs specifically say...
Updated cmdlet New-AzApiManagement to manage ApiManagement service
Added support for the new 'Consumption' SKU
Added support to turn the 'EnableClientCertificate' flag on for 'Consumption' SKU
The new cmdlet New-AzApiManagementSslSetting allows configuring
'TLS/SSL' setting on the 'Backend' and 'Frontend'.
This can also be used to configure 'Ciphers' like '3DES' and 'ServerProtocols' like 'Http2' on the 'Frontend' of an ApiManagement service.
Added support for configuring the 'DeveloperPortal' hostname on ApiManagement service.
Updated cmdlets Get-AzApiManagementSsoToken to take 'PsApiManagement' object as input

How to use Cyber Duck CLI with a custom end point url

I am trying to use Cyberduck CLI to connect to an S3-compatible Ceph API by UKFast (https://www.ukfast.co.uk/cloud-storage.html). It has the same function as Amazon S3 but obviously uses a different URL/server. The connection is via access key and secret key, the same as S3. Cyberduck CLI protocols are listed here: https://trac.cyberduck.io/wiki/help/en/howto/cli
I have tried the command below in the Windows command prompt. The problem is that Cyberduck automatically adds the Amazon AWS URL. So how do I use all the S3 options with a custom endpoint?
C:\> duck --list s3://<Host>/ -i <AccessKey> -p <SecretKey>
The s3:// scheme is reserved for AWS in Cyberduck CLI. If you want to connect to a third-party service compatible with the S3 protocol, you will need to create a custom connection profile. A connection profile is an XML property list (.cyberduckprofile) file that you install, providing another connection scheme. An example of such a profile is the Rackspace profile shipped within the application bundle in Profiles/Rackspace US.cyberduckprofile, which adds the rackspace:// scheme to connect to the OpenStack Swift compatible Rackspace Cloud. You can download one of the other S3 profiles available and use it as a template. Make sure to change at least the Vendor key to the protocol scheme you want to use, such as ukfast, and put the UKFast service endpoint in as the value for the Default Hostname key (which corresponds to s3.amazonaws.com; I cannot find any documentation for the S3 endpoint of UKFast).
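For reference, a minimal profile might look like the sketch below (the Default Hostname value is a placeholder, since the actual UKFast S3 endpoint is not documented; the key names follow the bundled S3 profiles):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Protocol</key>
    <string>s3</string>
    <key>Vendor</key>
    <string>ukfast</string>
    <key>Description</key>
    <string>UKFast Cloud Storage</string>
    <key>Default Hostname</key>
    <string>your-ukfast-endpoint.example.com</string>
</dict>
</plist>
```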
When done, verify the new protocol is listed in duck --help. You can then use the command
duck --list ukfast://bucket/ --username <AccessKey> --password <Secret Key>
to list files in a bucket.
You might also want to request UKFast to provide such a profile file for you and other users to make setup simpler. The same connection profile can also be used with Cyberduck.
