Azure CDN purge command issue - azure

I have created a CDN profile named 'cdn-profile' in resource group 'rgDev'. In that profile, the endpoint created is 'webqa.azureedge.net', and inside that I have created a custom domain 'qa.example.com'. I want to purge the CDN cache. Below is the command I run:
$ResourceGroupName='rgDev'
$EndpointName='qa.example.com'
$ProfileName='cdn-profile'
$CDNEndPointName='webqa.azureedge.net'
az cdn endpoint purge -g $ResourceGroupName -n $EndpointName --profile-name $ProfileName --content-paths '/*' --name $CDNEndPointName
This gives me the following error:
Endpoint(s) not found. Please verify the resource(s), group or it's parent resources exist.
What am I missing here?

In the Azure CLI command az cdn endpoint purge, the name of the CDN endpoint is webqa (the name of the resource of type Endpoint in your resource group), not the hostname webqa.azureedge.net or the custom domain qa.example.com.
You should use it like this:
$ResourceGroupName='rgDev'
$ProfileName='cdn-profile'
$CDNEndPointName='webqa'
az cdn endpoint purge -g $ResourceGroupName --profile-name $ProfileName --content-paths '/*' --name $CDNEndPointName
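If you are not sure what the endpoint resource is called, you can list the endpoints in the profile first; a minimal check reusing the variables above (the --query expression is just one way to print the names):
# print the endpoint resource names in the profile (the value to pass to --name)
az cdn endpoint list -g $ResourceGroupName --profile-name $ProfileName --query '[].name' --output tsv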

Related

Problem with Azure in Microsoft learning path module (Kubernetes)

I am working through this module of a Microsoft course:
https://learn.microsoft.com/en-us/learn/modules/microservices-aspnet-core/
I created an Azure subscription and tried to run the script given in unit 2.
Something is going on in the console, but at some point it shows something like this:
Getting credentials for AKS...
(ResourceNotFound) The Resource 'Microsoft.ContainerService/managedClusters/eshop-learn-aks' under resource group 'eshop-learn-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Code: ResourceNotFound
Message: The Resource 'Microsoft.ContainerService/managedClusters/eshop-learn-aks' under resource group 'eshop-learn-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix
Installing NGINX ingress controller
error: You must be logged in to the server (the server has asked for the client to provide credentials)
error: You must be logged in to the server (the server has asked for the client to provide credentials)
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Getting load balancer public IP
> kubectl get svc -n ingress-nginx -o json | jq -r -e '.items[0].status.loadBalancer.ingress[0].ip // empty'
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Waiting for load balancer IP...
Am I doing something wrong? I strictly followed the instructions.
Edit:
I think the problem is with the VM size, not with AKS itself.
> az aks create -n eshop-learn-aks -g eshop-learn-rg --node-count 1 --node-vm-size Standard_D2_v5 --vm-set-type VirtualMachineScaleSets -l centralus --enable-managed-identity --generate-ssh-keys -o json
ERROR: (BadRequest) The VM size of AgentPoolProfile:nodepool1 is not allowed in your subscription in location 'centralus'.
You need to log in:
az login
az account set --subscription <YOUR SUB ID>
az aks get-credentials --resource-group <AKS RG> --name <AKS NAME>
The 'centralus' location doesn't allow that VM size for the type of subscription you have.
You need to use another location.
To do that, declare a variable 'defaultRegion' in the bash shell (e.g. declare defaultRegion=eastus) before executing wget.
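If it helps, you can also check whether the size is restricted for your subscription in a candidate region before re-running the module's script; a rough sketch (eastus is only an example region):
# a non-empty Restrictions column means the size cannot be used there
az vm list-skus --location eastus --size Standard_D2_v5 --all --output table
# point the learning-module script at the chosen region
declare defaultRegion=eastus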

Binding SSL certificate to App service using Azure CLI and Keyvault

I'm trying to use the Azure CLI to configure an Azure App Service with SSL certificates that are stored in an Azure Key Vault. I'm new to the Azure CLI and am having trouble finding a complete set of sample code that does this. I've found documentation/examples of the individual commands, but am having trouble chaining them together. I would definitely appreciate some assistance/guidance, as I feel like this is a common scenario.
At first, I thought this would be a simple 'linking' type command: the certs are already uploaded in Key Vault, so Azure App Service, go get 'em, here's the $pfxPassword.
It doesn't look like that is possible. I found some documentation suggesting that you need to download the cert from the key vault and then upload it.
It took me a little while to realize that you don't use az keyvault certificate for this... you need to use az keyvault secret download.
I then found some other commands on how to upload the cert, get the thumbprint, and bind the cert to the app Service.
I chained these three commands together, but am not able to get it to work.
#download the cert
az keyvault secret download --file $fileName --vault-name $vaultName --name $certName;
#upload the cert and get the thumbprint
$thumbprint=az webapp config ssl upload --certificate-file $fileName --certificate-password $pfxPassword --name $site_name --resource-group $ResourceGroupName --query thumbprint --output tsv
#bind the uploaded cert to the app service.
az webapp config ssl bind --certificate-thumbprint $thumbprint --ssl-type SNI --name $site_name --resource-group $ResourceGroupName
I can confirm the first command is downloading the cert. (After a while I figured out how to import it into my Win10 development machine -- even though the certs were uploaded into Key Vault with a password, downloading them stripped the password out.)
Unfortunately, it looks like the second command (upload and get the thumbprint) REQUIRES a password.
What is the 'correct' way to do this?
Thanks for your guidance/advice.
According to my test, when we use the Azure CLI to download the certificate as a pfx file from Azure Key Vault, it has a blank password. So when we use the CLI to upload the pfx file to the Azure web app, we can use the following command:
az webapp config ssl upload --certificate-file "<pfx file name>" --name "<web name>" --resource-group "<group name>" --certificate-password "" --query thumbprint --output tsv
Here is the complete sequence I tested:
az login
# upload certificate to Azure key vault
az keyvault certificate import --file "E:\Cert\P2SChildCert.pfx" --password "" --name "test1234" --vault-name "testkey08"
# download certificate as pfx file
az keyvault secret download --file "test2.pfx" --vault-name "testkey08" --name "test1234" --encoding base64
# upload the pfx file to the Azure web app
az webapp config ssl upload --certificate-file "test2.pfx" --name "andywebsite" --resource-group "andywebbot" --certificate-password "" --query thumbprint --output tsv
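If you want to chain this through to the binding step from the question, here is a sketch in bash reusing the same illustrative names:
# capture the thumbprint from the upload, then bind the certificate with SNI
thumbprint=$(az webapp config ssl upload --certificate-file "test2.pfx" --name "andywebsite" --resource-group "andywebbot" --certificate-password "" --query thumbprint --output tsv)
az webapp config ssl bind --certificate-thumbprint "$thumbprint" --ssl-type SNI --name "andywebsite" --resource-group "andywebbot"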
Besides, if your certificate is already stored in Azure Key Vault, you can import it into the Azure web app directly via the Azure portal.

Not able to purge Azure CDN Endpoint using Azure CLI

I'm using Azure CLI to purge all the contents from Azure CDN endpoint. I got a reference from Microsoft Docs: https://learn.microsoft.com/en-us/cli/azure/cdn/endpoint?view=azure-cli-latest
I'm trying exactly the same commands with the proper params, but it says: "Endpoint(s) not found. Please verify the resource(s), group or it's parent resources exist."
az cdn endpoint purge -g <my-resource-group> --profile-name \
<name-of-cdn-profile> --content-paths '/*' --name <cdn-endpoint-name>
This renders the above error.
However, I can see the CDN endpoint when I issue the list command:
az cdn endpoint list -g <my-resource-group> --profile-name <cdn-profile-name>
The above command works fine and returns the endpoint which I'm trying to purge.
Anyone having a similar experience?
TIA!
I can reproduce this error. Please check the parameter --name <cdn-endpoint-name>; it should not have a suffix like .azureedge.net. The endpoint name is the name of the resource of type Endpoint in your resource group.
With the above options I was unable to fix it, and then I realized a small issue! Using double quotes rather than single quotes around the content path fixed it for me:
az cdn profile list
az cdn endpoint purge --resource-group {rg_name} --name {cdn_name} --profile-name {cdn_profile_name} --content-paths "/*"
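For reference, the purge can also target specific paths instead of everything; a sketch reusing the placeholders above (--no-wait returns immediately instead of waiting for the purge to complete, on CLI versions that support it):
az cdn endpoint purge --resource-group {rg_name} --name {cdn_name} --profile-name {cdn_profile_name} --content-paths "/images/*" "/css/site.css" --no-wait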

HTTPS access to Azure ubuntu Virtual Machine

I have an Ubuntu VM running on Microsoft Azure.
Currently I can access it using HTTP, but not with HTTPS.
In the network interface's inbound port rules, 443 is already allowed.
I already added a certificate to the VM by creating a key vault and a certificate and preparing it for deployment, following this documentation:
az keyvault update -n <keyvaultname> -g <resourcegroupname> --set properties.enabledForDeployment=true
then added the certificate following this answer.
In Azure CLI:
$secret=$(az keyvault secret list-versions \
--vault-name <keyvaultname> \
--name <certname> \
--query "[?attributes.enabled].id" --output tsv)
$vm_secret=$(az vm secret format --secret "$secret")
az vm update -n <vmname> -g <resourcegroupname> --set osProfile.secrets="$vm_secret"
I got the following error:
Unable to build a model: Cannot deserialize as [VaultSecretGroup] an object of type <class 'str'>, DeserializationError: Cannot deserialize as [VaultSecretGroup] an object of type <class 'str'>
However, when I run az vm show -g <resourcegroupname> -n <vmname> after that, the osProfile secrets already contain the secret I added:
"secrets": [
{
"sourceVault": {
"id": "/subscriptions/<subsID>/resourceGroups/<resourcegroupName>/providers/Microsoft.KeyVault/vaults/sit-key-vault"
},
"vaultCertificates": [
{
"certificateStore": null,
"certificateUrl": "https://<keyvaultname>.vault.azure.net/secrets/<certname>/<certhash>"
}
]
}
],
When I access it using HTTPS, it fails. I can access it using HTTP, but Chrome still shows the "Not secure" mark next to the address.
What did I miss?
I also checked the answer from a similar question, but could not find "Enable Direct Server Return" anywhere in the VM control panel page.
As far as I know, you can follow these steps to configure SSL for an nginx server.
Add SSL cert
$secret=$(az keyvault secret list-versions --vault-name "keyvault_name" --name "cert name" --query "[?attributes.enabled].id" --output tsv)
$vm_secret=$(az vm secret format --secrets "$secret")
az vm update -n "VM name" -g "resource group name" --set osProfile.secrets="$vm_secret"
Install Nginx
sudo apt-get update
sudo apt-get install nginx
Configure SSL Cert
#get cert name
find /var/lib/waagent/ -name "*.prv" | cut -c -57
#copy the cert and private key found above into place
sudo mkdir /etc/nginx/ssl
sudo cp "<your cert name>.crt" /etc/nginx/ssl/mycert.cert
sudo cp "<your cert name>.prv" /etc/nginx/ssl/mycert.prv
#change nginx configuration file
sudo nano /etc/nginx/sites-available/default
PS: add the following content to the file
server {
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/mycert.cert;
ssl_certificate_key /etc/nginx/ssl/mycert.prv;
}
service nginx restart
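A couple of sanity checks, not from the original answer, assuming the nginx setup above (replace <vm-public-ip> with your VM's public IP or DNS name):
# validate the nginx configuration before restarting
sudo nginx -t
# test the TLS handshake from outside the VM (-k skips CA validation, e.g. for self-signed certs)
curl -vk https://<vm-public-ip>/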

Add SSL Cert to an existing Linux VM from Azure Key Vault

How do you add an SSL cert to an existing Azure Linux VM from Azure Key Vault? For Windows we use the following commands:
$vaultId=(Get-AzureRmKeyVault -ResourceGroupName $resourceGroup -VaultName $keyVaultName).ResourceId
$vm = Add-AzureRmVMSecret -VM $vm -SourceVaultId $vaultId -CertificateStore "My" -CertificateUrl $certURL
Is there a similar one for a Linux VM? Is there a link for Linux similar to this one: Secure IIS web server with SSL certificates on a Windows virtual machine in Azure?
You could use the Azure CLI to do this, using the following commands:
secret=$(az keyvault secret list-versions \
--vault-name $keyvault_name \
--name mycert \
--query "[?attributes.enabled].id" --output tsv)
vm_secret=$(az vm secret format --secrets "$secret")
az vm update -n shui -g shuikeyvault --set osProfile.secrets="$vm_secret"
The certificate is then stored under /var/lib/waagent; you could use the Azure Custom Script extension to make use of it.
Note: You should quote it as "$vm_secret"; I tested in my lab, and an unquoted $vm_secret does not work for me.
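As a sketch of the Custom Script suggestion above (the command, destination path, and VM/group names are illustrative assumptions, not from the original answer):
# run a command on the VM via the CustomScript extension to copy the deployed
# certificate and private key out of the waagent directory
az vm extension set \
  --resource-group shuikeyvault \
  --vm-name shui \
  --publisher Microsoft.Azure.Extensions \
  --name CustomScript \
  --settings '{"commandToExecute": "mkdir -p /etc/nginx/ssl && cp /var/lib/waagent/*.crt /var/lib/waagent/*.prv /etc/nginx/ssl/"}'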
If you can already log in to the VM, you can push a new key with ssh-copy-id -i ~/.ssh/id_rsa.pub aht@myserver. But if you have rights to the VM but not the original key, you want to use azure vm reset-access to do so. It is in fact documented as a standalone ability:
help: -M, --ssh-key-file path to public key PEM file or SSH Public key file for SSH authentication (valid only when os-type is "Linux")
Of course, it doesn't say what else should happen here in order to add the key I provide to the currently running VM I'm targeting. But the result needs to be that if I specify a user that already exists, and there's already a key there, this one gets added alongside it.
You'll note that Azure/azure-linux-extensions#295 (https://github.com/Azure/azure-linux-extensions/issues/295) suggests that using azure vm set-extensions and then reset-access is broken.
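For what it's worth, on the current az CLI the closest equivalent for pushing an additional public key to a running Linux VM appears to be az vm user update; a minimal sketch with placeholder names:
# add or update an SSH public key for an existing user on a running Linux VM
az vm user update --resource-group <rg-name> --name <vm-name> --username aht --ssh-key-value "$(cat ~/.ssh/id_rsa.pub)"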
Update a Key Vault for use with VMs
Set the deployment policy on an existing key vault with az keyvault update. The following updates the key vault named myKeyVault in the myResourceGroup resource group:
az keyvault update -n myKeyVault -g myResourceGroup --set properties.enabledForDeployment=true
