I'm trying to create a PostgreSQL database on Azure with the Azure CLI. I set my default location with az configure --defaults location=WestEurope, then I created a resource group, a VNet, and a subnet. Now I want to create a flexible server for PostgreSQL with
az postgres flexible-server create --name $SERVERNAME --vnet $VNET --subnet $DBSUBNET \
--admin-user $DB_USER --admin-password $DB_PASSWORD --sku-name Standard_B1ms --tier Burstable --storage-size 1024 --tags "Billing=test" --version 13 --database-name $DB_NAME
but I get this error and the server is not created: "The location of Vnet should be same as the location of the server."
My VNet is obviously located in West Europe, given the default location I set earlier, and I can't understand how the location of any resource I create could differ from my default location. I even tried adding --location WestEurope to the command, but it produced the same result.
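Before re-running the create command, it may help to confirm which location the existing resources actually landed in. A quick sketch (using the same $VNET variable as above, plus a $RESOURCEGROUP placeholder for the resource group name, which was not given in the question):

```shell
# Show the actual location of the resource group and the VNet.
# $RESOURCEGROUP and $VNET are placeholders for the names used earlier.
az group show --name $RESOURCEGROUP --query location --output tsv
az network vnet show --resource-group $RESOURCEGROUP --name $VNET --query location --output tsv
```

If the reported locations differ from the server's intended location, the VNet was likely created before the default took effect, or under a different default.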
I set the default location to West Europe through the Azure CLI in Azure Cloud Shell:
az configure --defaults location=westeurope
To create the PostgreSQL server and database in the specified default location:
az postgres flexible-server create --resource-group HariTestRG \
--name demoserver1205 --admin-user <username> --admin-password <your-password> \
--sku-name Standard_B1ms --tier Burstable --storage-size 1024 \
--tags "Billing=test" --version 13 \
--vnet myVnet --subnet mySubnet --database-name demopsqldb01
Result:
Azure automatically creates the private DNS zone at the global level when it creates the PostgreSQL server and the virtual network.
Note:
After setting the default location, run the remaining CLI commands in the same session.
If any later command fails, run the default-location config command again before retrying.
Without interactive activity, Cloud Shell times out after 20 minutes, and it runs on a temporary host assigned per session.
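Since a fresh Cloud Shell session may therefore come up without the defaults set earlier, it may be worth verifying them before continuing. A quick sketch:

```shell
# List the defaults currently in effect for the CLI
az configure --list-defaults --output table

# Re-apply the default location if it is missing
az configure --defaults location=westeurope
```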
Updated Answer:
Here I created the VNet and subnet in the resource group, then created the PostgreSQL server and database with that existing VNet and subnet through the Azure CLI (using the commands above); it worked successfully.
I have created a Kubernetes cluster (1 master, 2 worker VMs) using kubeadm on Azure. The NodePort service type works as expected, but the LoadBalancer service type does not.
I created a public IP address in Azure and attached it to the service. I can see the IP address attached to the service, but it is not reachable from outside.
I also created a load balancer in Azure and attached its public IP address to the service. That option didn't work either.
I'm just curious how to configure the LoadBalancer service type on Azure VMs.
I tried the same with AKS and it worked without any issues.
• I would suggest following the steps below to create an AKS cluster in Azure and attach a load balancer with a public IP front end to it:
a) First, run the command below in the Azure CLI in an Azure Bash Cloud Shell. It creates an AKS cluster with two Linux nodes and a Standard load balancer in the given resource group, with the VM set type 'VirtualMachineScaleSets' and an appropriate Kubernetes version specified:
az aks create \
--resource-group <resource group name>\
--name <AKS cluster name> \
--vm-set-type <VMSS or Availability set> \
--node-count <node count> \
--generate-ssh-keys \
--kubernetes-version <version number> \
--load-balancer-sku <basic or standard SKU>
Sample command:
az aks create \
--resource-group AKSrg \
--name AKStestcluster \
--vm-set-type VirtualMachineScaleSets \
--node-count 2 \
--generate-ssh-keys \
--kubernetes-version 1.16.8 \
--load-balancer-sku standard
Use the command below to check the Kubernetes orchestrator versions available in your region, and use an appropriate version in the command above:
az aks get-versions --location eastus --output table
Then use the command below to get credentials for the AKS cluster you created:
az aks get-credentials --resource-group <resource group name> --name <AKS cluster name>
b) Then execute the command below to get information about the created nodes:
kubectl get nodes
Once the nodes are up, apply the appropriate YAML files to the AKS cluster to run your application on it, then watch the service state:
kubectl get service <application service name> --watch
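As a sketch of what such a YAML file might contain, here is a minimal Service manifest of type LoadBalancer, written to a file so it can be reviewed before running kubectl apply -f service.yaml. The service name, app label, and ports are assumptions, not taken from the question, and must match your own Deployment:

```shell
# Write a minimal LoadBalancer Service manifest to service.yaml;
# the selector and ports are placeholders for your application's values.
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
EOF
echo "manifest written"
```

When this service is applied on AKS, Azure provisions a front-end public IP on the cluster's load balancer automatically, which is what `kubectl get service --watch` eventually shows.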
c) After noting the public IP address of the load balancer, press Ctrl+C. Then run the command below to set the managed outbound public IP count for the AKS cluster and its configured load balancer:
az aks update \
--resource-group myResourceGroup \
--name myAKSCluster \
--load-balancer-managed-outbound-ip-count 1
This ensures that the services running in the back end share a single public IP on the front end. In this way, you can create an AKS cluster whose load balancer has a public IP address.
I can use the command below to add a VNet rule on the Azure MariaDB connection security page:
az mariadb server vnet-rule create \
--resource-group xxx \
--server-name xxx-mariaDB \
--name db-to-aks \
--subnet $SUBNET_ID \
--ignore-missing-endpoint
but how do I enable the 'Allow access to Azure services' option with the Azure CLI?
thanks!
We have tested this in our local environment and it works fine.
You can use the command below, which creates a new firewall rule on the MariaDB server and enables 'Allow access to Azure services'.
az mariadb server firewall-rule create --resource-group '<RgName>' --server '<MariaDBServerName>' --name "AllowAllWindowsAzureIps" --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
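Assuming the rule was created, it should show up alongside any other firewall rules on the server; a quick way to verify (the names are the same placeholders as above):

```shell
# List all firewall rules on the MariaDB server; the special
# AllowAllWindowsAzureIps rule (0.0.0.0 - 0.0.0.0) should appear here.
az mariadb server firewall-rule list --resource-group '<RgName>' --server-name '<MariaDBServerName>' --output table
```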
For more information, you can refer to the Azure documentation on creating firewall rules for the Azure Database for MariaDB server.
I am using Azure CLI version 2.34.1. I ran the following commands to create a resource group and then a virtual machine. Note that I used options to delete the relevant resources when the VM is deleted.
az group create --name myTestRG --location eastus
az vm create --resource-group myTestRG --name myTestWindows11VM --image MicrosoftWindowsDesktop:windows-11:win11-21h2-pro:22000.493.220201 --admin-username someusername --os-disk-delete-option delete --nic-delete-option delete
Later I deleted the VM using following command.
az vm delete --name myTestWindows11VM --resource-group myTestRG -y
However, when I browse to the portal, the resource group still shows the following resources related to the VM.
What may I be doing wrong? Is there any way to delete all resources associated with a VM when I delete the virtual machine itself?
UPDATE: IT'S A BUG.
The way Azure works is to group resources in resource groups; it's a mandatory field when creating any service. Azure does this because many resources have dependencies, such as a VM with a NIC, VNet & NSG.
You can use this to your advantage and simply delete the Resource Group:
az group delete --name myTestRG
Azure will work out the dependency order, eg NSG, VNet, NIC, VM. You can read up on how it does the ordering: https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-cli
What happens if I have multiple VMs in a Resource Group and I only want to delete one?
There are three new options, --os-disk-delete-option, --data-disk-delete-option, and --nic-delete-option, to support deleting a VM's dependencies along with it:
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--public-ip-sku Standard \
--nic-delete-option delete \
--os-disk-delete-option delete \
--admin-username azureuser \
--generate-ssh-keys
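If a VM was already created without these options, the leftovers can still be found and removed afterwards. A sketch, where the resource names in the delete commands are hypothetical examples and should be taken from the list output instead:

```shell
# List everything still left in the resource group after deleting the VM
az resource list --resource-group myTestRG --output table

# Then delete leftovers individually, e.g. an orphaned disk or NIC
# (names below are hypothetical; use the ones shown by the list command)
az disk delete --resource-group myTestRG --name myVM_OsDisk_1 --yes
az network nic delete --resource-group myTestRG --name myVMVMNic
```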
Otherwise, script the whole thing using Azure Resource Manager templates (ARM templates), or the newer tool that generates ARM templates, called Bicep. It's also worth continuing with raw CLI commands and deleting dependencies in order: if you get good with the CLI, you end up with a library of commands that you can use alongside ARM templates.
We are deploying a container to Azure using the Azure CLI and the create command from the sample documentation below:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
This documentation clearly states, via the sample command below, that when the container and the VNet/subnet get created, Azure creates a network profile ID for you (which is needed for YAML deployment):
az container create --name appcontainer --resource-group myResourceGroup --image mcr.microsoft.com/azuredocs/aci-helloworld --vnet aci-vnet --vnet-address-prefix 10.0.0.0/16 --subnet aci-subnet --subnet-address-prefix 10.0.0.0/24
After the container is created successfully, you are supposed to get the network profile name or ID, which you can obtain using "az network profile list".
In fact, that command does not return anything.
UPDATE:
I updated my Azure CLI to 2.30 in PowerShell, but the result is the same: the command returns nothing even though the container and VNet get created successfully.
Thanks for your help.
Regards
I have tested in my environment.
I deployed a container to a new virtual network using the below command:
az container create --name appcontainer --resource-group myResourceGroup --image mcr.microsoft.com/azuredocs/aci-helloworld --vnet aci-vnet --vnet-address-prefix 10.0.0.0/16 --subnet aci-subnet --subnet-address-prefix 10.0.0.0/24
The container got successfully created.
To get the Network Profile ID, I used the below command:
az network profile list --resource-group myResourceGroup --query [0].id --output tsv
In this way, we can fetch the network profile ID.
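For the YAML deployment mentioned in the question, the fetched ID can be stored in a variable and passed when creating a container in the same VNet. Note that --network-profile was the relevant az container create parameter on CLI versions of that era (it was later deprecated in favor of passing the subnet directly), so treat this as a sketch for those versions:

```shell
# Capture the ID of the first network profile in the resource group
NETWORK_PROFILE_ID=$(az network profile list --resource-group myResourceGroup --query "[0].id" --output tsv)

# Reuse it for another container in the same VNet/subnet
az container create --name appcontainer2 --resource-group myResourceGroup \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --network-profile "$NETWORK_PROFILE_ID"
```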
If the network profile is not getting created via the CLI, try using an ARM template.
The same happened to me. I solved it by using Azure CLI version 2.27.2; any newer version leaves me with the same problem.
There seems to be a problem with the latest versions of the Azure CLI.
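To check which CLI version is in use, and to pin the version that worked on a local install, something like the following may help (pinning via pip assumes a pip-based install; MSI or apt installs pin differently):

```shell
# Show the installed CLI core and extension versions
az version

# On a pip-based local install, a specific CLI version can be pinned:
pip install azure-cli==2.27.2
```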
I created a virtual machine using the following command in the Azure CLI:
az vm create --resource-group Test_Group --location eastus --name myVM2 --os-type linux --attach-os-disk disk1
I installed GRUB on disk1 along with everything necessary for booting. I am facing an issue connecting to the instance using SSH.
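A few diagnostic commands that may help narrow down an SSH failure on a VM built from an attached disk, assuming the resource names from the create command above:

```shell
# Enable boot diagnostics and read the serial log to see whether the disk boots at all
az vm boot-diagnostics enable --name myVM2 --resource-group Test_Group
az vm boot-diagnostics get-boot-log --name myVM2 --resource-group Test_Group

# Check that a network security group actually allows inbound TCP 22
az network nsg list --resource-group Test_Group --output table
```

If the serial log shows no boot activity, the problem is more likely in the GRUB/disk setup than in networking.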