I'm using VS Code, and to get some IntelliSense when writing Azure CLI scripts I've put everything in a .azcli file. My question now is: how do I execute that file from a PowerShell terminal? Also, is it possible to use parameters in such a script, like:
az servicebus topic create -g $resourceGroup -n $topicName --namespace-name $namespace
Is it possible to call an .azcli file that looks like the one above and provide $resourceGroup, $topicName, and $namespace as arguments from PowerShell?
I'm not aware of an easy way to do this in PowerShell. If someone knows, I would like to know as well.
If you have the Windows Subsystem for Linux installed, you can run .azcli files just like .sh shell scripts inside WSL from PowerShell. WSL will need to have the Azure CLI installed as well, so depending on the distribution you pick (Ubuntu and Debian are usually safe), you will need to follow the instructions from Install the Azure CLI.
To run the script in WSL from a PowerShell terminal:
bash -c "./file.azcli"
Or directly in WSL terminal:
./file.azcli
You can use parameters in a .azcli file just like .sh shell scripts:
resourceGroup=MyRG
topicName=MyTopicName
namespace=myNameSpaceName
az servicebus topic create -g $resourceGroup -n $topicName --namespace-name $namespace
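To pass those values in from PowerShell instead of hard-coding them, the script can read bash positional parameters. A minimal sketch, assuming the file is saved as file.azcli and marked executable inside WSL:

```
#!/bin/bash
# file.azcli: takes resource group, topic name and namespace as arguments
resourceGroup=$1
topicName=$2
namespace=$3
az servicebus topic create -g "$resourceGroup" -n "$topicName" --namespace-name "$namespace"
```

From the PowerShell terminal it would then be invoked as bash -c "./file.azcli MyRG MyTopicName myNameSpaceName".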
You could also create a .vscode/tasks.json, similar to what this GitHub issue recommends.
I am new to this.
I am trying to use PowerShell on my local PC to execute commands the way I do in the Azure portal's Cloud Shell.
In my local PowerShell I can log in to Azure using az login, and az commands run fine. But when I try to execute pg_dump --help it says the command is not recognized, even though az commands work without problems.
In the Azure Cloud Shell I can execute all the pg_dump commands without issue.
Please help.
I think you need to install the Azure CLI in order to use the same commands in local PowerShell:
https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli
You also need to have the pg_dump and pg_restore command-line utilities installed.
pg_dump is not part of the Azure CLI; it is a PostgreSQL client application.
Cloud Shell happens to have a lot of packages installed already (e.g. Terraform), and apparently pg_dump as well.
If you want to use it locally on your client, you might want to look at installing the corresponding client tools. Maybe this SO thread helps (if you are running Windows): How do I install just the client tools for PostgreSQL on Windows?
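Once the client tools are installed, a quick sanity check (a sketch; if the commands are still not found, their bin directory likely needs to be added to PATH first):

```
# Verify the PostgreSQL client tools are reachable from your shell:
pg_dump --version
pg_restore --version
```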
I recently encountered some problems when writing a script for Terraform automation.
In my case the VM runs on the Proxmox platform, not a cloud platform,
so I use Telmate/proxmox as my provider module for creating VMs (CentOS 7).
The VM builds smoothly, but when I want to customize the VM (CentOS 7), there are some problems.
Terraform's remote-exec provisioner has an inline usage; according to the official documentation, it applies line-by-line instructions.
I followed this step and used it in my provisioning script. The script did execute normally: it spawned the VM and ran the installation script.
The content of the install script is
yum -y install <something package>
install web service
copy web.conf, web program to /path/to/dir
restart web service
But the most important service does not come up. Yet when I run the same commands over SSH to the VM, the service starts normally. In other words, this cannot be achieved through Terraform's remote-exec.
So I want to ask: is Terraform not suitable for customizing services such as a web server, and only suitable for creating resources such as VMs?
Does custom scripting like this need to be done with something such as Ansible?
Here is sample code:
provisioner "remote-exec" {
  inline = [
    "yum -y install tar",
    "tar -C / -xvf /tmp/product.tar",
    "sh install.sh",
  ]
}
I later found a way to make sense of this. I am not sure whether the problem is with the program written by the developer or something else; either way, I could not start the service (process) via the script. It is possible, however, to get the service running by rebooting and using the built-in system service manager (systemctl).
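One known pitfall is that remote-exec runs its inline commands in a non-interactive, non-login shell, so service startup can behave differently than over an interactive SSH session. A hedged sketch of starting the service explicitly through systemd inside the provisioner (web.service is a hypothetical unit name, not from the original script):

```hcl
provisioner "remote-exec" {
  inline = [
    "yum -y install tar",
    "tar -C / -xvf /tmp/product.tar",
    "sh install.sh",
    # Start and enable the service via systemd instead of relying on the
    # install script's shell environment:
    "sudo systemctl enable --now web.service",
  ]
}
```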
The Az CLI works in the Windows command prompt, but when I try to execute the same commands in PowerShell or the ISE it gives the above message and then executes the command.
I guess you need to install the Az PowerShell module:
https://learn.microsoft.com/en-us/powershell/azure/new-azureps-module-az?view=azps-4.5.0&viewFallbackFrom=azps-2.6.0
since, as I can see, the Azure CLI is already installed.
I'm using Azure CLI interactive mode (az interactive) to run the command below.
az ml folder attach -w yhd-mlws -g yhd-mlws-rg
It prompts me with the error message below:
az: error: unrecognized arguments: -w yhd-mlws -g yhd-mlws-rg
BTW, both my Machine Learning workspace yhd-mlws and resource group yhd-mlws-rg had been created in my Azure subscription. Azure CLI extension for machine learning service had also been installed via az extension add -n azure-cli-ml.
Then I run the command az ml folder attach without any arguments and get the error message below:
Message: Error, default workspace not set and workspace name parameter not provided.
Please set a default workspace using "az ml folder attach -w myworkspace -g myresourcegroup" or provide a value for the workspace name parameter.
The command window exits interactive mode after the above error message. Then I try the command az ml folder attach -w yhd-mlws -g yhd-mlws-rg again, and bingo! It works.
Here comes my question, does azure-cli-ml extension support Azure CLI interactive mode? You know, Azure CLI interactive mode is amazing and I want to use it whenever possible. Thanks!
BTW, I'm running the Windows command window on Windows Server 2016 Datacenter. The Azure CLI version is 2.0.79.
I can reproduce your issue. Interactive mode should support the azure-cli-ml extension, because when I run az ml workspace list it works; once I pass the -g parameter it gives the same error. Maybe it is a bug, but I am not sure; interactive mode is currently in preview.
If you want to run az ml folder attach -w yhd-mlws -g yhd-mlws-rg in interactive mode, my workaround is to prefix the command with #, i.e. # az ml folder attach -w yhd-mlws -g yhd-mlws-rg.
I am trying to create a secret scope in a Databricks notebook. The notebook is running using a cluster created by my company's admin - I don't have access to create or edit clusters. I'm following the instructions in the Databricks user notebooks (https://docs.databricks.com/user-guide/secrets/example-secret-workflow.html#example-secret-workflow) but get an error:
/bin/bash: databricks: command not found
Below is the code I've tried that returns the error:
%sh -e
databricks secrets create-scope --scope scopename
%sh is used so I can run command-line code in the notebook. I've tried using both %sh and %sh -e, with no luck.
I should be able to create a secret scope using this code but have had no luck. Any suggestions on the cause? Has anyone else had the same issue?
I've not heard of running the CLI from the cluster before. Even if it is installed I doubt it is configured.
You can download the CLI and run it from your local machine: https://docs.databricks.com/user-guide/dev-tools/databricks-cli.html
You will need to be running Python locally. If you prefer, there is also a PowerShell command-line module (disclaimer: I produced this): https://github.com/DataThirstLtd/azure.databricks.cicd.tools
Databricks clusters don't have databricks-cli installed by default. That doesn't mean you can't install it on the cluster. You can install databricks-cli using the following command in any databricks notebook:
%sh
/databricks/python/bin/pip install databricks-cli==0.9.1
Logging in may be a problem, as you can't send interactive responses from shell scripts within notebooks. You can create the .databrickscfg file in the cluster's root directory using the following set of commands:
%sh
> ~/.databrickscfg
echo "[DEFAULT]" >> ~/.databrickscfg
echo "host = <your host>" >> ~/.databrickscfg
echo "token = <your token>" >> ~/.databrickscfg
You can save these commands as shell scripts that can be run automatically on cluster start up (init scripts).
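A sketch of such an init script, combining the two steps above (the pinned CLI version, host, and token are placeholders to replace with your own values):

```
#!/bin/bash
# Hypothetical cluster init script: install databricks-cli and write its
# configuration file so notebook shell cells can call `databricks`.
/databricks/python/bin/pip install databricks-cli==0.9.1
cat > ~/.databrickscfg <<EOF
[DEFAULT]
host = <your host>
token = <your token>
EOF
```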
I faced the same issue with a notebook.
If you have to run any databricks CLI commands on your Databricks instance, the easiest way is to use the web terminal.
You can launch the web terminal from Compute -> Clusters -> Apps -> Launch Web Terminal.
If the CLI is not installed, you can use pip install databricks-cli.
Configure your user through the command databricks configure or databricks configure --token.
Now you are good to run databricks CLI commands.
A sample run on the Databricks web terminal worked for me.
One other reason the /bin/bash: databricks: command not found error can happen, which I noticed on my Mac and is not listed here, is that the user path is not exported. Add this to your bash profile file, or just run the command: export PATH="......(path to your python library)/Library/Python/3.9/bin"
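As a sketch of that fix (the directory below is an assumption; it varies by Python version and install method):

```shell
# Prepend the user-level Python scripts directory to PATH so the
# `databricks` entry point installed by pip can be found. The exact
# directory is an assumption; adjust it to your own Python install.
PY_BIN="$HOME/Library/Python/3.9/bin"
export PATH="$PY_BIN:$PATH"
```

Putting these two lines in ~/.bash_profile makes the change persistent for new shells.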