I have a VM in Azure. Is there a way I can navigate the C: drive of the machine from within the Azure portal, without using RDP?
You could try to use the Azure Bastion service:
https://learn.microsoft.com/en-us/azure/bastion/bastion-create-host-portal
Or Run Command:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/run-command
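For example, Run Command can return a directory listing without any RDP session. Here is a minimal PowerShell sketch; the resource group and VM names (myResourceGroup, myVm) are placeholders, and it assumes the Az module and a signed-in session.

```powershell
# Minimal sketch (hypothetical names): run a script inside the VM via Run Command
# and read its output locally, without opening an RDP session.
$script = @'
Get-ChildItem -Path 'C:\' | Select-Object Mode, Length, Name
'@
Set-Content -Path .\list-c.ps1 -Value $script

$result = Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVm' `
            -CommandId 'RunPowerShellScript' -ScriptPath .\list-c.ps1

# The directory listing comes back in the run command's output message
$result.Value[0].Message
```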
There is no way to do this directly from the portal. If you are looking for a shortcut that avoids connecting over RDP, it is not possible to browse the VM's file system this way.
You can use Azure Files instead if you need similar functionality, since the contents of a file share can be browsed from the portal.
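As a rough illustration of that approach, here is a PowerShell sketch that creates a file share you can browse from the portal and mount inside the VM. The resource group, storage account, and share names are placeholders.

```powershell
# Minimal sketch (hypothetical names): create an Azure Files share that is
# browsable from the portal and mountable inside the VM.
$sa = Get-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mystorageacct'
New-AzStorageShare -Name 'vmshare' -Context $sa.Context

# Inside the VM, map the share as a drive (use a storage account key as the password):
# net use Z: \\mystorageacct.file.core.windows.net\vmshare /user:Azure\mystorageacct <storage-key>
```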
Related
When creating a VM, as we can see in the image below, we cannot choose an existing VNet. The solution I found is to create the VM using an ARM template and specify an existing VNet. Moreover, it is stated that "When creating a virtual machine, a network interface will be created for you."
Is there a better way to do this in the portal? (Even though it is a combo box, we really cannot select an existing VNet.)
Why does Azure not allow this when the same functionality is available for storage accounts (choosing an existing network)?
Unfortunately, you cannot select an existing NIC in the portal the way your screenshot shows, but you can select an existing VNet as long as it is in the same region as the VM. As the message says, Azure creates the NIC for you.
As far as I know, the only way to create a VM with both an existing NIC and an existing VNet through the portal is to deploy a template. You can also achieve this locally with the Azure CLI, Azure PowerShell, the REST API, and so on.
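As a sketch of the PowerShell route, the following attaches a new VM to an existing VNet via a NIC you create yourself. All names (myResourceGroup, myExistingVnet, myVm, the image SKU) are placeholders to adapt to your environment.

```powershell
# Minimal sketch (hypothetical names): create a VM that uses an existing VNet
# by building the NIC yourself and attaching it to the VM configuration.
$rg     = 'myResourceGroup'
$region = 'westeurope'

# Look up the existing VNet and pick a subnet
$vnet   = Get-AzVirtualNetwork -ResourceGroupName $rg -Name 'myExistingVnet'
$subnet = $vnet.Subnets[0]

# Create a NIC attached to that subnet
$nic = New-AzNetworkInterface -ResourceGroupName $rg -Name 'myVmNic' `
         -Location $region -SubnetId $subnet.Id

# Build the VM configuration around the existing NIC
$cred = Get-Credential
$vmConfig = New-AzVMConfig -VMName 'myVm' -VMSize 'Standard_B2s' |
    Set-AzVMOperatingSystem -Windows -ComputerName 'myVm' -Credential $cred |
    Set-AzVMSourceImage -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' `
        -Skus '2019-Datacenter' -Version 'latest' |
    Add-AzVMNetworkInterface -Id $nic.Id

New-AzVM -ResourceGroupName $rg -Location $region -VM $vmConfig
```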
I followed the instructions in the MSFT docs, but now I can't list my Datastores via either the SDK or the Azure Machine Learning studio.
Instead, in the studio I see this:
Is there a way to make this work? Did I miss a step?
Is it a blob datastore or a file datastore? We only support blob datastores behind a VNet right now. Can you also check whether you granted your own machine permission to access the storage account inside the VNet? In the firewall rules for your storage account, make sure your machine's IP is allowed to access the storage account (a sketch of this is shown after the steps below).
Let me know how it works.
Can you share how you created your workspace and set up the VNet for your workspace storage account?
I did the following and am able to see my datastore list via both the SDK and the UI:
created a workspace
put my workspace storage account behind a VNet
went to the studio, Datastores: no problem seeing the list of my datastores, including the ones behind the VNet
in a notebook, called workspace.datastores: no problem seeing the list of my datastores, including the ones behind the VNet
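For the firewall step mentioned above, here is a minimal PowerShell sketch that allows your client IP through the storage account firewall. The names are placeholders, and the IP-echo service used is just one option.

```powershell
# Minimal sketch (hypothetical names): allow your machine's public IP through
# the storage account firewall so the SDK/studio can reach the datastore.
$rg      = 'myResourceGroup'
$account = 'mystorageacct'

# Look up your current public IP (api.ipify.org is just one echo service)
$myIp = Invoke-RestMethod -Uri 'https://api.ipify.org'

# Add the IP rule to the storage account firewall
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name $account -IPAddressOrRange $myIp

# Verify the rule is in place
(Get-AzStorageAccountNetworkRuleSet -ResourceGroupName $rg -Name $account).IpRules
```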
I want to give somebody access to a virtual machine on Azure (with the RDP connection file) and let him start/stop the VM, but without giving him access to the Azure portal account.
Is there a (simple) way to start/stop a virtual machine on Azure without having to access the portal? By "simple" I mean something that doesn't require running code and that can be as easy as opening an RDP file.
Alternatively, is there a way not to be billed for a running (but idle) virtual machine?
You can use a PowerShell script to start/stop the VM.
No, you will also be billed for stopped VMs as long as they remain allocated; only a stopped (deallocated) VM stops incurring compute charges.
For your requirement, I think the best way is to use a service principal with the Virtual Machine Contributor role. That role lets the holder manage the VM (including starting and stopping it) but not access it, and a service principal has no interactive access to the Azure portal.
Then you can use this service principal with the Azure CLI, Azure PowerShell, or the REST API to start/stop the Azure VM; this does not cost anything extra.
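A minimal PowerShell sketch of that setup, assuming the Az module and an account allowed to create role assignments; the resource group, VM, and display names are placeholders:

```powershell
# Minimal sketch (hypothetical names): a service principal that can only manage
# VMs in one resource group, used to start/stop a VM without portal access.
$rg    = 'myResourceGroup'
$vm    = 'myVm'
$scope = "/subscriptions/$((Get-AzContext).Subscription.Id)/resourceGroups/$rg"

# 1. Create the service principal and grant it Virtual Machine Contributor on the resource group
$sp = New-AzADServicePrincipal -DisplayName 'vm-start-stop'
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName 'Virtual Machine Contributor' -Scope $scope

# 2. The other person signs in as the service principal (no portal login) and starts/stops the VM:
# Connect-AzAccount -ServicePrincipal -TenantId <tenant-id> -Credential <appId/secret PSCredential>
Start-AzVM -ResourceGroupName $rg -Name $vm
Stop-AzVM  -ResourceGroupName $rg -Name $vm -Force   # Stop-AzVM deallocates, so compute billing stops
```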
Is there a (simple) way to start/stop a virtual machine on Azure without having to access the portal? By "simple" I mean something that doesn't require running code and that can be as easy as opening an RDP file.
There are SDKs available which you can use, along with the CLI, for programmatic access to the VM; this way, you do not require access to the portal.
Alternatively, is there a way not to be billed for a running (but idle) virtual machine?
You will be billed for an idle VM because it is still allocated and operational. To save costs, deallocate or delete the VM and start or recreate it when required.
Is this even possible? I have a couple of web apps and a couple of Azure Functions running under the same App Service plan. I'd like to (ideally) have them use a specific storage account, so I can keep everything in one place. I envision them in different containers under the same account.
If that's not possible...then where are the files? Are they on the storage that's built into the App Service Plan itself? If so, can I connect to this somehow, so I can manage the files through something like Storage Explorer?
Today, when playing with the Azure Az PowerShell module, I found I was able to provision a Function App without an Azure Storage back-end. This cannot be done via the UI. An easy way to provision a Function App with a storage account back-end is to use the Azure portal UI.
When a Function App is provisioned via the command line, the bits seem to be stored within the Function App itself. There is an FTP URL given if you download the publish profile. The files can be read and written with an FTP tool like WinSCP (as an alternative to Kudu).
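If it helps, a small PowerShell sketch to download the publish profile, which contains the FTP URL and credentials; the resource group and app names are placeholders:

```powershell
# Minimal sketch (hypothetical names): download the publish profile, which holds
# the FTP endpoint and credentials you can paste into WinSCP or another FTP client.
Get-AzWebAppPublishingProfile -ResourceGroupName 'myResourceGroup' -Name 'myFunctionApp' `
    -Format 'Ftp' -OutputFile 'myFunctionApp.PublishSettings'
```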
I'd like to (ideally) have them use a specific storage account, so I can keep everything in one place. I envision them in different containers under the same account. If that's not possible...then where are the files?
Every Azure Web App has a home directory stored/backed by Azure Storage. For more detail, please refer to the Azure Web App sandbox documentation. That storage is owned by the App Service platform; we currently cannot choose our own Azure Storage account to back a Web App. But we can configure a storage account for Azure Web App diagnostic logs.
Are they on the storage that's built into the App Service Plan itself? If so, can I connect to this somehow, so I can manage the files through something like Storage Explorer?
Different App Service plan tiers come with different amounts of storage. We can use the Kudu tool (https://yoursite.scm.azurewebsites.net) to manage the files. For more detail about Kudu, please refer to the documentation.
Update:
We can access the home directory with the Kudu tool. For more details, please refer to the screenshot.
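If you prefer scripting over the Kudu web UI, the Kudu VFS REST API can list and download files in the home directory. A minimal PowerShell sketch, assuming a placeholder site name and the publishing (deployment) credentials from the publish profile:

```powershell
# Minimal sketch (hypothetical names/credentials): list files under wwwroot
# through the Kudu VFS API using basic auth with the publishing credentials.
$site = 'yoursite'
$user = '$yoursite'               # publishing user name (usually prefixed with $)
$pass = '<publishing-password>'

$token   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))
$headers = @{ Authorization = "Basic $token" }

Invoke-RestMethod -Uri "https://$site.scm.azurewebsites.net/api/vfs/site/wwwroot/" `
    -Headers $headers | Select-Object name, size, mtime
```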
We're trying to get the CPU percentage, disk read throughput, and other metrics programmatically using PowerShell commands in Azure, but we are not able to find the corresponding commands in the new release.
First of all, are you trying to get performance details from a Web/Worker Role or from the new preview release of Windows Azure Virtual Machines?
With Windows Azure Virtual Machine:
You have full access to your Azure VM and can configure it the way you would any remote VM, and then get the performance data out of it. If you want to get performance data from PowerShell on a Windows Azure Virtual Machine, you would need to do the following:
Configure the Azure VM to allow PowerShell remote access
Configure the Azure VM port settings so you can connect from your on-premises machine (this is a must, and you should be aware that the open port allows connections to the VM from outside)
Configure the Azure VM to collect performance data
Connect from your on-premises machine using PowerShell and collect the performance data
You can find several resources on the Internet for the tasks above; a sketch of the remoting and collection steps follows.
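Here is what those remoting and collection steps might look like in PowerShell, assuming WinRM over HTTPS (port 5986) is already opened on the VM and reachable; the host name and counters are placeholders:

```powershell
# Minimal sketch (hypothetical names): remote into the VM over WinRM and pull a
# few performance counter samples with Get-Counter.
$vmHost = 'myvm.cloudapp.net'          # public DNS name of the VM
$cred   = Get-Credential               # an admin account on the VM

# Skip certificate CN/CA checks only when testing with the default self-signed certificate
$opt = New-PSSessionOption -SkipCACheck -SkipCNCheck

$session = New-PSSession -ComputerName $vmHost -Port 5986 -UseSSL `
             -Credential $cred -SessionOption $opt

# Collect CPU and disk read throughput samples from inside the VM
Invoke-Command -Session $session -ScriptBlock {
    Get-Counter -Counter '\Processor(_Total)\% Processor Time',
                         '\PhysicalDisk(_Total)\Disk Read Bytes/sec' `
                -SampleInterval 5 -MaxSamples 3
}

Remove-PSSession $session
```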
With Web/Worker Role:
Even when you are using the new PowerShell cmdlets with Windows Azure, the older commands are still available and work as expected. To get performance metrics from Azure, here are some resources for you to try:
Windows Azure Diagnostics and PowerShell – Performance Counters:
Part 1 | Part 2
How To Easily Enable Windows Azure Diagnostics Remotely