Is there a way to list out my datastores if I've deployed to a VNET? - azure-machine-learning-service

I followed the instructions in the MSFT docs, but now I can't list out my Datastores via either the SDK or the Azure Machine Learning studio.
Instead, in the studio I see this:
Is there a way to make this work? Did I miss a step?

Is it a blob datastore or a file datastore? We only support blob storage behind a VNet right now. Can you also check whether you granted your own machine permission to access the storage account inside the VNet? In the firewall rules for your storage account, make sure your machine's IP is allowed to access the storage account.
Let me know how it goes.

Can you share how you created your workspace and set up the VNet for your workspace storage account?
I did the following and am able to see my datastore list via both the SDK and the UI:
created a workspace
put my workspace storage account behind a VNet
went to the studio, Datastores: no problem seeing the list of my datastores, including the ones behind the VNet
called workspace.datastores in a notebook: no problem seeing the list of my datastores, including the ones behind the VNet
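For reference, listing datastores from a notebook is a one-liner with the v1 SDK; here is a minimal sketch, assuming the azureml-core package is installed and a config.json for the workspace is available locally:

```python
from azureml.core import Workspace, Datastore

# Load the workspace from a local config.json (assumption: it exists).
ws = Workspace.from_config()

# Workspace.datastores is a dict of {name: Datastore}; blob datastores behind
# the VNet should show up here as long as your client IP is allowed through
# the storage account firewall.
for name, ds in ws.datastores.items():
    print(name, ds.datastore_type)

# Or fetch a single datastore by name, e.g. the default blob datastore:
default_blob = Datastore.get(ws, "workspaceblobstore")
print(default_blob.container_name)
```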

Related

Azure ML Workspace is not able to upload data to the workspace-linked storage account which is behind a VNet

I am trying to set up an Azure ML workspace with the storage account behind a VNet, but when trying to upload data from the Data tab I am getting the below error.
I have all the necessary settings as described in the below article, but still no luck:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-secure-workspace-vnet?tabs=se%2Ccli#secure-the-workspace-with-private-endpoint
Things I did to make the storage account accessible to the ML workspace:
Enabled the Azure.Storage service endpoint on the VNet
Applied this setting on the storage account
Both the ML workspace and the storage account are in the same subnet
Assigned the "Storage Blob Data Reader" permission to the workspace
Accessing this ML workspace from a virtual machine created in the same subnet.
Can anyone suggest if there is anything missing?
I tried to reproduce the same in my environment and got the below results:
I have one storage account in which I enabled the Networking settings as below:
When I tried to upload data from Azure Machine Learning Studio, I got the same error as you, like below:
To resolve the error, make sure to add your client IP address under the storage account's Firewall settings, like below:
Now, when I tried to upload data again from Azure Machine Learning Studio, I got results like below:
When I selected Next, it took me to the Review tab, like below:
After clicking Create, the data asset was created successfully with the below details:
To confirm that, I checked the same in the portal, where the test.txt file was uploaded to the storage account successfully, like below:
In your case, make sure to add your client IP address under the storage account's firewall settings. If the error still persists, try with the Storage Blob Data Contributor role.
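If you prefer to add that firewall rule programmatically instead of through the portal, a minimal sketch with the Azure SDK for Python could look like the following (resource names are placeholders, and azure-identity plus azure-mgmt-storage are assumed to be installed):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import IPRule, StorageAccountUpdateParameters

subscription_id = "<subscription-id>"      # placeholder
resource_group = "<resource-group>"        # placeholder
account_name = "<storage-account-name>"    # placeholder
client_ip = "203.0.113.5"                  # your public client IP

storage_client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Read the current network rule set first, because the update below replaces
# the whole rule set (existing VNet/IP rules would otherwise be lost).
account = storage_client.storage_accounts.get_properties(resource_group, account_name)
rule_set = account.network_rule_set
rule_set.ip_rules = (rule_set.ip_rules or []) + [
    IPRule(ip_address_or_range=client_ip, action="Allow")
]

storage_client.storage_accounts.update(
    resource_group,
    account_name,
    StorageAccountUpdateParameters(network_rule_set=rule_set),
)
```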
Reference:
Connect to data storage with the studio UI - Azure Machine Learning | Microsoft Learn
I have created a private endpoint for the storage account in the same subnet as the workspace, and it started working.
I'm still thinking about why it's not working with a service endpoint. Are there any configurations I am missing?
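For anyone who wants to script that fix, a rough sketch of creating the blob private endpoint with the Azure SDK for Python might look like this (all names and IDs are placeholders, and the privatelink.blob.core.windows.net private DNS zone that is also needed is not shown):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"                 # placeholder
resource_group = "<resource-group>"                   # placeholder
storage_account_id = "<storage-account-resource-id>"  # placeholder
subnet_id = "<workspace-subnet-resource-id>"          # placeholder

network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Create a private endpoint for the blob sub-resource of the storage account
# in the same subnet as the workspace.
poller = network_client.private_endpoints.begin_create_or_update(
    resource_group,
    "pe-storage-blob",
    {
        "location": "<region>",  # placeholder
        "subnet": {"id": subnet_id},
        "private_link_service_connections": [
            {
                "name": "storage-blob-connection",
                "private_link_service_id": storage_account_id,
                "group_ids": ["blob"],
            }
        ],
    },
)
private_endpoint = poller.result()
print(private_endpoint.provisioning_state)
```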

Azure Storage Account Firewall Permissions for Vulnerability Assessment

I have created a storage account for use in storing the results of an Azure Vulnerability Assessment on an Azure SQL Database.
If the firewall on the storage account is disabled, allowing access from all networks, Azure Vulnerability Scans work as expected.
If the firewall is enabled, the Azure Vulnerability Scan on the SQL Database reports an error, saying the storage account is not valid or does not exist.
Checking the box for "Allow Azure services on the trusted services list to access this storage account" in the Networking properties for the storage account does not resolve this issue, though it is the recommended step in the documentation here: https://learn.microsoft.com/en-us/azure/azure-sql/database/sql-database-vulnerability-assessment-storage
Allow Azure Services
What other steps could resolve this issue, rather than just disabling the firewall?
You have to add the subnet and VNet used by the SQL Managed Instance, as mentioned in the document you are following. You can refer to the screenshot below:
After enabling the service endpoint status as shown in the image above, click Add. After adding the VNet, it should look like the below:
After this is done, click Save and you should be able to resolve the issue.
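If you would rather script this than click through the portal, a minimal sketch of the equivalent VNet rule with the Azure SDK for Python could look like this (resource names are placeholders; the subnet must already have the Microsoft.Storage service endpoint enabled):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters, VirtualNetworkRule

subscription_id = "<subscription-id>"      # placeholder
resource_group = "<resource-group>"        # placeholder
account_name = "<storage-account-name>"    # placeholder
subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<vnet-rg>"
    "/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
)  # placeholder resource ID of the subnet to allow

storage_client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Preserve existing rules: the update call replaces the entire network rule set.
account = storage_client.storage_accounts.get_properties(resource_group, account_name)
rule_set = account.network_rule_set
rule_set.virtual_network_rules = (rule_set.virtual_network_rules or []) + [
    VirtualNetworkRule(virtual_network_resource_id=subnet_id)
]

storage_client.storage_accounts.update(
    resource_group,
    account_name,
    StorageAccountUpdateParameters(network_rule_set=rule_set),
)
```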
Reference:
Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets - Azure SQL Database | Microsoft Docs

Azure Storage Account: Firewall and virtual networks

I have enabled virtual network and firewall access restrictions for an Azure Storage Account, but I faced the issue that I do not have access to the storage account from Azure Functions (ASE environment), despite the fact that the ASE public address is added as an exception. Additionally, I have added all of the environment's virtual networks just to make sure.
Is there any way to check from which address the functions/other services are trying to access the storage account?
Also, I have ticked "Allow trusted Microsoft services to access this storage account". I'm not sure what is included in "trusted Microsoft services".
In the Application Insights Functions logs, only a timeout issue appears, without additional explanation.
Could you please help me to understand how to properly configure the storage account access restrictions?
Have a look at this doc:
https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security#trusted-microsoft-services
From your description, I think you didn't give an RBAC role to your Azure Function to access the storage.
Do these steps:
If you need more operations, like doing something with the data, you need to add more RBAC roles. Have a look at this official doc to learn more about RBAC roles:
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#all
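A rough sketch of granting such a role programmatically, assuming the Function App uses a system-assigned managed identity and the azure-mgmt-authorization package is installed (all IDs below are placeholders):

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"                    # placeholder
storage_account_id = "<storage-account-resource-id>"     # placeholder (used as the scope)
function_principal_id = "<managed-identity-object-id>"   # placeholder

auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Look up the built-in "Storage Blob Data Contributor" role definition by name.
role_def = next(
    auth_client.role_definitions.list(
        scope=storage_account_id,
        filter="roleName eq 'Storage Blob Data Contributor'",
    )
)

# Assign the role to the Function's managed identity at the storage account scope.
auth_client.role_assignments.create(
    scope=storage_account_id,
    role_assignment_name=str(uuid.uuid4()),
    parameters={
        "role_definition_id": role_def.id,
        "principal_id": function_principal_id,
        "principal_type": "ServicePrincipal",
    },
)
```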

How to assign existing NIC/Vnet to VM on Azure portal while creating a VM?

When creating a VM, as we can see in the image below, we cannot choose an existing VNet. The solution I found is to create the VM using an ARM template and specify an existing NIC/VNet. Moreover, it is stated: "When creating a virtual machine, a network interface will be created for you."
Is there a better way to do this in the portal? (Even though it is a combo box, we really cannot select an existing VNet.)
Why is Azure not allowing it when the same functionality is available for storage accounts (choosing an existing network)?
Unfortunately, you cannot select an existing NIC in the way shown in your screenshot, but you can select an existing VNet in the same region. As you see, Azure will create the NIC itself for you.
As far as I know, in the Azure portal you can only use a template to create a VM with both an existing NIC and VNet. You can also achieve what you want using the Azure CLI locally, and likewise with PowerShell, the REST API, and so on.
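As one example of the programmatic route, a minimal sketch with the Azure SDK for Python that attaches an existing NIC at creation time might look like this (resource names, the image reference, and credentials are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"           # placeholder
resource_group = "<resource-group>"             # placeholder
vm_name = "myvm"                                # placeholder
existing_nic_id = "<existing-nic-resource-id>"  # placeholder

compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# The existing NIC (already attached to the VNet/subnet you want) is referenced
# by resource ID in the network profile, so no new NIC is created.
poller = compute_client.virtual_machines.begin_create_or_update(
    resource_group,
    vm_name,
    {
        "location": "<region>",  # placeholder
        "hardware_profile": {"vm_size": "Standard_D2s_v3"},
        "storage_profile": {
            "image_reference": {  # any valid image reference works here
                "publisher": "Canonical",
                "offer": "UbuntuServer",
                "sku": "18.04-LTS",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": vm_name,
            "admin_username": "azureuser",
            "admin_password": "<strong-password>",  # or use SSH keys instead
        },
        "network_profile": {
            "network_interfaces": [{"id": existing_nic_id, "primary": True}]
        },
    },
)
vm = poller.result()
print(vm.provisioning_state)
```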

Create Classic VM from VHD in Azure

I have a server in Classic ASM which I want to move to another subscription.
I tried using the move feature in the portal, but it throws an error that the target subscription is not empty.
Is there any way that I can recreate the classic VM from its VHDs and also migrate its IP address to the new subscription?
I don't want to generalise the VM.
Thanks.
I tried using the move feature in the portal, but it throws an error that the target subscription is not empty.
If you want to move ASM resources to a new subscription, you should note:
All classic resources in the subscription must be moved in the same operation.
The target subscription must not contain any other classic resources.
The move can only be requested through a separate REST API for classic moves. The standard Resource Manager move commands don't work when moving classic resources to a new subscription.
To move classic resources to a new subscription, use the REST operations that are specific to classic resources, refer to this link.
Besides, if it is possible, I recommend using ARM nowadays. Microsoft released a newer management model called Azure Resource Manager (ARM), which provides many new capabilities to manage and control Azure resources. You could migrate the VM from ASM to ARM (refer to this link) and then move it to another subscription (refer to this link).
The client does not plan to migrate to ARM currently.
To migrate the VM to another subscription, I followed the below steps and it worked well.
1- Copied the VM's VHDs to the destination storage account (see the sketch after this list).
2- Created a VM Image (Classic) from the copied VHD and then created a VM out of it.
3- Unfortunately, I was unable to find a way to migrate the public IP address to another subscription.
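For step 1, a minimal sketch of the cross-account VHD copy with the azure-storage-blob package could look like this (account names, container names, and credentials are hypothetical placeholders; the source URL needs a SAS token or equivalent access):

```python
from azure.storage.blob import BlobClient

# Hypothetical source VHD URL, readable via a SAS token (placeholder values).
source_vhd_url = (
    "https://sourceaccount.blob.core.windows.net/vhds/myvm-osdisk.vhd?<sas-token>"
)

# Destination blob in the target subscription's storage account (placeholders).
dest_blob = BlobClient(
    account_url="https://destaccount.blob.core.windows.net",
    container_name="vhds",
    blob_name="myvm-osdisk.vhd",
    credential="<destination-account-key-or-sas>",
)

# Kicks off a server-side, asynchronous copy; poll the copy status afterwards.
dest_blob.start_copy_from_url(source_vhd_url)
props = dest_blob.get_blob_properties()
print(props.copy.status)  # 'pending' until the copy finishes, then 'success'
```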
