Delete and recreate the registry for an Azure Machine Learning workspace - azure-machine-learning-service

Our Azure Machine Learning workspace container registry has grown extremely large (4 TB) and has many obsolete entries. I would like to delete the registry and simply create a new one; we do not need any entries from the old one.
If I delete the current registry and create a new one, how do I attach it to the workspace? I don't want to create a new workspace.

Attaching a registry to an existing workspace can be done with this command (replace placeholders with your own values):
az ml workspace update -n [ML workspace name] -g [Resource group name] --container-registry [ACR ID] -u
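If you need the full resource ID of the new registry for that command, az acr show can look it up. A minimal sketch, assuming hypothetical registry, workspace, and resource group names:
# Fetch the resource ID of the newly created registry (all names here are placeholders)
ACR_ID=$(az acr show --name mynewregistry --resource-group my-rg --query id --output tsv)
# Attach the new registry to the existing workspace; -u confirms updating dependent resources
az ml workspace update -n my-workspace -g my-rg --container-registry "$ACR_ID" -u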

Related

Azure container copy only changes

I would like to update static website assets from GitHub repos. The documentation suggests using an action based on
az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> -d '$web' -s .
If I see this correctly, this copies all files regardless of changes, even if only one file was altered. Is it possible to transfer only the files that have changed, like rsync does?
Otherwise I would try to work out the changed files from the git history and transfer only those. Please also answer if you know an existing solution in this direction.
You can use azcopy sync to achieve that. That is a different tool, though.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs-synchronize?toc=/azure/storage/blobs/toc.json
https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-sync
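With azcopy, the sync for this static-website case would look roughly like the following (account name and SAS token are placeholders; azcopy needs its own authentication):
# Sync the local directory to the $web container; only changed files are transferred
azcopy sync . 'https://<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/$web?<SAS_TOKEN>'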
Based on the suggestion by @4c74356b41, I discovered that the mentioned tool was recently integrated into the az tool.
It can be used the same way as az storage blob upload-batch. The base command is:
az storage blob sync
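For the static-website case above, usage would be roughly as follows (account name is a placeholder, authentication flags omitted):
# Upload only the files that differ from what is already in the $web container
az storage blob sync --account-name <STORAGE_ACCOUNT_NAME> -c '$web' -s .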

Import terraform workspaces from S3 remote state

I am using Terraform to deploy to multiple AWS accounts, each account with its own set of environments. I'm using Terraform workspaces and S3 remote state. When I switch between these accounts, my terraform workspace list is now empty for one of the accounts. Is there a way to sync the workspace state from the S3 remote state?
Please advise.
Thanks,
I have tried to create the workspace, but when I run terraform plan it wants to create all the resources even though they already exist in the remote state.
I managed to fix it using the following:
I created the new workspaces manually using the terraform workspace command:
terraform workspace new dev
Created and switched to workspace "dev"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
I went to S3, where I keep the remote state, and under the dev environment I now had duplicate states.
I copied the state from the old folder key to the new folder key (using copy/paste in the S3 console window).
In the DynamoDB lock table I had duplicate LockID entries for my environment with different digests. I had to copy the digest of the old entry and replace the digest of the new entry with it. After that, terraform plan ran smoothly, and I repeated the same process for all the environments.
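The S3 copy can also be scripted instead of done in the console. A minimal sketch, assuming a hypothetical bucket name and the default env:/ workspace key prefix:
# Copy the old state object to the key the new workspace expects (bucket and keys are placeholders)
aws s3 cp s3://my-tf-state-bucket/old-prefix/terraform.tfstate s3://my-tf-state-bucket/env:/dev/terraform.tfstate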
I hope this helps anyone else having the same use case.
Thanks,

Azure Backup unable to delete backup items

I used the Azure Backup client (MARS) to back up a server. The server no longer exists. In the Azure portal I am unable to delete the vault because the resource group contains backup items.
I tried using PowerShell, but Az.RecoveryServices is not meant to be used for the MARS BackupManagementType. You can run Get-AzureRmRecoveryServicesBackupContainer, but then Get-AzureRmRecoveryServicesBackupItem fails because there is no WorkLoadType for MARS.
So I can't delete the backup items from the portal, I can't delete backup items using PowerShell, and the server no longer exists, so I can't use the MARS agent to delete the items.
You can't delete a Recovery Services vault that has servers registered in it, or that holds backup data.
To gracefully delete a vault, unregister servers it contains, remove vault data, and then delete the vault.
If you try to delete a vault that still has dependencies, an error message is issued, and you will need to manually remove the vault dependencies, including:
Backed up items
Protected servers
Backup management servers (Azure Backup Server, DPM)
Refer to this article for detailed info: https://learn.microsoft.com/en-us/azure/backup/backup-azure-delete-vault
Note: You can use the Cloud Shell available in the portal to achieve this. Select PowerShell after you launch Cloud Shell.
Kindly let us know if the above helps or you need further assistance on this issue.
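If you prefer scripting the last step over clicking through the portal, a hedged sketch with the Azure CLI (vault and resource group names are placeholders; this only succeeds once the vault's dependencies have been removed):
# Delete the vault once its backup items and registered servers are gone; --yes skips the prompt
az backup vault delete --resource-group MyResourceGroup --name MyVault --yes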

Deleting Perforce Workspace from cloned VM

Does deleting a workspace only affect local files in Perforce, or is there some bookkeeping kept by the Perforce server? I've cloned a VM and now have two copies of the same workspace on two separate machines, but I want to remove it from one machine and not the other. How can this be done?
Deleting a workspace deletes it from the server, so you definitely don't want to do that.
In your new VM, create a new workspace. If you want it to be exactly the same as the one on the original VM, create the new workspace using the original workspace as a template. From P4V, right-click the workspace in the Workspaces view and choose "Create/Update Workspace from originalworkspace".
Or, from the command line:
p4 client -o -t originalworkspace mynewworkspace
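Note that -o only prints the generated client spec to stdout; to save the new workspace without opening an editor, pipe the spec back in with -i:
# Generate a spec from the template and submit it in one step (workspace names are placeholders)
p4 client -o -t originalworkspace mynewworkspace | p4 client -i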

How to clean an Azure storage Blob container?

I just want to clean (dump, zap, delete) an Azure Blob container. How can I do that?
Note: The container is used for IIS logs (wad-iis-logfiles) from a running web role.
A one-liner using the Azure CLI 2.0:
az storage blob delete-batch --account-name <storage_account_name> --source <container_name>
Substitute <storage_account_name> and <container_name> with the appropriate values for your case.
You can see the help of the command by running:
az storage blob delete-batch -h
There is only one way to bulk-delete blobs, and that is by deleting the entire container. As you've said, there is a delay between deleting the container and when you can use that container name again.
Your only other choice is to delete them one at a time. If you can do the deleting from the same data centre where the blobs are stored, it will be faster than running the delete locally. This probably means writing code (or you could RDP into one of your instances and install a cloud explorer tool). If you're writing code, you can speed up the overall process by deleting the items in parallel. Something similar to this would work:
// Flat listing (useFlatBlobListing: true) ensures every item is a CloudBlob, not a CloudBlobDirectory
Parallel.ForEach(myCloudBlobClient.GetContainerReference(myContainerName).ListBlobs(null, true), x => ((CloudBlob)x).Delete());
Update: Easier way to do it now (in 2018) is to use the Azure CLI. Check joanlofe's answer :)
Easiest way to do it in 2016 is using Microsoft Azure Storage Explorer IMO.
Download Azure Storage Explorer and install it
Sign in with the appropriate Microsoft Account
Browse to the container you want to empty
Click on the Select All button
Click on the Delete button
Try using the CloudBerry Explorer product for Windows Azure.
This is the link: http://www.cloudberrylab.com/free-microsoft-azure-explorer.aspx
You can search the blob container for a specific extension, select multiple blobs, and delete them.
If you mean you want to delete the container itself, check http://msdn.microsoft.com/en-us/library/windowsazure/dd179408.aspx to see whether the Delete Container operation (the container and any blobs contained within it are later deleted during garbage collection) fulfills the requirement.
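The same operation is exposed through the CLI; a minimal sketch, using the wad-iis-logfiles container from the question (account name is a placeholder):
# Delete the container itself; the name may be unavailable for reuse until garbage collection completes
az storage container delete --name wad-iis-logfiles --account-name <storage_account_name>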
If you are interested in a CLI way, the following piece of code will help you out:
for i in $(az storage blob list -c "Container-name" --account-name "Storage-account-name" --account-key "Storage-account-access-key" --query "[].name" --output tsv); do
    az storage blob delete --name "$i" -c "Container-name" --account-name "Storage-account-name" --account-key "Storage-account-access-key"
done
It first fetches the list of blob names in the container and then deletes them one by one.
If you are using a Spark (HDInsight) cluster which has access to that storage account, then you can use HDFS commands on the command line:
hdfs dfs -rm -r wasbs://container_name@account_name.blob.core.windows.net/path_goes_here
The real benefit is that the cluster is unlikely to go down, and if you have screen running on it, you won't lose your session whilst you delete away.
For this case, the better option is to enumerate the items in the container and then delete each item individually. If you delete the container itself, you may get a runtime error the next time you try to use it, since the container name does not become available again straight away.
You can use Cloud Combine to delete all the blobs in your Azure container.
