Azure: error when deleting a resource group

As suggested on the Azure support page, I'm writing on Stack Overflow to find a solution to my issue, though to me it looks a little off topic...
When I try to delete a resource group from the Linux terminal, I get:
Delete resource group Default-Storage-WestEurope? [y/n] y
+ Deleting resource group Default-Storage-WestEurope
error: Long running operation failed with error: 'Invalid status code with response body "{"Error":{"Code":"ResourceGroupDeletionBlocked","Target":null,"Message":"Deletion of resource group 'Default-Storage-WestEurope' failed as resources with identifiers 'Microsoft.ClassicStorage/storageAccounts/bitnamiwesteuropecfuropu' could not be deleted. The provisioning state of the resource group will be rolled back. The tracking Id is 'f791a8f0-a28a-4fe3-b491-c6251b51d987'. Please check audit logs for more details.","Details":[{"Code":null,"Target":"/subscriptions/5fdcf34e-ecda-408e-b3ba-e706ac34dba6/resourceGroups/Default-Storage-WestEurope/providers/Microsoft.ClassicStorage/storageAccounts/bitnamiwesteuropecfuropu","Message":"{\"error\":{\"code\":\"StorageAccountOperationFailed\",\"message\":\"Unable to delete storage account 'bitnamiwesteuropecfuropu': 'Storage account bitnamiwesteuropecfuropu has some active image(s) and/or disk(s), e.g. bitnami-bitnami-redis-3.2.1-0-westeurope-CfuROpU. Ensure these image(s) and/or disk(s) are removed before deleting this storage account.'.\"}}","Details":null}]}}" occurred when polling for operation status.'.
info: Error information has been recorded to /home/giumbai/.azure/azure.err
error: group delete command failed
Edit: So I've made some progress, but still not enough. I have a blob containing an image with a lease on it. To break the lease I used this command: azure storage blob lease break -a bitnamiwesteuropecfuropu -k <my key>, after which I was prompted for the container name and blob name.
But it didn't work; I get this error, which I don't really understand:
{ ArgumentNullError: Required argument blob for function _leaseImpl is not defined
<<< async stack >>>
at throwMissingArgument (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:417:9)
at ArgumentValidator._.extend.exists (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:447:7)
at ArgumentValidator._.extend.string (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:426:10)
at /usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/services/blob/blobservice.js:4661:9
at Object.validateArgs (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:495:3)
at Object.BlobService._leaseImpl (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/services/blob/blobservice.js:4660:14)
at Object.BlobService.breakLease (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/services/blob/blobservice.js:1253:8)
at Function.Object.defineProperty.value (/usr/lib/node_modules/azure-cli/node_modules/streamline/lib/callbacks/builtins.js:367:19)
at __1 (/usr/lib/node_modules/azure-cli/lib/util/storage.util.js:423:41)
at StorageUtil_performStorageOperation__1 (/usr/lib/node_modules/azure-cli/lib/util/storage.util.js:421:5)
at StorageUtil_breakLease__10 (/usr/lib/node_modules/azure-cli/lib/util/storage.util.js:1609:31)
at breakLease (/usr/lib/node_modules/azure-cli/lib/commands/storage/storage.blob.js:817:17)
at breakBlobLease (/usr/lib/node_modules/azure-cli/lib/commands/storage/storage.blob.js:802:5)
<<< raw stack >>>
at throwMissingArgument (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:417:9)
at ArgumentValidator._.extend.exists (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:447:7)
at ArgumentValidator._.extend.string (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:426:10)
at /usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/services/blob/blobservice.js:4661:9
at Object.validateArgs (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/common/util/validate.js:495:3)
at Object.BlobService._leaseImpl (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/services/blob/blobservice.js:4660:14)
at Object.BlobService.breakLease (/usr/lib/node_modules/azure-cli/node_modules/azure-storage/lib/services/blob/blobservice.js:1253:8)
at Function.Object.defineProperty.value (/usr/lib/node_modules/azure-cli/node_modules/streamline/lib/callbacks/builtins.js:367:19)
at __$__1 (/usr/lib/node_modules/azure-cli/lib/util/storage.util.js:423:41)
at __func (/usr/lib/node_modules/azure-cli/node_modules/streamline/lib/callbacks/runtime.js:47:5)
stack: [Getter/Setter],
name: 'ArgumentNullError',
argumentName: 'blob',
message: 'Required argument blob for function _leaseImpl is not defined',
__frame:
{ name: 'StorageUtil_performStorageOperation__1',
line: 402,
file: '/usr/lib/node_modules/azure-cli/lib/util/storage.util.js',
prev:
{ name: 'StorageUtil_breakLease__10',
line: 1598,
file: '/usr/lib/node_modules/azure-cli/lib/util/storage.util.js',
prev: [Object],
calls: 3,
active: false,
offset: 11,
col: 30 },
calls: 1,
active: false,
offset: 19,
col: 4 },
rawStack: [Getter] }
Edit 2: Interesting: I managed to delete the remaining image. I had to break the lease on both the image and the container, so both the image and the container are now deleted, but when I try to delete the empty storage account I get:
Failed to delete storage account 'bitnamiwesteuropecfuropu'. Unable to delete storage account 'bitnamiwesteuropecfuropu': 'Storage account bitnamiwesteuropecfuropu has some active image(s) and/or disk(s), e.g. bitnami-bitnami-redis-3.2.1-0-westeurope-CfuROpU. Ensure these image(s) and/or disk(s) are removed before deleting this storage account.'.
Proof :)
Successfully deleted blob 'bitnami-images/bitnami-bitnami-redis-3.2.1-0-westeurope-CfuROpU'.

You should not threaten to leave the service forever; that will not get your answers any faster. If you read the error message, it is pretty clear about what is happening: you are trying to delete a storage account that has a disk attached to a virtual machine. You cannot delete a storage account while one of its disks belongs to a running machine. Go to the portal and check the storage account; you will find that it still contains a file. If you click on this VHD file, you will see that its state is locked and the lease has infinite duration. Checking your account, you are going to find a machine whose disk file lives in this storage account. Delete that virtual machine first; the lease will then be released, and after that you can delete the storage account.
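The ordering described above (delete the VM first, so the lease is released, then delete the storage account) can be sketched as a toy simulation. None of these classes are real Azure APIs; they just model the lease dependency:

```python
class Disk:
    def __init__(self, name):
        self.name = name
        self.leased = True   # an attached VM disk holds an infinite lease

class StorageAccount:
    def __init__(self, name, disks):
        self.name = name
        self.disks = disks

    def delete(self):
        # Mirrors the real error: deletion is blocked while any disk
        # in the account still holds a lease.
        if any(d.leased for d in self.disks):
            raise RuntimeError(
                f"Storage account {self.name} has some active image(s) and/or disk(s)")

class VirtualMachine:
    def __init__(self, disk):
        self.disk = disk

    def delete(self):
        # Deleting the VM releases the lease on its disk.
        self.disk.leased = False

disk = Disk("mydisk.vhd")
account = StorageAccount("bitnamiwesteuropecfuropu", [disk])
vm = VirtualMachine(disk)

try:
    account.delete()
except RuntimeError as e:
    print("blocked:", e)

vm.delete()       # release the lease first
account.delete()  # now succeeds
```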

Related

Overwriting a file in Azure blob storage using the AzureFileCopy task

In Azure Pipelines, we can upload a file to blob storage using the following task:
- task: AzureFileCopy@4
  inputs:
    SourcePath: 'MyInstaller.tar.gz'
    azureSubscription: 'Azure subscription 1(qwerty)'
    Destination: 'AzureBlob'
    storage: 'qwertyuiop'
    ContainerName: 'qwertyuiop'
I noticed that if the file name is the same, then the file gets overwritten in the container, which is useful, because I can just make my live release pipeline overwrite the file so that users always get the latest version of my software.
However, what happens if a user is currently in the middle of downloading the file and it gets overwritten with a new version? Will the download fail? Is there any documentation from Microsoft on this?
For block blobs, a download may continue to return the older version until the final block commit: the final Put Block List operation atomically updates the blob to the new version.
For large blobs the upload may take time to complete, so as long as no new version has been committed, the download proceeds as is. When the version does change and no concurrency control is set, the new data overwrites the old by default; if the version changes mid-download and the pieces no longer match, internal errors may occur, otherwise the download completes.
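The atomic commit behaviour can be illustrated with a small in-memory simulation (no real Azure calls; the BlockBlob class below is invented for illustration):

```python
class BlockBlob:
    """Toy model of a block blob: staged blocks are invisible to readers
    until Put Block List commits them in one atomic step."""

    def __init__(self, content=b""):
        self._committed = content
        self._staged = {}   # block_id -> bytes

    def put_block(self, block_id, data):
        self._staged[block_id] = data  # staged, not yet visible

    def put_block_list(self, block_ids):
        # Atomic commit: readers switch to the new version all at once.
        self._committed = b"".join(self._staged[b] for b in block_ids)
        self._staged.clear()

    def download(self):
        return self._committed

blob = BlockBlob(b"v1-installer")
# An overwrite is in progress...
blob.put_block("0001", b"v2-")
blob.put_block("0002", b"installer")
# ...but a download started now still sees the old version:
print(blob.download())  # b'v1-installer'
blob.put_block_list(["0001", "0002"])
print(blob.download())  # b'v2-installer'
```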
Please have a look at the reference document on managing concurrency in Azure Storage, which explains that overwriting depends on access conditions.
The Storage service assigns an identifier to every object stored, which gets updated every time any change or update happens on that object. The identifier is returned to the client as part of an HTTP GET response using the ETag (entity tag) header that is defined within the HTTP protocol. A user performing an update on such an object can send in the original ETag along with a conditional header to ensure that an update will only occur if a certain condition has been met. The condition takes the form of an "If-Match" header, which requires the ETag in the update request to match the one held by the Storage service. If the condition doesn't match, the process may get interrupted and return errors.
Reference:
azure-blob-availability-during-an-overwrite-SO
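The ETag mechanism quoted above can be sketched with a self-contained simulation (no real Azure calls; the BlobStore class and its behaviour are a toy model of optimistic concurrency):

```python
import uuid

class BlobStore:
    """Toy in-memory store that mimics ETag-based optimistic concurrency."""

    def __init__(self):
        self._data = {}   # name -> (content, etag)

    def get(self, name):
        content, etag = self._data[name]
        return content, etag

    def put(self, name, content, if_match=None):
        # If the blob exists and the caller supplied If-Match,
        # reject the write when the ETag no longer matches (HTTP 412).
        if name in self._data and if_match is not None:
            _, current_etag = self._data[name]
            if if_match != current_etag:
                raise RuntimeError("412 Precondition Failed: ETag mismatch")
        self._data[name] = (content, str(uuid.uuid4()))

store = BlobStore()
store.put("installer.tar.gz", b"v1")
_, etag = store.get("installer.tar.gz")

# A concurrent writer overwrites the blob, changing its ETag...
store.put("installer.tar.gz", b"v2")

# ...so our conditional update with the stale ETag is rejected.
try:
    store.put("installer.tar.gz", b"v1-patched", if_match=etag)
    outcome = "updated"
except RuntimeError:
    outcome = "precondition failed"
print(outcome)  # precondition failed
```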

Reading data from Azure Blob Storage into Azure Databricks using /mnt/

I've successfully mounted my blob storage to Databricks, and can see the defined mount point when running dbutils.fs.ls("/mnt/"). This has size=0 - it's not clear if this is expected or not.
When I try and run dbutils.fs.ls("/mnt/<mount-name>"), I get this error:
java.io.FileNotFoundException: / is not found
When I try and write a simple file to my mounted blob with dbutils.fs.put("/mnt/<mount-name>/1.txt", "Hello, World!", True), I get the following error (shortened for readability):
ExecutionError: An error occurred while calling z:com.databricks.backend.daemon.dbutils.FSUtils.put. : shaded.databricks.org.apache.hadoop.fs.azure.AzureException: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
...
Caused by: com.microsoft.azure.storage.StorageException: The specified resource does not exist.
All the data is in the root of the Blob container, so I have not defined any folder structures in the dbutils.fs.mount code.
The solution here is making sure you are using the 'correct' part of your Shared Access Signature (SAS). When the SAS is generated, you'll find there are lots of different parts of it that you can use - it's likely sent to you as one long connection string, e.g.:
BlobEndpoint=https://<storage-account>.blob.core.windows.net/;QueueEndpoint=https://<storage-account>.queue.core.windows.net/;FileEndpoint=https://<storage-account>.file.core.windows.net/;TableEndpoint=https://<storage-account>.table.core.windows.net/;SharedAccessSignature=sv=<date>&ss=nwrt&srt=sco&sp=rsdgrtp&se=<datetime>&st=<datetime>&spr=https&sig=<long-string>
When you define your mount point, use the value of the SharedAccessSignature key, e.g.:
sv=<date>&ss=nwrt&srt=sco&sp=rsdgrtp&se=<datetime>&st=<datetime>&spr=https&sig=<long-string>
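One quick way to pull just that value out of the full connection string is to split on ';' and take everything after the first '=' of the SharedAccessSignature entry (a minimal sketch; the connection string below is a placeholder, not a real SAS):

```python
def extract_sas(connection_string: str) -> str:
    """Return the value of the SharedAccessSignature key from an
    Azure storage connection string."""
    for part in connection_string.split(";"):
        if part.startswith("SharedAccessSignature="):
            # The SAS itself contains '=' characters, so split only once.
            return part.split("=", 1)[1]
    raise ValueError("SharedAccessSignature key not found")

conn = (
    "BlobEndpoint=https://myacct.blob.core.windows.net/;"
    "SharedAccessSignature=sv=2020-08-04&ss=b&sig=abc123"
)
print(extract_sas(conn))  # sv=2020-08-04&ss=b&sig=abc123
```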

How do I get the correct path to a folder of an Azure container?

I'm trying to read files from an Azure storage account. In particular, I'd like to read all files contained in a certain folder, for example:
lines = sc.textFile('/path_to_azure_folder/*')
I am not quite sure what the path should be. I tried the blob service endpoint URL from Azure, followed by the folder path (with both http and https):
lines = sc.textFile('https://container_name.blob.core.windows.net/path_to_folder/*')
but it did not work:
diagnostics: Application XXXXXX failed 5 times due to AM Container for
XXXXXXXX exited with exitCode: 1 Diagnostics: Exception from
container-launch. Container id: XXXXXXXXX Exit code: 1
The URL I provided is the same one I get from the Cyberduck app when I click on 'Info'.
Your path should look like this:
lines = sc.textFile("wasb://containerName@storageAccountName.blob.core.windows.net/folder_path/*")
This should solve your issue.
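As a sanity check, the wasb URI can be assembled from its parts like this (a small helper sketch; the container, account, and path names are placeholders):

```python
def wasb_uri(container: str, account: str, path: str = "") -> str:
    """Build a wasb:// URI of the form
    wasb://<container>@<account>.blob.core.windows.net/<path>."""
    return f"wasb://{container}@{account}.blob.core.windows.net/{path.lstrip('/')}"

print(wasb_uri("mycontainer", "myaccount", "folder_path/*"))
# wasb://mycontainer@myaccount.blob.core.windows.net/folder_path/*
```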
If you are trying to read all the blobs in an Azure Storage account, you might want to look into the tools and libraries we offer for retrieving and manipulating your data. Getting started doc here.
Hope this is helpful!

Azure copy blob to another account: invalid blob type

I want to copy a 12GB page blob from one storage account to another. At the moment, both sides are "public container". But it doesn't work: HTTP/1.1 409 The blob type is invalid for this operation.
Copying it the same way but within the same storage account works without errors.
What am I missing?
Thanks!
//EDIT: This is how I'm trying to copy blob.dat from account1 to account2 (casablanca lib):
http_client client(L"https://account2.blob.core.windows.net");
http_request request(methods::PUT);
request.headers().add(L"Authorization", L"SharedKey account2:*************************************");
request.headers().add(L"x-ms-copy-source", L"http://account1.blob.core.windows.net/dir/blob.dat");
request.headers().add(L"x-ms-date", L"Sat, 23 Nov 2013 16:50:00 GMT"); // I'm keeping this updated
request.headers().add(L"x-ms-version", L"2012-02-12");
request.set_request_uri(L"/dir/blob.dat");
auto ret = client.request(request).then([](http_response response)
{
    std::wcout << response.status_code() << std::endl << response.to_string() << std::endl;
});
The storage accounts were created a few days ago, so no restrictions apply.
Also, the destination dir is empty (account2 /dir/blob.dat is not existing).
//EDIT2:
I did more testing and found out this: uploading a new page blob (a few MB) and then copying it to another storage account worked!
Then I tried renaming the 12GB page blob that I wasn't able to copy (from mydisk.vhd to test.dat), and suddenly the copy to another storage account worked as well!
But the next problem is: after renaming test.dat back to mydisk.vhd in the destination storage account, I cannot create a disk from it (an error like "not a valid vhd file"). Yet the copy is already done (x-ms-copy-status: success).
What could be the problem now?
(Oh I forgot: the source mydisk.vhd lease status was "unlocked" before copying)
//EDIT3:
Well, it seems the problem has solved itself... even with the original mydisk.vhd I wasn't able to create a disk again (invalid vhd). I don't know why, as I didn't alter it, but I created it on the Xbox One launch day; everything was quite slow then, so maybe something went wrong there. Now that I've created a new VM, I can copy the .vhd over to another storage account without problems (after deleting the disk).
I would suggest using AzCopy - Cross Account Copy Blob.
Check it out here:
http://blogs.msdn.com/b/windowsazurestorage/archive/2013/04/01/azcopy-using-cross-account-copy-blob.aspx

Mounted cloud drive(page blob) getting deleted

We have an Azure PaaS service implementation. In each instance we have 10 customers, and each customer owns a mounted cloud drive (page blob) to store some files.
This deployment has been live in Azure for the last year.
For the last 2-3 weeks we have observed 1-2 cloud drives (page blobs) getting unmounted from this instance. We got some error information from the System log of Event Viewer, which is included below; this error is also not consistent. Currently, as a workaround, we are rebooting the instance daily, which remounts the VHD (page blob) again.
Guest OS version-1.18
Azure SDK 1.7
Please let us know what the reason for this issue is.
Error details
Log Name: System
Source: PlugPlayManager
Date: 4/22/2013 11:10:50 AM
Event ID: 12
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: RD00155D477FE9
Description:
The device 'Msft VHD Disk SCSI Disk Device' (SCSI\Disk&Ven_Msft&Prod_VHD_Disk\1&26c3c0c&0&000002) disappeared from the system without first being prepared for removal.
Log Name: System
Source: WaDrivePrt
Date: 4/22/2013 11:10:49 AM
Event ID: 4
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: RD00155D477FE9
Description:
'/lwe_2f44e5e3.vhd' failed to renew lease the specified XDisk.
