I have a question about the --put-md5 parameter. When I use the azcopy command to upload a file to my Azure Storage account without the --put-md5 parameter, the uploaded blob's Content-MD5 property seems to be created by default. However, the AzCopy v10.0.9 preview release notes state that "as of version 10.0.9, MD5 hashes are NOT created by default". Could you please help check this? See the screenshots:
CMD
Portal
Thanks.
I tested on my side and got the same result: the MD5 hash is calculated and stored automatically.
This seems to be an azcopy bug.
Here is an open issue in the Microsoft GitHub azcopy repository.
https://github.com/Azure/azure-storage-azcopy/issues/1315
Reply from Microsoft
For smaller blobs (<256MB IIRC), the service computes it automatically for you.
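For larger blobs, where the service does not compute the hash for you, you can ask AzCopy to compute and store it at upload time with the --put-md5 flag. A minimal sketch, assuming placeholder account, container, and SAS values:

azcopy copy "C:\data\largefile.bin" "https://<account>.blob.core.windows.net/<container>?<SAS>" --put-md5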
Related
I set up a storage account (Blob, v2) with two containers. I uploaded a test Excel file into one of the containers. Now I would like to use Azure Cloud Shell PowerShell to copy that file from one container into the other.
Does anyone know what command(s) I've got to type in there? (command, src-format, dest-format)
Thanks in advance
PS:
cp https://...blob... https://...blob...
returns "cannot stat 'https://...blob...': no such file or directory"
Glad that #T1B solved the issue. Thank you #holger for the workaround that helped fix it. Posting the discussion and a few points here on your behalf so that it will be beneficial for other community members.
To copy files between containers, we can use the command below after running azcopy login, as described in this MICROSOFT DOCUMENT.
azcopy copy 'https://staccount.blob.core.windows.net/test1/Stack Overflow.xlsx' 'https://destStaccount.blob.core.windows.net/test2/Stack Overflow.xlsx' --recursive
For the above to work, make sure you have sufficient permissions on the storage account, such as the Storage Blob Data Contributor or Storage Blob Data Owner role.
For more information, please refer to this similar SO THREAD: How to copy files from one container to another containers fits equally in all dest containers according to size using powershell
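For completeness, a minimal sketch of the full sequence from Cloud Shell, assuming placeholder tenant, account, container, and file names:

azcopy login --tenant-id <tenant-id>
azcopy copy 'https://<srcaccount>.blob.core.windows.net/<srccontainer>/<file>.xlsx' 'https://<destaccount>.blob.core.windows.net/<destcontainer>/<file>.xlsx'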
I would like to update static website assets from GitHub repos. The documentation suggests using an action based on
az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> -d '$web' -s .
If I see this correctly, this copies all files regardless of changes, even if only one file was altered. Is it possible to transfer only the files that have changed, like rsync does?
Otherwise I would try to determine the changed files based on the git history and transfer only those. Please also answer if you know of an existing solution in this direction.
You can use azcopy sync to achieve that. That is a different tool, though.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs-synchronize?toc=/azure/storage/blobs/toc.json
https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-sync
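A minimal azcopy sync invocation might look like the following (account, container, and SAS values are placeholders); only files whose last modified time differs are transferred:

azcopy sync . 'https://<account>.blob.core.windows.net/$web?<SAS>'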
Based on the suggestion by #4c74356b41, I discovered that the mentioned tool was recently integrated into the az tool.
It can be used the same way as az storage blob upload-batch. The base command is:
az storage blob sync
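For example, syncing the current directory to the $web container might look like this (the account name is a placeholder, and the container is single-quoted so PowerShell does not expand it as a variable):

az storage blob sync --account-name <STORAGE_ACCOUNT_NAME> -c '$web' -s .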
I tried to copy a container to another storage account (Data Lake Storage Gen2) based on the document linked below.
When trying, I got the following error:
This request is not authorized to perform this operation using this permission.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
If you are using an AAD token, this error is telling you that you need to add a role assignment to the user. Please go to Storage account -> Access Control (IAM) -> Add -> Add role assignment, then add the Storage Blob Data Owner role to your login account.
If this problem persists, please provide more details.
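If you prefer the CLI over the portal, a role assignment can also be created with a command along these lines (the principal and scope values here are placeholders, not from the original answer):

az role assignment create --assignee "<user-or-service-principal>" --role "Storage Blob Data Owner" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"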
I also faced the same problem. To make it work, I just logged out and logged in again in the azcopy CLI after applying the #BowmanZhu solution:
azcopy logout
azcopy login --tenant-id xxxx-xxxx-xxxx
If you don't want to log in that way, there is always the option of adding a SAS token at the end of the URL. If you don't want to attach the token every time, you can set up longer-term access by going through any one of the steps in the official documentation page.
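As a sketch of the SAS approach (the local path, account, container, and token values are placeholders), the token is simply appended to the destination URL:

azcopy copy "C:\local\data" "https://<account>.blob.core.windows.net/<container>?<SAS-token>" --recursive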
After granting myself the Storage Blob Data Owner role on the container, AzCopy behaved itself and succeeded in copying a file to the blob storage container.
Go to storage account -> container -> Access Control (IAM) -> Add role assignment -> Storage Blob Data Owner
In my case, my Azure storage account's VNet/firewall settings were blocking azcopy from copying data to the storage account.
I added my client IP to the firewall's allowed addresses.
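If you want to do the same from the CLI rather than the portal, a network rule along these lines should work (resource group, account name, and IP are placeholders):

az storage account network-rule add --resource-group <rg> --account-name <account> --ip-address <your-client-ip>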
The SAS token has probably expired.
When I had this, I discovered it was because I'd used Azure Storage Explorer to generate a SAS that didn't have read permission, and I think it was trying to read the size/existence of a blob before writing it.
I got a clue from https://github.com/Azure/azure-storage-azcopy/issues/790, but ultimately I just regenerated a new SAS with read permission and it worked.
I probably could have looked at modifying the C# code using the Azure Data Movement library to skip the length check, but the spec was later changed to "don't overwrite", so the read permission is probably needed anyway.
Give appropriate permissions (read, write, create) while generating SAS tokens, as shown here.
I had a similar issue. Here's how it was resolved.
Command used was .\azcopy.exe copy "C:\Users\kriof\Pictures" "https://test645676535storageaccount.blob.core.windows.net/images?sp=rw&st=2022-02-23T11:03:50Z&se=2022-02-23T19:03:50Z&spr=https&sv=2020-08-04&sr=c&sig=QRN%2SMFtU3zaUdd4adRddNFjM2K4ik7tNPSi2WRL0%3D"
The SAS token had only the default (Read) permission. Adding Write permission in the Azure Portal resolved the issue.
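If you generate the SAS from the CLI instead of the portal, you can grant the needed permissions up front. A minimal sketch with placeholder account, container, and expiry values:

az storage container generate-sas --account-name <account> --name <container> --permissions rcw --expiry 2024-12-31T23:59:59Z --https-only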
My goal is to copy straight into my blob container named "$web".
The problem is that dollar signs seem to break AzCopy's location parsing...
AzCopy.exe /Source:"C:\temp\" /Dest:"https://mystorage.blob.core.windows.net/$web" /DestKey:"..." /SetContentType /V
Invalid location 'https://mystorage.blob.core.windows.net/$web', address could not be parsed.
I don't get to choose the container name. Escaping the $ as \$ didn't work.
How can I workaround this? Insights appreciated. Thanks!
#Gaurav has pointed out the problem. For now, AzCopy only recognizes the dollar sign for the $root container. I also tested in PowerShell: nothing breaks, but the files are simply uploaded to $root regardless of the name after the $.
The new feature that generates this $web container, static website hosting for Azure Storage, has only just been released. It may take time for AzCopy to catch up with the change.
I have opened an issue; you can subscribe to it for progress.
Update
The latest AzCopy v7.3.0 supports this feature, and for VSTS users, the Azure File Copy v2 task (2.0.7) works with this latest version as well.
To future readers who may be tempted to use pre-baked VSTS tasks like File Copy (which uses AzCopy under the hood), I recommend considering the Azure CLI task instead, e.g.
az storage blob upload-batch --account-name myAccountName --source mySource -d $web
My client wasn't willing to wait for a schedule they didn't control, so switching to the CLI path moved our dependency one level upstream and removed having to wait on the VSTS release cadence (looks like ~6 weeks this time).
Thanks Jerry for posting back, kudos! In my VSTS I see File Copy v2.0 Preview seems to be available and ostensibly fixes this issue. Static website hosting direct from Azure storage is a nice feature and I'm happy Azure offers it.
(I hope in the future MS may be able to improve cross-org communication so savvy users keen to checkout new feature releases can have a more consistent experience across all the public-facing surface area.)
The accepted answer is a viable workaround suggesting az storage blob upload-batch, but the blob destination argument $web needs to be single-quoted to work in PowerShell. Otherwise PowerShell will treat it as a reference to a variable named "web".
E.g. Upload the current directory: az storage blob upload-batch --account-name myaccountname --source . -d '$web'
The dollar sign works fine if you execute azcopy via cmd. If you use PowerShell, you have to escape the $ sign with a backtick (`).
so instead of:
azcopy list "https://mystorage.blob.core.windows.net/$web?..."
# or
azcopy copy "c:\temp" "https://mystorage.blob.core.windows.net/$web?..."
use:
azcopy list "https://mystorage.blob.core.windows.net/`$web?..."
# or
azcopy "c:\temp" "https://mystorage.blob.core.windows.net/`$web?..."
By the way, I received the following error when I did not escape the dollar sign:
failed to traverse container: cannot list files due to reason -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, /home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.15.0/azblob/zc_storage_error.go:42
===== RESPONSE ERROR (ServiceCode=OutOfRangeInput) =====
Description=The specified resource name length is not within the permissible limits.
I am trying to copy some blob files from one storage account to another one. I am using AzCopy in order to fulfill this goal.
The process works for copying files between containers within the same storage account, but not between different storage accounts.
The command I am issuing is:
AzCopy /Source:https://<storage_account1>.blob.core.windows.net/<container_name1>/<path_to_desired_blobs> /Dest:https://<storage_account2>.blob.core.windows.net/<container_name2>/<path_to_store>/ /SourceKey:<source_key> /DestKey:<dest_key> /Pattern:<some_pattern> /S
The error I am getting is the following:
The remote server returned an error: (400) Bad Request.
Could not verify the copy source within the specified time.
RequestId:
Time:2016-04-01T19:33:01.0527460Z
The only difference between the two storage accounts is that one is Standard, whereas the other one is Premium.
Any help will be appreciated!
From your description, you're trying to copy a block blob from the source account to a page blob in the destination account, which is not supported by the Azure Storage service or AzCopy.
To work around it, you can first use AzCopy to download the block blobs from the source account to the local file system, and then upload them from the local file system to the destination account with the option /BlobType:Page (this option is only valid when uploading from local to blob).
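A rough sketch of that two-step workaround, using the same AzCopy v7 style flags as the original command (the local path, keys, and pattern are placeholders):

AzCopy /Source:https://<storage_account1>.blob.core.windows.net/<container_name1>/ /Dest:C:\temp\blobs /SourceKey:<source_key> /Pattern:<some_pattern> /S
AzCopy /Source:C:\temp\blobs /Dest:https://<storage_account2>.blob.core.windows.net/<container_name2>/ /DestKey:<dest_key> /BlobType:Page /S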
Premium Storage only supports page blobs. Please confirm whether you are copying page blobs from the standard to the premium storage account. Also, set the BlobType parameter to "page" in order to copy the data as page blobs into the destination premium storage account.
From the description, I am assuming your source blob is a block blob. Azure's "Async Copy Blob" process (which is used by AzCopy as the default method) preserves the blob type. That is, you cannot convert a blob type from Block to Page through async copy blob.
Instead, can you try AzCopy again with the "/SyncCopy" option along with the "/BlobType:page" parameter? That might help change the destination blob type to Page.
(If that doesn't work, only other solution would be to first download the blob, and then upload it with "/BlobType:page")
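For reference, the original command with those two options added might look like this (an untested sketch; account names, keys, and the pattern are placeholders just as in the question):

AzCopy /Source:https://<storage_account1>.blob.core.windows.net/<container_name1>/<path_to_desired_blobs> /Dest:https://<storage_account2>.blob.core.windows.net/<container_name2>/<path_to_store>/ /SourceKey:<source_key> /DestKey:<dest_key> /Pattern:<some_pattern> /SyncCopy /BlobType:page /S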