I would like to update static website assets from GitHub repos. The documentation suggests using an action based on
az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> -d '$web' -s .
If I understand this correctly, this copies all files regardless of changes, even if only one file was altered. Is it possible to transfer only files that have changed, like rsync does?
Otherwise I would try to determine the changed files from the git history and transfer only those. Please also answer if you know of an existing solution along these lines.
You can use azcopy sync to achieve that. That is a different tool, though.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs-synchronize?toc=/azure/storage/blobs/toc.json
https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-sync
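For illustration, a minimal azcopy sync sketch (the account name and SAS token are placeholders; single-quoting the URL keeps the shell from expanding $web, and --delete-destination=false is the default, spelled out here so nothing at the destination is removed). The command compares names and last-modified times and transfers only what differs:
azcopy sync '.' 'https://<account>.blob.core.windows.net/$web?<SAS>' --delete-destination=false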
Based on the suggestion by @4c74356b41, I discovered that the mentioned tool was recently integrated into the az tool.
It can be used the same way as az storage blob upload-batch. The base command is:
az storage blob sync
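For example, the equivalent of the upload-batch invocation above would look something like this (an untested sketch; the container name is single-quoted so the shell doesn't expand $web):
az storage blob sync --account-name <STORAGE_ACCOUNT_NAME> -c '$web' -s .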
Related
I set up a storage account (Blob, v2) with two containers. I uploaded a test Excel file into one of the containers. Now I would like to use Azure Cloud Shell PowerShell to copy that file from one container into the other.
Does anyone know what command(s) I've got to type in there? (command, src-format, dest-format)
Thanks in advance
PS:
cp https://...blob... https://...blob...
returns "cannot stat 'https://...blob...': no such file or directory"
Glad that @T1B solved the issue. Thank you @holger for the workaround that helped fix it. Posting this on behalf of your discussion, with a few points, so that it will be beneficial for other community members.
To copy files between containers, we can use the command below after azcopy login, as mentioned in this Microsoft document.
azcopy copy 'https://staccount.blob.core.windows.net/test1/Stack Overflow.xlsx' 'https://destStaccount.blob.core.windows.net/test2/Stack Overflow.xlsx' --recursive
To do the above, make sure you have sufficient permissions on the storage account, such as the Storage Blob Data Contributor or Owner role.
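For reference, a minimal login sketch (the tenant ID is a placeholder):
azcopy login --tenant-id "<tenant-id>"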
For more information, please refer to this similar SO thread: How to copy files from one container to another containers fits equally in all dest containers according to size using powershell.
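Since the question asks for Azure Cloud Shell PowerShell specifically, a native-cmdlet alternative along these lines should also work within a single storage account (an untested sketch; the account, container, and file names are taken from the question and the example above):
# Build a storage context from the signed-in Cloud Shell identity
$ctx = New-AzStorageContext -StorageAccountName "staccount" -UseConnectedAccount
# Start a server-side copy of the blob from container test1 to test2
Start-AzStorageBlobCopy -SrcContainer "test1" -SrcBlob "Stack Overflow.xlsx" -DestContainer "test2" -DestBlob "Stack Overflow.xlsx" -Context $ctx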
I am writing a PowerShell Core task in an Azure pipeline in order to back up my table storage using azcopy. From what I found, only version 7 of azcopy supports table storage. My host is Linux and I can't find a command that works. I tried this, but it didn't work:
azcopy -source https://myaccount.table.core.windows.net/tablename --destination https://myaccount.blob.core.windows.net/containername --source-key $input1 --dest-key $input2
Any idea what the command should look like? Thanks
AzCopy on Linux does not support Azure Table storage. For more details, please refer to here and here.
If you want to use azcopy to export an Azure table, you need to use azcopy v7 on Windows. For more details, please refer to here.
Regarding how to do that, please refer to here
For example
Install Azcopy
Script
azcopy /Source:https://andyprivate.table.core.windows.net/log /Dest:https://andyprivate.blob.core.windows.net/copy/tablelog /SourceKey:<key> /DestKey:<key> /PayloadFormat:CSV
Besides, if your Azure table is very big, I suggest you use Azure Data Factory. Regarding how to do that, please refer to the official documents.
Is there a way to get data from Azure Storage (like a dacpac, zip, etc.) and put it in the drop folder in a CI/CD pipeline?
Hm, for saving files to Azure Storage there is an Azure File Copy task. So for getting files out you probably have to either use PowerShell (like the Get-AzStorageBlobContent cmdlet) or the azcopy CLI (you might have to find an image that contains the binary).
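For instance, a minimal download step sketch with the Azure CLI (the account, container, and blob names are hypothetical; $(Build.ArtifactStagingDirectory) is the pipeline's staging folder):
az storage blob download --account-name myaccount -c artifacts -n package.zip -f "$(Build.ArtifactStagingDirectory)/package.zip"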
Goal is to copy straight into my blob container named "$web".
Problem is, dollar signs seem to break AzCopy's location parsing...
AzCopy.exe /Source:"C:\temp\" /Dest:"https://mystorage.blob.core.windows.net/$web" /DestKey:"..." /SetContentType /V
Invalid location 'https://mystorage.blob.core.windows.net/$web', address could not be parsed.
I don't get to choose the container name. Escaping the $, aka \$, didn't work.
How can I workaround this? Insights appreciated. Thanks!
@Gaurav has pointed out the problem. For now, AzCopy can only recognize the dollar sign in the $root container. I also tested in PowerShell: no breakage, but files are just uploaded to $root regardless of the name after the $.
The new feature generating this $web container, static website hosting for Azure Storage, has just been released. It may take time for AzCopy to catch up with the change.
I have opened an issue; you can subscribe to it for progress.
Update
The latest v7.3.0 AzCopy supports this feature, and for VSTS users, the Azure File Copy v2 task (2.0.7) works with this latest version as well.
To future readers who may be tempted to use pre-baked VSTS tasks like File Copy (which uses AzCopy under the hood), I recommend considering the Azure CLI task instead, e.g.
az storage blob upload-batch --account-name myAccountName --source mySource -d $web
My client wasn't willing to wait on a schedule they didn't control, so switching to the CLI path moved our dependency one level upstream and removed having to wait on the VSTS release cadence (which looks like ~6 weeks this time).
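For anyone wiring this up as a pipeline definition, a hedged YAML sketch of such a step (the service connection name is hypothetical, and this assumes the current AzureCLI task schema):
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # single quotes stop the shell from expanding $web
      az storage blob upload-batch --account-name myAccountName --source mySource -d '$web'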
Thanks Jerry for posting back, kudos! In my VSTS I see File Copy v2.0 Preview seems to be available and ostensibly fixes this issue. Static website hosting direct from Azure storage is a nice feature and I'm happy Azure offers it.
(I hope in the future MS may be able to improve cross-org communication so savvy users keen to checkout new feature releases can have a more consistent experience across all the public-facing surface area.)
The accepted answer is a viable workaround suggesting az storage blob upload-batch, but the blob destination argument $web needs to be single-quoted to work in PowerShell. Otherwise PowerShell will treat it as a reference to a variable named web.
E.g. Upload the current directory: az storage blob upload-batch --account-name myaccountname --source . -d '$web'
The dollar sign works fine if you execute azcopy via cmd. If you use PowerShell, you have to escape the $ sign with a backtick (`).
So instead of:
azcopy list "https://mystorage.blob.core.windows.net/$web?..."
# or
azcopy copy "c:\temp" "https://mystorage.blob.core.windows.net/$web?..."
use:
azcopy list "https://mystorage.blob.core.windows.net/`$web?..."
# or
azcopy "c:\temp" "https://mystorage.blob.core.windows.net/`$web?..."
btw.: I received the following errors when I did not escape the dollar sign:
failed to traverse container: cannot list files due to reason -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, /home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.15.0/azblob/zc_storage_error.go:42
===== RESPONSE ERROR (ServiceCode=OutOfRangeInput) =====
Description=The specified resource name length is not within the permissible limits.
I just want to clean (dump, zap, del .) an Azure Blob container. How can I do that?
Note: The container is used for the logs (wad-iis-logfiles) of IIS running in a web role.
A one-liner using the Azure CLI 2.0:
az storage blob delete-batch --account-name <storage_account_name> --source <container_name>
Substitute <storage_account_name> and <container_name> with the appropriate values for your case.
You can see the help of the command by running:
az storage blob delete-batch -h
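If you only need to clear matching blobs rather than the whole container, the command also takes an optional --pattern flag; a sketch, assuming the log blobs end in .log:
az storage blob delete-batch --account-name <storage_account_name> --source <container_name> --pattern '*.log'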
There is only one way to bulk delete blobs, and that is by deleting the entire container. As you've said, there is a delay between deleting the container and when you can use that container name again.
Your only other choice is to delete them one at a time. If you can do the deleting from the same data centre where the blobs are stored, it will be faster than running the delete locally. This probably means writing code (or you could RDP into one of your instances and install cloud explorer). If you're writing code, then you can speed up the overall process by deleting the items in parallel. Something similar to this would work:
Parallel.ForEach(myCloudBlobClient.GetContainerReference(myContainerName).ListBlobs(), x => ((CloudBlob) x).Delete());
Update: An easier way to do it now (in 2018) is to use the Azure CLI. Check joanlofe's answer :)
The easiest way to do it in 2016 is to use Microsoft Azure Storage Explorer, IMO.
Download Azure Storage Explorer and install it
Sign in with the appropriate Microsoft Account
Browse to the container you want to empty
Click on the Select All button
Click on the Delete button
Try using the CloudBerry product for Windows Azure.
This is the link: http://www.cloudberrylab.com/free-microsoft-azure-explorer.aspx
You can search the blob container for a specific extension, select multiple blobs, and delete them.
If you mean you want to delete a container, I would suggest you check http://msdn.microsoft.com/en-us/library/windowsazure/dd179408.aspx to see if the Delete Container operation (the container and any blobs contained within it are later deleted during garbage collection) could fulfill the requirement.
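For what it's worth, the same operation is also exposed through the Azure CLI; a sketch using the container name from the question:
az storage container delete --name wad-iis-logfiles --account-name <storage_account_name>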
If you are interested in a CLI way, then the following piece of code will help you out:
for i in $(az storage blob list -c "Container-name" --account-name "Storage-account-name" --account-key "Storage-account-access-key" --query "[].name" --output tsv); do az storage blob delete --name "$i" -c "Container-name" --account-name "Storage-account-name" --account-key "Storage-account-access-key" --output table; done
It first fetches the list of blobs in the container and deletes them one by one.
If you are using a Spark (HDInsight) cluster which has access to that storage account, then you can use HDFS commands on the command line:
hdfs dfs -rm -r wasbs://container_name@account_name.blob.core.windows.net/path_goes_here
The real benefit is that the cluster is unlikely to go down, and if you have screen running on it, then you won't lose your session whilst you delete away.
For this case, the better option is to enumerate the list of items found in the container and then delete each item from the container. If you delete the container itself, you may get a runtime error the next time you try to use it...
You can use Cloud Combine to delete all the blobs in your Azure container.