AzCopy - breaks on location with $ (dollar sign) - azure

Goal is to copy straight into my blob container named "$web".
Problem is, dollar signs seem to break AzCopy's location parsing...
AzCopy.exe /Source:"C:\temp\" /Dest:"https://mystorage.blob.core.windows.net/$web" /DestKey:"..." /SetContentType /V
Invalid location 'https://mystorage.blob.core.windows.net/$web', address could not be parsed.
I don't get to choose the container name. Escaping the $ as \$ didn't work.
How can I workaround this? Insights appreciated. Thanks!

@Gaurav has pointed out the problem. For now AzCopy only recognizes the dollar sign in the $root container name. I also tested in PowerShell: there is no parsing error, but the files are simply uploaded to $root, ignoring whatever follows the $.
The new feature that generates this $web container, static website hosting for Azure Storage, has only just been released. It may take time for AzCopy to catch up with the change.
I have opened an issue; you can subscribe to it for progress.
Update
The latest AzCopy, v7.3.0, supports this feature, and for VSTS users the Azure File Copy v2 task (2.0.7) works with this latest version as well.

To future readers who may be tempted to use pre-baked VSTS tasks like File Copy (which uses AzCopy under the hood), I recommend considering the Azure CLI task instead, e.g.
az storage blob upload-batch --account-name myAccountName --source mySource -d $web
My client wasn't willing to wait on a schedule they didn't control, so switching to the CLI path moved our dependency one level upstream and removed the wait on the VSTS release cadence (roughly six weeks this time).
Thanks Jerry for posting back, kudos! In my VSTS I see that File Copy v2.0 Preview appears to be available and ostensibly fixes this issue. Static website hosting served directly from Azure Storage is a nice feature, and I'm happy Azure offers it.
(I hope in the future MS may be able to improve cross-org communication so savvy users keen to checkout new feature releases can have a more consistent experience across all the public-facing surface area.)

The accepted answer is a viable workaround using az storage blob upload-batch, but the blob destination argument $web needs to be single-quoted to work in PowerShell. Otherwise PowerShell will treat it as a reference to a variable named "web".
E.g. Upload the current directory: az storage blob upload-batch --account-name myaccountname --source . -d '$web'

The dollar sign works fine if you execute azcopy via cmd. If you use PowerShell, you have to escape the $ sign with a backtick (`).
so instead of:
azcopy list "https://mystorage.blob.core.windows.net/$web?..."
# or
azcopy copy "c:\temp" "https://mystorage.blob.core.windows.net/$web?..."
use:
azcopy list "https://mystorage.blob.core.windows.net/`$web?..."
# or
azcopy "c:\temp" "https://mystorage.blob.core.windows.net/`$web?..."
By the way, I received the following error when I did not escape the dollar sign:
failed to traverse container: cannot list files due to reason -> github.com/Azure/azure-storage-blob-go/azblob.newStorageError, /home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-go@v0.15.0/azblob/zc_storage_error.go:42
===== RESPONSE ERROR (ServiceCode=OutOfRangeInput) =====
Description=The specified resource name length is not within the permissible limits.

Related

Azure Cloudshell Powershell Copy Blob between Containers

I set up a storage account (Blob, v2) with two containers. I uploaded a test Excel file into one of the containers. Now I would like to use Azure Cloud Shell PowerShell to copy that file from one container and place it in the other.
Does anyone know what command(s) I've got to type in there? (command, src-format, dest-format)
Thanks in advance
PS:
cp https://...blob... https://...blob...
returns "cannot stat 'https://...blob...': no such file or directory"
Glad that @T1B solved the issue. Thank you @holger for the workaround that helped fix it. Posting this on behalf of your discussion, with a few points, so that it will be beneficial for other community members.
To copy files between containers, we can use the command below after azcopy login, as mentioned in this MICROSOFT DOCUMENT.
azcopy copy 'https://staccount.blob.core.windows.net/test1/Stack Overflow.xlsx' 'https://destStaccount.blob.core.windows.net/test2/Stack Overflow.xlsx' --recursive
To do the above, make sure you have sufficient permissions on the storage account, such as the Storage Blob Data Contributor or Owner role.
For more information, please refer to this similar SO THREAD: How to copy files from one container to another containers fits equally in all dest containers according to size using powershell
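The azcopy login step mentioned above is just an interactive Azure AD sign-in; a minimal sketch (the tenant ID is a placeholder and can be omitted to use your default tenant):
azcopy login --tenant-id "<your-tenant-id>"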

Azure container copy only changes

I would like to update static website assets from GitHub repos. The documentation suggests using an action based on
az storage blob upload-batch --account-name <STORAGE_ACCOUNT_NAME> -d '$web' -s .
If I see this correctly, this copies all files regardless of changes, even if only one file was altered. Is it possible to transfer only the files that have changed, like rsync does?
Otherwise I would try to determine the changed files from the git history and transfer only those. Please also answer if you know of an existing solution in this direction.
You can use azcopy sync to achieve that. That is a different tool, though.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs-synchronize?toc=/azure/storage/blobs/toc.json
https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-sync
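A minimal sketch of azcopy sync for this scenario, assuming a local build folder named ./public and a SAS-protected destination URL (escape the $ with a backtick if you run this from PowerShell, as noted earlier):
azcopy sync "./public" "https://mystorage.blob.core.windows.net/$web?<sas-token>" --delete-destination=true
With --delete-destination=true, blobs that no longer exist locally are also removed, which is the closest match to rsync --delete behaviour.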
Based on the suggestion by @4c74356b41, I discovered that the mentioned tool was recently integrated into the az tool.
It can be used the same way as az storage blob upload-batch. The base command is:
az storage blob sync
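A minimal sketch of the integrated command, assuming the same $web container and a local folder named ./public (single-quote $web if you run this from PowerShell, as discussed above):
az storage blob sync --account-name mystorage -c '$web' -s ./public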

Using AzCopy with only a connection string

I am provided with only a 'connection string' for azcopy.
Connectionstring: DefaultEndpointsProtocol=https;AccountName=someaccoutname;AccountKey=someaccountkey;EndpointSuffix=core.windows.net
URL: https://someaccoutname.blob.core.windows.net/somename
I do not have a SAS token or access to the Azure portal to create one.
How can I use AzCopy to sync a folder on a VM to that Azure storage account with only the connection string?
Use Azure Storage Explorer to get the SAS you need. Download and install it from here.
When it opens, connect to your storage account using the connection string, navigate to the container and keep it selected in the left containers pane.
Bottom left -> change from Properties tab to Actions -> Get Shared Access Signature (that's SAS)
Set the expiry date/time and check the permissions you'll need (Add, Write, etc. if you want to upload something, and probably Delete as well if you want to overwrite older files). Click Create.
Copy the URL. This will be your destination string, with the SAS included.
Note: if there's a $ sign in it, replace with "%24" - at least for linux that seems to be required.
Now form your azcopy command (uploading a folder here)
azcopy copy --from-to=LocalBlob "localfolder/" "destination-with-sas" --recursive
That is simple.
The connection string contains the two pieces of information you need.
[account] = someaccoutname
[accesskey] = something like this: 2iusdofiausd98273412934213/fsdf23409237409dfoasihdfasir9028742hvhxczoivhsadfSFAOIf34Jq==
azcopy cp https://[account].blob.core.windows.net/folder/subfolder/file.txt?[accesskey from connection string, it ends with ==] c:\temp\
You can use az storage container generate-sas to generate the SAS token, see https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/generate-sas-tokens
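A sketch of that approach using only the connection string from the question; the expiry date, permissions, and local folder path are placeholders:
az storage container generate-sas --name somename --permissions acdlrw --expiry 2025-12-31T00:00Z --connection-string "DefaultEndpointsProtocol=https;AccountName=someaccoutname;AccountKey=someaccountkey;EndpointSuffix=core.windows.net" -o tsv
azcopy sync "/path/to/folder" "https://someaccoutname.blob.core.windows.net/somename?<sas-from-previous-command>" --recursive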

Clear an App Service instance and upload new content from a zip file

On App Service, what's the best way of deploying new content from a zip file, such that it replaces any existing content?
Please note:
I am running on linux
I cannot use msdeploy
I cannot use git
I cannot use VSTS
It needs to be simple
It can't be prone to timing out
It has to be supported by all subscription levels of App Service
Commands should only return after their respective operation(s) have completed
I have access to ARM templates
Provided it isn't too difficult, I'm sure I could upload files to storage blobs
For more information, see this discussion here: https://github.com/projectkudu/kudu/issues/2367
There is a solution that consists of calling the ARM MSDeploy provider to deploy a cloud-hosted zip package. This requires no msdeploy on your client, so the fact that MSDeploy technology is involved is mostly an implementation detail you can ignore.
There are a couple gotchas that I will call out at the end.
The steps are:
First, get your zip hosted in the cloud. e.g. I have a test one here you can play around with: https://davidebbostorage.blob.core.windows.net/arm/FunctionMsDeploy.zip (note that this zip uses special msdeploy packaging, but you can also use a plain old zip with just your files).
Then run the following command using CLI 2.0, replacing your resource group, app name and zip URL:
az resource update --resource-group MyRG --namespace Microsoft.Web --parent sites/MySite --resource-type Extensions --name MSDeploy --set properties.packageUri=https://davidebbostorage.blob.core.windows.net/arm/FunctionMsDeploy.zip --api-version 2015-08-01
This will result in the package getting deployed to your wwwroot, and any existing content that's not in the zip getting deleted. It's efficient as it won't touch any files that already exist and are identical to what's in the zip. So it's far faster than trying to clean out everything and unzipping clean (but results are identical).
Now a couple gotchas:
Due to what seems like a bug in CLI 2.0, I wasn't able to pass a URL that contains an equal sign, which rules out SAS URLs. I'll report that to them. For now, test the process with a public zip, like my test package above.
The command line is more complex than it should be. I will also ask the CLI team about this.
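If you do end up hosting the zip in your own storage account (as the question suggests is an option), a rough sketch with placeholder names could look like this; because of the '=' limitation above, the container would need to allow public read access for now:
az storage blob upload --account-name mystorage -c deployments -f ./site.zip -n site.zip
You would then pass the resulting blob URL as properties.packageUri in the az resource update command shown earlier.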

How to clean an Azure storage Blob container?

I just want to clean (dump, zap, del .) an Azure Blob container. How can I do that?
Note: The container is used by IIS (running Webrole) logs (wad-iis-logfiles).
A one-liner using the Azure CLI 2.0:
az storage blob delete-batch --account-name <storage_account_name> --source <container_name>
Substitute <storage_account_name> and <container_name> by the appropriate values in your case.
You can see the help of the command by running:
az storage blob delete-batch -h
There is only one way to bulk delete blobs and that is by deleting the entire container. As you've said there is a delay between deleting the container and when you can use that container name again.
Your only other choice is to delete them one at a time. If you can do the deleting from the same data centre where the blobs are stored, it will be faster than running the delete locally. This probably means writing code (or you could RDP into one of your instances and install Cloud Explorer). If you're writing code, you can speed up the overall process by deleting the items in parallel. Something similar to this would work:
Parallel.ForEach(myCloudBlobClient.GetContainerReference(myContainerName).ListBlobs(null, true), x => ((CloudBlob) x).Delete());
Update: An easier way to do it now (in 2018) is to use the Azure CLI. Check joanlofe's answer :)
The easiest way to do it in 2016 is using Microsoft Azure Storage Explorer, IMO.
Download Azure Storage Explorer and install it
Sign in with the appropriate Microsoft Account
Browse to the container you want to empty
Click on the Select All button
Click on the Delete button
Try using the CloudBerry product for Windows Azure.
This is the link: http://www.cloudberrylab.com/free-microsoft-azure-explorer.aspx
You can search the blob container for a specific extension, select multiple blobs, and delete them.
If you mean you want to delete a container, I would suggest checking http://msdn.microsoft.com/en-us/library/windowsazure/dd179408.aspx to see if the Delete Container operation (the container and any blobs contained within it are later deleted during garbage collection) could fulfill the requirement.
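The same Delete Container operation is also exposed through the Azure CLI; a minimal sketch using the container name from the question:
az storage container delete --name wad-iis-logfiles --account-name <storage_account_name>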
If you are interested in a CLI way, then the following piece of code will help you out:
for i in `az storage blob list -c "Container-name" --account-name "Storage-account-name" --account-key "Storage-account-access-key" --output table | awk {'print $1'} | sed '1,2d' | sed '/^$/d'`; do az storage blob delete --name $i -c "Container-name" --account-name "Storage-account-name" --account-key "Storage-account-access-key" --output table; done
It first fetches the list of blobs in the container and deletes them one by one.
If you are using a Spark (HDInsight) cluster which has access to that storage account, then you can use HDFS commands on the command line:
hdfs dfs -rm -r wasbs://container_name@account_name.blob.core.windows.net/path_goes_here
The real benefit is that the cluster is unlikely to go down, and if you have screen running on it, then you won't lose your session whilst you delete away.
For this case, the better option is to identify the list of items found in the container and then delete each item from the container. If you delete the container itself, you may get a runtime error the next time you try to use that container name...
You can use Cloud Combine to delete all the blobs in your Azure container.
