I'm trying to upload a sample file to Azure from my Ubuntu machine using AzCopy for Linux, but I keep getting the error below no matter what permissions/ownership I change.
$ azcopy --source ../my_pub --destination https://account-name.blob.core.windows.net/mycontainer --dest-key account-key
Incomplete operation with same command line detected at the journal directory "/home/jmis/Microsoft/Azure/AzCopy", do you want to resume the operation? Choose Yes to resume, choose No to overwrite the journal to start a new operation. (Yes/No) Yes
[2017/11/18 22:06:24][ERROR] Error parsing source location "../my_pub": Failed to enumerate directory /home/jmis/my_pub/ with file pattern *. Cannot find the path '/home/jmis/my_pub/'.
I have dug all over the internet for solutions; without any luck, I eventually ended up asking a question here.
Although AzCopy for Linux was giving me issues, I'm able to do the above operation seamlessly with the Azure CLI. The script below, listed in the Azure docs, helped me do it:
#!/bin/bash
# A simple Azure Storage example script
export AZURE_STORAGE_ACCOUNT=<storage_account_name>
export AZURE_STORAGE_ACCESS_KEY=<storage_account_key>
export container_name=<container_name>
export blob_name=<blob_name>
export file_to_upload=<file_to_upload>
export destination_file=<destination_file>
echo "Creating the container..."
az storage container create --name $container_name
echo "Uploading the file..."
az storage blob upload --container-name $container_name --file $file_to_upload --name $blob_name
echo "Listing the blobs..."
az storage blob list --container-name $container_name --output table
echo "Downloading the file..."
az storage blob download --container-name $container_name --name $blob_name --file $destination_file --output table
echo "Done"
Going forward I will be using the Azure CLI, which works well on Linux and is simple too.
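For a directory like my original ../my_pub source, the batch variant covers what I was trying to do with azcopy. A minimal sketch, assuming the same AZURE_STORAGE_ACCOUNT/AZURE_STORAGE_ACCESS_KEY exports as in the script above and a container named mycontainer:
# upload every file under ../my_pub into mycontainer (blob names mirror the local paths)
az storage blob upload-batch --destination mycontainer --source ../my_pub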
We can use this command to upload a single file with AzCopy (Linux):
azcopy \
--source /mnt/myfiles \
--destination https://myaccount.file.core.windows.net/myfileshare/ \
--dest-key <key> \
--include abc.txt
Use --include to specify which file you want to upload. Here is an example:
root@jasonubuntu:/jason# pwd
/jason
root@jasonubuntu:/jason# ls
test1
root@jasonubuntu:/jason# azcopy --source /jason/ --destination https://jasondisk3.blob.core.windows.net/jasonvm/ --dest-key m+kQwLuQZiI3LMoMTyAI8K40gkOD+ZaT9HUL3AgVr2KpOUdqTD/AG2j+TPHBpttq5hXRmTaQ== --recursive --include test1
Finished 1 of total 1 file(s).
[2017/11/20 07:45:57] Transfer summary:
-----------------
Total files transferred: 1
Transfer successfully: 1
Transfer skipped: 0
Transfer failed: 0
Elapsed time: 00.00:00:02
root@jasonubuntu:/jason#
For more information about AzCopy on Linux, please refer to this link.
Related
I'm trying to copy a pod's repository directly to an Azure storage account using a pipe.
Instead of running these two commands:
kubectl cp my_pod:my_repository/ . -n my_namespace
azcopy cp my_repository/ "https://my-storage.blob.core.windows.net/?sp=r..." --recursive=true
I would like to do something like this, using the azcopy "--from-to" parameter:
kubectl cp my_pod:my_repository/ -n my_namespace | azcopy cp "https://my-storage.blob.core.windows.net/?sp=r..." --from-to PipeBlob --recursive=true
I'm not sure if it's possible; maybe with xargs?
I hope I'm clear enough.
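For what it's worth, kubectl cp can't write to stdout, but you can get a stream out of the pod with kubectl exec and tar, and --from-to PipeBlob uploads stdin as a single blob. That means the repository arrives as one tar archive rather than a tree of blobs, so --recursive no longer applies. A sketch, where <container>, the blob name my_repository.tar, and the SAS placeholder are my own assumptions (the SAS needs write permission):
# stream the directory out of the pod as a tar archive and pipe it into one blob
kubectl exec my_pod -n my_namespace -- tar cf - my_repository \
  | azcopy cp "https://my-storage.blob.core.windows.net/<container>/my_repository.tar?<sas-with-write-permission>" --from-to PipeBlob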
I am using the Azure CLI to add blobs to my storage account. Via the Azure CLI, I am successfully able to soft delete blobs; I can confirm this by viewing the soft-deleted blobs in the Azure Portal. I want to restore a blob that I deleted, again via the Azure CLI, but I am having trouble. I have attempted to use the az storage blob undelete command to do this. It is reportedly successful; I know this by adding the --verbose flag and seeing the 200 HTTP status returned from the API call that the CLI triggers. The response is:
{
"undeleted": null
}
And when I look at the list of blobs in the Azure Portal again, there is no indication that the blob was actually restored/undeleted. Has anyone else had success using the undelete Azure CLI command previously?
Here is some terminal output; hopefully it is helpful in understanding what I'm trying to do:
PS C:\Users\admin> az storage blob list --account-name azartbackupstore01 -c backupcontainer01 -o table
Name IsDirectory Blob Type Blob Tier Length Content Type Last Modified Snapshot
------------------------------------------------------------------- ------------- ----------- ----------- -------- ------------------------ ------------------------- ----------
20/20162F8E84F43EEAAEC0DB0010545C32D8D1A0CF60284CA2E9A57884B55C2445 BlockBlob 47 application/octet-stream 2021-08-05T15:25:59+00:00
92/92D536261E45E93DB4A8F063A98102BF443DD7EC16B1075F7D13A1A326544035 BlockBlob 11458 application/octet-stream 2021-08-05T15:22:47+00:00
PS C:\Users\admin> az storage blob delete --account-name azartbackupstore01 -c backupcontainer01 --name 20/20162F8E84F43EEAAEC0DB0010545C32D8D1A0CF60284CA2E9A57884B55C2445
PS C:\Users\admin> az storage blob undelete --account-name azartbackupstore01 -c backupcontainer01 --name 20/20162F8E84F43EEAAEC0DB0010545C32D8D1A0CF60284CA2E9A57884B55C2445
{
"undeleted": null
}
PS C:\Users\admin> az storage blob list --account-name azartbackupstore01 -c backupcontainer01 -o table
Name IsDirectory Blob Type Blob Tier Length Content Type Last Modified Snapshot
------------------------------------------------------------------- ------------- ----------- ----------- -------- ------------------------ ------------------------- ----------
92/92D536261E45E93DB4A8F063A98102BF443DD7EC16B1075F7D13A1A326544035 BlockBlob 11458 application/octet-stream 2021-08-05T15:22:47+00:00
Apparently, when you have both soft delete and versioning enabled on blobs, something weird happens (even in the Azure portal, where the blob is shown as deleted but the blob state is null, and the versions of the deleted blob are still shown as active).
But I found some kind of a workaround.
In short:
Get the (latest) versionId of the blob you want to undelete.
Get the blob URI and add the versionId and SAS token as query parameters to the URI.
Copy the blob, where the source URI is the deleted blob including the versionId (I found this solution in the code here).
When a versioned blob is soft-deleted, it will show up with the command:
az storage blob list --account-name azartbackupstore01 -c backupcontainer01 -o table --include v
I only added --include v(ersion) at the end, which will show all versions of the blobs. --include d(eleted) will not work, because the blob somehow does not have a deleted state.
Here is how I've done it:
$blobName="20/20162F8E84F43EEAAEC0DB0010545C32D8D1A0CF60284CA2E9A57884B55C2445"
$containerName="backupcontainer01"
$accountName="azartbackupstore01"
$sas="replace with your sas token"
# query all blobs where name equals $blobName, and reverse sort by versionId (which is the date) so most recent will be the first in the list
$versionId=az storage blob list --account-name $accountName -c $containerName --include v -o json --query "reverse(sort_by([?name=='$blobName'], &versionId))[0].versionId"
$blobUriRoot=az storage blob url --account-name $accountName -c $containerName --name $blobName
# The blobUriRoot and versionId variables are outputted with additional quotes, so these need to be replaced.
$blobUri=$($blobUriRoot + "?versionId=" + $versionId).Replace('"', "")
$blobUriWithSas = $blobUri + "&" + $sas
az storage blob copy start --account-name $accountName --destination-blob $blobName --destination-container $containerName --source-uri $blobUriWithSas
After running the above commands, the specified blob is active again.
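To verify, you can list the container again without any --include flags; the restored blob should now show up as active. A quick check reusing the variables from above:
# confirm the blob appears in a plain listing again
az storage blob list --account-name $accountName -c $containerName -o table --query "[?name=='$blobName']"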
I want to get data about my files in Azure Storage using the CLI. For that I am using the --query parameter, but I want the size of a file based on its creation date. How can I achieve this? I have used the command below:
bytes=$(az storage blob list \
    --container-name mycontainer \
    --query "[*].[properties.contentLength]" \
    --output tsv | paste --serial --delimiters=+ | bc)
# Display total bytes
echo "Total bytes in container: $bytes"
But since I used *, it gives me the size of the whole container. I want the size of a specific file, or the size of the files created today.
When you want to get the size of the files created today, you can use the contains function in JMESPath, like this:
bytes=$(az storage blob list --container-name terraform-container --account-name charlesstore --query "[?contains(@.properties.creationTime, '2020-10-27')==\`true\`].properties.contentLength" -o tsv)
Change the date as you need.
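Tying that back to the paste/bc pipeline from the question, here is a sketch that sums the sizes of the blobs created today, with the date computed at runtime (the container and account names are placeholders):
# sum the sizes of blobs created today
today=$(date +%Y-%m-%d)
bytes=$(az storage blob list \
    --container-name mycontainer \
    --account-name myaccount \
    --query "[?contains(@.properties.creationTime, '$today')].properties.contentLength" \
    --output tsv | paste --serial --delimiters=+ | bc)
echo "Total bytes created today: $bytes"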
I have a container called container1 in my Storage Account storageaccount1, with the following files:
blobs/tt-aa-rr/data/0/2016/01/03/02/01/20.txt
blobs/tt-aa-rr/data/0/2016/01/03/02/02/12.txt
blobs/tt-aa-rr/data/0/2016/01/03/02/03/13.txt
blobs/tt-aa-rr/data/0/2016/01/03/03/01/10.txt
I would like to delete the first 3; for that I use the following command:
az storage blob delete-batch --source container1 --account-key XXX --account-name storageaccount1 --pattern 'blobs/tt-aa-rr/data/0/2016/01/03/02/*' --debug
The files are not deleted and I see the following log:
urllib3.connectionpool : Starting new HTTPS connection (1): storageaccount1.blob.core.windows.net:443
urllib3.connectionpool : https://storageaccount1.blob.core.windows.net:443 "GET /container1?restype=container&comp=list HTTP/1.1" 200 None
What is wrong with my pattern?
If I try to delete the files one by one, it works.
As stated in the comments, you are not able to apply patterns to subfolders, only first-level folders, as documented here. But if you want, you can easily write a script that lists the blobs in your container with az storage blob list, using a prefix to filter them, and then deletes each of the resulting blobs; see the sketch below.
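A minimal sketch of that approach, reusing the container, account, and placeholder key from the question:
# list every blob under the prefix, then delete them one by one
prefix='blobs/tt-aa-rr/data/0/2016/01/03/02/'
az storage blob list --container-name container1 --account-name storageaccount1 --account-key XXX \
    --prefix "$prefix" --query "[].name" -o tsv |
while IFS= read -r name; do
    az storage blob delete --container-name container1 --account-name storageaccount1 --account-key XXX --name "$name"
done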
Here is what just worked for me, applied to the command you listed above.
az storage blob delete-batch --source container1 --account-key XXX --account-name storageaccount1 --pattern blobs/tt-aa-rr/data/0/2016/01/03/02/\* --debug
I didn't quote the pattern argument, and I added an escape before the *. I'm using iTerm2 on a Mac. I didn't try --debug, but the --dryrun argument was really helpful in getting it to tell me what it had matched (or not!); see the example below.
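For reference, a dry run of the same command just prints what the pattern would match, without deleting anything:
# preview which blobs the pattern matches before deleting
az storage blob delete-batch --source container1 --account-key XXX --account-name storageaccount1 --pattern blobs/tt-aa-rr/data/0/2016/01/03/02/\* --dryrun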
I have a user who is putting a lot of whitespace in their filenames, and this is breaking a download script.
To get the names of the blobs I use this:
BLOBS=$(az storage blob list --container-name $c \
--account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_KEY \
--query "[].{name:name}" --output tsv)
What is happening is that a blob like blob with space.pdf gets stored as [blob\twith\tspace.pdf], where \t is a tab. When I iterate over the names in an effort to download, I obviously can't get at the file.
How can I do this correctly?
You can use the az storage blob download-batch command.
I tested it in the Azure Cloud Shell; all the blobs, including those whose names contain whitespace, were downloaded.
The command:
c=container_name
AZURE_STORAGE_ACCOUNT=xx
AZURE_STORAGE_KEY=xx
# download the blobs to clouddrive
cd clouddrive
az storage blob download-batch -d . -s $c --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_KEY
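If you still need the blob names individually (for example, to filter before downloading), a sketch that sidesteps word splitting is to read one name per line instead of expanding $BLOBS unquoted; this assumes the names contain no newlines:
# read one blob name per line so embedded whitespace survives intact
az storage blob list --container-name "$c" \
    --account-name "$AZURE_STORAGE_ACCOUNT" --account-key "$AZURE_STORAGE_KEY" \
    --query "[].name" --output tsv |
while IFS= read -r name; do
    az storage blob download --container-name "$c" \
        --account-name "$AZURE_STORAGE_ACCOUNT" --account-key "$AZURE_STORAGE_KEY" \
        --name "$name" --file "./$name"
done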