Start-AzureStorageBlobCopy vs AzCopy: which one takes less time?

I need to move VHDs from one subscription to another. I would like to know which is the better option for this: Start-AzureStorageBlobCopy or AzCopy?
Which one takes less time?

Both of them would take the same time as all they do is initiate Async Server-Side Blob Copy. They just tell the service to start copying blob from source to destination. The actual copy operation is performed by Azure Blob Storage Service. The time it would take to copy the blob would depend on a number of factors including but not limited to:
Source & destination location.
Size of the source blob.
Load on storage service.
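As a minimal sketch of what this means in practice (the account names, keys, container, and blob below are placeholders), the cmdlet returns as soon as the service accepts the request, and the copy itself is tracked separately:

# Placeholder accounts, keys, container and blob; a sketch of kicking off a server-side copy
$srcCtx  = New-AzureStorageContext -StorageAccountName "sourceaccount" -StorageAccountKey $srcKey
$destCtx = New-AzureStorageContext -StorageAccountName "destaccount" -StorageAccountKey $destKey

# Returns as soon as the storage service has accepted the copy request
Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "disk0.vhd" -Context $srcCtx `
    -DestContainer "vhds" -DestBlob "disk0.vhd" -DestContext $destCtx

# The actual copy runs inside the storage service; poll (or wait) for completion
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "disk0.vhd" -Context $destCtx -WaitForComplete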

Running AzCopy without the /SyncCopy option and running the PowerShell cmdlet Start-AzureStorageBlobCopy should take the same amount of time, because they both use server-side asynchronous copy.
If you'd like to copy blobs across regions, consider specifying the /SyncCopy option when executing AzCopy in order to achieve a consistent speed, because the asynchronous copy runs in the background on the service side, which means you might see inconsistent copying speeds among your copy operations.
If the /SyncCopy option is specified, AzCopy downloads the content to memory first and then uploads it back to Azure Storage. To get better performance from /SyncCopy, run AzCopy in a VM in the same region as the source storage account. Beyond that, the VM size (which determines bandwidth and the number of CPU cores) will probably affect copying performance as well.
For further information, please refer to Getting Started with the AzCopy Command-Line Utility
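For example, /SyncCopy is simply appended to the same command shape used elsewhere in this thread (account names, container, and keys below are placeholders):

& .\AzCopy.exe /Source:https://sourceaccount.blob.core.windows.net/vhds `
               /Dest:https://destaccount.blob.core.windows.net/vhds `
               /SourceKey:$srcKey /DestKey:$destKey /S /SyncCopy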

They don't take the same time.
I've tried copying from one account to another and saw a huge difference.
Start-AzureStorageBlobCopy -SrcBlob $_.Name -SrcContainer $Container -Context $ContextSrc -DestContainer $Container -DestBlob $_.Name -DestContext $ContextDst -Verbose
This takes about 2.5 hours.
& .\AzCopy.exe /Source:https://$StorageAccountNameSrc.blob.core.windows.net/$Container /Dest:https://$StorageAccountNameDst.blob.core.windows.net/$Container /SourceKey:$StorageAccountKeySrc /DestKey:$StorageAccountKeyDst /S
This takes several minutes.
I have about 600 MB and about 7,000 files here.
Elapsed time: 00.00:03:41
Finished 44 of total 44 file(s).
[2017/06/22 17:05:35] Transfer summary:
-----------------
Total files transferred: 44
Transfer successfully: 44
Transfer skipped: 0
Transfer failed: 0
Elapsed time: 00.00:00:08
Finished 345 of total 345 file(s).
[2017/06/22 17:06:07] Transfer summary:
-----------------
Total files transferred: 345
Transfer successfully: 345
Transfer skipped: 0
Transfer failed: 0
Elapsed time: 00.00:00:31
Does anyone know why it's so different?

In most scenarios, AzCopy is likely to be quicker than Start-AzureStorageBlobCopy because of the way you initiate the copy, which results in fewer calls to the Azure API:
[AzCopy] 1 call for the whole container (regardless of blob count)
vs
[Start-AzureStorageBlobCopy] N calls, one per blob in the container.
Initially I thought it would be the same, as both appear to trigger the same asynchronous copies on the Azure side; however, on the client side the difference is directly visible, as Evgeniy found in his answer.
In a single-blob scenario, both commands should theoretically complete at the same time.
EDIT (possible workaround): I was able to decrease my time tremendously by:
Removing console output, and
Using the -ConcurrentTaskCount parameter, set to 100 in my case. That cut it down to under 5 minutes; a sketch follows below.
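A rough sketch of that workaround, reusing the variable names from Evgeniy's answer (the concurrency value is illustrative):

# Pipe the source listing straight into the copy cmdlet, raise client-side concurrency,
# and discard per-blob console output, which can otherwise slow the loop considerably
Get-AzureStorageBlob -Container $Container -Context $ContextSrc |
    Start-AzureStorageBlobCopy -DestContainer $Container -DestContext $ContextDst `
        -ConcurrentTaskCount 100 | Out-Null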

AzCopy offers an SLA which the asynchronous copy service lacks. AzCopy is designed for optimal performance. Use the /SyncCopy parameter to get a consistent copy speed.

Related

What do rotatePeriod, bufferLogSize, and syncTimeout mean exactly in Winston Azure blob storage? Explanations with simple examples are appreciated

In our project we are using the winston3-azureblob-transport NPM package to store application logs in blob storage.
However, due to an increase in users, we are getting the error "409 - ClientOtherError - BlockCountExceedsLimit|The committed block count cannot exceed the maximum limit of 50,000 blocks".
Could anyone tell us whether using rotatePeriod, bufferLogSize, and syncTimeout would help us stop this 409 BlockCountExceedsLimit error?
Please also suggest any alternative solution; however, the Winston logger should not be replaced.
The error "The committed block count cannot exceed the maximum limit of 50,000 blocks" usually occurs when the maximum limits are exceeded.
Each block in a block blob can be a different size. Based on the Service version you are using, maximum blob size differs.
If you attempt to upload a block that is larger than maximum limit your service version is supporting, the service returns status code 409(ClientOtherError - BlockCountExceedsLimit). The service also returns additional information about the error in the response, including the maximum block size permitted in bytes.
rotatePeriod: A moment format ex : YYYY-MM-DD will generate blobName.2022.03.29
bufferLogSize: A minimum number of logs before sync the blob, set to 1 if you want sync at each log.
syncTimeout: The maximum time between two sync, set to zero if you don't want.
For more in detail, please refer this link:
GitHub - agmoss/winston-azure-blob: NEW winston transport for azure blob storage by Andrew Moss agmoss

Azure Functions "The operation has timed out." for timer trigger blob archival

I have a Python Azure Functions timer trigger that is run once a day and archives files from a general purpose v2 hot storage container to a general purpose v2 cold storage container. I'm using the Linux Consumption plan. The code looks like this:
from azure.core.exceptions import ResourceExistsError, ResourceNotFoundError
from azure.storage.blob import BlobClient, ContainerClient
import logging

container = ContainerClient.from_connection_string(conn_str=hot_conn_str,
                                                   container_name=hot_container_name)
blob_list = container.list_blobs(name_starts_with=hot_data_dir)
files = []
for blob in blob_list:
    files.append(blob.name)
for file in files:
    blob_from = BlobClient.from_connection_string(conn_str=hot_conn_str,
                                                  container_name=hot_container_name,
                                                  blob_name=file)
    data = blob_from.download_blob()
    blob_to = BlobClient.from_connection_string(conn_str=cold_conn_str,
                                                container_name=cold_container_name,
                                                blob_name=f'archive/{file}')
    try:
        blob_to.upload_blob(data.readall())
    except ResourceExistsError:
        logging.debug(f'file already exists: {file}')
    except ResourceNotFoundError:
        logging.debug(f'file does not exist: {file}')
    container.delete_blob(blob=file)
This has been working for me for the past few months with no problems, but for the past two days I am seeing this error halfway through the archive process:
The operation has timed out.
There is no other meaningful error message other than that. If I manually call the function through the UI, it will successfully archive the rest of the files. The size of the blobs ranges from a few KB to about 5 MB and the timeout error seems to be happening on files that are 2-3MB. There is only one invocation running at a time so I don't think I'm exceeding the 1.5GB memory limit on the consumption plan (I've seen python exited with code 137 from memory issues in the past). Why am I getting this error all of a sudden when it has been working flawlessly for months?
Update
I think I'm going to try using the method found here for archival instead so I don't have to store the blob contents in memory in Python: https://www.europeclouds.com/blog/moving-files-between-storage-accounts-with-azure-functions-and-event-grid
To summarize the solution from the comments for the reference of other community members:
As mentioned in the comments, the OP used the start_copy_from_url() method instead to implement the same requirement as a workaround.
start_copy_from_url() copies the file from the original blob to the target blob directly on the service side, which is much faster than using data = blob_from.download_blob() to stage the file temporarily and then uploading the data to the target blob.

Optimize the use of BigQuery resources to load 2 million JSON files from GCS using Google Dataflow

I have a vast database comprised of ~2.4 million JSON files that by themselves contain several records. I've created a simple apache-beam data pipeline (shown below) that follows these steps:
Read data from a GCS bucket using a glob pattern.
Extract records from JSON data.
Transform data: convert dictionaries to JSON strings, parse timestamps, others.
Write to BigQuery.
import apache_beam as beam
from apache_beam.io import ReadFromText
from apache_beam.io.gcp.bigquery import WriteToBigQuery
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

# Pipeline
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = save_main_session
p = beam.Pipeline(options=pipeline_options)

# Read
files = p | 'get_data' >> ReadFromText(files_pattern)

# Transform
output = (files
          | 'extract_records' >> beam.ParDo(ExtractRecordsFn())
          | 'transform_data' >> beam.ParDo(TransformDataFn()))

# Write
output | 'write_data' >> WriteToBigQuery(table=known_args.table,
                                         create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                                         write_disposition=beam.io.BigQueryDisposition.WRITE_EMPTY,
                                         insert_retry_strategy='RETRY_ON_TRANSIENT_ERROR',
                                         temp_file_format='NEWLINE_DELIMITED_JSON')

# Run
result = p.run()
result.wait_until_finish()
I've tested this pipeline with a minimal sample dataset, and it works as expected. But I'm pretty doubtful regarding the optimal use of BigQuery resources and quotas. The batch load quotas are very restrictive, and due to the massive number of files to parse and load, I want to know if I'm missing some settings that could guarantee the pipeline will respect the quotas and run optimally. I don't want to exceed the quotas, as I am running other loads to BigQuery in the same project.
I don't fully understand some parameters of the WriteToBigQuery() transform, specifically batch_size, max_file_size, and max_files_per_bundle, or whether they could help optimize the load jobs to BigQuery. Could you help me with this?
Update
I'm not only concerned about BigQuery quotas, but GCP quotas of other resources used by this pipeline are also a matter of concern.
I tried to run my simple pipeline over the target data (~2.4 million files), but I'm receiving the following warning message:
Project [my-project] has insufficient quota(s) to execute this workflow with 1 instances in region us-central1. Quota summary (required/available): 1/16 instances, 1/16 CPUs, 250/2096 disk GB, 0/500 SSD disk GB, 1/99 instance groups, 1/49 managed instance groups, 1/99 instance templates, 1/0 in-use IP addresses. Please see https://cloud.google.com/compute/docs/resource-quotas about requesting more quota.
I don't understand that message completely. The process activated 8 workers successfully and is using 8 of the 8 available in-use IP addresses. Is this a problem? How can I fix it?
If you're worried about load job quotas, you can try streaming data into BigQuery, which comes with a less restrictive quota policy.
To achieve what you want to do, you can try the Google provided templates or just refer to their code.
Cloud Storage Text to BigQuery (Stream) [code]
Cloud Storage Text to BigQuery (Batch)
And last but not least, more detailed information can be found on the Google BigQuery I/O connector page.

How to run multiple child runbooks against storage account

I have a PowerShell Azure runbook that iterates through a large storage account and enforces file age policies on the blobs within the account. This runs fine but runs up against the fair share policy of 3 hours. I can use Hybrid Workers, but I would prefer to run multiple child runbooks in parallel, each handling a different portion of the blob account using the first-letter prefix.
Example:
First child runbook runs A-M
Second: N-Z
Third: a-m
Fourth: n-z
I'm thinking of using a prefix variable within a loop that will iterate between letters.
## Declaring the variables
$number_of_days_bak_threshold = 15
$number_of_days_trn_threshold = 2
$current_date = Get-Date
$date_before_blobs_to_be_deleted_bak = $current_date.AddDays(-$number_of_days_bak_threshold)
$date_before_blobs_to_be_deleted_trn = $current_date.AddDays(-$number_of_days_trn_threshold)
# Number of blobs deleted
$blob_count_deleted = 0
# Storage account details
$storage_account_name = <Account Name>
$storage_account_key = <Account Key>
$container = <Container>
## Creating Storage context for Source, destination and log storage accounts
$context = New-AzureStorageContext -StorageAccountName $storage_account_name -StorageAccountKey $storage_account_key
$blob_list = Get-AzureStorageBlob -Context $context -Container $container
## Creating log file
$log_file = "log-" + (Get-Date).ToString().Replace('/','-').Replace(' ','-').Replace(':','-') + ".txt"
$local_log_file_path = $env:temp + "\" + "log-" + (Get-Date).ToString().Replace('/','-').Replace(' ','-').Replace(':','-') + ".txt"
Write-Host "Log file saved as: " $local_log_file_path -ForegroundColor Green
## Iterate through each blob
foreach ($blob_iterator in $blob_list) {
    $blob_date = [datetime]$blob_iterator.LastModified.UtcDateTime
    # Check if the blob's last modified date is less than the threshold date for deletion for trn files:
    if ($blob_iterator.Name -Match ".trn") {
        if ($blob_date -le $date_before_blobs_to_be_deleted_trn) {
            Write-Output "-----------------------------------" | Out-File $local_log_file_path -Append
            Write-Output "Purging blob from Storage: " $blob_iterator.Name | Out-File $local_log_file_path -Append
            Write-Output " " | Out-File $local_log_file_path -Append
            Write-Output "Last Modified Date of the Blob: " $blob_date | Out-File $local_log_file_path -Append
            Write-Output "-----------------------------------" | Out-File $local_log_file_path -Append
            # Cmdlet to delete the blob
            Remove-AzureStorageBlob -Container $container -Blob $blob_iterator.Name -Context $context
            $blob_count_deleted += 1
            Write-Output "Deleted "$extn
        }
    }
    elseif ($blob_iterator.Name -Match ".bak") {
        if ($blob_date -le $date_before_blobs_to_be_deleted_bak) {
            Write-Output "-----------------------------------" | Out-File $local_log_file_path -Append
            Write-Output "Purging blob from Storage: " $blob_iterator.Name | Out-File $local_log_file_path -Append
            Write-Output " " | Out-File $local_log_file_path -Append
            Write-Output "Last Modified Date of the Blob: " $blob_date | Out-File $local_log_file_path -Append
            Write-Output "-----------------------------------" | Out-File $local_log_file_path -Append
            # Cmdlet to delete the blob
            Remove-AzureStorageBlob -Container $container -Blob $blob_iterator.Name -Context $context
            $blob_count_deleted += 1
            Write-Output "Deleted "$extn
        }
    }
    else {
        Write-Error "Unable to determine file type: $($blob_iterator.Name)"
    }
}
Write-Output "Blobs deleted: " $blob_count_deleted | Out-File $local_log_file_path -Append
I expect to be able to run through the account in parallel.
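Roughly what I have in mind for the parent runbook is sketched below (all names are placeholders; each child would then loop over the letters in its range and query with Get-AzureStorageBlob -Prefix):

# Each child job receives a prefix range and only processes blobs whose names fall in it
$prefixRanges = 'A-M', 'N-Z', 'a-m', 'n-z'
foreach ($range in $prefixRanges) {
    Start-AzureRmAutomationRunbook -AutomationAccountName 'MyAutomationAccount' `
        -ResourceGroupName 'MyResourceGroup' `
        -Name 'Purge-BlobsChild' `
        -Parameters @{ PrefixRange = $range }
}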
So, I agree with 4c74356b41 that breaking the workload down is the best approach. However, that is itself not always as simple as it might sound. Below I describe the various workarounds for fairshare and the potential issues I can think of off the top of my head. It is quite a lot of information, so here are the highlights:
Create jobs that do part of the work and then start the next job in the sequence.
Create jobs that all run on part of the sequence in parallel.
Create a runbook that does the work in parallel but also in a single job.
Use PowerShell Workflow with checkpoints so that your job is not subjected to fairshare.
Migrate the workload to use Azure Functions, e.g. Azure PowerShell functions.
TL;DR
No matter what, there are ways to break up a sequential workload into sequentially executed jobs, e.g. each job works on a segment and then starts the next job as its last operation. (Like a kind of recursion.) However, managing a sequential approach to correctly handle intermittent failures can add a lot of complexity.
If the workload can be broken down into smaller jobs that do not use a lot of resources, then you could do the work in parallel. In other words, if the amount of memory and socket resources required by each segment is low, and as long as there is no overlap or contention, this approach should run in parallel much faster. I also suspect that in parallel, the combined job minutes will still be less than the minutes necessary for a sequential approach.
There is one gotcha with processing the segments in parallel...
When a bunch of AA jobs belonging to the same account are started together, the likelihood that they will all run within the same sandbox instance increases significantly. Sandboxes are never shared with unrelated accounts, but because of improvements in job start performance, there is a preference to share sandboxes for jobs within the same account. When these jobs all run at the same time, there is an increased likelihood that the overall sandbox resource quota will be hit, and then the sandbox will perform a hard exit immediately.
Because of this gotcha, if your workload is memory or socket intensive, you may want to have a parent runbook that controls the lifecycle (i.e. start rate) of the child runbooks. This has the twisted effect that the parent runbook could now hit the fairshare limit.
The next workaround is to implement runbooks that kick off the job for the next processing segment when they complete. The best approach for this is to store the next segment somewhere the job can retrieve it, e.g. a variable or a blob. This way, if a job fails on its segment, as long as there is some way of making sure the jobs run until the entire workload finishes, everything will eventually finish. You might want to use a watcher task to verify eventual completion and handle retries. Once you get to this level of complexity, you can experiment to discover how much parallelism you can introduce without hitting resource limits.
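As a rough sketch of that pattern (the segment list, the 'NextSegment' variable, and the runbook names are all hypothetical), each job reads its segment from an Automation variable, advances it, and starts the next job as its final operation:

# Hypothetical names throughout; one entry per blob-name prefix segment, extend as needed
$segments = 'a', 'b', 'c'
$current  = Get-AutomationVariable -Name 'NextSegment'
# ... enumerate and purge only blobs whose names start with $current ...

# Advance the pointer and, if segments remain, start the next job as this job's last operation
$idx = [array]::IndexOf($segments, $current)
if ($idx -ge 0 -and $idx -lt $segments.Count - 1) {
    Set-AutomationVariable -Name 'NextSegment' -Value $segments[$idx + 1]
    Start-AzureRmAutomationRunbook -AutomationAccountName 'MyAutomationAccount' `
        -ResourceGroupName 'MyResourceGroup' -Name 'Purge-Blobs'
}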
There is no way for a job to monitor the available resource and throttle itself.
There is no way to force each job to run in its own sandbox.
Whether jobs run in the same sandbox is very non-deterministic, which can cause problems with hard to trace intermittent failures.
If you have no worry about hitting resource limits, you could consider using the ThreadJob module from the PowerShell Gallery. With this approach, you would still have a single runbook, but now you would be able to parallelize the workload within that runbook and complete it before hitting the fairshare limit. This can be very effective if the individual tasks are fast and lightweight. Otherwise, this approach may work for a little while but begin to fail if the workload increases in either the time or resources required.
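A rough sketch of the ThreadJob approach (the prefixes, storage context, and cutoff date are hypothetical and would come from the surrounding runbook):

# Requires the ThreadJob module from the PowerShell Gallery (Install-Module ThreadJob)
$jobs = 'a', 'b', 'c' | ForEach-Object {
    Start-ThreadJob -ArgumentList $_ -ScriptBlock {
        param($prefix)
        # Purge only blobs under this prefix that are older than the cutoff
        Get-AzureStorageBlob -Context $using:context -Container $using:container -Prefix $prefix |
            Where-Object { $_.LastModified.UtcDateTime -le $using:cutoff } |
            Remove-AzureStorageBlob
    }
}
$jobs | Wait-Job | Receive-Job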
Do not use PowerShell Jobs within an AA Job to achieve parallelism. This includes not using commands like Parallel-ForEach. There are a lot of examples for VM-Start/Stop runbooks that use PowerShell Jobs; this is not a recommended approach. PowerShell Jobs require a lot of resources to execute, so using them will significantly increase the resources used by your AA Job and the chance of hitting the memory quota.
You can get around the fairshare limitation by re-implementing your code as a PowerShell Workflow and performing frequent checkpoints. When a workflow job hits the fairshare limit, if it has been performing checkpoints, it will be restarted in another sandbox, resuming from the last checkpoint.
My recollection is that your jobs need to perform a checkpoint at least once every 30 minutes. If they do, they will resume after fairshare without any penalty, forever. (At the cost of a tremendous number of job minutes.)
Even without a checkpoint, a workflow will get retried 2 times after hitting the fairshare limit. Because of this, if your workflow code is idempotent and quickly skips previously completed work, your job may complete (in 9 hours) even without checkpoints by using a workflow.
However, workflows are not just PowerShell scripts wrapped inside a workflow {} script block:
There are a lot of subtle differences in the way workflows function compared to scripts. Mastering these subtleties is difficult at best.
Workflows will not checkpoint all state during job execution. For example, the big one is that you need to write your workflow so that it re-authenticates with Azure after each checkpoint, because credentials are not captured by the checkpoint.
I don't think anyone is going to claim debugging an AA job is an easy task. For workflows this task is even harder. Even with all the correct dependencies, how a workflow runs when executed locally is different from how it executes in the cloud.
Scripts run measurably faster than workflows.
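For reference, a minimal skeleton of the checkpointed workflow approach (the workflow name, connection name, and segments are hypothetical), including the re-authentication called out above:

workflow Purge-BlobsWorkflow {
    foreach ($prefix in 'a', 'b', 'c') {
        # Re-authenticate at the start of every segment: the Azure context is not
        # preserved when the job resumes from a checkpoint in another sandbox
        $conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
        Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId `
            -ApplicationId $conn.ApplicationId `
            -CertificateThumbprint $conn.CertificateThumbprint | Out-Null

        InlineScript {
            # ... purge blobs whose names start with $using:prefix ...
        }

        Checkpoint-Workflow   # resume point if fairshare unloads the job
    }
}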
Migrate the work into Azure Functions. With the recent release of PowerShell functions, this might be relatively easy. Functions will have different limitations than those in Automation. This difference might suit your workload well, or be worse. I have not yet tried the Functions approach, so I can't really say.
The most obvious difference you will notice right away is that Functions is a lot more of a raw, DevOps oriented service than Automation. This is partly because Automation is a more mature product. (Automation was probably the first widely available serverless service, having launched roughly a year before Lambda.) Automation was purpose built for automating the management of cloud resources, and automation is the driving factor in the feature selection. Whereas Functions is a much more general purpose serverless operations approach. Anyway, one obvious difference at the moment is Functions does not have any built-in support for things like the RunAs account or Variables. I expect Functions will improve in this specific aspect over time, but right now it is pretty basic for automation tasks.

Reduce Service Fabric backup size

I'm trying to use Service Fabric backups with Actors:
var backupDescription = new BackupDescription(BackupOption.Full, BackupCallbackAsync);
await BackupAsync(backupDescription, TimeSpan.FromHours(1), cancellationToken);
But I've noticed that one backup may contain several files like:
edb0000036A.log 5120 KB
edb0000036B.log 5120 KB
edb00000366.log 5120 KB
...
I haven't found any info about these files, but it seems that they are just logs and I may not need to include them. Am I right, or must these files be included in the backup?
These files are quite heavy, so I'm trying to reduce the size of the backups.
UPDATE 1:
I have tried to use incremental backup, but it seems that Actors do not support incremental backups, as I have read on MSDN. Moreover, when I tested it I got the exception "Invalid backup option. Parameter name: option".
Instead of doing full backups every hour, you can also use incremental backups, which will result in a smaller size. (For example, do a full backup every day and incrementals every hour.)
The log files are transaction logs, they are not optional for restore. More info here.
