In Azure Pipelines, we can upload a file to blob storage using the following task:
- task: AzureFileCopy@4
  inputs:
    SourcePath: 'MyInstaller.tar.gz'
    azureSubscription: 'Azure subscription 1(qwerty)'
    Destination: 'AzureBlob'
    storage: 'qwertyuiop'
    ContainerName: 'qwertyuiop'
I noticed that if the file name is the same, the file in the container gets overwritten. This is useful, because my live release pipeline can simply overwrite the file so that users always get the latest version of my software.
However, what happens if a user is currently in the middle of downloading the file and it gets overwritten with a new version? Will the download fail? Is there any documentation from Microsoft on this?
For block blobs, an in-progress download may still return the older version until the final block commit: the concluding Put Block List operation atomically switches the blob to the new version.
For large blobs the upload can take some time to complete, so if the blob has not changed yet the download simply proceeds against the existing content. When the blob does change and no concurrency control is set, the new upload overwrites the data by default; if conditional access is used (for example an ETag check) and the versions no longer match, the request returns an error instead of downloading.
Please have a look at the reference document on managing concurrency in Azure Storage, which explains that whether an overwrite succeeds depends on the access conditions attached to the request.
The Storage service assigns an identifier to every stored object, and that identifier is updated every time the object changes. The identifier is returned to the client as part of an HTTP GET response in the ETag (entity tag) header defined by the HTTP protocol. A user performing an update on such an object can send the original ETag along with a conditional header to ensure that the update only occurs if a certain condition has been met. The condition takes the form of an If-Match header, which requires the ETag in the update request to match the ETag held by the Storage service. If the condition is not met, the request is rejected and an error is returned.
Reference:
azure-blob-availability-during-an-overwrite-SO
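As a rough illustration of the conditional-header behaviour described above (a sketch only, using the legacy com.microsoft.azure.storage Java SDK that also appears later on this page; the connection string, container name, and blob name are placeholders), a client can read the blob's ETag first and then ask the service to fail the download if the blob has been overwritten in the meantime:

import java.io.FileOutputStream;

import com.microsoft.azure.storage.AccessCondition;
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class ConditionalDownload {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string, container and blob names.
        CloudStorageAccount account = CloudStorageAccount.parse("<connection-string>");
        CloudBlobContainer container = account.createCloudBlobClient()
                .getContainerReference("qwertyuiop");
        CloudBlockBlob blob = container.getBlockBlobReference("MyInstaller.tar.gz");

        // Read the blob's current ETag.
        blob.downloadAttributes();
        String etag = blob.getProperties().getEtag();

        // Download only while the blob still has that ETag; if a new version
        // is uploaded between reading the ETag and the download request, the
        // service rejects the request with 412 Precondition Failed.
        AccessCondition ifMatch = AccessCondition.generateIfMatchCondition(etag);
        try (FileOutputStream out = new FileOutputStream("MyInstaller.tar.gz")) {
            blob.download(out, ifMatch, null, null);
        }
    }
}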
Related
After realizing I was being charged for storage as a result of Google Cloud Functions deployments, I read this thread and created a 3-day deletion rule on my us.artifacts.{myproject}.appspot.com bucket. Now I am trying to deploy an existing function and am getting the error below. How can I resolve this? Should I delete the whole image folder?
failed to export: failed to write image to the following tags: [us.gcr.io/myproject/gcf/us-central1/3a36a5e8-92b5-426e-b230-ba19ffc92ba8:MYFUNCTION_version-64:
GET https://storage.googleapis.com/us.artifacts.myproject.appspot.com/containers/images/sha256:{some long string}?access_token=REDACTED:
unsupported status code 404; body: <?xml version='1.0' encoding='UTF-8'?>
<Error><Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message><Details>No such object: us.artifacts.myproject.appspot.com/containers/images/sha256:{some long string}</Details></Error>]
Edit 1: My deploy command (which has been previously working for months):
gcloud functions deploy MYFUNCTIONNAME --source https://source.developers.google.com/projects/MYPROJECT/repos/MYREPO --trigger-http --runtime nodejs10 --allow-unauthenticated
Edit 2: I have a separate Cloud Function that points to the exact same source repository (but is located in europe-west3), and it updated fine without issue. However, that function was last updated in December, while the failing function was last updated 2 days ago.
Edit 3: Well, in the end I just duplicated the Cloud Function, and I am able to update and deploy the new one without issue. I kept the 3-day deletion rule for the container bucket, and this and other functions are updating without issue as well. I have no idea why the original function kept getting this error.
As recommended in the answer to the other question, it is better to delete the whole bucket. This destroys all objects and configuration related to it and avoids inconsistencies between Cloud Functions, Cloud Storage, and Container Registry; if you delete only the container images, some configuration remains and can affect further deploys.
I'm new to Azure DevOps. I keep receiving this error: "Add-PnPApp : The request message was already sent. Cannot send the same request message multiple times."
The Azure DevOps release fails because of the Add-PnPApp "...same request" error.
The build shows my (old) version being changed to a new version (gulp's version?).
Image of build
I'm told it could be because the version starts with zero, which SharePoint doesn't like. I can't seem to change the new version to 1.0.0.1 because it appears to be set in gulpfile.js. Is there something else that I am missing?
image of release
Is it possible that you need the -Overwrite parameter on the Add-PnPApp command? Or might you need to increment the version between each release?
https://learn.microsoft.com/en-us/powershell/module/sharepoint-pnp/add-pnpapp?view=sharepoint-ps
In my case the same message
The request message was already sent. Cannot send the same request message multiple times
was misleading. I tried to update dynamically an .sppkg package (which is in fact a ZIP file) during an automated deployment, but the file _rels/.rels was getting lost in the process (because of the bug Unable to compress hidden files with Compress-Archive) and the resulting package was corrupted.
Once I fixed the package by making sure the file _rels/.rels was kept, the deployment would succeed.
I am using Java to write an Azure Function App with an Event Grid trigger that fires on BlobCreated events. Whenever a blob is created, the function copies it from one container to another using the startCopy method from com.microsoft.azure.storage.blob. It was working fine, but sometimes it copies files as zero bytes even though the source blobs actually contain data, so the destination occasionally ends up with zero-byte files. I would appreciate some help understanding how to handle this situation.
CloudBlockBlob cloudBlockBlob = container.getBlockBlobReference(blobFileName);
CloudStorageAccount storageAccountdest = CloudStorageAccount.parse("something");
CloudBlobClient blobClientdest = storageAccountdest.createCloudBlobClient();
CloudBlobContainer destcontainer = blobClientdest.getContainerReference("something");
CloudBlockBlob destcloudBlockBlob = destcontainer.getBlockBlobReference(blobFileName);
destcloudBlockBlob.startCopy(cloudBlockBlob);
Copying blobs across storage accounts is an asynchronous operation. When you call the startCopy method, it just signals Azure Storage to copy the file; the actual copy happens asynchronously and may take some time depending on how large the file you're transferring is.
I would suggest checking the copy operation's progress on the target blob to see how many bytes have been copied and whether the copy has failed. You can do so by fetching the properties of the target blob, as sketched below. A copy operation could potentially fail if the source blob is modified after Azure Storage has started the copy.
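As a rough sketch (using the same legacy com.microsoft.azure.storage SDK as the snippet in the question; the helper class name and the one-second polling interval are my own), you could wait for the copy to finish and surface failures like this:

import com.microsoft.azure.storage.blob.CloudBlockBlob;
import com.microsoft.azure.storage.blob.CopyStatus;

public class CopyPoller {
    // Polls the destination blob until the server-side copy started by
    // startCopy has completed, refreshing its properties on each pass.
    public static void waitForCopyToFinish(CloudBlockBlob destBlob) throws Exception {
        destBlob.downloadAttributes();
        while (destBlob.getCopyState().getStatus() == CopyStatus.PENDING) {
            Thread.sleep(1000);
            destBlob.downloadAttributes();
        }
        if (destBlob.getCopyState().getStatus() != CopyStatus.SUCCESS) {
            throw new IllegalStateException("Copy failed: "
                    + destBlob.getCopyState().getStatusDescription());
        }
    }
}

Calling a helper like this right after startCopy at least makes it visible whether the service reported the copy as still pending or failed, instead of assuming it completed.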
I had the same problem, and later figured out from the docs:
Event Grid isn't a data pipeline, and doesn't deliver the actual
object that was updated
Event Grid only tells you that something has changed, and the event message itself has a size limit. As long as the data you are copying is within that limit the copy will succeed; otherwise you end up with 0 bytes. I was able to copy up to 1 MB, and beyond that it resulted in 0-byte files. You can check whether Azure has increased the size limit recently.
However, if you want to copy the complete data, you need to use Event Hubs or Service Bus. In my case, I went with Service Bus.
Azure Functions bug. I get the error in the portal
Error:
We are not able to retrieve the runtime master key. Please try again later.
Session Id: d13fceebd4ea4cb1b7fb3d3829dd1406
Timestamp: 2017-08-24T20:04:23.555Z
I've tried all of the suggestions here:
https://blogs.msdn.microsoft.com/jpsanders/2017/05/09/function-app-error-we-are-not-able-to-retrieve-the-runtime-master-key/
I'm using the runtime version 1.0.10917 but I've tried ~1 and get the same result.
This seems to occur when I delete the function from the portal and then recreate it. It consistently happens after that for every function we have. The first time the function is created, it seems to work.
This is the exception you're hitting:
System.Security.Cryptography.CryptographicException : The payload was invalid.
at Microsoft.AspNetCore.DataProtection.Cng.CbcAuthenticatedEncryptor.DecryptImpl(Byte* pbCiphertext,UInt32 cbCiphertext,Byte* pbAdditionalAuthenticatedData,UInt32 cbAdditionalAuthenticatedData)
at Microsoft.AspNetCore.DataProtection.Cng.Internal.CngAuthenticatedEncryptorBase.Decrypt(ArraySegment`1 ciphertext,ArraySegment`1 additionalAuthenticatedData)
at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData,Boolean allowOperationsOnRevokedKeys,UnprotectStatus& status)
at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData,Boolean ignoreRevocationErrors,Boolean& requiresMigration,Boolean& wasRevoked)
at Microsoft.Azure.WebJobs.Script.WebHost.DataProtectionKeyValueConverter.Unprotect(Key key) at C:\azure-webjobs-sdk-script\src\WebJobs.Script.WebHost\Security\DataProtectionKeyValueConverter.cs : 43
at Microsoft.Azure.WebJobs.Script.WebHost.SecretManager.ReadHostSecrets(HostSecrets hostSecrets) at C:\azure-webjobs-sdk-script\src\WebJobs.Script.WebHost\Security\SecretManager.cs : 383
at async Microsoft.Azure.WebJobs.Script.WebHost.SecretManager.GetHostSecretsAsync() at C:\azure-webjobs-sdk-script\src\WebJobs.Script.WebHost\Security\SecretManager.cs : 83
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at async Microsoft.Azure.WebJobs.Script.WebHost.WebJobsSdkExtensionHookProvider.GetOrCreateExtensionKey(String extensionName) at C:\azure-webjobs-sdk-script\src\WebJobs.Script.WebHost\WebHooks\WebJobsSdkExtensionHookProvider.cs : 71
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.Azure.WebJobs.Script.WebHost.WebJobsSdkExtensionHookProvider.GetExtensionWebHookRoute(String extensionName) at C:\azure-webjobs-sdk-script\src\WebJobs.Script.WebHost\WebHooks\WebJobsSdkExtensionHookProvider.cs : 64
at Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridExtensionConfig.Initialize(ExtensionConfigContext context)
at Microsoft.Azure.WebJobs.Host.Executors.JobHostConfigurationExtensions.InvokeExtensionConfigProviders(ExtensionConfigContext context)
at Microsoft.Azure.WebJobs.Host.Executors.JobHostConfigurationExtensions.CreateStaticServices(JobHostConfiguration config)
at Microsoft.Azure.WebJobs.JobHost.PopulateStaticServices()
at Microsoft.Azure.WebJobs.Script.Utility.CreateMetadataProvider(JobHost host) at C:\azure-webjobs-sdk-script\src\WebJobs.Script\Utility.cs : 362
at Microsoft.Azure.WebJobs.Script.ScriptHost.LoadCustomExtensions() at C:\azure-webjobs-sdk-script\src\WebJobs.Script\Host\ScriptHost.cs : 670
at Microsoft.Azure.WebJobs.Script.ScriptHost.Initialize() at C:\azure-webjobs-sdk-script\src\WebJobs.Script\Host\ScriptHost.cs : 510
at Microsoft.Azure.WebJobs.Script.ScriptHost.Create(IScriptHostEnvironment environment,IScriptEventManager eventManager,ScriptHostConfiguration scriptConfig,ScriptSettingsManager settingsManager) at C:\azure-webjobs-sdk-script\src\WebJobs.Script\Host\ScriptHost.cs : 937
It's very hard for users to discover these errors for their app because the runtime doesn't log them anywhere the UX can query.
This issue is tracking the exception: https://github.com/Azure/azure-webjobs-sdk-script/issues/1832
We are still not entirely sure why this is happening. Did you republish your keys from another application by any chance (edit: or delete and recreate the app with the same name)? These keys are encrypted with a function-app-specific key and won't work outside that context.
Fix (copied from Fabio Cavalcante):
can you delete (or rename) the host.json file in
d:\home\data\Functions\secrets\ and retry? This will force the runtime
to re-create those secrets for that environment. Keep in mind that
this would also change your master and default keys.
Switching the Azure Function storage account to a new storage account can trigger a similar issue.
Error: We are not able to get the key swaggerdocumentationkey.
Please check the runtime logs for any errors or try again later.
This is likely related to the storage account not being initialized for the new site. I can't confirm the behavior, but it seems that when you switch storage accounts, the new account isn't initialized for the site, which causes similar issues.
I deleted the Azure Function and recreated it with a different name. After that it started working fine. It looks like even when you delete an Azure Function, traces remain in Azure and mess something up.
Also make sure you use separate Azure storage for every function; if you share the same storage across more than one Azure Function, you get some weird errors. I think it is used for logging and authentication purposes.
I have an ASP.NET app being developed for Windows Azure. It's been deemed necessary that we use sharding for the DB to improve write times since the app is very write heavy but the data is easily isolated. However, I need to keep track of a few central variables across all instances, and I'm not sure the best place to store that info. What are my options?
Requirements:
Must be durable, can survive instance reboots
Must be synchronized. It's incredibly important to avoid conflicting updates or at least throw an exception in such cases, rather than overwriting values or failing silently.
Must be reasonably fast (2000+ reads/writes per second)
I thought about writing a separate component to run on a worker role that simply reads/writes the values in memory and flushes them to disk every so often, but I figure there's got to be something already written for that purpose that I can appropriate in Windows Azure.
I think what I'm looking for is a system like Apache ZooKeeper, but I don't want to have to deal with installing the JRE during the worker role startup and all that jazz.
Edit: Based on the suggestion below, I'm trying to use Azure Table Storage using the following code:
var context = table.ServiceClient.GetTableServiceContext();
var item = context.CreateQuery<OfferDataItemTableEntity>(table.Name)
    .Where(x => x.PartitionKey == Name).FirstOrDefault();
if (item == null)
{
    item = new OfferDataItemTableEntity(Name);
    context.AddObject(table.Name, item);
}
if (item.Allocated < Quantity)
{
    allocated = ++item.Allocated;
    context.UpdateObject(item);
    context.SaveChanges();
    return true;
}
However, the context.UpdateObject(item) call fails with "The context is not currently tracking the entity." Doesn't querying the context for the item initially add it to the context's tracking mechanism?
Have you looked into SQL Azure Federations? It seems like exactly what you're looking for:
sharding for SQL Azure.
Here are a few links to read:
http://msdn.microsoft.com/en-us/library/windowsazure/hh597452.aspx
http://convective.wordpress.com/2012/03/05/introduction-to-sql-azure-federations/
http://searchcloudapplications.techtarget.com/tip/Tips-for-deploying-SQL-Azure-Federations
What you need is Table Storage since it matches all your requirements:
Durable: Yes, Table Storage is part of a Storage Account, which isn't related to a specific Cloud Service or instance.
Synchronized: Yes, all instances talk to the same centralized Storage Account, so they all see the same data.
It's incredibly important to avoid conflicting updates: Yes, this is possible with the use of ETags
Reasonably fast? Very fast, up to 20,000 entities/messages/blobs per second
Update:
Here is some sample code that uses the new storage SDK (2.0):
var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
var table = storageAccount.CreateCloudTableClient()
    .GetTableReference("Records");
table.CreateIfNotExists();
// Add item.
table.Execute(TableOperation.Insert(new MyEntity() { PartitionKey = "", RowKey ="123456", Customer = "Sandrino" }));
var user1record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
var user2record = table.Execute(TableOperation.Retrieve<MyEntity>("", "123456")).Result as MyEntity;
user1record.Customer = "Steve";
table.Execute(TableOperation.Replace(user1record));
user2record.Customer = "John";
table.Execute(TableOperation.Replace(user2record));
First it adds the item 123456.
Then I'm simulating 2 users getting that same record (imagine they both opened a page displaying the record).
User 1 is fast and updates the item. This works.
User 2 still had the window open. This means he's working on an old version of the item. He updates the old item and tries to save it. This causes the following exception (this is possible because the SDK matches the ETag):
The remote server returned an error: (412) Precondition Failed.
I ended up with a hybrid cache / table storage solution. All instances track the variable via Azure caching, while the first instance spins up a timer that saves the value to table storage once per second. On startup, the cache variable is initialized with the value saved to table storage, if available.