Consistent updates with PowerShell to Azure Table storage

Azure Storage tables implement ETags, which I would like to use to confirm that I am updating the latest entry and avoid overwriting my changes when multiple updates happen concurrently. However, I cannot figure out how to use this with the AzTable PowerShell module, as it seems to just overwrite and does not care about the ETag. Can I somehow make the AzTable module use the ETag, or is there another way to achieve this in PowerShell?
I am currently following https://learn.microsoft.com/en-us/azure/storage/tables/table-storage-how-to-use-powershell and bumped into this problem.

I have found that the easiest way to achieve this is to change 'InsertOrMerge' to 'Merge' in the Update-AzTableRow function: https://github.com/paulomarquesc/AzureRmStorageTable/blob/master/AzureRmStorageTableCoreHelper.psm1#L696
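If you would rather not edit the module source, a rough alternative is to skip Update-AzTableRow and issue the Merge operation yourself against the underlying CloudTable object, carrying the ETag from the row you just read. This is only a sketch, not documented AzTable behaviour: the account, key, table and property names are placeholders, and the type names should be checked against the AzTable version you have installed.

    # Placeholders: replace the account, key, table, keys and property names with your own.
    Import-Module Az.Storage
    Import-Module AzTable

    $ctx        = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
    $cloudTable = (Get-AzStorageTable -Name "MyTable" -Context $ctx).CloudTable

    # Read the current row; the returned object carries the ETag of that version.
    $row = Get-AzTableRow -Table $cloudTable -PartitionKey "partition1" -RowKey "row1"

    # Build an entity that carries the original ETag plus the changed property.
    $entity = New-Object Microsoft.Azure.Cosmos.Table.DynamicTableEntity($row.PartitionKey, $row.RowKey)
    $entity.ETag = $row.Etag
    $entity.Properties.Add("Status", [Microsoft.Azure.Cosmos.Table.EntityProperty]::GeneratePropertyForString("processed"))

    # Merge (unlike InsertOrMerge) sends If-Match with the ETag; if someone else
    # updated the row after we read it, the service returns 412 and Execute throws.
    $operation = [Microsoft.Azure.Cosmos.Table.TableOperation]::Merge($entity)
    try {
        $cloudTable.Execute($operation) | Out-Null
    }
    catch {
        Write-Warning "The row changed since it was read; re-read it and retry the update."
    }

Because Merge passes the ETag as a precondition, a concurrent writer makes the call fail with 412 (Precondition Failed) instead of silently overwriting the other update.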

Related

Automating Snowpipe for Microsoft Azure Blob Storage - error: Queue not found for channel

I have been trying to set up a Snowpipe to ingest data from Blob Storage in Azure into Snowflake, following this guide. I think I have done everything correctly, although I am new to Azure and Snowflake, so I may have missed something obvious. Everything seems to have been set up correctly on both sides, but whenever I check the pipe status using SELECT SYSTEM$PIPE_STATUS('azure_pipe');, I get the following:
{"executionState":"RUNNING","pendingFileCount":0,"notificationChannelName":"https://snowflakedata.queue.core.windows.net/snowflakequeue","numOutstandingMessagesOnChannel":2,"lastReceivedMessageTimestamp":"2022-02-18T13:25:12.107Z","channelErrorMessage":"downloadAttributes error:Queue not found for channel Name=https://snowflakedata2.queue.core.windows.net/snowflakequeue, AccountId=6713, NotificationChannelID=2045, IntegrationID=1784764","lastErrorRecordTimestamp":"2022-02-18T17:32:47.854Z"}
I'm not sure what I have done wrong; the Snowflake app has the queue contributor role in Azure, and I'm fairly sure I set everything else up correctly. If anyone could point me in the right direction on how to troubleshoot this, that would be really helpful!
I had the same issue as you did just this week when trying to create a Snowpipe for Azure. Using SELECT SYSTEM$PIPE_STATUS('azure_pipe'); gave the exact same error message as you have shown above. Thankfully, Snowflake Support has provided me with the answer and an explanation.
Answer:
Drop all of the objects relating to the Snowpipe (integrations, pipe, stage, etc.). Then recreate them in the exact order and to the exact specification shown in this documentation.
Explanation:
The issue for me was caused by repeatedly using CREATE OR REPLACE on the objects when I was modifying them (e.g. changing the comment on a pipe). This re-created the object, broke the links between the objects in the Snowpipe, and prevented the Snowpipe from working as intended. Dropping everything and starting again solved it for me.
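For illustration only, the recreation might look roughly like the following. The object names, URLs and parameters are placeholders, so verify each statement against the current Snowflake documentation for automating Snowpipe on Azure; the key points are to drop the old objects, recreate the notification integration before the stage and the pipe, and avoid CREATE OR REPLACE on objects that are already wired together.

    -- Placeholders throughout; check the exact syntax in the Snowflake docs.
    DROP PIPE IF EXISTS azure_pipe;
    DROP STAGE IF EXISTS azure_stage;
    DROP INTEGRATION IF EXISTS azure_queue_int;

    -- 1. Notification integration pointing at the storage queue that receives blob events.
    CREATE NOTIFICATION INTEGRATION azure_queue_int
      ENABLED = TRUE
      TYPE = QUEUE
      NOTIFICATION_PROVIDER = AZURE_STORAGE_QUEUE
      AZURE_STORAGE_QUEUE_PRIMARY_URI = 'https://<account>.queue.core.windows.net/<queue>'
      AZURE_TENANT_ID = '<tenant-id>';

    -- 2. Stage pointing at the blob container to ingest from.
    CREATE STAGE azure_stage
      URL = 'azure://<account>.blob.core.windows.net/<container>'
      CREDENTIALS = (AZURE_SAS_TOKEN = '<sas-token>');

    -- 3. Pipe that auto-ingests via the integration created above.
    CREATE PIPE azure_pipe
      AUTO_INGEST = TRUE
      INTEGRATION = 'AZURE_QUEUE_INT'
      AS COPY INTO my_table FROM @azure_stage FILE_FORMAT = (TYPE = 'CSV');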

Kentico Azure Search - Auto Rebuild Task Failure

I have setup Azure search on my Kentico site. The search works fine.
When I try to set up an automatic index build, it fails.
So it's never rebuilding the indexes. How can I fix this issue?
The error message suggests that either there are two columns with the same name or perhaps the case of a column name has changed somehow and Azure is struggling with that. I'd suggest dropping the index in the Azure portal and seeing if the automatic indexing will recreate it.
If you can go into the portal, it's worth checking the index definition to confirm that the fields are as you expect.

How to log all incompatible rows in a storage account using ADF V2 copy data tools

I have selected the option of logging all the incompatible rows into the storage account's default container, but no logs have been written to the storage account. Why is that not happening?
Is there anything which can be done to make this work?
It's a regression and we are working on the fix; it's expected to be deployed by the end of this week. Please try again after that.
Update:
The issue is fixed; can you try again?

Force update Diagnostic Configuration file under wad-control-container for Azure

I would like to update the diagnostic configuration file for my Azure roles whenever I upgrade my deployment. How can I do this automatically?
From time to time we change our diagnostics (in code) and upgrade the service. But whenever we upgrade the service, it is still using the old diagnostic configuration and we do not see any of the new logs we configured in the new code.
How can I make it so that whenever I upgrade my deployment, the diagnostic configuration is upgraded as well?
I wonder if you have a bug in your diagnostics updating code. If each role ran code in OnStart or Run to configure diagnostics on startup, there would be no reason that your instances wouldn't be properly configured. I tend to think that imperative code that configures diagnostics is inherently a bad idea in the long run, but it should still work. If you share the code, maybe I can spot an issue.
The best** way I have found to update and enforce configuration is to use the diagnostics.wadcfg file and keep it up to date. When you upgrade your deployment, it will use those settings as long as you have not overridden them in code somewhere. Contrary to Microsoft's guidance, it should be the preferred method over code, which must be maintained and is orthogonal to your application's purpose. Said another way: a declarative configuration file that your ops team can maintain is usually a better idea than writing code. To use this, include the file in your deployment as content, delete any existing files in wad-control-container, and remove any code that configures diagnostics. The roles will then configure themselves from that file on the next upgrade.
** You can also use a third-party SaaS monitoring service to set and maintain your diagnostics config. I work on one such service, but I am guessing you want to know how to do it yourself. :)
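For reference, a minimal diagnostics.wadcfg might look roughly like the following. This is a sketch of the WAD 1.x schema from memory, so treat the quotas and element details as placeholders and check them against the schema reference before relying on it.

    <?xml version="1.0" encoding="utf-8"?>
    <DiagnosticMonitorConfiguration
        xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
        configurationChangePollInterval="PT1M"
        overallQuotaInMB="4096">
      <!-- Application trace logs, transferred to storage every minute. -->
      <Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M"
            scheduledTransferLogLevelFilter="Information" />
      <!-- A sample performance counter. -->
      <PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
        <PerformanceCounterConfiguration
            counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT5S" />
      </PerformanceCounters>
      <!-- Windows event log entries worth keeping. -->
      <WindowsEventLog bufferQuotaInMB="512" scheduledTransferPeriod="PT1M"
                       scheduledTransferLogLevelFilter="Error">
        <DataSource name="Application!*" />
      </WindowsEventLog>
    </DiagnosticMonitorConfiguration>

Place the file in the role project so it is packaged with the deployment; the diagnostics agent then picks the settings up on each startup or upgrade.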

Windows Azure Table: C# API for Update/Merge?

Windows Azure Table has two distinct mechanisms for altering an existing entity: Update, which modifies properties in place, and Merge which replaces the entire entity.
Which of these is used when you call TableServiceContext.UpdateObject()? (I'm guessing Update.) And is the other one exposed at all through this API?
(Apologies if this is right under my nose in the docs and I'm not seeing it.)
Actually, it's Merge that modifies properties in place, and Update that replaces the entire entity.
I believe the storage client library does a merge by default, but I think you can use SaveChangesOptions.ReplaceOnUpdate to change this behavior.
An easy way to test/verify this is to run a debugging proxy like Fiddler and just see what happens over the wire.
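As a rough illustration with the old storage client library (the entity type, table name and keys are placeholders, and the exact context setup may differ between SDK versions):

    using System.Data.Services.Client;
    using System.Linq;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // Hypothetical entity type for illustration.
    public class MyEntity : TableServiceEntity
    {
        public string SomeProperty { get; set; }
    }

    class Program
    {
        static void Main()
        {
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
            TableServiceContext context = account.CreateCloudTableClient().GetDataServiceContext();

            // Point query for a single entity.
            MyEntity entity = context.CreateQuery<MyEntity>("MyTable")
                .Where(e => e.PartitionKey == "pk" && e.RowKey == "rk")
                .First();

            entity.SomeProperty = "new value";
            context.UpdateObject(entity);

            // Default behaviour: a MERGE request (only the supplied properties change).
            context.SaveChangesWithRetries();

            // To replace the whole entity instead (a PUT), pass ReplaceOnUpdate:
            // context.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
        }
    }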
