How to log all incompatible rows in a storage account using the ADF V2 Copy Data tool - Azure

I have selected the option to log all incompatible rows to the storage account's default container, but no logs have been written to the storage account, and I am wondering why that is not happening.
Is there anything that can be done to make this work?

It's a regression and we are working on a fix; it's expected to be deployed by the end of this week. Please try again after that.
Update:
The issue is fixed, can you try again?
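Once the fix has rolled out, a quick way to confirm that skipped rows are actually landing is to list the blobs in the container the copy activity redirects to. A minimal PowerShell sketch, assuming the Az.Storage module and hypothetical account and container names:

```powershell
# Hypothetical names - substitute your storage account, key, and the container
# configured for redirecting incompatible rows.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<account-key>"

# Show the most recently written log blobs, if any.
Get-AzStorageBlob -Container "incompatible-rows" -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object Name, Length, LastModified -First 20
```

If nothing shows up after a run that definitely skipped rows, the redirect settings on the copy activity itself are worth rechecking.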

Related

Additional column throwing validation issue with Azure SQL data sink in Azure Data Factory

Validation Error
I've got this weird issue where validation fails on 'additional columns' for my data sink to Azure SQL, coming from a Blob Storage source in the Azure Data Factory GUI. No matter how many times we recreate the dataset (or specify another, newly created dataset) we can't get past this validation issue.
The irony of it is that we deploy these pipelines from code, and when we run them we get no errors at all. This issue has made further pipeline development really difficult, as we have to do everything by code; we can't use the pipeline publish option.
Here are some screen grabs of the pipeline so you can see the flow: the pipeline overview and, inside copyCustomer, the Source, Mapping and Sink tabs.
Any ideas on how to fix this validation would be greatly appreciated.
For what it's worth, we have recreated the dataset multiple times (clone and new) to avoid any issue with the dataset model not being the latest, as documented here: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
Sometimes setting the sink table to auto-create appears to 'fix' the validation, but it then errors out again when we go to publish.
This is expected behavior when your Azure SQL dataset was created a long time ago and still uses an outdated dataset model that does not support Additional Columns.
As per the official Microsoft documentation:
To resolve this issue, follow the error message and create a new Azure SQL dataset, then use it as the copy sink.
I followed the error message, created a new dataset, and it is working fine for me.
Screenshots: Source, Mapping, Sink, Output.
I suspect your sink dataset type is incorrect. I reproduced the same setup at my end and it is working fine. Make sure you create the sink dataset with the Azure SQL Database connector type only.
Please check the screenshots below from my implementation.
If it still doesn't help, feel free to share your sink dataset connector details along with screenshots.
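If you prefer to recreate the sink dataset from code rather than in the GUI, a hedged sketch of what that might look like with the Az.DataFactory cmdlets is below. All resource names are hypothetical, and the point is simply that the new definition is written against the current AzureSqlTable shape (separate schema and table properties, which newer datasets use) instead of cloning the old definition:

```powershell
# Hypothetical resource names throughout - a sketch, not the exact fix.
$definition = @'
{
  "name": "AzureSqlCustomerSink",
  "properties": {
    "type": "AzureSqlTable",
    "linkedServiceName": {
      "referenceName": "AzureSqlDatabaseLinkedService",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "schema": "dbo",
      "table": "Customer"
    }
  }
}
'@
Set-Content -Path ".\AzureSqlCustomerSink.json" -Value $definition

# Deploy (or redeploy) the dataset, then point the copy activity's sink at it.
Set-AzDataFactoryV2Dataset -ResourceGroupName "my-rg" `
    -DataFactoryName "my-adf" `
    -Name "AzureSqlCustomerSink" `
    -DefinitionFile ".\AzureSqlCustomerSink.json"
```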

Setting desktop backgrounds via Azure Storage Containers using PowerShell not working

I'm trying to set desktop backgrounds for our users from Azure Storage using a PS script I'm hoping to deploy.
This script just isn't working!
For whatever reason, it stops executing at a certain point (line 142 onwards) and I can't work out why.
I've added Write-Host calls at certain points to see how far it gets, and it seems to be the return $Bloblist that never runs or goes any further.
I can see from the logs that it's showing 'Adding content file from Azure Storage Blob to return list: Background-01.png' etc., but that's it. This tells me it is not a permissions error, as the script is looking where it needs to.
I just can't work out why it's not returning that Bloblist.
If anyone could help that'd be super, as we need to change the desktop backgrounds for every user in the company due to a company name change.
Thanks a lot!
The problem is (probably) not with the script. The problem is likely due to a setup misstep.
You need to debug the code and find out where it's failing to do what you expect.
You could try this, or if videos are your thing, you could try this.
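Without the full script it is hard to be certain, but "return $Bloblist never seems to run" is very often a symptom of how PowerShell handles function output rather than of the return statement itself: the caller receives everything the function writes to the output stream, not just the value named in return. A stripped-down sketch of the pattern, with hypothetical account and container names:

```powershell
function Get-BlobList {
    param(
        [string]$ContainerName,
        [object]$Context
    )

    $blobList = New-Object System.Collections.Generic.List[object]

    foreach ($blob in Get-AzStorageBlob -Container $ContainerName -Context $Context) {
        # Write-Host only goes to the console, so it cannot pollute the return value.
        Write-Host "Adding content file from Azure Storage Blob to return list: $($blob.Name)"
        # List[object].Add() returns nothing, so nothing leaks into the output stream.
        $blobList.Add($blob)
    }

    # The caller receives *everything* the function emitted, not only this value;
    # any uncaptured cmdlet output earlier in the function would be mixed in with it.
    return $blobList
}

# Hypothetical usage
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
$Bloblist = Get-BlobList -ContainerName "backgrounds" -Context $ctx
Write-Host "Returned $($Bloblist.Count) blob(s)"
```

If a cut-down sketch like this returns the list fine in isolation, the next place to look is whatever consumes $Bloblist after line 142 in the original script.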

Automating Snowpipe for Microsoft Azure Blob Storage - error: Queue not found for channel

I have been trying to set up a Snowpipe to ingest data from Blob Storage in Azure into Snowflake, following this guide. I think I have done everything correctly, although I am new to Azure and Snowflake, so I may have missed something obvious. Everything seems to have been set up correctly on both sides, but whenever I check the pipe status using SELECT SYSTEM$PIPE_STATUS('azure_pipe');, I get the following:
{"executionState":"RUNNING","pendingFileCount":0,"notificationChannelName":"https://snowflakedata.queue.core.windows.net/snowflakequeue","numOutstandingMessagesOnChannel":2,"lastReceivedMessageTimestamp":"2022-02-18T13:25:12.107Z","channelErrorMessage":"downloadAttributes error:Queue not found for channel Name=https://snowflakedata2.queue.core.windows.net/snowflakequeue, AccountId=6713, NotificationChannelID=2045, IntegrationID=1784764","lastErrorRecordTimestamp":"2022-02-18T17:32:47.854Z"}
I'm not sure what I have done wrong; the Snowflake app has the queue contributor role in Azure, and I'm fairly sure I set everything else up correctly. If anyone could point me in the right direction as to how to troubleshoot this, that would be really helpful!
I had the same issue as you did just this week when trying to create a Snowpipe for Azure. Using SELECT SYSTEM$PIPE_STATUS('azure_pipe'); gave the exact same error message as you have shown above. Thankfully, Snowflake Support has provided me with the answer and an explanation.
Answer:
Drop all of the objects relating to the Snowpipe (integrations, pipe, stage, etc). Then recreate them in the exact order and specification as shown in this documentation.
Explanation:
The issue for me was that I kept using CREATE OR REPLACE on the objects when I was modifying them (e.g. changing the comment on a pipe). This re-created the object, broke the links between the objects in the Snowpipe, and prevented the Snowpipe from working as intended. Dropping everything and starting again solved it for me.
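For the Azure side of the same checklist, it may also be worth confirming that the queue named in the error actually exists in the storage account the channel points at - note that the status output above mentions both snowflakedata and snowflakedata2, which is exactly the kind of mismatch a stale integration can leave behind. A hedged PowerShell sketch with placeholder names:

```powershell
# Assumed module: Az.Storage. Substitute the real account and queue names.
$ctx = New-AzStorageContext -StorageAccountName "snowflakedata2" -UseConnectedAccount

# If this returns nothing (or errors), the notification channel is pointing
# at a queue that does not exist in that storage account.
Get-AzStorageQueue -Name "snowflakequeue" -Context $ctx
```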

Azure - cannot collect custom logs using Log Analytics

My goal is to collect some custom logs in Azure Monitor from an external VM running on Linux. To that end, I've installed the Log Analytics agent according to the official MS documentation and ran the wizard to set up a custom log - that includes a sample file, a row delimiter and a location from which to collect the logs. However, I'm getting a warning message saying:
Two successive configuration applications from OMS Settings failed – please report issue to github.com/Microsoft/PowerShell-DSC-for-Linux/issues (1)
I tried to follow the proposed link, which points to GitHub, but I wasn't able to find any solution there (or via any other link), which is why I decided to give it a chance and ask the community here.
Oddly though, the machine's heartbeat and manual syslog messages are being collected - everything except the custom logs.
Has anyone encountered this and managed to overcome it? Thanks
Apparently, according to the MS answer, it is normal for the above warning message to be displayed. However, the reason the custom logs were not being collected was that you need to keep appending new entries to the target file processed by the OMS agent; this is what triggers the agent, which checks whether the file has new entries since its last check.
Hope this will help someone!
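To see the custom log start flowing after a change like this, it can help to append a fresh line to the monitored file and then wait a few minutes before querying the custom log (_CL) table. An illustrative snippet (hypothetical log path) that works with PowerShell Core on the Linux VM:

```powershell
# Hypothetical log path - use the location you configured in the custom log wizard.
$entry = "{0} test entry to trigger custom log collection" -f (Get-Date -Format o)
Add-Content -Path "/var/log/myapp/custom.log" -Value $entry
```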

Application Insights showing in Live metrics, but not in log analytics

So I created a class library that configures each microservice's Application Insights. These are the Application Insights configuration methods:
And this is how I use them in the Startup.cs class of each microservice:
It seems to be working fine, according to the debug logs:
I am even able to see the live metrics in Azure correctly:
However, the data does not get 'saved' in Log Analytics, so I cannot run any query on the data (it always returns 0 results).
I thought that this could probably be because of the following code in the Program.cs of each microservice:
I thought that maybe the Application Insights logger was not configured correctly (not explicitly setting the instrumentation key). However, I tried setting the TelemetryConfiguration's instrumentation key to the hardcoded instrumentation key, but it did not change anything.
I did read through this documentation: https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core
But it does not seem like I am missing any step (I call AddApplicationInsights and AddApplicationInsightsKubernetesEnricher).
I was wondering if anyone could spot the issue; I really do not know how the data can be visible in Live Metrics but not in Log Analytics.
Note: I did try to add Application Insights before implementing the library, with the exact same code (the default code that VS generates when you right-click on the project > Setup Application Insights), and it worked perfectly.
Many many thanks!
Here is a summary for others who may have the same issue:
As Dmitry mentioned, the correct format in the VS output should be Microsoft.ApplicationInsights.Dev.GUID, not Microsoft.ApplicationInsights.Dev.instrumentationkey=GUID. Here is the screenshot of the correct one:
And as per the OP's comment, the connection string had been copy-pasted into Key Vault instead of the instrumentation key, which is what caused this error. The solution is to change the Key Vault secret to the instrumentation key directly.
I had this same issue and it was because my instrumentation key had an extra whitespace character that I didn't see, so be sure to double-check it. I'm still not 100% sure how data was coming in at all via Live Metrics; I guess that just works without an instrumentation key.
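If the instrumentation key is being pulled from Key Vault, both fixes above amount to the same thing: the secret should contain only the bare GUID, with no InstrumentationKey=/connection-string wrapper and no stray whitespace. A hedged sketch of overwriting the secret, with hypothetical vault and secret names:

```powershell
# Placeholder GUID and names - paste your real instrumentation key, trimmed.
$ikey = "00000000-0000-0000-0000-000000000000".Trim()

Set-AzKeyVaultSecret -VaultName "my-keyvault" `
    -Name "ApplicationInsights-InstrumentationKey" `
    -SecretValue (ConvertTo-SecureString -String $ikey -AsPlainText -Force)
```

After updating the secret, restart the services so they pick up the new value.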
