Write Azure Table Storage - Different behaviour local and cloud - azure

I have a simple Azure Function that periodically writes some data into Azure Table Storage.
var storageAccount = new CloudStorageAccount(new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials("mystorage","xxxxx"),true);
var tableClient = storageAccount.CreateCloudTableClient();
myTable = tableClient.GetTableReference("myData");
TableOperation insertOperation = TableOperation.Insert(data);
myTable.ExecuteAsync(insertOperation);
The code runs well locally in Visual Studio, and all data is written correctly into the Table Storage hosted in Azure.
But if I deploy this code 1:1 to Azure as an Azure Function, the code also runs without any exception, and the logging shows that it runs through every line of code.
But no data is written to the Table Storage - same name, same credentials, same code.
Is Azure blocking this connection (Azure Function in Azure > Azure Table Storage) in some way, in contrast to "local Azure Function > Azure Table Storage"?

Is Azure blocking this connection (Azure Function in Azure > Azure Table Storage) in some way, in contrast to "local Azure Function > Azure Table Storage"?
No, it is not Azure that is blocking the connection or anything of that sort.
You have to await the table operation you run with ExecuteAsync; without the await, control moves on before that method has completed. Change your last line of code to
await myTable.ExecuteAsync(insertOperation);
Take a look at the compiler warning for unawaited calls: "Because this call is not awaited, execution of the current method continues before the call is completed."
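For completeness, here is a minimal sketch (not the poster's exact code) of the insert wrapped in an async method so the await compiles; the method name and parameters are placeholders:

// Minimal sketch: the enclosing method must be async so the insert can be awaited.
public static async Task InsertAsync(CloudTable myTable, ITableEntity data)
{
    TableOperation insertOperation = TableOperation.Insert(data);
    // Awaiting ensures the function host does not finish before the write completes.
    TableResult result = await myTable.ExecuteAsync(insertOperation);
}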

The problem was the RowKey:
I used DateTime.Now for the RowKey (since auto-increment values are not provided by Table Storage).
My local format was "1.1.2019 18:19:20" while the server's format was "1/1/2019 ...",
and "/" is not allowed in the RowKey string.
Now, with the DateTime string formatted correctly, everything works fine.
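For reference, a culture-invariant, sortable timestamp avoids the characters that are disallowed in PartitionKey/RowKey values ('/', '\', '#', '?'); the format string below is one common choice, not necessarily the poster's exact fix:

// One possible RowKey format (assumption, not the poster's exact code):
// an invariant, sortable timestamp that never contains '/', '\', '#' or '?'.
// Requires System.Globalization for CultureInfo.
string rowKey = DateTime.UtcNow.ToString("yyyyMMddHHmmssfff", CultureInfo.InvariantCulture);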

Related

Azure Data Factory and Power Query

I am using Azure Data Factory and Power Query.
I have set up a linked service and dataset that successfully connects to an Azure Gen-2 Data Lake storage account.
When I create a new Power Query and point it to the dataset, the following message is returned...
The word 'undefined' is set in the Power Query call to the Azure storage account instead of the account URL. When I manually paste the actual storage account URL, the Power Query works and returns the data; however, when I save the Power Query, it goes back to 'undefined'.
let
    AdfDoc = AzureStorage.DataLakeContents("**undefined**/data-lake/CovidLoad/Report.csv"),
    Csv = Csv.Document(AdfDoc, [Delimiter = ",", Encoding = TextEncoding.Utf8, QuoteStyle = QuoteStyle.Csv]),
    PromotedHeaders = Table.PromoteHeaders(Csv, [PromoteAllScalars = true])
in
    PromotedHeaders
Any idea how to fix this issue?
When I set the linked service to connect to the storage account using the account URL and account key, the Power Query works OK - it returns data.

CosmosDB: How to read replicated data

I'm using CosmosDB and replicating the data globally (one Write region; multiple Read regions). Using the Portal's Data Explorer, I can see the data in the Write region. How can I query data in the Read regions? I'd like some assurance that it's actually working, and I haven't been able to find any info or even a URL for the replicated DBs.
Note: I'm writing to the DB via the CosmosDB "Create or update document" Connector in a Logic App. Given that this is a codeless environment, I'd prefer to validate the replication without having to write code.
How can I query data in the Read regions?
If code is possible, you can read from every region your application is deployed to by configuring the corresponding preferred-regions list for each region via one of the supported SDKs.
The following is demo code for the Azure Cosmos DB SQL API. For more information, please refer to this tutorial.
ConnectionPolicy usConnectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Direct,
    ConnectionProtocol = Protocol.Tcp
};

usConnectionPolicy.PreferredLocations.Add(LocationNames.WestUS);      // first preference
usConnectionPolicy.PreferredLocations.Add(LocationNames.NorthEurope); // second preference

DocumentClient usClient = new DocumentClient(
    new Uri("https://contosodb.documents.azure.com"),
    "<Fill your Cosmos DB account's AuthorizationKey>",
    usConnectionPolicy);
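As a quick way to verify the configuration (a sketch, not part of the original answer), the DocumentClient exposes the endpoints it has resolved, so you can log which region actually serves your reads:

// Sketch: log the resolved endpoints to confirm which region serves reads.
// Run inside an async method.
await usClient.OpenAsync();
Console.WriteLine($"Write endpoint: {usClient.WriteEndpoint}");
Console.WriteLine($"Read endpoint:  {usClient.ReadEndpoint}");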
Update:
We can enable Automatic Failover from the Azure portal. Then we can drag and drop the read-region items to reorder the failover priorities.

Trigger from local SQL database to Azure SQL DB

I have a local and an Azure database.
When a row is inserted into the local database, I would like it to be inserted into the Azure one as well. I have created a trigger on the local database for that task:
USE [db_local]
GO
CREATE TRIGGER [dbo].[trigInverse] ON [dbo].[db_local]
AFTER INSERT
AS
BEGIN
    INSERT INTO [serverazure.DATABASE.WINDOWS.NET].[db_azure].[Access].[Companies] ([CompanyName])
    SELECT NAME
    FROM inserted;
END
However, when I try to insert a row, the error in picture1 appears.
I cannot see what the parameter is, or how to set a correct value based on that error message.
I did some tests to find the cause: I put that trigger between two local databases, tried adding rows to the source database, and it works.
In the linked server node, these are the settings
You can use Azure SQL Data Sync: download and install the SQL Azure Data Sync Agent on your local server.
Then set up your Azure database like this:
Getting Started with Azure SQL Data Sync
It will sync your databases every 5 minutes.

Single Azure Function to trigger when a file is uploaded to blob storage, then insert the blob name into Azure SQL

I'm new to Azure Functions. I have gone through the TimerTrigger for SQL connections and the BlobTrigger for Azure Blob storage, and the demos work fine for me.
Next I tried a combination of both:
When a file is uploaded/added to a specific blob container, I should write the blob name into my Azure SQL Database table.
How could I achieve this in a single Azure Function?
Also, I have a doubt: if we create an Azure Function with a blob trigger, will this function always be running in the background? I mean, will it incur a background running cost?
I'm thinking that an Azure Function with a blob trigger will consume cost only
during its run. Isn't it?
Could somebody help me with this?
How could I achieve this in a single Azure Function?
You could achieve it using a blob trigger. You will get the blob name from the name function parameter. Then you can save this value to your Azure SQL database. The sample code below is for your reference.
public static void Run(Stream myBlob, string name, TraceWriter log)
{
    var str = "connection string of your sql server";
    using (SqlConnection conn = new SqlConnection(str))
    {
        conn.Open();
        var text = "insert into mytable(id, blobname) values(@id, @blobname)";
        using (SqlCommand cmd = new SqlCommand(text, conn))
        {
            cmd.Parameters.AddWithValue("@id", 1);
            cmd.Parameters.AddWithValue("@blobname", name);
            // Execute the command and log the # rows affected.
            var rows = cmd.ExecuteNonQuery();
            log.Info($"{rows} rows were updated");
        }
    }
}
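For the name parameter to be populated, the blob trigger binding has to declare it; a minimal attribute-based binding could look like the sketch below (the function name and the container name mycontainer are placeholders, not from the original answer):

// Sketch of the trigger binding; "mycontainer" is a placeholder container name.
// The {name} token in the blob path is what fills the 'name' parameter above.
[FunctionName("BlobNameToSql")]
public static void Run(
    [BlobTrigger("mycontainer/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
    string name,
    TraceWriter log)
{
    // ... insert 'name' into SQL as shown above ...
}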
I'm thinking that an Azure Function with a blob trigger will consume cost only during its run. Isn't it?
You need to choose a hosting plan when creating an Azure Function.
If you choose the App Service plan, you pay for the App Service plan, which depends on the tier you chose. If you choose the Consumption plan, your function is billed based on two things: resource consumption and executions.
Resource consumption is calculated by multiplying the average memory size in gigabytes by the time in seconds it takes the function to execute, so you pay for the CPU and memory your function consumes. Executions means the number of requests handled by your function. Please note that Consumption plan pricing also includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month.
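For example, applying that formula, a function that averages 0.5 GB of memory and runs for 2 seconds is billed as 0.5 GB × 2 s = 1 GB-s of resource consumption, plus one execution toward the request count.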
We can also call an SP (like EXEC spname) in place of the INSERT command, right?
Yes, we can call the stored procedure by setting CommandType to StoredProcedure. The code below is for your reference.
using (SqlCommand cmd = new SqlCommand("StoredProcedure Name", conn))
{
cmd.CommandType = System.Data.CommandType.StoredProcedure;
}
Sure, you should use a blob trigger for your scenario.
If you use the Consumption plan, you will only be charged per event execution. No background cost will apply.

How to do Azure Blob storage and Azure SQL Db atomic transaction

We have a blob storage container in Azure for uploading application-specific documents, and an Azure SQL DB where metadata for each file is saved during the upload process. This upload process needs to be consistent, so we should not end up with files in storage that have no metadata record in the SQL DB, and vice versa.
We upload a list of files that we get from the front end as multi-part HttpContent. From the Web API controller we call the upload service, passing the HttpContent, the file names and a folder path where the files will be uploaded. The Web API controller, the service method and the repository are all async.
var files = await this.uploadService.UploadFiles(httpContent, fileNames, pathName);
Here is the service method:
public async Task<List<FileUploadModel>> UploadFiles(HttpContent httpContent, List<string> fileNames, string folderPath)
{
    var blobUploadProvider = this.Container.Resolve<UploadProvider>(
        new DependencyOverride<UploadProviderModel>(new UploadProviderModel(fileNames, folderPath)));

    var list = await httpContent.ReadAsMultipartAsync(blobUploadProvider).ContinueWith(
        task =>
        {
            if (task.IsFaulted || task.IsCanceled)
            {
                throw task.Exception;
            }

            var provider = task.Result;
            return provider.Uploads.ToList();
        });

    return list;
}
The service method uses a customized upload provider derived from System.Net.Http.MultipartFileStreamProvider, which we resolve using a dependency resolver.
After this, we create the metadata models for each of those files and then save them in the DB using Entity Framework. The full process works fine in the ideal situation.
The problem is that if the upload process is successful but the DB operation somehow fails, then we have files uploaded to blob storage with no corresponding entry in the SQL DB, and thus the data is inconsistent.
Following are the different technologies used in the system:
Azure Api App
Azure Blob Storage
Web Api
.Net 4.6.1
Entity framework 6.1.3
Azure SQL Database (we are not using any VM)
I have tried using TransactionScope for consistency, but it does not seem to work across blob storage and the DB (it works for the DB only).
How do we solve this issue?
Is there any built in or supported feature for this?
What are the best practices in this case?
Is there any built in or supported feature for this?
As of today, no. Essentially the Blob service and SQL Database are two separate services, hence it is not possible to implement "atomic transaction" functionality the way you're expecting.
How do we solve this issue?
I can think of two ways to solve this issue (I am sure there are others as well):
Implement your own transaction functionality: basically, check for database transaction failure and, if it happens, delete the uploaded blobs manually (see the sketch after this list).
Use a background process: here you would continue to save the data in blob storage, then periodically find orphaned blobs through a background process and delete them.
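A minimal sketch of the first option, assuming the upload provider exposes the CloudBlockBlob references it just created (uploadedBlobs and dbContext are illustrative names, not from the question's code):

// Sketch of a compensating action: if saving the metadata fails,
// delete the blobs that were just uploaded so storage and SQL stay consistent.
// CloudBlockBlob comes from Microsoft.WindowsAzure.Storage.Blob.
try
{
    await dbContext.SaveChangesAsync();
}
catch (Exception)
{
    foreach (CloudBlockBlob blob in uploadedBlobs)
    {
        // Best-effort cleanup; DeleteIfExistsAsync is a no-op if the blob is already gone.
        await blob.DeleteIfExistsAsync();
    }
    throw;
}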
