SQL Azure - Timeout on query

I have set up an Azure website with a SQL Azure back-end. I used a migration tool to populate a single table with 80,000 rows of data. During the data migration I could access the new data via the website without any issues. Since the migration completed I keep getting an exception: [Win32Exception (0x80004005): The wait operation timed out].
This exception suggests to me that the database queries I am doing are taking more than 30 seconds to return. If I query the database from Visual Studio I can confirm that the queries are taking more than 30 seconds to return. I have indexes on my filter columns, and on my local SQL database the same queries take less than a second. Each row does contain a varchar(max) column that stores JSON, so a fair amount of data is held in each row, but this shouldn't really affect query performance.
Any input that could help me solve this issue would be much appreciated.

I seem to have worked around the query timeout issues for now. What appeared to do the trick for me was to update the SQL Server statistics:
EXEC sp_updatestats;
Another performance enhancement that worked well was to enable JSON compression on my Azure website.

Related

How can I do bulk inserts into the Common Data Service?

I have 1000 records that I need to sync daily from an API. I am currently bulk inserting them into a SQL database; however, I would like to use a Dataverse/Common Data Service database instead.
The Logic App connector seems to handle one record at a time, and the SDK does PUTs and POSTs. How can I either insert 1000 records into the Common Data Service in bulk, OR somehow synchronise my SQL DB with the CDS?
As far as I know there is no way to do that without programming. You can extend your Power Automate flow with an Azure Function that inserts these records in a single transaction.
This link explains how it can be done:
https://learn.microsoft.com/en-us/powerapps/developer/data-platform/webapi/execute-batch-operations-using-web-api#when-to-use-batch-requests
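As a rough illustration of what such an Azure Function could send, here is a minimal C# sketch of a hand-built Dataverse Web API $batch request per the link above. The environment URL, the token (which you would acquire via MSAL in practice), and the accounts entity set are placeholder assumptions:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class BatchInsertSketch
    {
        static async Task Main()
        {
            var baseUrl = "https://yourorg.api.crm.dynamics.com/api/data/v9.2"; // placeholder org
            var token = "<access-token>"; // acquire via MSAL in a real function

            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

            var batchId = "batch_" + Guid.NewGuid();
            var changesetId = "changeset_" + Guid.NewGuid();
            var sb = new StringBuilder();

            sb.AppendLine($"--{batchId}");
            sb.AppendLine($"Content-Type: multipart/mixed; boundary={changesetId}");
            sb.AppendLine();

            // One part per record; a batch can hold up to 1000 requests, and a
            // changeset makes them execute as a single transaction.
            for (int i = 1; i <= 3; i++)
            {
                sb.AppendLine($"--{changesetId}");
                sb.AppendLine("Content-Type: application/http");
                sb.AppendLine("Content-Transfer-Encoding: binary");
                sb.AppendLine($"Content-ID: {i}");
                sb.AppendLine();
                sb.AppendLine($"POST {baseUrl}/accounts HTTP/1.1");
                sb.AppendLine("Content-Type: application/json; type=entry");
                sb.AppendLine();
                sb.AppendLine($"{{\"name\": \"Sample record {i}\"}}");
            }

            sb.AppendLine($"--{changesetId}--");
            sb.AppendLine($"--{batchId}--");

            var content = new StringContent(sb.ToString());
            content.Headers.ContentType =
                MediaTypeHeaderValue.Parse($"multipart/mixed; boundary={batchId}");

            var response = await client.PostAsync($"{baseUrl}/$batch", content);
            Console.WriteLine(response.StatusCode);
        }
    }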
Please let me know if you have any questions.
If you want to regularly ingest data (1000 rows) into Dataverse (CDS), then use Dataflows. The following link to MS Docs describes how to set up scheduled bulk data updates. It is therefore a pull rather than a push model.
https://learn.microsoft.com/en-us/powerapps/maker/data-platform/create-and-use-dataflows

Logic App to push data from Cosmos DB into CRM and perform an update

I have created a logic app with the goal of pulling data from a container within Cosmos DB (with a query), looping over the results and then pushing this data into CRM (or the Common Data Service). When the data is pushed to CRM, an ID will be generated. I wish to then update Cosmos DB with this new ID. Here is what I have so far:
The next step queries for the data within our Cosmos DB database and selects all IDs with a length greater than 15. (This tells us that the ID is not yet within the CRM database.)
Then we loop over the results and push this into CRM (Dynamics365 or the Common Data Service)
Dilemma: The first part of this process appears to be correct; however, I want to make sure that I am on the right track with this. Furthermore, once the data is successfully pushed to CRM, CRM automatically generates an ID for each record. How would I then update Cosmos DB with the newly generated IDs?
Any suggestion is appreciated
Thanks
I see a red flag in your approach here: the query with length(c.id) > 15. This is not something I would do. I don't know how big your database is going to be, but it is generally not very performant to run high volumes of cross-partition queries, especially if the database is going to keep growing.
Cosmos DB already provides an awesome streaming capability, so rather than doing this in a batch I would use Change Feed to accomplish whatever you're doing here in your Logic App. This will likely give you better control of the process and allow you to get the ID back out of your CRM app to insert back into Cosmos DB.
Because you will be writing back to Cosmos DB, you will need a flag to ignore the update in Change Feed when the item is updated.
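For illustration, a hedged sketch of that Change Feed approach using the .NET SDK's change feed processor; the database/container names, the Synced flag, and PushToCrmAsync are assumptions invented for the example, not part of the original answer:

    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos;

    public class SyncItem
    {
        public string id { get; set; }
        public string pk { get; set; }       // partition key (assumed property)
        public string CrmId { get; set; }    // written back after the CRM insert
        public bool Synced { get; set; }     // the "ignore our own update" flag
    }

    public static class CrmSync
    {
        public static async Task<ChangeFeedProcessor> StartAsync(CosmosClient client)
        {
            Container monitored = client.GetContainer("mydb", "items");  // placeholder names
            Container leases = client.GetContainer("mydb", "leases");

            async Task HandleChangesAsync(
                IReadOnlyCollection<SyncItem> changes, CancellationToken ct)
            {
                foreach (SyncItem item in changes)
                {
                    // Skip items we updated ourselves; otherwise the write-back
                    // below would re-trigger the change feed forever.
                    if (item.Synced) continue;

                    item.CrmId = await PushToCrmAsync(item);  // hypothetical CRM call
                    item.Synced = true;

                    // This write re-appears on the feed, hence the flag above.
                    await monitored.ReplaceItemAsync(item, item.id, new PartitionKey(item.pk));
                }
            }

            ChangeFeedProcessor processor = monitored
                .GetChangeFeedProcessorBuilder<SyncItem>("crmSync", HandleChangesAsync)
                .WithInstanceName("worker-1")
                .WithLeaseContainer(leases)
                .Build();

            await processor.StartAsync();
            return processor;
        }

        private static Task<string> PushToCrmAsync(SyncItem item) =>
            Task.FromResult("crm-generated-id");  // stand-in for the real CRM insert
    }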

Bringing incremental data in from REST APIs into SQL Azure

My needs are the following:
- Need to fetch data from a 3rd party API into SQL Azure.
The APIs will be queried every day for incremental data and may require pagination, as by default any API response will give only the top N records.
The API also needs an auth token to work, which is the first call before we start downloading data from the endpoints.
Due to the last two reasons, I've opted for a Function App triggered daily rather than Data Factory, which can also query web APIs.
Is there a better way to do this?
Also I am thinking of pushing all JSON into Blob store and then parsing data from the JSON into SQL Azure. Any recommendations?
How long does it take to call all of the pages? If it is under ten minutes, then my recommendation would be to build an Azure Function that queries the API and inserts the JSON data directly into a SQL database.
Azure Function
Azure Functions are very cost-effective: the first million executions are free. If it takes longer than ten minutes, have a look at Durable Functions. For handling pagination, we have plenty of examples. Your exact solution will depend on the API you are calling and the language you are using. Here is an example in C# using HttpClient. Here is one for Python using Requests. For both, the pattern is similar: get the total number of pages from the API, set a variable to that value, and loop over the pages, getting and saving your data in each iteration. If the API won't provide the max number of pages, then loop until you get an error. Protip: make sure to specify an upper bound for those loops. Also, if your API is flaky or has intermittent failures, consider using a graceful retry pattern such as exponential backoff. The pattern is sketched below.
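Here is a hedged C# sketch of that loop-with-an-upper-bound plus exponential backoff; the endpoint URL, the page query parameter, and treating a 404 as "no more pages" are assumptions about the third-party API:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PagedFetchSketch
    {
        const int MaxPages = 500;    // protip above: always set an upper bound
        const int MaxRetries = 5;

        static async Task Main()
        {
            using var client = new HttpClient();

            for (int page = 1; page <= MaxPages; page++)
            {
                string json = await GetPageWithBackoffAsync(client, page);
                if (json == null) break;          // looped past the last page
                await SavePageAsync(page, json);  // e.g., insert into SQL or write to Blob
            }
        }

        static async Task<string> GetPageWithBackoffAsync(HttpClient client, int page)
        {
            for (int attempt = 0; attempt < MaxRetries; attempt++)
            {
                var response = await client.GetAsync(
                    $"https://api.example.com/records?page={page}"); // placeholder URL

                if (response.IsSuccessStatusCode)
                    return await response.Content.ReadAsStringAsync();

                if ((int)response.StatusCode == 404)
                    return null; // assumed "no more pages" signal

                // Exponential backoff for flaky/intermittent failures: 1s, 2s, 4s, 8s...
                await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
            throw new HttpRequestException($"Page {page} failed after {MaxRetries} retries.");
        }

        static Task SavePageAsync(int page, string json) => Task.CompletedTask; // stand-in
    }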
Azure SQL JSON Indexed Calculated Columns
You mentioned storing your data as JSON files in a storage container. Are you sure you need that? If so, then you could create an external table link between the storage container and the database. That has the advantage of the data not taking up any space in the database. However, if the JSON will fit in the database, I would highly recommend dropping that JSON right into the SQL database and leveraging indexed calculated columns to make querying the JSON extremely quick.
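As a sketch of that idea, the following C# snippet provisions a table that stores the raw JSON next to an indexed calculated (computed) column; the table name, the $.name JSON path, and the connection-string setting are invented for the example:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Data.SqlClient;

    class JsonIndexSketch
    {
        static async Task Main()
        {
            // The computed column extracts one value from the JSON; the index on it
            // lets filters on that property use a seek instead of parsing JSON per row.
            const string ddl = @"
    CREATE TABLE dbo.ApiRecords
    (
        Id      INT IDENTITY PRIMARY KEY,
        Payload NVARCHAR(MAX) NOT NULL,  -- raw JSON dropped straight in
        CustomerName AS CAST(JSON_VALUE(Payload, '$.name') AS NVARCHAR(100))
    );
    CREATE INDEX IX_ApiRecords_CustomerName ON dbo.ApiRecords (CustomerName);";

            using var conn = new SqlConnection(
                Environment.GetEnvironmentVariable("SqlConnectionString")); // placeholder
            await conn.OpenAsync();
            using var cmd = new SqlCommand(ddl, conn);
            await cmd.ExecuteNonQueryAsync();
        }
    }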
Using this pairing should provide incredible performance per penny value! Let us know what you end up using.
Maybe you can create a timed task with SQL Server Agent.
SQL Server Agent > New Job > Steps > New Step:
In the Command box, put in your 'Import JSON documents from Azure Blob Storage' SQL statements, for example.
Schedules > New Schedule:
Set the execution time.
But I think an Azure Function is better for this. Azure Functions is a solution for easily running small pieces of code, or "functions," in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. Functions can make development even more productive, and you can use your development language of choice, such as C#, F#, Node.js, Java, or PHP.
It is more intuitive and efficient.
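A minimal sketch of that Azure Function alternative: a timer trigger that runs the same kind of 'import JSON from Blob Storage' statement on a schedule. The cron expression, table, blob path, and the pre-created external data source are all placeholder assumptions:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Data.SqlClient;

    public static class DailyJsonImport
    {
        [FunctionName("DailyJsonImport")]
        public static async Task Run([TimerTrigger("0 0 2 * * *")] TimerInfo timer) // daily at 02:00
        {
            // Assumes an external data source 'MyBlobStorage' (with its credential)
            // already exists in the database, per the import-from-blob docs.
            const string importSql = @"
    INSERT INTO dbo.ApiRecords (Payload)
    SELECT BulkColumn
    FROM OPENROWSET(BULK 'incoming/data.json',
                    DATA_SOURCE = 'MyBlobStorage',
                    SINGLE_CLOB) AS j;";

            using var conn = new SqlConnection(
                Environment.GetEnvironmentVariable("SqlConnectionString"));
            await conn.OpenAsync();
            using var cmd = new SqlCommand(importSql, conn);
            await cmd.ExecuteNonQueryAsync();
        }
    }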
Hope this helps.
If you could set the default top N value in your API, then you could use a Web activity in Azure Data Factory to call your REST API and get the response data. Then configure the response data as the input of a Copy activity (@activity('ActivityName').output) and the SQL database as the output. Please see this thread: Use output from Web Activity call as variable.
The Web activity supports authentication properties for your access token.
Also I am thinking of pushing all JSON into Blob store and then parsing data from the JSON into SQL Azure. Any recommendations?
Well, if you could dump the data into Blob storage, then Azure Stream Analytics is a perfect choice for you.
You could run a daily job to select or parse the JSON data with ASA SQL, then dump the data into the SQL database. Please see this official sample.
One thing to consider for scale would be to parallelize both the query and the processing. This helps if there is no ordering requirement, or if processing all records would take longer than the 10-minute function timeout; if you want to do some tweaking/transformation of the data in flight; if you have different destinations for different types of data; if you want to be insulated from a failure (e.g., your function fails halfway through processing and you don't want to re-query the API); or if you get data a different way and want to start processing at a specific step in the process rather than running from the entry point. All sorts of reasons.
I'll caveat here to say that the best degree of parallelism vs. complexity is largely up to your comfort level and requirements. The example below is somewhat of an 'extreme' example of decomposing the process into discrete steps and using a function for each one; in some cases it may not make sense to split specific steps, and you can combine them into a single one. Durable Functions also help make orchestration of this potentially easier.
1. A timer-driven function queries the API to understand the depth of pages required, or queues up additional pages to a second function that actually makes the paged API call.
2. That function then queries the API and writes to a scratch area (like Blob) or drops each row into a queue to be written/processed (e.g., a storage queue, since they're cheap and fast, or a Service Bus queue if multiple parties are interested, as in pub/sub).
3. If writing to a scratch blob, a blob-triggered function reads the blob and queues up individual writes to a queue (again, a storage queue would be cheap and fast for something like this).
4. Another queue-triggered function actually handles writing the individual rows to the next system in line, SQL or whatever (a sketch of this step follows below).
You'll get some parallelization out of that, plus the ability to start from any step in the process, with a correctly-formatted message. If your processors encounter bad data, things like poison queues/dead letter queues would help with exception cases, so instead of your entire process dying, you can manually remediate the bad data.
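As an example of that last queue-triggered step, here is a hedged sketch; the queue name, the one-JSON-row-per-message shape, and the target table are invented for illustration:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Data.SqlClient;

    public static class RowWriter
    {
        [FunctionName("RowWriter")]
        public static async Task Run([QueueTrigger("rows-to-write")] string message)
        {
            using var conn = new SqlConnection(
                Environment.GetEnvironmentVariable("SqlConnectionString"));
            await conn.OpenAsync();

            // Store the raw row. A message that keeps failing is moved to
            // rows-to-write-poison after five attempts, so bad data gets parked
            // for manual remediation instead of killing the whole pipeline.
            using var cmd = new SqlCommand(
                "INSERT INTO dbo.ApiRecords (Payload) VALUES (@payload)", conn);
            cmd.Parameters.AddWithValue("@payload", message);
            await cmd.ExecuteNonQueryAsync();
        }
    }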

Import data from Clio to Azure database using API v4

Let me start out by saying I am a SQL Server Database expert, not a coder so making API calls is certainly not an everyday task for me.
Having said that, I am trying to use Azure Data Factory's data copy tool to import data from Clio to an Azure SQL Server database. I have had some limited success: data is copied over using the API and inserted into the target table, but paging really seems to be an issue. I am testing this with the billable_clients call, and the first 25 records with the fields I specify are inserted along with the paging record. As I understand it, the billable_clients call is eligible for bulk actions, which may be the solution, although I've not been able to figure out how it works. The URL I am calling is below:
https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name
Using Postman I've tried to make the same call while adding X-BULK true to the header but that returns no results. If there is anyone that can shed some light on how the X-BULK header flag is used when making a call, or if anyone has any experience loading Clio data into a SQL Server database I'd love some feedback on your methods.
If any additional information regarding my attempts or setup would help please let me know.
Thanks!
You need to download the JSON files with the Bulk API and then update them in the DB.
It isn't possible to directly insert the data.

Syncing Problems with Xamarin Forms and Azure Easy Tables

I've been working on a Xamarin.Forms application in Visual Studio using Azure for the backend for a while now, and I've come across a really strange issue.
Please note that I am following the methods mentioned in this blog.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution. What I mean is that if I create another solution that accesses the exact same backend, it can perform its own create/sync operations, but it will not bring over the data generated by the other solution, even though they both seem to have the exact same access. This appears to be some kind of a security feature/issue, but I can't quite make sense of it.
Has anyone else encountered this at all? Was there a work-around at all? This could potentially cause problems down the road if I were to ever want to create another solution that accesses the same system/data for whatever reason.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution.
According to the tutorial you provided, I found that the related PullAsync call uses Incremental Sync.
await coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());
Incremental Sync:
The first parameter to the pull operation is a query name that is used only on the client. If you use a non-null query name, the Azure Mobile SDK performs an incremental sync. Each time a pull operation returns a set of results, the latest updatedAt timestamp from that result set is stored in the SDK local system tables. Subsequent pull operations retrieve only records after that timestamp.
Here is my test; you can refer to it for a better understanding of Incremental Sync:
Client : await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());
The client SDK checks whether there is a record whose id equals deltaToken|{table-name}|{query-id} in the __config table of your SQLite local store.
If there is no such record, the SDK sends a request like the following to pull your records:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Note: the $filter would be set as (updatedAt ge datetimeoffset'1970-01-01T00:00:00.0000000+00:00')
If there is such a record, the SDK picks up its value as the latest updatedAt timestamp and sends a request as follows:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'2017-06-26T02%3A44%3A25.3940000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Per my understanding, if you run the same logical query with the same (non-null) query ID in different mobile clients, you need to make sure the local database is newly created by each client. Also, if you want to opt out of incremental sync, pass null as the query ID; in that case, all records are retrieved on every call to PullAsync, which is potentially inefficient. For more details, you could refer to How offline synchronization works.
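To put the options side by side, a sketch against the Azure Mobile SDK, assuming an async context where todoTable is the IMobileServiceSyncTable<TodoItem> from the blog's sample:

    // Incremental sync: a non-null query ID stores a deltaToken per local database.
    await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());

    // Opt out of incremental sync: a null query ID pulls every record on each call.
    await todoTable.PullAsync(null, todoTable.CreateQuery());

    // Or purge the local table, which also discards the stored deltaToken, forcing
    // the next incremental pull to start from the 1970-01-01 timestamp again.
    await todoTable.PurgeAsync("todoItems-02", todoTable.CreateQuery(), false, CancellationToken.None);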
Additionally, you could use Fiddler to capture the network traces when you invoke PullAsync, in order to troubleshoot your issue.
