AZURE: Cannot clear the Query Store

I am trying to set up Automatic Tuning for an Azure SQL Database, but I found that the Query Store is in a read-only state.
So I tried to clear it with "ALTER DATABASE [QueryStoreDB] SET QUERY_STORE CLEAR" to get it running again, but the command returned an error.
Please help me, thank you.

To set the Query Store back to read-write mode, use:
ALTER DATABASE [QueryStoreDB]
SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
But your error code 615 means that the cache is not in sync with the database, usually because of a transient connection problem; see:
https://learn.microsoft.com/en-us/azure/azure-sql/database/troubleshoot-common-errors-issues?view=azuresql
Also check that your Azure SQL database is online.
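Before clearing it, it can also help to confirm why the Query Store went read-only. A minimal T-SQL sketch using the sys.database_query_store_options view ([QueryStoreDB] is just the database name from the question):

-- Why is the Query Store read-only? (readonly_reason 65536 = MAX_STORAGE_SIZE_MB limit reached)
SELECT actual_state_desc,
       desired_state_desc,
       readonly_reason,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;

-- If it is simply full, clear it (or raise MAX_STORAGE_SIZE_MB) and switch back:
ALTER DATABASE [QueryStoreDB] SET QUERY_STORE CLEAR;
ALTER DATABASE [QueryStoreDB] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);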

Though the SQL command did not work, I found a button that does the job: the Purge Query Data button in the lower-right corner.

Related

Import data from Clio to Azure database using API v4

Let me start by saying I am a SQL Server database expert, not a coder, so making API calls is certainly not an everyday task for me.
Having said that, I am trying to use Azure Data Factory's Copy Data tool to import data from Clio into an Azure SQL database. I have had some limited success: data is copied over through the API and inserted into the target table, but paging really seems to be an issue. I am testing this with the billable_clients call, and the first 25 records with the fields I specify are inserted, along with the paging record. As I understand it, the billable_clients call is eligible for bulk actions, which may be the solution, although I've not been able to figure out how that works. The URL I am calling is below:
https://app.clio.com/api/v4/billable_clients.json?fields=id,unbilled_hours,name
Using Postman, I've tried to make the same call while adding X-BULK: true to the request headers, but that returns no results. If anyone can shed some light on how the X-BULK header flag is used when making a call, or has experience loading Clio data into a SQL Server database, I'd love some feedback on your methods.
If any additional information regarding my attempts or setup would help please let me know.
Thanks!
You need to request the data through the Bulk API, download the generated JSON files, and then load them into the database.
It isn't possible to insert the data directly.
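For the "load them into the database" step, here is a minimal T-SQL sketch. It assumes the downloaded file has been staged in an Azure Blob Storage container registered as an external data source named ClioBulkFiles, that the Clio response wraps records in a top-level data array, and that a dbo.BillableClients staging table exists; all of these names are assumptions for illustration, not part of the question.

-- Read the staged JSON file as a single string (external data source name is a placeholder).
DECLARE @json NVARCHAR(MAX) =
    (SELECT BulkColumn
     FROM OPENROWSET(BULK 'billable_clients.json',
                     DATA_SOURCE = 'ClioBulkFiles',
                     SINGLE_CLOB) AS src);

-- Shred the assumed top-level data array into rows and insert the three requested fields.
INSERT INTO dbo.BillableClients (id, unbilled_hours, name)
SELECT id, unbilled_hours, name
FROM OPENJSON(@json, '$.data')
     WITH (id             BIGINT          '$.id',
           unbilled_hours DECIMAL(18, 2)  '$.unbilled_hours',
           name           NVARCHAR(255)   '$.name');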

Issue while creating user for a specific database through code in Azure

I am creating a copy of a database in Azure through C# code.
Code for creating database:
CREATE DATABASE ABC AS COPY OF DEF
Then I want to create a user in that database so that only that user can access it. This code executes as soon as the database copy command returns, but while creating the user I get an error:
"failed to update database because the database is read only".
If I pause execution for 15-20 seconds and then continue, it works perfectly, but I don't want to rely on an arbitrary delay.
Is there some status I can check to know that the database has been created, so I can proceed?
Any help would be greatly appreciated.
It appears that you're connecting to your database and executing T-SQL. You may have to query sys.dm_operation_status, find your CREATE DATABASE command, and check whether it has completed. There may be an associated REST API if you choose to program this through REST calls; the Get Create or Update Server Status call might fit your scenario.
You will find that the new database takes some time to create, and you won't exit that logic instantly with either approach.
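As a rough illustration of the polling described above (a sketch, not a definitive implementation; 'ABC' is just the database name from the question), you could run something like this against the logical server's master database until the copy reports COMPLETED:

-- Progress of service-level operations such as database copies (run in master).
SELECT major_resource_id AS database_name,
       operation,
       state_desc,
       percent_complete,
       last_modify_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'ABC'
ORDER BY last_modify_time DESC;

-- The copy is usable once sys.databases reports it ONLINE instead of COPYING.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'ABC';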

Sybase ASEBulkCopy is not working

Sybase ASEBulkCopy is not working. I have set the EnableBulkLoad attribute to 1 in the connection string, but it still uploads one record at a time, even after setting the batch size to 500.
What other settings am I missing?
Please, someone help me with this.
Thanks in advance.
Whether a bulk load actually happens depends on other things as well, such as the presence of indexes on the target table. By enabling bulk load you're basically telling the ASE server that it should try to do bulk uploading if it can; if it can't, it falls back to regular, non-bulk inserts.
I'm not sure I understand the details of your question though. What do you mean by "upload"? Does your client app send only 1 record to the ASE server at a time?
Or does it mean that ASE performs regular inserts instead of bulk inserts? If the latter, how did you diagnose that?
I recommend trying it first with the 'bcp' client utility to figure out if bulk loading is possible to start with.
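If you do try bcp, here is a short sketch of the server-side prerequisites for fast (minimally logged) bulk copy in ASE; the database and table names are placeholders:

-- Run from master: fast bulk copy requires this database option to be enabled.
sp_dboption 'your_db', 'select into/bulkcopy/pllsort', true
go
use your_db
go
checkpoint
go
-- Indexes on the target table make ASE fall back to slow, fully logged bcp.
sp_help 'your_table'
go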

Sql Azure - Timeout on query

I have set up an Azure website with an Azure SQL Database back end. I used a migration tool to populate a single table with 80,000 rows of data. During the migration I could access the new data through the website without any issues, but since the migration completed I keep getting an exception: [Win32Exception (0x80004005): The wait operation timed out].
This exception suggests that my database queries are taking more than 30 seconds to return, and if I query the database from Visual Studio I can confirm that they do. I have indexes on my filter columns, and on my local SQL database the same queries return in under a second. Each row does contain a varchar(max) column that stores JSON, so a fair bit of data is held in each row, but that shouldn't really affect query performance.
Any input that could help me solve this issue would be much appreciated.
I seem to have gotten past the query timeout issues for now. What appeared to do the trick was updating the SQL Server statistics:
EXEC sp_updatestats;
Another change that helped was enabling JSON compression on my Azure website.
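If the timeouts come back, a short diagnostic sketch using standard Azure SQL DMVs (nothing here is specific to this site's schema): the first query lists the slowest statements by average elapsed time, the second surfaces any missing-index suggestions the optimizer has recorded.

-- Slowest statements by average elapsed time (microseconds).
SELECT TOP (10)
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_us DESC;

-- Indexes the optimizer wanted but did not have for recent queries.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns
FROM sys.dm_db_missing_index_details AS d;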

Get Schema error when making Data sync in Azure

I finished setting up the Azure sync hub and installing the client agent and the database.
Then I tried to define the dataset.
At that point, whichever database I chose, clicking Get Latest Schema produced an error.
The error is:
The get schema request is either taking a long time or has failed.
When I checked the log, it said:
Getting schema information for the database failed with the exception "There is already an open DataReader associated with this Command which must be closed first.
For more information, provide tracing id ‘xxxx’ to customer support."
Any ideas?
The current release has a maximum of 500 tables per sync group, and the drop-down for the table list is restricted to the same limit.
Here's a quick workaround (a sketch of the first two steps follows the list):
1. Script the tables you want to sync.
2. Create a new temporary database and run the script to create the tables you want to sync.
3. Register and add the new temporary database as a member of the sync group.
4. Use the new temporary database to pick the tables you want to sync.
5. Add all the other databases that you want to sync with (on-premises databases and the hub database).
6. Once the provisioning is done, remove the temporary database from the sync group.
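A minimal sketch of the first two steps, assuming the temporary database lives on an on-premises SQL Server (on Azure SQL Database you would connect to the new database directly instead of using USE); the database and table names are placeholders:

-- Small throwaway database whose only job is to expose the subset of the schema to sync.
CREATE DATABASE SyncSchemaTemp;
GO
USE SyncSchemaTemp;
GO
-- One CREATE TABLE per table you want to sync, matching the real schema.
CREATE TABLE dbo.Customers (
    CustomerId INT           NOT NULL PRIMARY KEY,
    Name       NVARCHAR(200) NOT NULL
);
GO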
