Azure SQL Data Warehouse hanging or not responding to simple query after large BCP operation

I have a preview version of Azure SQL Data Warehouse running which was working fine until I imported a large table (~80 GB) through BCP. Now all the tables, including the small ones, do not respond even to a simple query:
select * from <MyTable>
Queries against sys tables still work:
select * from sys.objects
The BCP process was left running over the weekend, so any statistics update should have been done by now. Is there any way to figure out what is making this happen, or at least to see what is currently running and whether anything is blocking?
I'm using SQL Server Management Studio 2014 to connect to the Data Warehouse and run queries.

#user5285420 - run the code below to get a good view of what's going on. You should be able to find the query easily by looking at the value in the "command" column. Can you confirm if the BCP command still shows as status="Running" when the query steps are all complete?
select top 50
    (case when requests.status = 'Completed' then 100
          when progress.total_steps = 0 then 0
          else 100 * progress.completed_steps / progress.total_steps end) as progress_percent,
    requests.status,
    requests.request_id,
    sessions.login_name,
    requests.start_time,
    requests.end_time,
    requests.total_elapsed_time,
    requests.command,
    errors.details,
    requests.session_id,
    (case when requests.resource_class is NULL then 'N/A'
          else requests.resource_class end) as resource_class,
    (case when resource_waits.concurrency_slots_used is NULL then 'N/A'
          else cast(resource_waits.concurrency_slots_used as varchar(10)) end) as concurrency_slots_used
from sys.dm_pdw_exec_requests AS requests
join sys.dm_pdw_exec_sessions AS sessions
    on (requests.session_id = sessions.session_id)
left join sys.dm_pdw_errors AS errors
    on (requests.error_id = errors.error_id)
left join sys.dm_pdw_resource_waits AS resource_waits
    on (requests.resource_class = resource_waits.resource_class)
outer apply (
    select count (steps.request_id) as total_steps,
           sum (case when steps.status = 'Complete' then 1 else 0 end) as completed_steps
    from sys.dm_pdw_request_steps steps
    where steps.request_id = requests.request_id
) progress
where requests.start_time >= DATEADD(hour, -24, GETDATE())
ORDER BY requests.total_elapsed_time DESC, requests.start_time DESC

Check out the resource utilization and possibly other issues from https://portal.azure.com/
You can also run sp_who2 from SSMS to get a snapshot of which threads are active and whether there's some crazy blocking chain that's causing problems.
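For example (sp_who2 takes an optional 'ACTIVE' argument that hides sleeping sessions):
EXEC sp_who2 'ACTIVE';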

Related

force replication of replicated tables

Some of my tables are of type REPLICATE. I would like these tables to be actually replicated (not pending) before I start querying my data. This will help me avoid data movement.
I have a script, which I found online, that runs in a loop and does a SELECT TOP 1 on all the tables set for replication, but sometimes the script runs for hours. It seems the server sometimes won't trigger replication even if you do a SELECT TOP 1 from foo.
How can you force SQL Data Warehouse to complete replication?
The script looks something like this:
begin
    -- Build a list of SELECT TOP(1) statements, one per replicated table
    -- whose cache is not yet built.
    CREATE TABLE #tbl
    WITH ( DISTRIBUTION = ROUND_ROBIN )
    AS
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Sequence,
           CONCAT('SELECT TOP(1) * FROM ', s.name, '.', t.[name]) AS sql_code
    FROM sys.pdw_replicated_table_cache_state AS p
    JOIN sys.tables AS t
        ON t.object_id = p.object_id
    JOIN sys.schemas AS s
        ON t.schema_id = s.schema_id
    WHERE p.[state] = 'NotReady';

    DECLARE @nbr_statements INT = (SELECT COUNT(*) FROM #tbl), @i INT = 1;

    -- Touch each table once to trigger its replicated cache build.
    WHILE @i <= @nbr_statements
    BEGIN
        DECLARE @sql_code NVARCHAR(4000) = (SELECT sql_code FROM #tbl WHERE Sequence = @i);
        EXEC sp_executesql @sql_code;
        SET @i += 1;
    END;
    DROP TABLE #tbl;

    -- Spin until no table is left in the 'NotReady' state.
    SET @i = 0;
    WHILE (SELECT TOP (1) p.[state]
           FROM sys.pdw_replicated_table_cache_state AS p
           JOIN sys.tables AS t
               ON t.object_id = p.object_id
           JOIN sys.schemas AS s
               ON t.schema_id = s.schema_id
           WHERE p.[state] = 'NotReady') = 'NotReady'
    BEGIN
        IF @i % 100 = 0
        BEGIN
            RAISERROR('Replication in progress', 0, 0) WITH NOWAIT;
        END;
        SET @i = @i + 1;
    END;
END
Henrik, if 'select top 1' doesn't trigger a replicated table build, then that would be a defect. Please file a support ticket.
Without looking at your system, it is impossible to know exactly what is going on. Here are a couple of things that could be factoring into the extended build times and are worth looking into:
The replicated tables are large (size, not necessarily rows) requiring long build times.
There are a lot of secondary indexes on the replicated table requiring long build times.
Replicated table builds require staticrc20 (2 concurrency slots). If the concurrency slots are not available, the build will queue behind other running queries.
The replicated tables are constantly being modified with inserts, updates and deletes. Modifications require the table to be built again.
The best way is to run a command like this as part of the job which creates/updates the table:
select top 1 * from <table>
That will force the replicated cache to build at the correct time, without the slow loop through the stored procedure.
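To see which replicated tables still have a pending cache build, the same catalog view used in the script above can be queried directly:
SELECT s.name AS schema_name, t.name AS table_name, p.[state]
FROM sys.pdw_replicated_table_cache_state AS p
JOIN sys.tables AS t ON t.object_id = p.object_id
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE p.[state] = 'NotReady';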

ADW - Query performance issues

I have an Azure SQL Data Warehouse at DW500c (Gen2) with a Data Vault model in it spanning several tables.
I am trying to execute one query that I think is taking too much time.
Here is the query I have been executing:
SELECT
H_PROFITCENTER.[BK_PROFITCENTER]
,H_ACCOUNT.[BK_ACCOUNT]
,H_LOCALCURRENCY.[BK_CURRENCY]
,H_DOCUMENTCURRENCY.[BK_CURRENCY]
,H_COSTCENTER.[BK_COSTCENTER]
,H_COMPANY.[BK_COMPANY]
,H_CURRENCY.[BK_CURRENCY]
,H_INTERNALORDER.[BK_INTERNALORDER]
,H_VERSION.[BK_VERSION]
,H_COSTELEMENT.[BK_COSTELEMENT]
,H_CALENDARDATE.[BK_DATE]
,H_VALUETYPEREPORT.[BK_VALUETYPEREPORT]
,H_FISCALPERIOD.[BK_FISCALPERIOD]
,H_COUNTRY.[BK_COUNTRY]
,H_FUNCTIONALAREA.[BK_FUNCTIONALAREA]
,SLADI.[LINE_ITEM]
,SLADI.[AMOUNT]
,SLADI.[CREDIT]
,SLADI.[DEBIT]
,SLADI.[QUANTITY]
,SLADI.[BALANCE]
,SLADI.[LOADING_DATE]
FROM [dwh].[L_ACCOUNTINGDOCUMENTITEMS] AS LADI
INNER JOIN [dwh].[SL_ACCOUNTINGDOCUMENTITEMS] AS SLADI ON LADI.[HK_ACCOUNTINGDOCUMENTITEMS] = SLADI.[HK_ACCOUNTINGDOCUMENTITEMS]
LEFT JOIN dwh.H_PROFITCENTER AS H_PROFITCENTER ON H_PROFITCENTER.[HK_PROFITCENTER] = LADI.[HK_PROFITCENTER]
LEFT JOIN dwh.H_ACCOUNT AS H_ACCOUNT ON H_ACCOUNT.[HK_ACCOUNT] = LADI.[HK_ACCOUNT]
LEFT JOIN dwh.H_CURRENCY AS H_LOCALCURRENCY ON H_LOCALCURRENCY.[HK_CURRENCY] = LADI.[HK_LOCALCURRENCY]
LEFT JOIN dwh.H_CURRENCY AS H_DOCUMENTCURRENCY ON H_DOCUMENTCURRENCY.[HK_CURRENCY] = LADI.[HK_DOCUMENTCURRENCY]
LEFT JOIN dwh.H_COSTCENTER AS H_COSTCENTER ON H_COSTCENTER.[HK_COSTCENTER] = LADI.[HK_COSTCENTER]
LEFT JOIN dwh.H_COMPANY AS H_COMPANY ON H_COMPANY.[HK_COMPANY] = LADI.[HK_COMPANY]
LEFT JOIN dwh.H_CURRENCY AS H_CURRENCY ON H_CURRENCY.[HK_CURRENCY] = LADI.[HK_CURRENCY]
LEFT JOIN dwh.H_INTERNALORDER AS H_INTERNALORDER ON H_INTERNALORDER.[HK_INTERNALORDER] = LADI.[HK_INTERNALORDER]
LEFT JOIN dwh.H_VERSION AS H_VERSION ON H_VERSION.[HK_VERSION] = LADI.[HK_VERSION]
LEFT JOIN dwh.H_COSTELEMENT AS H_COSTELEMENT ON H_COSTELEMENT.[HK_COSTELEMENT] = LADI.[HK_COSTELEMENT]
LEFT JOIN dwh.H_DATE AS H_CALENDARDATE ON H_CALENDARDATE.[HK_DATE] = LADI.[HK_CALENDARDATE]
LEFT JOIN dwh.H_VALUETYPEREPORT AS H_VALUETYPEREPORT ON H_VALUETYPEREPORT.[HK_VALUETYPEREPORT] = LADI.[HK_VALUETYPEREPORT]
LEFT JOIN dwh.H_FISCALPERIOD AS H_FISCALPERIOD ON H_FISCALPERIOD.[HK_FISCALPERIOD] = LADI.[HK_FISCALPERIOD]
LEFT JOIN dwh.H_COUNTRY AS H_COUNTRY ON H_COUNTRY.[HK_COUNTRY] = LADI.[HK_COUNTRY]
LEFT JOIN dwh.H_FUNCTIONALAREA AS H_FUNCTIONALAREA ON H_FUNCTIONALAREA.[HK_FUNCTIONALAREA] = LADI.[HK_FUNCTIONALAREA]
This query takes 22 minutes to execute.
I must say that it returns around 1,200,000,000 rows (about 1.2 billion).
[L_ACCOUNTINGDOCUMENTITEMS] and [SL_ACCOUNTINGDOCUMENTITEMS] are hash-distributed on the [HK_ACCOUNTINGDOCUMENTITEMS] column, and all the other tables were created with REPLICATE distribution.
Also, I activated automatic statistics creation in Azure Data Warehouse.
Can anyone help me understand how I can speed it up?
Here are some things to try out to see if you can make this faster:
Create a table from your query using 'Create Table as Select' (CTAS) with the ROUND_ROBIN option and take the timing of that (a sketch follows below). I have a feeling that returning that large number of rows to your client could be a big contributor to the time. If the CTAS finishes in, let's say, 5 minutes, you can safely say that the rest of the time is being taken by the return operation.
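A minimal sketch of that timing test, assuming a hypothetical target table name; the SELECT is the full query from the question, abbreviated here to the first join:
CREATE TABLE dwh.CTAS_TIMING_TEST
WITH ( DISTRIBUTION = ROUND_ROBIN )
AS
SELECT SLADI.*
FROM [dwh].[L_ACCOUNTINGDOCUMENTITEMS] AS LADI
INNER JOIN [dwh].[SL_ACCOUNTINGDOCUMENTITEMS] AS SLADI
    ON LADI.[HK_ACCOUNTINGDOCUMENTITEMS] = SLADI.[HK_ACCOUNTINGDOCUMENTITEMS];
-- ...append the remaining LEFT JOINs and the column list from the question...
If this completes quickly, the plan itself is fine and the time is going into streaming 1.2 billion rows back to the client.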
If not, you can materialize some of the left joins into a table and then add that table to the main query to see if that finishes faster.
You can also look at explain plans to see if you can cut down some steps by aligning the tables on a common key; an example follows below.
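In Azure SQL Data Warehouse the distributed plan can be obtained by prefixing a query with the EXPLAIN keyword, which returns the plan as XML. A small example against the two distributed tables (COUNT_BIG is used because of the row count):
EXPLAIN
SELECT COUNT_BIG(*)
FROM [dwh].[L_ACCOUNTINGDOCUMENTITEMS] AS LADI
INNER JOIN [dwh].[SL_ACCOUNTINGDOCUMENTITEMS] AS SLADI
    ON LADI.[HK_ACCOUNTINGDOCUMENTITEMS] = SLADI.[HK_ACCOUNTINGDOCUMENTITEMS];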

Azure SQL 100% DTU usage

I'm having some problems with the reliability of Azure SQL servers.
Sometimes complicated queries with subqueries like the following:
SELECT DISTINCT [DeviceName], name, data.[Addr], [Signal]
FROM (
    SELECT [DeviceName], [Signal],
           MAX([Signal]) OVER (PARTITION BY [Addr]) AS 'MaxSignal',
           [Timestamp], [Addr], [PartitionId], [EventEnqueuedUtcTime]
    FROM [dbo].[mytable]
    WHERE CAST([Timestamp] AS DATETIME) > DATEADD(HOUR, +2, (DATEADD(MINUTE, -10, GETDATE())))
) data
LEFT JOIN [dbo].[myreftable] ON [dbo].[myreftable].[Addr] = data.[Addr]
WHERE [Signal] = [MaxSignal];
run almost in an instant, as I would expect, while at other times simply doing a SELECT COUNT(*) FROM mytable
takes upwards of 30 minutes, with the DTU usage graph pegged at 100%.
Does anyone know any solutions to this? Is it me doing something completely wrong? Or is Azure simply not there yet?
What you pay for is what you get. You will need to look at the top resource consumers in your system. A DTU is nothing but a limit on the CPU, IO, and memory available to your database.
So to troubleshoot DTU problems, I would follow the steps below.
1.) The query below reports, for each resource, the fraction of recent samples that stayed at or under 80% utilization (note that sys.dm_db_resource_stats only retains roughly the last hour of data, at 15-second intervals):
SELECT
(COUNT(end_time) - SUM(CASE WHEN avg_cpu_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'CPU Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_log_write_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Log Write Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_data_io_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Physical Data Read Fit Percent'
FROM sys.dm_db_resource_stats
Running the above query gives you an idea of how consistently your CPU percentage stays high.
2.) The query below gives an idea of resource usage over time (sys.resource_stats retains 14 days of history at 5-minute intervals):
SELECT start_time, end_time,
(SELECT Max(v) FROM (VALUES (avg_cpu_percent), (avg_physical_data_read_percent), (avg_log_write_percent)) AS value(v)) as [avg_DTU_percent]
FROM sys.resource_stats WHERE database_name = '<your db name>' ORDER BY end_time DESC
Now that I have enough data to see which metric is the most resource-intensive, I can follow the normal troubleshooting approach.
Say, for example, my CPU usage is consistently above 90% over time: I would gather all the queries consuming the most CPU and try to fine-tune them, as sketched below.
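One common way to find those queries, sketched here under the assumption that the plan cache is representative (sys.dm_exec_query_stats tracks cumulative CPU per cached plan):
-- Top 10 statements by total CPU (worker) time
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;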

Efficiency of using multiple subselects

I'm trying to gather multiple related pieces of data for a master account and create a view (e.g. overdue balance, account balance, debt recovery status, interest hold). Will this approach be efficient? The database platforms are Informix, Oracle and SQL Server. Running statistics on Informix, I get just one sequential scan of auubmast. I assume the sub-selects are quite efficient because they filter down to the account number immediately. I may need many sub-selects before I'm finished. Beyond the question of efficiency, are there any other 'tidy' approaches?
Thank you.
select
auubmast.acc_num,
auubmast.cls_cde,
auubmast.acc_typ,
(select
sum(auubtrnh.trn_bal)
from auubtrnh, aualtrcd
where aualtrcd.trn_cde = auubtrnh.trn_cde
and auubtrnh.acc_num = auubmast.acc_num
and (auubtrnh.due_dte < current or aualtrcd.trn_typ = 'I')
) as ovd_bal,
(select
sum(auubytdb.ytd_bal)
from auubytdb, auubsvgr
where auubytdb.acc_num = auubmast.acc_num
and auubsvgr.svc_grp = auubmast.svc_grp
and auubytdb.bil_yer = auubsvgr.bil_yer
) as acc_bal,
(select
max(cur_stu)
from audemast
where mdu_acc = auubmast.acc_num
and mdu_ref = 'UB'
) as drc_stu,
(select
hol_typ
from aualhold
where mdu_acc = auubmast.acc_num
and mdu_ref = 'UB'
and pro_num = 2601
and (hol_til is null or hol_til > current)
) as int_hld
from auubmast
In general, the answer to this is that correlated subqueries should be avoided whenever possible.
Using them will result in a full table scan for your view, which is bad. The only time you want to use subqueries like this is when you can limit the range of the main select to only a few rows, or when there really is no other choice.
When you run into situations like this, you might want to consider adding columns and precalculating them with an update trigger rather than using subqueries (a sketch follows below). This will save your database a thrashing.
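A minimal sketch of that idea in Informix syntax, assuming a hypothetical ovd_bal column added to auubmast and kept in sync as rows are inserted into auubtrnh (matching UPDATE and DELETE triggers would be needed as well):
-- Hypothetical precalculated column
ALTER TABLE auubmast ADD ovd_bal DECIMAL(14,2);

-- Maintain the running balance on insert instead of summing per query
CREATE TRIGGER trg_auubtrnh_ins
    INSERT ON auubtrnh
    REFERENCING NEW AS n
    FOR EACH ROW
    (
        UPDATE auubmast
        SET ovd_bal = ovd_bal + n.trn_bal
        WHERE auubmast.acc_num = n.acc_num
    );
The view then reads ovd_bal directly from auubmast, turning the correlated subquery into a plain column reference.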

pt-kill show database in log

We've enabled pt-kill on a few of our servers, but without the killing, only to monitor slow queries for now.
The only problem is that the log doesn't contain the database, only the query. Is there a way to log which database each query is executed against?
# 2012-09-12T10:31:23 KILL 419539612 (Query 138 sec) SELECT blog.*, blog_text.*, user.*
FROM blog AS blog
INNER JOIN blog_text AS blog_text ON (blog.firstblogtextid = blog_text.blogtextid)
INNER JOIN blog_user AS blog_user ON (blog_user.bloguserid = blog.userid)
LEFT JOIN user AS user ON (user.userid = blog_text.userid)
WHERE 1=1
AND blog.state = 'visible'
AND blog.dateline <= 1347438544
AND blog.pending = 0
AND blog_user.options_guest & 1
AND ~blog.options & 8
ORDER BY blog.dateline DESC
LIMIT 15
Logging the database for the query is not currently a feature of pt-kill (as of version 2.1.x).
This feature has been requested in the past:
https://bugs.launchpad.net/percona-toolkit/+bug/1015804
But it has not been implemented yet.
It's being run by vBulletin, if that helps. You can speed up the query dramatically by indexing some of the referenced fields in the "blog" and "blog_user" tables.
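For example, a sketch of indexes on the columns the query filters and sorts on (index names are illustrative; check them against the actual vBulletin schema before creating anything):
-- Covers the state/pending filters and the ORDER BY on dateline
ALTER TABLE blog ADD INDEX idx_blog_state_pending_dateline (state, pending, dateline);
-- Covering index for the join on bloguserid plus the options_guest bit test
ALTER TABLE blog_user ADD INDEX idx_blog_user_options (bloguserid, options_guest);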
