SQL Insert/Update on tables with Select causing lock and wait issues, maybe using MERGE - Azure

We have two database tables: PointsMaster (PM) and PointsDetails (PD). The requirement is to check whether PM's CurrentBalance is greater than "x"; if it is, update PM's CurrentBalance (i.e. reduce it by "x") and then also put a new record in the PD table for those "x" points.
If CurrentBalance is lower than "x", there should be no update to PM and no insert into PD.
If CurrentBalance is NULL, we simply need to return null.
Right now, writing these as individual statements (a SELECT, then an UPDATE on PM, then an INSERT into PD) is causing lock issues, and I am thinking of using MERGE (on the PM table) so I can do the SELECT and UPDATE in one statement and reduce the locking on PM. My issue is how to check whether CurrentBalance is NULL or lower than "x" and return that back, so that processing can happen for those two conditions (i.e. NULL balance, or balance less than "x").
MERGE can help me with the SELECT and, WHEN MATCHED, the UPDATE on PM, but the INSERT into PD has to happen only if the UPDATE on PM happened; and if the balance is lower than "x" or NULL, I have to return that back and not update PM or add a record to PD.
Any help here? The code we have is causing locks and waits and is not efficient with this volume of data. PM and PD have the required indexes in place.
All of the above is done from C#/.NET in an Azure-hosted Function App.
I am thinking of the following SQL code:
DECLARE @rcount INT;
MERGE PointsMaster AS tgt
USING PointsMaster AS src
ON (tgt.ProfileID = src.ProfileID)
WHEN MATCHED AND tgt.ProfileID = '3589153' AND tgt.CurrentBalance > 50000000
THEN UPDATE SET tgt.CurrentBalance = tgt.CurrentBalance - 5;
SET @rcount = @@ROWCOUNT;
IF (@rcount > 0)
BEGIN
    INSERT INTO PointsDetails (PointsHeaderID, Debit, Credit, ActionTypeCode, TransactionDate)
    VALUES (3587854, 5, 0, 'AT2', GETDATE())
END
IF (@rcount = 0)
BEGIN
    SELECT * FROM PointsMaster WHERE ProfileID = '3589153'
END
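For comparison, the same flow can also be sketched without MERGE by letting a single UPDATE do both the balance check and the decrement; this is only a sketch that reuses the column names above, with @x and @ProfileID as placeholder variables:
DECLARE @x INT = 5;
DECLARE @ProfileID VARCHAR(20) = '3589153';
BEGIN TRANSACTION;
-- One UPDATE checks and decrements the balance in a single statement;
-- a NULL CurrentBalance also fails the predicate, so no row is updated.
UPDATE PointsMaster
SET CurrentBalance = CurrentBalance - @x
WHERE ProfileID = @ProfileID
  AND CurrentBalance > @x;
IF @@ROWCOUNT > 0
BEGIN
    -- Insert the detail record only when the master row was actually updated.
    INSERT INTO PointsDetails (PointsHeaderID, Debit, Credit, ActionTypeCode, TransactionDate)
    VALUES (3587854, @x, 0, 'AT2', GETDATE());
END
ELSE
BEGIN
    -- Balance was NULL or too low: return the current balance to the caller.
    SELECT CurrentBalance FROM PointsMaster WHERE ProfileID = @ProfileID;
END
COMMIT TRANSACTION;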

Related

Excel removes my query connection on its own and gives me several error messages

I know that this is a really long post, but I'm not sure which part of my process is making my file crash, so I tried to detail everything I did to get to the error messages.
So, first of all, I created a query in Kusto which looks something like the one below; in reality it is 160 lines of code, and this is just a summarized version of what my code does, to show my working process.
First, what I do in Session_Id_List is create a list of all distinct session IDs from the past day.
Then, in treatment_alarms1, I count the number of alarms of each alarm type that was active during each session.
Then, on treatment_alarms2 I create a list which might look something like this
1x Alarm_Type_Number1
30x Alarm_Type_Number2
7x Alarm_Type_Number3
and like that for each treatment, so I have a list of all alarms that were active for that treatment.
Lastly, I create a left outer join with Session_Id_List and treatment_alarms2. This means that I will get shown all of the treatment ID's, even the ones that did not have any active alarms.
let _StartTime = ago(1d);
let _EndTime = ago(0d);
let Session_Id_List = Database1
| where StartTime >= _StartTime and StartTime <= _EndTime
| summarize by SessionId, SerialNumber, StartTime
| distinct SessionId, StartTime, SerialNumber;
let treatment_alarms1 = Database1
| where StartTime >= _StartTime and StartTime <= _EndTime and TranslatedData_Status == "ALARM_ACTIVE"
| summarize number_alarms = count() by TranslatedData_Value, SessionId
| project final_Value = strcat(number_alarms, "x ", TranslatedData_Value), SessionId;
let treatment_alarms2 = Database1
| where StartTime >= _StartTime and StartTime <= _EndTime and TranslatedData_Status == "ALARM_ACTIVE"
| join kind=inner treatment_alarms1 on SessionId
| summarize list_of_alarms_all = make_set(final_Value) by SessionId
| project SessionId, list_of_alarms_all;
let final_join = Session_Id_List
| join kind=leftouter treatment_alarms2 on SessionId;
final_join
| project SessionId, list_of_alarms_all
Then I put this query into Excel using the following method:
I go to Tools -> Query to Power BI on Kusto Explorer
I go to Data -> Get Data -> From Other Sources -> Blank Query
I go to advanced editor
I copy and paste my query and press "Done" at the bottom
At this point the preview of my data shows "List" in the list_of_alarms_all column, rather than showing me the actual values of the list.
To fix this issue I first press the arrows on the header of the column
I press on "Extract Values"
I select Custom -> Concatenate using special characters -> Line Feed -> Press OK
That works fine for all of the IDs that do have alarms on them: it shows them as a list and tells me how many there are. The issue is with the IDs that did not have any alarms, where I get "Error" in the Excel preview. Once I press "Close & Load" the data is put on the worksheet and it looks fine; the "Error" values are all gone and instead I get empty cells where the "Error" would be.
The problem now starts when I close the file and try to open it again.
First I get this message. So I press yes to try and enter the file.
Then I get this other message. The problem with this message is that it says I have the file open, which is not true. I even tried restarting my laptop and opening the file again, and I would still get the message even though I don't actually have that file open.
Then I get this message, telling me that the connection to the query was removed.
So my problems here are that 1) I can't edit the file anymore unless I make a copy, because I keep getting the message saying that I already have the file open and it is locked for editing, and 2) I would like to refresh this query with VBA maybe once a week from now on, but I can't because when I save the file the connection to the query is deleted by Excel itself.
I'm not sure why this is happening; I'm guessing it's because of the "Error" values I get in the empty cells when I try to extract the values from the lists. If anybody has any information on how I can fix this so I don't get these error messages, please let me know.
I was not able to reproduce your issue; however, there are some things you might want to try.
Within ADX, you could wrap your query in a function, so you won't have to copy a large piece of code into Excel.
You could deal with null values (this is what gives you the Error values) already in your query. Note the use of coalesce.
// I used datatable to mimic your query results
.create-or-alter function SessionAlarms()
{
datatable (SessionId:int,list_of_alarms_all:dynamic)
[
1, dynamic([10,20,30])
,2, dynamic([])
,3, dynamic(null)
]
| extend list_of_alarms_all = coalesce(list_of_alarms_all, dynamic([]))
}
You can use the Power Query ADX connector and copy your query/function as is.
If you haven't dealt with null values in your KQL, you can take care of the error in Excel by using Replace Errors.

SQL trigger with parameter

I have a Node.js app with SQL Server. I want to be able to update a table for a specific org based on insert and delete actions. Let's say I have the following tables:
Project: projId, orgId, projName
Tasks: taskId, projId, taskName
Users: userId, orgId, userName
OrganizationStats: numberOfProjects, numberOfUsers, numberOfTasks, orgId
So let's say I add a new project for an organization where orgId = 1. My insert statement from Nodejs would be:
insert into project (projId, orgId, projName)
values (${'projId'}, ${'orgId'}, 'New Project');
I want to write a trigger in SQL Server that adds 1 to the numberOfProjects column with orgId that's passed in.
create trigger updateProjectAfterInsert
on project
after insert
as
begin
update OrganizationStats
set numberOfProjects = numberOfProjects + 1
where orgId = 'THE_INSERTED_ORGID_VALUE';
end;
My problem is I don't know how to pass the ${'orgId'} to the trigger.
I'm going to expand on my comment here:
Personally, I recommend against storing values which can be calculated by an aggregate. If you need such information easily accessible, you're better off making a VIEW with the value in there, in my opinion.
What I mean by this is that NumProjects has "no right" being in the table OrganizationStats; instead it should be calculated at the time the information is needed. You can't use an aggregate function in a computed column's definition without a scalar function, and those can be quite slow. Instead I recommend creating a VIEW (or, if you prefer, a table-valued function) to give you the information from the table:
CREATE VIEW dbo.vw_OrganisationStats AS
SELECT {Columns from OrganizationStats},
P.Projects AS NumProjects
FROM dbo.OrganizationStats OS
CROSS APPLY (SELECT COUNT(*) AS Projects
FROM dbo.Projects P
WHERE P.OrgID = OS.OrgID) P;
I use a CROSS APPLY with a subquery, as then you don't need a huge GROUP BY at the end.
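For example, a hypothetical query against that view for a single org would then be:
SELECT NumProjects FROM dbo.vw_OrganisationStats WHERE OrgID = 1;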
I think what you want is something like this:
CREATE TRIGGER updateProjectAfterInsert
ON Project
AFTER INSERT
AS
BEGIN
UPDATE OrganizationStats
SET NumProjects = NumProjects + 1
WHERE OrgId IN (SELECT OrgId FROM inserted);
END;
Also note that triggers must always assume multiple rows: it's possible to insert multiple rows, update multiple rows, and delete multiple rows in one statement. The "inserted" and "deleted" pseudo-tables contain the data needed: "inserted" contains the rows being inserted, "deleted" contains the rows being deleted, and on an update "inserted" contains the values after the update while "deleted" contains the values before the update.
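If OrganizationStats also needs to stay correct when a single INSERT adds several projects for the same org, a multi-row-safe variant could add the actual count per org rather than a flat 1. This is only a sketch using the same hypothetical table and column names:
CREATE TRIGGER updateProjectAfterInsert
ON Project
AFTER INSERT
AS
BEGIN
    -- Count the newly inserted projects per org and add that count in one set-based update.
    UPDATE os
    SET os.NumProjects = os.NumProjects + i.cnt
    FROM OrganizationStats AS os
    INNER JOIN (SELECT OrgId, COUNT(*) AS cnt
                FROM inserted
                GROUP BY OrgId) AS i
        ON i.OrgId = os.OrgId;
END;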

How to make the query work on a Linux server?

I have a PostgreSQL database called "myDatabase" which has hundreds of schemas and is installed on a Linux server. I am using this DB for a SaaS application which has multiple schema users. I want to update a column value in a table for selected schemas.
There is a particular column 'Percentage' in a table called 'sales' whose value I want to update for all the existing users (schemas). So I have written a script to update the values in all schemas; this script works on a Windows server, but when I try to execute it on the Linux server it shows an error.
The script I have written is below:
DO
$do$
DECLARE
_schema text;
BEGIN
FOR _schema IN
SELECT quote_ident(nspname) -- prevent SQL injection
FROM pg_namespace n
WHERE nspname !~~ 'pg_%' and nspname between 'schema1' and 'schema50'
AND nspname <> 'information_schema'
LOOP
EXECUTE 'SET LOCAL search_path = ' || _schema;
UPDATE sales SET sales.Percentage = 15;
END LOOP;
END
$do$
The above script works on the Windows server but does not work on the Linux server. The error is given below:
ERROR: column "sales" of relation "sales" does not exist
LINE 1: UPDATE sales SET sales.Percentage = 5
^
QUERY: UPDATE sales SET sales.Percentage = 5
CONTEXT: PL/pgSQL function inline_code_block line 10 at SQL statement
SQL state: 42703
Any help will be appreciated
Do not specify the table name before the column name in the update:
UPDATE sales SET Percentage = 15
Here is the example that demonstrates this:
laika=# create table a (i integer);
CREATE TABLE
laika=# update a set a.i = 1;
ERROR: column "a" of relation "a" does not exist
LINE 1: update a set a.i = 1;
^
laika=# update a set i = 1;
UPDATE 0
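Applied to the original loop, the corrected statement would look like this (just a sketch that keeps the schema-selection logic from the question unchanged):
DO
$do$
DECLARE
    _schema text;
BEGIN
    FOR _schema IN
        SELECT quote_ident(nspname) -- prevent SQL injection
        FROM pg_namespace n
        WHERE nspname !~~ 'pg_%' AND nspname BETWEEN 'schema1' AND 'schema50'
          AND nspname <> 'information_schema'
    LOOP
        EXECUTE 'SET LOCAL search_path = ' || _schema;
        UPDATE sales SET Percentage = 15; -- no table prefix on the target column
    END LOOP;
END
$do$;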

How to truncate the time while querying documents for date comparison in Cosmos DB

I have a document containing properties like this:
{
"id":"1bd13f8f-b56a-48cb-9b49-7fc4d88beeac",
"name":"Sam",
"createdOnDateTime": "2018-07-23T12:47:42.6407069Z"
}
I want to query documents on the basis of createdOnDateTime, which is stored as a string.
Query, e.g.:
SELECT * FROM c where c.createdOnDateTime>='2018-07-23' AND c.createdOnDateTime<='2018-07-23'
This will return all documents which are created on that day.
I am providing the date value from a date selector which gives only the date without the time, so it gives me a problem while comparing dates.
Is there any way to remove time from createdOnDateTime property or is there any other way to achieve this?
Cosmos DB clients store timestamps in ISO 8601 format, and one of the good reasons to do so is that its lexicographical order matches the flow of time. Meaning: you can sort and compare those strings and get them ordered by the time they represent.
So in this case you don't need to remove the time component; just modify the passed-in parameters to get the result you need. If you want all entries from the entire day of 2018-07-23, you can use this query:
SELECT * FROM c
WHERE c.createdOnDateTime >= '2018-07-23'
AND c.createdOnDateTime < '2018-07-24'
Please note that this query can use a RANGE index on createdOnDateTime.
Please use a User Defined Function to implement your requirement; there is no need to update the createdOnDateTime property.
UDF:
function con(date){
    var myDate = new Date(date);
    var month = myDate.getMonth() + 1;   // getMonth() is zero-based
    if (month < 10) {
        month = "0" + month;
    }
    var day = myDate.getDate();
    if (day < 10) {
        day = "0" + day;                 // pad the day too, so the string comparison stays consistent
    }
    return myDate.getFullYear() + "-" + month + "-" + day;
}
SQL:
SELECT c.id,c.createdOnDateTime FROM c where udf.con(c.createdOnDateTime)>='2018-07-23' AND udf.con(c.createdOnDateTime)<='2018-07-23'
Output :
Hope it helps you.

Query WadPerformanceCountersTable in Increments?

I am trying to query the WadPerformanceCountersTable generated by Azure Diagnostics, which has a PartitionKey based on ticks, accurate to the minute. This PartitionKey is stored as a string (which I do not have any control over).
I want to be able to query against this table to get data points for every minute, every hour, every day, etc., so I don't have to pull all of the data (I just want a sampling to approximate it). I was hoping to use the modulus operator to do this, but since the PartitionKey is stored as a string and this is an Azure Table, I am having issues.
Is there any way to do this?
Non-working example:
var query =
(from entity in ServiceContext.CreateQuery<PerformanceCountersEntity>("WADPerformanceCountersTable")
where
long.Parse(entity.PartitionKey) % interval == 0 && //bad for a variety of reasons
String.Compare(entity.PartitionKey, partitionKeyEnd, StringComparison.Ordinal) < 0 &&
String.Compare(entity.PartitionKey, partitionKeyStart, StringComparison.Ordinal) > 0
select entity)
.AsTableServiceQuery();
If you just want to get a single row based on two different points in time (now and N time back), you can use the following query, which returns a single row as described here:
// 10 minutes span Partition Key
DateTime now = DateTime.UtcNow;
// Current Partition Key
string partitionKeyNow = string.Format("0{0}", now.Ticks.ToString());
DateTime tenMinutesSpan = now.AddMinutes(-10);
string partitionKeyTenMinutesBack = string.Format("0{0}", tenMinutesSpan.Ticks.ToString());
//Get a single row sample created in the last 10 minutes
CloudTableQuery<PerformanceCountersEntity> cloudTableQuery =
(
from entity in ServiceContext.CreateQuery<PerformanceCountersEntity>("WADPerformanceCountersTable")
where
entity.PartitionKey.CompareTo(partitionKeyNow) < 0 &&
entity.PartitionKey.CompareTo(partitionKeyTenMinutesBack) > 0
select entity
).Take(1).AsTableServiceQuery();
The only way I can see to do this would be to create a process to keep the Azure table in sync with another version of itself. In this table, I would store the PartitionKey as a number instead of a string. Once done, I could use a method similar to what I wrote in my question to query the data.
However, this is a waste of resources, so I don't recommend it. (I'm not implementing it myself, either.)
