Getting into deadlock while using rowcount delete query - sap-ase

Set rowcount 50000
declare @i int
select @i = 1
WHILE ( @i > 0 )
BEGIN
DELETE table1
FROM table1 (index index1)
WHERE
HIST_Timestamp < '2011/11/26'
select @i = @@rowcount
END
The query sometimes encounters a deadlock situation and terminates. I am not able to figure out what is going wrong. Please help me!

A deadlock occurs when transaction A locks a record then has to wait for transaction B to unlock a record, while transaction B is waiting on a record already locked by transaction A.
If you really want to know why the deadlock is happening, you can do it with this command:
sp_configure "print deadlock information", 1
Creating a useful index for the query allows the delete statement to use page or row locks, improving concurrent access to the table. If creating an index for the delete transaction is not possible, you can perform the operation in a cursor, with frequent commit transaction statements to reduce the number of page locks.
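If you keep the batched delete, it can also help to commit each batch explicitly so locks are released between passes. A minimal sketch along those lines, assuming chained mode is off (the 5000-row batch size is illustrative, not from the question):

set rowcount 5000
declare @i int
select @i = 1
while ( @i > 0 )
begin
    begin tran
    delete table1
    from table1 (index index1)
    where HIST_Timestamp < '2011/11/26'
    select @i = @@rowcount   -- capture the batch size before any other statement resets it
    commit tran
end
set rowcount 0               -- restore the session default

Smaller, committed batches hold fewer locks at a time, which shrinks the window in which the delete can deadlock with concurrent readers.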

Related

How to append to error column when using the Macro Design (before change) inside of MS Access?

I have an MS Access database that receives hundreds of row records coming into it. I need a way to validate that the incoming data is consistent with the business logic. For example, when we get a record the state column should be "California", otherwise an error should be appended to an Error column that specifies why it failed. And for that same record if the Income is less than $1,000,000 an error should be appended for that too.
I found that inside MS Access while highlighting your table if you at the top bar click on Table > Before Change, you can create If-then logic for incoming rows. If there is a better way to accomplish this task, please let me know.
Once I am inside the "Before change" window I then write this logic out:
If [State] <> "California" Then
SetField
Name Error
Value = "Failed on incorrect state"
Else if [Income] < 1000000 Then
SetField
Name Error
Value = "Failed on incorrect income"
End if
When multiple errors occur such as both incorrect state and income it only shows the first error. Is there a way to have both errors appended to the same error column?
Thanks
You can try combining the conditions with And in the If section:
If [State] <> "California" And [Income] < 1000000 Then
    ' write your logic here, such as "Incorrect value"
Hope it works for you.
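If the goal is to record both failures on the same record, one option (a sketch, assuming Error is a plain text field) is a single SetField whose Value expression builds the message with IIf:
If [State] <> "California" Or [Income] < 1000000 Then
    SetField
        Name Error
        Value = IIf([State] <> "California", "Failed on incorrect state; ", "") & IIf([Income] < 1000000, "Failed on incorrect income", "")
End If
Because the whole message is built in one expression, both errors end up in the Error column when both conditions fail.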

Nested subquery in FOR ALL ENTRIES

A consultant sent me this code example; here is what he expects to get:
SELECT m1~vbeln_im m1~vbelp_im m1~mblnr smbln
INTO CORRESPONDING FIELDS OF TABLE lt_mseg
FROM mseg AS m1
INNER JOIN mseg AS m2 ON m1~mblnr = m2~smbln
AND m1~mjahr = m2~sjahr
AND m1~zeile = m2~smblp
FOR ALL ENTRIES IN lt_vbfa
WHERE
AND m2~bwart = '102'
AND 0 = ( select SUM( ( CASE
when SHKZG = 'S' THEN 1
when SHKZG = 'H' THEN -1
else 0
END ) *MENGE ) MENGE
into lt_mseg-summ
from mseg
where
VBELN_IM = m1~vbeln_im
and VBELP_IM = m1~vbelp_im
).
The problem is I don't see how that should work in current syntax. I am thinking about deriving the internal select and using it as a condition for the main one, but is there a proper way to write this nested construction?
As I understand it, if the nested statement returns 0, the main query executes. The problem here is the CASE inside the nested statement. Is it even possible in ABAP? In my opinion this check could also be done outside of the main SQL query.
Any suggestions are welcome.
The logic that you were given is Native SQL and has some shortcomings that you need to be aware of:
The statement you are showing has to be placed between EXEC SQL and ENDEXEC.
The logic is platform dependent.
There is no syntax checking performed between EXEC SQL and ENDEXEC.
The execution bypasses the database buffering, so it is slower.
I would investigate a better way to capture the data that performs better outside of Native SQL.
If you want to move forward with this type of logic, below are a couple of links which should be helpful. There is an example select using a nested select with a case statement.
Test Program
Example Logic
This is probably what you need; it works at least since ABAP 7.50.
SELECT vbeln UP TO 100 ROWS
FROM vbfa
INTO TABLE @DATA(lt_vbfa).

DATA(rt_vbeln) = VALUE range_vbeln_va_tab(
  FOR GROUPS val OF <line> IN lt_vbfa
  GROUP BY ( low = <line>-vbeln ) WITHOUT MEMBERS
  ( sign = 'I' option = 'EQ' low = val-low ) ).

SELECT m1~vbeln_im, m1~vbelp_im, m1~mblnr, m2~smbln
INTO TABLE @DATA(lt_mseg)
FROM mseg AS m1
JOIN mseg AS m2
ON m1~mblnr = m2~smbln
AND m1~mjahr = m2~sjahr
AND m1~zeile = m2~smblp
WHERE m2~bwart = '102'
AND m1~vbeln_im IN ( SELECT vbelv FROM vbfa WHERE vbelv IN @rt_vbeln )
GROUP BY m1~vbeln_im, m1~vbelp_im, m1~mblnr, m2~smbln
HAVING SUM( CASE m1~shkzg WHEN 'H' THEN 1 WHEN 'S' THEN -1 ELSE 0 END * m1~menge ) = 0.
Yes, aggregating and FOR ALL ENTRIES are impossible in one SELECT, but you can trick the system with a range and a subquery. Also, you don't need three joins for summarizing reversed documents; your SUM subquery is redundant here.
If you need to select documents not only by delivery number but also by position, this will be more complicated for sure.

Meteor connection count

What would be the best way to record a live count of connections using the Meteor framework? I have a requirement to show users online in real time and have resorted to creating a collection and just replacing a record on initialize for each user, but the count seems to reset. What I have so far is below; thanks in advance.
Counts = new Meteor.Collection "counts"

if Meteor.is_client
  if Counts.findOne()
    new_count = Counts.findOne().count + 1
    Counts.remove {}
    Counts.insert count: new_count
  Template.visitors.count = ->
    Counts.findOne().count

if Meteor.is_server
  reset_data = ->
    Counts.remove {}
    Counts.insert count: 0
  Meteor.startup ->
    reset_data() if Counts.find().count() is 0
You have a race condition when you rely on "get the count value, remove it from the collection, insert the new count". Two clients can read the same value at the same time, so it's not the way to go.
Instead, have each client insert "itself" into a collection, with a unique id and the "time" it was inserted, and use a Meteor.Method as a heartbeat that refreshes this "time".
Clients whose time is too old can then be deleted from the collection; use a timer on the server to remove idle clients.
You can check some of this here:
https://github.com/francisbyrne/hangwithme/blob/master/server/game.js
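As a rough CoffeeScript sketch of that heartbeat idea (collection, method and field names are illustrative, not taken from the linked project):

Presences = new Meteor.Collection "presences"

if Meteor.is_client
  sessionId = Meteor.uuid()   # any unique per-client identifier will do
  Meteor.setInterval (-> Meteor.call "keepalive", sessionId), 5000
  Template.visitors.count = ->
    Presences.find().count()

if Meteor.is_server
  Meteor.methods
    keepalive: (id) ->
      unless Presences.findOne(session: id)
        Presences.insert session: id
      Presences.update {session: id}, {$set: {lastSeen: Date.now()}}
  # sweep out clients whose last heartbeat is older than 15 seconds
  Meteor.setInterval (->
    Presences.remove lastSeen: {$lt: Date.now() - 15000}
  ), 15000

The intervals and the way the client id is generated are arbitrary; the point is that the count is derived from documents that expire, not from a single mutable counter.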

Classic ASP - When to close recordset

I would like to know, which of the following examples is the best for closing a recordset object in my situation?
1)
This one closes the object inside the loop but opens a new object when it moves next. If there were 1000 records, this opens an object 1000 times and closes it 1000 times. This is what I would normally do:
SQL = " ... "
Set rs1 = conn.Execute(SQL)
While NOT rs1.EOF
SQL = " ... "
Set rs2 = conn.Execute(SQL)
If NOT rs2.EOF Then
Response.Write ( ... )
End If
rs2.Close : set rs2 = Nothing
rs1.MoveNext
Wend
rs1.Close : Set rs1 = Nothing
2)
This example is what I want to know about. Does saving the object closure (rs2.Close) until after the loop has finished gain or lose performance? If there were 1000 records, this would open 1000 objects but only close it once:
SQL = " ... "
Set rs1 = conn.Execute(SQL)
While NOT rs1.EOF
SQL = " ... "
Set rs2 = conn.Execute(SQL)
If NOT rs2.EOF Then
Response.Write ( ... )
End If
rs1.MoveNext
Wend
rs1.Close : Set rs1 = Nothing
rs2.Close : set rs2 = Nothing
I hope I've explained myself well enough and it's not too stupid.
UPDATE
To those who think my query can be modified to avoid the N+1 issues (2nd query), here it is:
This is for an online photo library. I have two tables; "photoSearch" and "photos". The first, "photoSearch", has just a few columns and contains all searchable data for the photos, such as "photoID", "headline", "caption", "people", "dateCaptured" and "keywords". It has a multi-column full-text index on (headline, caption, people, keywords). The second table, "photos", contains all of the photos data; heights, widths, copyrights, caption, ID's, dates and much more. Both have 500K+ rows and the headline and caption fields sometimes return 2000+ characters.
This is approximately how the query looks now:
(Things to note: I cannot use joins with full-text searching, hence the keywords being stored in one column in a 'de-normalized' table. Also, this is only pseudo-code of a sort, as my app code is elsewhere, but it's close.)
SQL = "SELECT photoID FROM photoSearch
WHERE MATCH (headline, caption, people, keywords)
AGAINST ('"&booleanSearchStr&"' IN BOOLEAN MODE)
AND dateCaptured BETWEEN '"&fromDate&"' AND '"&toDate&"' LIMIT 0,50;"
Set rs1 = conn.Execute(SQL)
While NOT rs1.EOF
SQL = "SELECT photoID, setID, eventID, locationID, headline, caption, instructions, dateCaptured, dateUploaded, status, uploaderID, thumbH, thumbW, previewH, previewW, + more FROM photos LEFT JOIN events AS e USING (eventID) LEFT JOIN location AS l USING (locationID) WHERE photoID = "&rs1.Fields("photoID")&";"
Set rs2 = conn.Execute(SQL)
If NOT rs2.EOF Then
Response.Write ( .. photo data .. )
End If
rs2.Close
rs1.MoveNext
Wend
rs1.Close
When tested, having the full-text index on its own table, "photoSearch", instead of the large table, "photos", seemed to improve speed somewhat. I didn't add the "photoSearch" table, it was already there - this is not my app. If I try joining the two tables to lose the second query, I lose my indexing altogether, resulting in very long times - so I can't use joins with full-text. This just seemed to be the quickest method. If it weren't for the full-text and joining problems, I would have combined both of these queries already.
Here is the thing. First, get your photo IDs and make MySQL treat them as an actual table that holds the photo IDs only, then build your actual statement around it; there is no need for any extra recordset connections.
And do not forget to start from the end (the innermost part) when building this. Here is the sample code with explanations:
Step 1: Create the photo ID lookup table and name it. This will be our PhotoId lookup table, so alias it as "PhotoIds":
(SELECT photoID FROM photoSearch
WHERE MATCH (headline, caption, people, keywords)
AGAINST ('"&booleanSearchStr&"' IN BOOLEAN MODE)
AND dateCaptured BETWEEN '"&fromDate&"' AND '"&toDate&"' LIMIT 0,50) AS PhotoIds
Step 2: Now that we have the photo IDs, get the information from them. We insert the above statement just before the WHERE clause, the same way as we do with real tables. Note that our "fake" (derived) table must be between parentheses:
SQL = "SELECT p.photoID, p.setID, p.eventID, p.locationID, p.headline, p.caption, + more FROM
photos AS p,
events AS e USING (p.eventID),
location AS l USING (p.locationID),
(SELECT photoID FROM photoSearch WHERE MATCH (headline, caption, people, keywords)
AGAINST ('"&booleanSearchStr&"' IN BOOLEAN MODE) AND dateCaptured BETWEEN
'"&fromDate&"' AND '"&toDate&"' LIMIT 0,50) AS PhotoIds
WHERE p.photoID=PhotoIds.photoID;"
Note: I just wrote this code here and never tested it. There may be some spelling errors or the like. Please let me know if you run into trouble.
Now, getting to your primary question.
There is no need to close the executed queries, especially if you are using the Execute method. The Execute method closes itself after execution when it is not returning any recordset data (that is the purpose of the Execute command in the first place), i.e. "INSERT", "DELETE", "UPDATE". If you never opened a recordset object, why try to close something that was never opened? Instead you can use Set rs = Nothing to unreference the object and let garbage collection free up some system resources (that has nothing to do with MySQL itself). For "SELECT" queries (queries that return data) you must open a recordset object (ADODB.Recordset), and if you opened it, you need to close it as soon as it has finished its job.
The most important thing is to close the main connection to the MySQL server after each page load. So you may consider putting your connection-close logic (not the recordset close) into an include file and inserting it at the end of every page that makes a connection to the database. Long story short: you must use Close() if you used Open().
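For instance, a connection-closing include along these lines (the file name and variable name are illustrative):
' closeconn.inc.asp - included at the bottom of every page that opened the connection
If IsObject(conn) Then
    If conn.State = 1 Then conn.Close   ' 1 = adStateOpen
    Set conn = Nothing
End If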
If you show us your SQL statements, maybe we can show you how to combine them into a single SQL statement so you only have to do one loop; otherwise, double looping like this really takes a toll on the server's performance. But before I learned stored procedures and joins, I would have probably done it like this:
Set Conn = Server.CreateObject("Adodb.Connection")
Conn.Open "ConnectionString"
Set oRS = Server.CreateObject("Adodb.Recordset")
oRS.Open "SQL STATEMENT", Conn
Set oRS2 = Server.CreateObject("Adodb.Recordset")
oRS2.ActiveConnection = Conn
Do Until oRS.EOF
oRS2.Open "SQL STATEMENT"
If oRS2.EOF Then ...
oRS2.Close
oRS.Movenext
Loop
oRS.Close
Set oRS = Nothing
Set oRS2 = Nothing
Conn.Close
Set Conn = Nothing
I tried putting this in a comment because it doesn't directly answer your original question, but it got too long.. :)
You could try using a sub-query instead of a join, nesting the outer query inside the second one. " ... where photoID in(select photoID from photoSearch ... )". Not sure if it would get better results, but it may be worth trying. That being said, the use of the full-text search does change how the queries would be optimized, so it may take more work to figure out what the appropriate indexes are (need to be). Depending on your existing performance, it may not be worth the effort.
Do you know for sure that this existing code/query is the current bottleneck? Sometimes we spend time optimizing things that we think are the bottleneck when that may not be the case... :)
One additional thought - you may want to consider some caching logic to reduce the amount of redundant queries you may be making - either at the page level or at the level of this method. The search parameters could be concatenated together to form the key for storing the data in a cache of some sort. Of course you would need to handle appropriate cache invalidation/expiry logic. I've seen systems speed up 100x with very simple caching logic added to bottlenecks like this.
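As an illustration only, a minimal sketch of that kind of caching in Classic ASP using the Application object (the cache key layout and the BuildPhotoListHtml helper are hypothetical, not from the question):

cacheKey = "photoSearch:" & booleanSearchStr & "|" & fromDate & "|" & toDate
If IsEmpty(Application(cacheKey)) Then
    ' hypothetical helper that runs the two queries and renders the photo list
    html = BuildPhotoListHtml(booleanSearchStr, fromDate, toDate)
    Application.Lock
    Application(cacheKey) = html
    Application.Unlock
End If
Response.Write Application(cacheKey)

A real version would also store a timestamp next to each entry so stale results can be expired or invalidated when photos change.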
It's simple: check whether the State of your Recordset is 1 or 0, meaning open or closed,
like this:
If RS.State = 1 Then RS.Close
The connection to the database (CN) will still be up, but you can reopen the RS (Recordset) again with any values.

GroupBy then ObserveOn loses items

Try this in LinqPad:
Observable
.Range(0, 10)
.GroupBy(x => x % 3)
.ObserveOn(Scheduler.NewThread)
.SelectMany(g => g.Select(x => g.Key + " " + x))
.Dump()
The results are clearly non-deterministic, but in every case I fail to receive all 10 items. My current theory is that the items are going through the grouped observable unobserved as the pipeline marshals to the new thread.
LinqPad doesn't know that you're running all of these threads - it gets to the end of the code immediately (remember, Rx statements don't always act synchronously; that's the idea!), waits a few milliseconds, then ends by blowing away the AppDomain and all of its threads (which haven't caught up yet). Try adding a Thread.Sleep at the end to give the new threads time to catch up.
As an aside, Scheduler.NewThread is a very inefficient scheduler; EventLoopScheduler (creates exactly one thread) or Scheduler.TaskPool (uses the TPL pool, as if you created a Task for each item) are much more efficient (of course, in this case, since you only have 10 items, Scheduler.Immediate is best!).
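A minimal sketch of that workaround in LinqPad (the one-second pause is arbitrary):
Observable
    .Range(0, 10)
    .GroupBy(x => x % 3)
    .ObserveOn(Scheduler.NewThread)
    .SelectMany(g => g.Select(x => g.Key + " " + x))
    .Dump();

Thread.Sleep(1000); // give the background thread time to push all 10 items before LinqPad tears the query down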
It appears that the problem is in the timing between GroupBy starting a new group and the delay before the subscription to that group is actually set up. If you increase the number of iterations from 10 to 100, you should start seeing some results after a period of time.
Also, if you change the GroupBy to .Where(x => x % 3 == 0), you will likely notice that no values are lost, because the dynamic subscription to the IObservable groups doesn't need to initialize new observers.
