I want to know exactly what is happening in the following bit of code:
IF THIS-PROCEDURE:PERSISTENT
THEN
DELETE PROCEDURE THIS-PROCEDURE.
Actually, I have to fix a bug in a fairly complex, old Progress GUI application. Briefly, the scenario:
The system holds various details about members. Among them there is a field for relationships, and you can switch to the related record by clicking a switch button. The issue is that after you switch to the relation record and come back out, the system keeps a lock on the relation record you switched into.
I am unable to post any code, as the application is massive and too complex to show the issue in a snippet, but if anyone can get their head around this you may be able to point me in the right direction. What would be the key things to check when investigating this kind of issue? Any help would be highly appreciated.
A persistent procedure is one which stays in memory after its initial RUN completes. (Similar to ancient "terminate and stay resident" procedures in DOS...)
The following is over-simplified but may help:
The body of the procedure can be thought of as the "constructor".
Code like that which you show is generally part of an internal procedure (a "method") that serves as the destructor.
Other procedures which know the handle of a persistent procedure can run its methods. "RUN xyz IN proc-handle." (You can avoid the need to know the handle by establishing a persistent procedure as a "session super procedure".)
Your locking problem is probably due to poor record scope. There is probably a reference to the record which is scoped to the procedure as a whole. Some method is then "borrowing" that record scope from the larger procedure and modifying it which requires a lock. The lock is then scoped to the procedure rather than the method.
One trick that I, personally, find helpful is to write internal procedures thusly:
procedure xyz:
    define input parameter cnum as integer no-undo.
    define input parameter cname as character no-undo.
    define buffer customer for customer. /* this is the "trick" */
    find customer exclusive-lock where customer.custNum = cnum.
    customer.custName = cname.
end.
The "define buffer" restricts all references to "customer" to that internal procedure (method). This prevents accidental record scope leaks.
Tom gave you a good description of persistent procedures and also pointed out the most likely cause of the issue (record scoping, not anything to do with persistent procedures).
I like to go one step further than Tom suggests and use strong scoping because:
1) The compiler will warn you about attempts to increase the scope
2) It works in both internal procedures and old style top down code
define buffer bfCustomer for Customer.
/*- a bfCustomer reference here gets a compiler error -*/
do for bfCustomer transaction:
    find first bfCustomer exclusive-lock no-error.
    assign bfCustomer.name = "testme".
end.
/*- a bfCustomer reference here gets a compiler error -*/
I have a Node.js web app with a route that marks an entity as deleted by flipping a boolean field in the database. This route returns that entity. Right now I have code that looks like this:
UPDATE entity SET is_deleted=true WHERE entity.id = ?
SELECT * FROM entity WHERE entity.id = ?
For the moment I can't use a RETURNING clause, for other reasons.
So I got into an argument with a colleague. I think that putting both the UPDATE and the SELECT inside a transaction is unnecessary, because we are not doing anything significant with the data, just returning it. As a user of the app I would expect the returned data to be as fresh as possible, meaning that I would get the same results on a page refresh.
My question is: what is the best practice regarding reading data after a write? Do you always wrap the read and the write inside a transaction? Or does it depend?
Well, for performance reasons you want to keep your transactions as small and quick as possible. This will minimize the chance of potential locks and deadlocks that could bring your application to its knees. As such, unless there is a very good reason to do so, keep your SELECT statements outside of the transaction. This is especially important if you need to execute a long-running SELECT statement. By putting the SELECT inside the transaction, you keep the update locks much longer than needed.
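To illustrate, here is a minimal sketch of that advice in TypeScript, assuming node-postgres ("pg") and the entity table from the question; the markDeleted helper and the connection setup are invented for the example. Each single statement is auto-committed, so the UPDATE's lock is released before the SELECT runs:
import { Pool } from "pg";

const pool = new Pool(); // connection settings taken from the environment

// Hypothetical route helper: flip the flag, then read the row back.
async function markDeleted(id: number) {
    // Single-statement UPDATE: auto-committed, so its lock is held only for
    // the duration of this statement.
    await pool.query("UPDATE entity SET is_deleted = true WHERE id = $1", [id]);

    // The read runs after the write has committed; no transaction is needed
    // and no update lock is kept while we fetch the row.
    const { rows } = await pool.query("SELECT * FROM entity WHERE id = $1", [id]);
    return rows[0];
}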
We need to create an atomic routine in our MongoDB database.
We need to iterate through a collection, find the highest number given a field from all documents in the collection, then increment it. We are working with some legacy data that we need to integrate, otherwise we'd have some atomic sequence already in place.
How can I create stored JS or a stored procedure in MongoDB that can run a whole routine atomically?
I have found some information, but nothing looks particularly clear to me:
Called a stored javascript function from Mongoose?
https://groups.google.com/forum/#!topic/mongoose-orm/sPN3wfDstX4
https://github.com/mongoosejs/mongoose-function
Where can I find good information how to actually write an atomic/blocking stored procedure that runs in MongoDB, and how to actually invoke the stored procedure from the application?
(summarizing the comments above)
At the moment, there is nothing in mongodb that will allow you to run a piece of arbitrary logic (including, for example, multiple queries to gather data) atomically.
The best atomic thing that mongodb has to offer is findAndModify. Its atomicity is naturally restricted to only one document and you have a pretty limited list of update operators (that is, you can't even use the fields of the document, same as regular updates).
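To make that concrete, here is a minimal sketch in TypeScript using the official mongodb Node driver (v6-style API, where findOneAndUpdate resolves to the updated document); the "counters" collection, the "legacySeq" id and the "seq" field are invented names, and you would seed the counter document once with the current maximum taken from the legacy data:
import { MongoClient } from "mongodb";

// Hypothetical counter document, seeded once with the legacy maximum.
interface Counter {
    _id: string;
    seq: number;
}

async function nextSequence(client: MongoClient): Promise<number> {
    const counters = client.db("mydb").collection<Counter>("counters");

    // Atomic on this single document: concurrent callers each get a distinct value.
    const doc = await counters.findOneAndUpdate(
        { _id: "legacySeq" },
        { $inc: { seq: 1 } },
        { upsert: true, returnDocument: "after" }
    );
    return doc!.seq;
}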
It is somewhat possible using an application-level lock: the application inserts or modifies a special lock document, which will signal to other parts of the application "I'm using/updating this, please refrain from touching it". After the operation is completed, application releases the lock, so it's now free to be re-acquired by someone else. Of course, this relies entirely on actors respecting the lock agreements, which is not very reliable, to put it mildly.
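For completeness, a sketch of that application-level lock idea, again in TypeScript with the Node driver (the collection and field names are invented for the example); the fixed _id means only one caller can insert the lock document at a time, and error code 11000 is the duplicate-key error:
import { Collection, MongoServerError } from "mongodb";

interface LockDoc {
    _id: string;      // lock name
    acquiredAt: Date;
}

async function withLock(locks: Collection<LockDoc>, name: string, work: () => Promise<void>) {
    try {
        // Inserting a document with a fixed _id acts as "acquire": a second
        // caller gets a duplicate-key error instead of a second lock.
        await locks.insertOne({ _id: name, acquiredAt: new Date() });
    } catch (err) {
        if (err instanceof MongoServerError && err.code === 11000) {
            throw new Error(`lock "${name}" is already held`);
        }
        throw err;
    }
    try {
        await work();
    } finally {
        // "Release" by deleting the lock document; as noted above, this only
        // works if every actor goes through the same locking convention.
        await locks.deleteOne({ _id: name });
    }
}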
I would like to delete an OVM object (and its children) so that I can recreate it with different configs. Is there a way to do this in OVM?
Currently, when I try to create the object a second time with new, I get the following VCS runtime error:
[CLDEXT] Cannot set 'ap' as a child of 'instance', which already has a child by that name.
I realize that I can simply use a different name to "re-create" the instance, but then I'll still have the old instance sitting around and soaking up memory.
OVM is just a SystemVerilog library. That means that all the rules of SystemVerilog apply to OVM. So, yes, you can use new() with OVM. Sometimes it's preferable to use the factory, and sometimes it's preferable to use new() (that's a topic for a different discussion).
SystemVerilog does not have a delete operator or a destructor like C++. Instead, when you are done with an object you just remove all references to it and the garbage collector will clean up the memory. Here's a quote from the SystemVerilog reference manual (IEEE 1800-2009) section 8.7:
SystemVerilog does not require the complex memory allocation and deallocation of C++. Construction of an object is straightforward; and garbage collection, as in Java, is implicit and automatic. There can be no memory leaks or other subtle behaviors, which are so often the bane of C++ programmers.
It's not entirely true that you cannot have a memory leak. You can forget to remove all references to an object and the garbage collector will not know to pick it up. However, you do not have to worry about memory with the same detail as you do in C++.
The particular error you received with id CLDEXT is from ovm_component class. From the message it appears that you attempted to create two components with the same name and the same parent. Components in OVM are typically static. That is, you create and elaborate them once, usually at time 0, and don't delete or add components after that. Because of this model there are no methods in ovm_component to remove child components. So there really isn't a good way to replace a component once it has been instantiated. By the way, this only applies to components. Other types of objects can be re-allocated.
If you feel that you need to replace a component with a different one after time 0 you should re-think the architecture of your testbench. There are probably betters ways to accomplish what you are trying to do without replacing components.
I have only UVM experience, but I think OVM is similar. I would have liked to reply to @Victor Lyuboslavsky's comment, but I can't add comments.
The issue is with the name 'ap' which evidently has already been used for a child of 'instance'. Use this code instead.
static int instNum = 0;
instance_ap = my_ovm_extended_class::type_id::create(
    $sformatf("ap%0d", instNum++), this);
The first time an object is created and the handle assigned to 'instance_ap', the object will be named 'instance.ap0'. The next time the code executes, an object called 'instance.ap1' is created, and so on.
As mentioned by other posters this ought to be done only for non-component objects, and components should be static and must be created during/before the build phase & connected to each other during/before the connect phase.
Try assigning null to the object before calling new again.
Unless I see someone else answer this question, I'd say there is no easy way to deallocate objects in OVM framework.
OVM testbench components are static and are created when the testbench is built.
When the environment class is instantiated, it will call new(create), build, connect, end_of_elaboration, start_of_simulation, run and check on all components.
By the end of the environment build phase all components must be created.
By the end of the environment connect phase all components must have their TLM ports connected.
Because of these requirements, you cannot change components (or port connections) except during the corresponding phase.
As part of the static nature of the testbench environment, every component must have a unique get_full_name() response. This is because string lookups are used to identify components in the hierarchy.
Assigning an object to null should deallocate memory. If there is no other handle pointing to that memory location, then it should get reclaimed.
I want to write a few asserts around a complicated multithreaded piece of code.
Is there some way to do a
assert(GetCurrentThreadId() == ThreadOfCriticalSection(sec));
If you want to do this properly, I think you have to use a wrapper object around your critical sections which will track which thread (if any) owns each CS in debug builds.
i.e. Rather than call EnterCriticalSection directly, you'd call a method on your wrapper which did the EnterCriticalSection and then, when it succeeded, stored the result of GetCurrentThreadId in a DWORD which the asserts would check. Another method would zero that thread ID DWORD before calling LeaveCriticalSection.
(In release builds, the wrapper would probably omit the extra stuff and just call Enter/LeaveCriticalSection.)
As Casablanca points out, the owner thread ID is within the current CRITICAL_SECTION structure, so using a wrapper like I suggest would be storing redundant information. But, as Casablanca also points out, the CRITICAL_SECTION structure is not part of any API contract and could change. (In fact, it has changed in past Windows versions.)
Knowing the internal structure is useful for debugging but should not be used in production code.
So which method you use depends on how "proper" you want your solution to be. If you just want some temporary asserts for tracking down problems today, on the current version of Windows, then using the CRITICAL_SECTION fields directly seems reasonable to me. Just don't expect those asserts to be valid forever. If you want something that will last longer, use a wrapper.
(Another advantage of using a wrapper is that you'll get RAII. i.e. The wrapper's constructor and destructor will take care of the InitializeCriticalSection and DeleteCriticalSection calls so you no longer have to worry about them. Speaking of which, I find it extremely useful to have a helper object which enters a CS on construction and then automatically leaves it on destruction. No more critical sections accidentally left locked because a function had an early return hidden in the middle of it...)
As far as I know, there is no documented way to get this information. If you look at the headers, the CRITICAL_SECTION structure contains a thread handle, but I wouldn't rely on such information because internal structures could change without notice. A better way would be to maintain this information yourself whenever a thread enters/exits the critical section.
Your requirement doesn't make sense. If your current thread is not the thread which is in the critical section, then the code within the current thread won't be running, it'll be blocked when trying to lock the critical section.
If your thread is actually inside the critical section, then your assertion will always be true. If it's not, your assertion will always be false!
So what I mean is, assuming you're able to track which thread is in the critical section, if you place your assertion inside the critical section code, it'll always be true. If you place it outside, it'll always be false.
This is a bit of a long question, but here we go. There is a version of FormatDateTime that is said to be thread safe in that you use
GetLocaleFormatSettings(3081, FormatSettings);
to get a value and then you can use it like so;
FormatDateTime('yyyy', 0, FormatSettings);
Now imagine two timers, one using TTimer (interval say 1000ms) and then another timer created like so (10ms interval);
CreateTimerQueueTimer
(
FQueueTimer,
0,
TimerCallback,
nil,
10,
10,
WT_EXECUTEINTIMERTHREAD
);
Now the gnarly bit: if in the callback and also in the timer event you have the following code;
for i := 1 to 10000 do
begin
FormatDateTime('yyyy', 0, FormatSettings);
end;
Note there is no assignment. This produces access violations, almost immediately, sometimes 20 minutes later, whatever, at random places. Now if you write that code in C++Builder it never crashes. The header conversions we are using are the JEDI JwaXXXX ones. Even if we put locks around the code in the Delphi version, it only delays the inevitable. We've looked at the original C header files and it all looks good. Is there some different way that C++ uses the Delphi runtime? The thread-safe version of FormatDateTime looks to be re-entrant. Any ideas or thoughts from anyone who may have seen this before?
UPDATE:
To narrow this down a bit: FormatSettings is passed in as a const, so does it matter if they use the same copy? (As it turns out, passing a local version within the function call yields the same problem.) Also, the version of FormatDateTime that takes the FormatSettings doesn't call GetThreadLocale, because it already has the locale information in the FormatSettings structure (I double-checked by stepping through the code).
I made mention of no assignment to make it clear that no shared storage is being accessed, so no locking is required.
WT_EXECUTEINTIMERTHREAD is used to simplify the problem. I was under the impression you should only use it for very short tasks because it may mean it'll miss the next interval if it is running something long?
If you use a plain old TThread the problem doesn't occur. What I am getting at here I suppose is that using a TThread or a TTimer works but using a thread created outside the VCL doesn't, that's why I asked if there was a difference in the way C++ Builder uses the VCL/Delphi RTL.
As an aside, this code (as mentioned before) also fails after a while, but takes longer;
CS := TCriticalSection.Create;
CS.Acquire;
for i := 1 to LoopCount do
begin
FormatDateTime('yyyy', 0, FormatSettings);
end;
CS.Release;
And now for the bit I really don't understand, I wrote this as suggested;
function ReturnAString: string;
begin
Result := 'Test';
UniqueString(Result);
end;
and then inside each type of timer the code is;
for i := 1 to 10000 do
begin
ReturnAString;
end;
This causes the same kinds of failures. As I said before, the fault is never in the same place inside the CPU window, etc. Sometimes it's an access violation and sometimes it might be an invalid pointer operation. I am using Delphi 2009, btw.
UPDATE 2:
Roddy (below) points out that the OnTimer event (and unfortunately also Winsock, i.e. TClientSocket) uses the Windows message pump (as an aside, it would be nice to have some Winsock2 components using IOCP and overlapped I/O), hence the push to get away from it. However, does anyone know how to see what sort of thread local storage is set up by CreateTimerQueueTimer?
Thanks for taking the time to think and answer this problem.
I am not sure if it is good form to post an "Answer" to my own question, but it seemed logical; let me know if that is uncool.
I think I have found the problem. The thread local storage idea led me to follow a bunch of leads, and I found this magical line;
IsMultiThread := True;
From the help;
"IsMultiThread is set to true to indicate that the memory manager should support multiple threads. IsMultiThread is set to true by BeginThread and class factories."
This of course is not set when a single main VCL thread is using a TTimer; however, it is set for you when you use TThread. If I set it manually, the problem goes away.
In C++Builder I do not use a TThread, but it appears, judging by the following code;
if (IsMultiThread) {
ShowMessage("IsMultiThread is True!");
}
that it is set for you somewhere automatically.
I am really glad for people's input, which helped me find this, and I hope it might help someone else.
As DateTimeToString (which FormatDateTime calls) uses GetThreadLocale, you may wish to try having a local FormatSettings variable for each thread, maybe even setting up FormatSettings in a local variable before the loop.
It may also be the WT_EXECUTEINTIMERTHREAD parameter which causes this. Note that it states it should only be used for very short tasks.
If the problem persists the problem may actually be elsewhere, which was my first hunch when I saw this but I don't have enough information about what the code does to really determine that.
Details about where the access violation occurs may help.
Are you sure this actually has anything to do with FormatDateTime? You made a point of mentioning that there is no assignment statement there; is that an important aspect of your question? What happens if you call some other string-returning function instead? (Make sure it's not a constant string. Write your own function that calls UniqueString(Result) before returning.)
Is the FormatSettings variable thread-specific? That's the point of having the extra parameter for FormatDateTime, so each thread has its own private copy that is guaranteed not to be modified by any other thread while the function is active.
Is the timer queue important to this question? Or do you get the same results when you use a plain old TThread and run your loop in the Execute method?
You did warn that it was a long question, but it seems there are a few things you could do to make it smaller, to narrow down the scope of the issue.
I wonder if the RTL/VCL calls you're making are expecting access to some thread-local storage (TLS) variables that aren't correctly set up when you invoke your code via the timer queue?
This isn't the answer to your problem, but are you aware that TTimer OnTimer events just run as part of the normal message loop in the main VCL thread?
You found your answer: IsMultiThread. This has to be set any time you revert to using the API to create threads. From MSDN: CreateTimerQueueTimer creates a thread pool to handle this functionality, so you have an outside thread working with the main VCL thread with no protection. (Note: your CS.Acquire/Release doesn't do anything at all unless other parts of the code respect this lock.)
Re. your last question about Winsock and overlapping I/O: You should look closely at Indy.
Indy uses blocking I/O, and is a great choice when you want high-performance network I/O regardless of what the main thread is doing. Now that you've cracked the multi-threading issue, you should just create another thread (or more) that uses Indy to handle your I/O.
The problem with Indy is that if you need many connections it's not efficient at all. It requires one thread per connection (blocking I/O), which doesn't scale at all; hence the benefit of IOCP and overlapped I/O, which is pretty much the only scalable approach on Windows.
For update 2:
There are free IOCP socket components here: http://www.torry.net/authorsmore.php?id=7131 (source code included)
"By Naberegnyh Sergey N. High performance socket server based on Windows Completion Port and with using Windows Socket Extensions. IPv6 supported."
I found it while looking for better components/libraries to re-architect my little instant messaging server. I haven't tried it yet, but it looks well coded as a first impression.