Smalltalk has automatic garbage collection, right? So, is that to say that I can do something like the following with no unexpected side effects?
transactions := Set new.
transactions add: tran1.
transactions add: tran2.
transactions add: tran3.
transactions add: tran4.
...
transactions add: tran899.
transactions add: tran900.
"~~ Do some stuff ~~"
transactions post.
transactions := Set new.
Yes: when you reassign the transactions variable, the old Set becomes unreachable and the garbage collector will reclaim it (along with any transactions that nothing else references). One nitpick: most Smalltalk VMs use a tracing collector rather than reference counting, so reclamation happens at the next collection rather than instantly, but it will happen.
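The same point can be sketched in Python (used here only because its weakref and gc modules make reachability observable; the Transaction class and names are illustrative, not from the question):

```python
import gc
import weakref

class Transaction:
    pass

# Hold the transactions in a set, as in the Smalltalk snippet.
transactions = {Transaction() for _ in range(3)}
probe = weakref.ref(next(iter(transactions)))  # watch one element

transactions = set()    # rebind, like `transactions := Set new.`
gc.collect()            # force a collection so the effect is visible now
assert probe() is None  # the old set and its elements are unreachable
```

The rebinding alone is enough; no manual cleanup of the old Set is required.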
For the multithreaded environment in our application we implemented Oracle's SKIP LOCKED so that no two threads pick up the same database record for processing (we update the status flag from 'WAITING' to 'WORKING').
Now we have a modification: two queued records with the same ID (workid) should not be picked up at the same time (i.e. a record's status should not be updated to WORKING if another record with the same workid is already WORKING).
Can someone suggest how this can be achieved?
Below is the current procedure, in which a single record is locked without the workid comparison.
create or replace PROCEDURE DEQUEUE_MANAGER(
    v_managerName    IN  Queue.manager%TYPE,
    v_workid         IN  VARCHAR2,
    v_key            OUT NUMBER,
    v_datatablekey   OUT Queue.DATA_TABLE_KEY%TYPE,
    v_tasktype       OUT Queue.TASK_TYPE%TYPE,
    v_no_of_attempts OUT Queue.ATTEMPT%TYPE,
    v_result         OUT NUMBER,
    v_error_desc     OUT VARCHAR2)
IS
    v_recordrow     Queue%ROWTYPE;
    v_queuestatus   VARCHAR2(255);
    v_updatedstatus VARCHAR2(255);
    CURSOR c IS
        SELECT *
        FROM   Queue
        WHERE  MANAGER = v_managerName
        AND    STATUS = v_queuestatus
        AND    workid = v_workid
        AND    DATE_RELEASE <= SYSDATE
        FOR UPDATE SKIP LOCKED;
BEGIN
    v_result         := -1;
    v_queuestatus    := 'WAITING';
    v_updatedstatus  := 'WORKING';
    v_tasktype       := '';
    v_datatablekey   := -1;
    v_key            := -1;
    v_error_desc     := 'No Data Found';
    v_no_of_attempts := 0;
    OPEN c;
    FOR i IN 1..1 LOOP
        FETCH c INTO v_recordrow;
        EXIT WHEN c%NOTFOUND;
        SELECT v_recordrow.key INTO v_key
        FROM   Queue
        WHERE  key = v_recordrow.key
        FOR UPDATE;
        UPDATE Queue
        SET    STATUS = v_updatedstatus
        WHERE  KEY = v_recordrow.key;
        COMMIT;
        v_datatablekey   := v_recordrow.data_table_key;
        v_tasktype       := v_recordrow.task_type;
        v_no_of_attempts := v_recordrow.attempt;
        v_result         := 0;
        IF (v_no_of_attempts IS NULL) THEN
            v_no_of_attempts := 0;
        END IF;
    END LOOP;
    CLOSE c;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        v_datatablekey   := -1;
        v_tasktype       := '';
        v_key            := -1;
        v_no_of_attempts := 0;
        v_result         := -1;
        v_error_desc     := 'No Rows Found';
    WHEN OTHERS THEN
        DBMS_OUTPUT.put_line('Exception Occurred');
        v_datatablekey   := 0;
        v_tasktype       := '';
        v_key            := 0;
        v_no_of_attempts := 0;
        v_result         := -2;
        v_error_desc     := SQLERRM;
        ROLLBACK;
END;
The purpose of the FOR UPDATE syntax is to lock records we want to update at some point in the future. We want to get the records now so that we can be sure our subsequent update won't fail because another session has locked the records.
That's not what your code does. Instead it selects the record, updates it and then issues a commit. The commit ends the transaction, which releases the lock. Your process would work the same without the FOR UPDATE.
Now we have your additional requirement: if a queued record for a given workid is being processed, no other record for the same workid can be processed by another session. You say that all instances of a workid have the same values of queue status and manager. So that means the initial SELECT ... FOR UPDATE grabs all the records you want to lock. The snag is that SKIP LOCKED allows other sessions to update any other records for that workid (only the first record is actually locked, because that's the only one you've updated). Not that it matters, as the commit releases those locks.
The simplest solution would be to remove the SKIP LOCKED and the COMMIT. That would keep all the related records locked up until your processing transaction commits. But this may create further problems elsewhere.
Concurrent programming is really hard to get right. It's an architectural problem; you can't solve it at the level of an individual program unit.
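The intent of "claim one workid at a time, skip busy ones" can be sketched outside the database. This is a conceptual Python analogy (not Oracle code, and the names are invented): one lock per workid, acquired non-blocking so a busy workid is skipped rather than waited on, which is the behavior the modified procedure needs at workid granularity:

```python
import threading

workid_locks = {}
registry_lock = threading.Lock()

def try_dequeue(workid):
    """Claim a workid, or skip it if another worker already holds it."""
    with registry_lock:
        lock = workid_locks.setdefault(workid, threading.Lock())
    if lock.acquire(blocking=False):   # non-blocking: skip instead of waiting
        return lock                    # caller processes, then calls release()
    return None                        # busy workid: skipped

lock = try_dequeue("W1")
assert lock is not None               # first worker claims workid W1
assert try_dequeue("W1") is None      # a second worker skips the busy workid
lock.release()
assert try_dequeue("W1") is not None  # claimable again after release
```

In the database, holding the lock until release corresponds to keeping the row locks until the processing transaction commits, which is why removing the early COMMIT matters.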
I have a simple scenario where I have attached a listener on a dynamic path in Firebase.
a
.. b
..... c (dynamic multiple nodes)
..... c1
......c2
ref.child(a).child(b).child(c).on('child_changed',onChildChange);
I am removing some c nodes per some conditions, so do I need to detach the listener from them, or will it be removed automatically?
Just the opposite of the .on:
ref.child(a).child(b).child(c).off('child_changed',onChildChange);
https://www.firebase.com/docs/web/api/query/off.html
You have to call remove on a reference to remove data.
firebase.database().ref('a/b/c').remove();
Child events are used to monitor data in a list; you can use these events to know when data has been added, modified or removed.
In your case you should use child_removed. Note that listening for this event does not itself remove any data. Whenever you work with lists, it is recommended that you use all three child events in conjunction.
firebase.database().ref('a/b/c').on('child_removed', function(data) {
//Data has been deleted, do something here!
});
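The underlying point, that deleting data and detaching a callback are two separate operations, can be sketched with a minimal observer registry in Python (the on/off names just mirror the Firebase API; this is not Firebase code):

```python
listeners = {}

def on(path, cb):
    listeners.setdefault(path, []).append(cb)

def off(path, cb):
    listeners[path].remove(cb)

def on_child_change(snapshot):
    print("changed:", snapshot)

on("a/b/c", on_child_change)
data = {"a/b/c": {"c1": 1, "c2": 2}}
data.pop("a/b/c")                              # deleting the data...
assert on_child_change in listeners["a/b/c"]   # ...does not detach the callback
off("a/b/c", on_child_change)                  # explicit detach, like .off()
assert listeners["a/b/c"] == []
```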
I try to use document functions like HAS, UNSET etc. (hopefully) as they are described in the documentation. Unfortunately they lead to syntax error 1501. I also see that they do NOT get highlighted in the AQL editor like the other keywords do.
Here is one example (which I also tested on the tutorial server):
FOR u IN users
LIMIT 1
UNSET(u, "birthday")
RETURN u
Does anybody see what's wrong?
An AQL function cannot appear at the top level of an AQL query. The only things allowed at the top level are statements such as FOR, FILTER, RETURN, LET, COLLECT, SORT, INSERT etc.
If a function should be executed, its return value should be captured in a LET statement for further processing, or, if no further processing is required, the function can be called in a RETURN expression, e.g.
FOR u IN users
LIMIT 1
RETURN UNSET(u, "birthday")
OK, OK ... after writing this I got it: One has to assign this to something. e.g.
FOR u IN users
LIMIT 1
LET tmp = UNSET(u, "birthday")
RETURN tmp
Sorry for posting it ... but I'll keep it in; maybe other beginners make the same mistake :-)
This may be helpful for other users: The UNSET function does not actually replace the document in the collection. To do this, you need to run
FOR u IN users
LIMIT 1
LET u_new = UNSET(u, "birthday")
REPLACE u WITH u_new IN users
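The semantics of UNSET, returning a copy without the attribute rather than modifying the stored document, can be mimicked with a small pure function in Python (an analogy, not ArangoDB code):

```python
def unset(doc, *attrs):
    """Return a copy of `doc` without the given attributes; the stored
    document is untouched, just like AQL's UNSET."""
    return {k: v for k, v in doc.items() if k not in attrs}

u = {"name": "Ann", "birthday": "1990-01-01"}
u_new = unset(u, "birthday")
assert u_new == {"name": "Ann"}
assert "birthday" in u   # original unchanged; a REPLACE is needed to persist
```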
I have two types of entities: Subjects and Correspondents. They're both related to each other via a to-many relationship. I want to have a single fetchRequest that I can pass to a NSFetchedResultsController which will have it return:
All of the subjects which have more than one correspondent.
All of the correspondents which have subjects that only they are a part of.
After trying a variety of things, I decided that it's not possible to make a single fetch that returns both Subjects and Correspondents, so I turned to StackOverflow and found someone else suggesting that you have a single entity which does nothing more than have relationships with the two entities you'd like to return.
So I created a third type of entity, which I called Folders, which each have an optional to-one relationship with a Subject and a Correspondent. It also has two attributes, hasCorrespondent and hasSubject, which are booleans keeping track of whether Subject or Correspondent are set.
So I wrote this predicate which returns the Folder entities:
(hasCorrespondent == 1 AND ANY correspondent.subjects.correspondents.#count == 1)
OR
(hasSubject == 1 AND subject.correspondents.#count >= 1)
The issue with this is I'm getting an error:
Terminating app due to uncaught exception 'NSInvalidArgumentException',
reason: 'Unsupported function expression count:(correspondent.subjects.correspondents)
So, any suggestions as to what I'm doing incorrectly? How can I accomplish what I'd like? Are there any additional details I should share?
UPDATE
With Martin's suggestion, I changed the offending portion to this:
SUBQUERY(correspondent.subjects, $s, $s.correspondents.#count == 1).#count > 0
But that generated a new error:
Keypath containing KVC aggregate where there shouldn't be one;
failed to handle $s.correspondents.#count
After googling around, I found suggestions to add a check that the collection being enumerated over had at least one object, but modifying the offending line to this didn't change my error messages (so as far as I can tell it did nothing):
correspondent.subjects.#count > 0 AND
SUBQUERY(correspondent.subjects, $s, $s.correspondents.#count == 1).#count > 0
I had a similar problem. I created an additional field, countSubentity, and update it whenever a subentity is added or removed. The predicate looks like:
[NSPredicate predicateWithFormat:@"SUBQUERY(subcategories, $s, $s.countSubentity > 0).#count > 0"];
This workaround has been less than ideal, but it seems to get the job done:
1 - I added an attribute to subject called count.
2 - I set (part of) my expression to
ANY correspondent.subjects.count == 1
Note that no SUBQUERY() was necessary for this workaround.
3 - Every time I modify a subject's correspondents set, I run
subject.count = @(subject.correspondents.count);
I'm still hoping for a better solution and will be happy to mark any (working) better solution as correct.
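The denormalized-count workaround boils down to mirroring the relationship's size into a stored attribute that predicates can compare directly. A Python sketch of the bookkeeping (class and method names are illustrative, not Core Data API):

```python
class Subject:
    """Mirror the relationship's size into a stored `count` attribute."""
    def __init__(self):
        self.correspondents = set()
        self.count = 0

    def add_correspondent(self, c):
        self.correspondents.add(c)
        self.count = len(self.correspondents)   # keep mirror in sync

    def remove_correspondent(self, c):
        self.correspondents.discard(c)
        self.count = len(self.correspondents)

s = Subject()
s.add_correspondent("alice")
s.add_correspondent("bob")
assert s.count == 2
s.remove_correspondent("bob")
assert s.count == 1
```

The fragility is visible here too: every mutation path must update the mirror, or the predicate silently goes stale.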
I've read a bit about CouchDB and I'm really intrigued by the fact that it's "append-only". I may be misunderstanding that, but as I understand it, it works a bit like this:
data is added at time t0 to the DB telling that a user with ID 1's name is "Cedrik Martin"
a query asking "what is the name of the user with ID 1?" returns "Cedrik Martin"
at time t1 an update is made to the DB telling: "User with ID 1's name is Cedric Martin" (changing the 'k' to a 'c').
a query asking again "what is the name of the user with ID 1" now returns "Cedric Martin"
It's a silly example, but it's because I'd like to understand something fundamental about CouchDB.
Given that the update was made by appending to the end of the DB, is it possible to query the DB "as it was at time t0", without doing anything special?
Can I ask CouchDB "What was the name of the user with ID 1 at time t0?" ?
EDIT the first answer is very interesting and so I've got a more precise question: as long as I'm not "compacting" a CouchDB, I can write queries that are somehow "referentially transparent" (i.e. they'll always produce the same result)? For example if I query for "document d at revision r", am I guaranteed to always get the same answer back as long as I'm not compacting the DB?
Perhaps the most common mistake made with CouchDB is to believe it provides a versioning system for your data. It does not.
Compaction removes all non-latest revisions of all documents and replication only replicates the latest revisions of any document. If you need historical versions, you must preserve them in your latest revision using any scheme that seems good to you.
"_rev" is, as noted, an unfortunate name, but no other word has been suggested that is any clearer. "_mvcc" and "_mvcc_token" have been suggested before. The issue with both is that any description of what's going on there will inevitably include "old versions remain on disk until compaction", which will still imply that it's a user versioning system.
To answer the question "Can I ask CouchDB "What was the name of the user with ID 1 at time t0?" ?", the short answer is "NO". The long answer is "YES, but then later it won't work", which is just another way of saying "NO". :)
As already said, it is technically possible, but you shouldn't count on it. It isn't only about compaction; it's also about replication, one of CouchDB's biggest strengths. But yes, if you never compact and you don't replicate, then you will always be able to fetch all previous versions of all documents. I think it will not work with queries, though; they can't work with older versions.
Basically, calling it "rev" was the biggest mistake in CouchDB's design, it should have been called "mvcc_token" or something like that -- it really only implements MVCC, it isn't meant to be used for versioning.
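The "works until compaction" behavior can be captured in a toy append-only store in Python (a conceptual model, not CouchDB's actual storage format): updates append a new (rev, value) pair, reads default to the latest, and compaction drops everything but the latest.

```python
store = {}

def put(doc_id, value):
    revs = store.setdefault(doc_id, [])
    revs.append((len(revs) + 1, value))    # append-only: new rev per update

def get(doc_id, rev=None):
    revs = store[doc_id]
    if rev is None:
        return revs[-1][1]                 # latest revision wins
    return dict(revs)[rev]                 # older revs readable pre-compaction

def compact():
    for doc_id in store:
        store[doc_id] = store[doc_id][-1:]  # keep only the latest revision

put("1", "Cedrik Martin")
put("1", "Cedric Martin")
assert get("1") == "Cedric Martin"
assert get("1", rev=1) == "Cedrik Martin"   # "time travel" works pre-compaction
compact()
assert get("1") == "Cedric Martin"          # rev 1 is no longer available
```

Asking for rev 1 after compact() raises an error, which is exactly why "YES, but then later it won't work" is really a "NO".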
Answer to the second question:
YES.
Changed data is always added to the tree with a higher revision number; an existing rev is never changed.
For your info:
The revision (1-abcdef) is built this way: 1 = the version number (here: first version); the second part is a hash over the document content (I'm not sure whether some more "salt" goes in there).
So the same doc content will always produce the same revision number (with the same setup of CouchDB), even on other machines, when at the same change level (1-, 2-, 3-).
Another way is: if you need to keep old versions, you can store documents inside a bigger doc:
{
  "id": "docHistoryContainer_5374",
  "doc_id": "5374",
  "versions": [
    {
      "v": 1,
      "date": [2012, 3, 15],
      "doc": { .... doc_content v1 .... }
    },
    {
      "v": 2,
      "date": [2012, 3, 16],
      "doc": { .... doc_content v2 .... }
    }
  ]
}
then you can ask for revisions:
View "byRev":
function (doc) {
  for (var curRev in doc.versions) {
    emit([doc.doc_id, doc.versions[curRev].v], doc.versions[curRev]);
  }
}
call:
/byRev?startkey=["5374"]&endkey=["5374",{}]
result:
{ "id": "docHistoryContainer_5374", "key": ["5374", 1], "value": { .... doc_content v1 .... } }
{ "id": "docHistoryContainer_5374", "key": ["5374", 2], "value": { .... doc_content v2 .... } }
Additionally, you can now also write a map function that emits the date in the key, so you can ask for revisions in a date range.
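The history-container pattern can be modeled in a few lines of Python (a sketch of the idea, not CouchDB code; field names follow the example above): versions live inside one document, and a map step emits [doc_id, v] keys so ranged lookups like startkey=["5374"] work.

```python
container = {
    "id": "docHistoryContainer_5374",
    "doc_id": "5374",
    "versions": [
        {"v": 1, "date": [2012, 3, 15], "doc": {"content": "v1"}},
        {"v": 2, "date": [2012, 3, 16], "doc": {"content": "v2"}},
    ],
}

def map_by_rev(doc):
    # Emit [doc_id, version] keys, like the byRev view above.
    for ver in doc["versions"]:
        yield ([doc["doc_id"], ver["v"]], ver["doc"])

rows = list(map_by_rev(container))
assert rows[0][0] == ["5374", 1]
assert rows[1][1] == {"content": "v2"}
```

Because the history lives in the latest revision of one document, it survives both compaction and replication, unlike _rev-based "time travel".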
t0 (t1, ...) is called a "revision" in CouchDB. Each time you change a document, the revision number increases.
A doc's old revisions are stored until you tell the database to "compact", i.e. until you decide you no longer want the old revisions.
Look at "Accessing Previous Revisions" in http://wiki.apache.org/couchdb/HTTP_Document_API