OpenCMIS TCK Basics Test Group never ends

I have developed a CMIS server for a custom repository, and when I run the TCK tests on it, the Root Folder Test takes forever; I have never been able to wait until the end of it.
The blame is actually on my getObjectParents implementation. When I put a breakpoint there, I can see that the TCK tests have created a lot of documents in the root folder, and they keep calling getObjectParents for each of them. It takes so long that I have never managed to wait until the end to see what happens next! I don't think there's an infinite loop, because every time I pause I stop at my getObjectParents breakpoint with a different document id (I tracked at least about 50 of them).
As a P.S.: if I intentionally break my implementation of getObjectParents and throw a CmisRuntimeException, the TCK tests run and pass OK.
Any similar experience or solution is really appreciated.

I don't think this is a TCK issue.
Have you checked how long your getObjectParents implementation takes to respond for a single document?
Some clients call this method frequently. If it consistently takes too long (more than about 2 seconds), clients may not be able to work with your repository.
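If you want a quick number before reaching for a profiler, one option is to wrap the lookup in a small timer and log the elapsed time per call. A minimal sketch, not OpenCMIS API; CallTimer and fetchParentsFromRepository are made-up names standing in for your own lookup code:

import java.util.function.Supplier;

/** Minimal timing helper to see how long a repository call takes (names are illustrative). */
public final class CallTimer {

    public static <T> T timed(String label, Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        // Example: wrap whatever your getObjectParents implementation actually does.
        Object parents = timed("getObjectParents", () -> fetchParentsFromRepository("doc-123"));
        System.out.println(parents);
    }

    // Placeholder for the real repository lookup (hypothetical).
    private static Object fetchParentsFromRepository(String objectId) {
        return "parents of " + objectId;
    }
}

If a single call already takes hundreds of milliseconds, multiplying that by the number of documents the TCK creates explains the apparent hang.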

Related

Run multiple copies of Speedy or PersistentPerl to be called from Tomcat

I have a modern webapp running under Tomcat, which often needs to call some legacy Perl code to get some results. Right now, we wrap these calls in Runtime.getRuntime().exec(), which is working fine.
However, as the webapp gets busier, we are noticing that the Perl is often timing out, and we need to control this.
I am using commons-pool to ensure that only X number of copies can be run at a time, and threads will queue up nicely for a perl instance when they need one, timing out after Y seconds and returning an error (this is fine, the client will just retry).
However we still have the problem that Perl takes a long time to start up, interpret the script, execute and return. At busy times we are doing this 30-50 times per second. It's a beefy machine but it's starting to struggle.
I have read up on Speedy and PersistentPerl and am considering holding open a copy of this in memory for each object in my pool, so that we do not need to open and close the Perl each time.
Is this a good idea? Any tips for how to go about doing this?
Those approaches should reduce the overhead from the start-up time of your script. If the script is something that can be run as a CGI program, then you might be better off making it work with Plack and running it with a PSGI server. Your Tomcat application could collect and send the request parameters to your script and/or "web application" running in the background.
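On the Tomcat side, that would mean replacing the Runtime.getRuntime().exec() call with an HTTP request to the persistent Perl process. A minimal sketch, assuming the Plack/PSGI wrapper listens on a local port; the URL and parameter encoding are assumptions, not part of the original setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class LegacyScriptClient {

    // Assumed endpoint where the PSGI wrapper around the legacy script is listening.
    private static final URI SCRIPT_URI = URI.create("http://localhost:5000/legacy-report");

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    /** Sends the request parameters as a form-encoded POST and returns the script's output. */
    public String callLegacyScript(String formEncodedParams) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(SCRIPT_URI)
                .timeout(Duration.ofSeconds(10))   // bound the wait instead of letting calls pile up
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(formEncodedParams))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}

The pooling and timeout behaviour you already have with commons-pool would then move into the HTTP client's timeouts and the PSGI server's worker count.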

How to find the time when a Puppet manifest is executed

I'm wondering if anyone knows a good way to get the date and time when a portion of code in a Puppet manifest is actually executed. Sometimes my manifests take a long time to run, and I need to schedule a task to occur soon after the end of the run, no matter when that occurs.
I have tried the time() function, setting a variable using generate() (using the date function on the Puppet master), and even creating a custom fact, but everything I've tried gets evaluated when the manifests are parsed on the server, rather than when they actually execute on the client.
Any ideas? The clients are all Windows, FWIW.
Thanks in advance!
I am not sure I understand what you mean, but you can't get this information during catalog compilation (obviously), so you can't use it to change the way the catalog will be applied.
If you need to trigger another process on the same host, then you can use whatever IPC mechanism you have available. You can exec anything, and have it happen just after any other resource is applied, so it is just a matter of finding the proper command.

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying the specific individual test
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in a particular state. When the scenario runs individually, the system is in that state, so it passes. But when you run all the scenarios, the scenarios that ran before it change that state, and so it fails.
You should solve this by making your scenarios completely independent. The work done by one scenario shouldn't influence the results of other scenarios. This is strongly encouraged in The Cucumber Book and in Specification by Example.
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that sent a POST request. I had a bug in the JavaScript, and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), and Cucumber reported each step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right. I had carry over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else that didn't realise they had carry over from an earlier feature.

Core Data: awakeFromFetch Not Getting Called For Unsaved Contexts

First, let me illustrate the steps to reproduce the 'bug'.
Create a new NSManagedObject.
Fault the managed object using refreshObject:mergeChanges:NO - At this time, the didTurnIntoFault notification is received by the object.
'Unfault' the object again by using willAccessValueForKey:nil - At this time, the awakeFromFetch notification is supposed to be received BUT NO NOTIFICATION COMES. All code relying on it firing fails, and the bread in the toaster burns :)
The interesting thing is that if I 'save' the managed object context before performing step 2, everything works okay and the awakeFromFetch notification comes as expected.
Currently the workaround I am using is saving the context at regular intervals, but that is more of a hack, since we actually only need to save the context once (when the application terminates).
Googling has so far returned nothing concrete, except a gentleman here that seems to have run into the same problem.
So my question is twofold: is this really a bug, and if it is, what other workarounds do you suggest?
EDIT: THIS IS NOT A BUG, THAT WAS JUST ME BEING STUPID. See, if I turn an object into a fault without saving it, then there is no history of the object to maintain. So in this case (i.e. for an unsaved object) there is no logical concept of awakeFromFetch (since it was never saved). Please do let me know if I am still getting it all mixed up.
Anyway, it turns out my 'actual' problem was somewhere else, hidden well behind two gotchas:
If you use refreshObject:mergeChanges:NO to turn an object into a fault in order to break any retain cycles that Core Data might have established, you have to do the same for the child objects as well. Each child object that might have gotten involved in a retain cycle with someone else has to be faulted manually. What I had (wrongly) assumed was that faulting the parent would automatically break the cycles among the children.
The reverseTransform function of your custom transformers will NOT be called when such an object (i.e. one that has been forcefully faulted) is resurrected by firing a fault on it. This, in my eyes, IS a bug, since there is no other way for me to know when the object is alive again. Anyway, the workaround in this case was to set the staleness interval to an arbitrarily low value so that Core Data skips its cache and always calls the reverseTransform function to resurrect the object. Better suggestions are welcome.
it really has been one of those days :)

Using TDD to drive out thread-safe code

What's a good way to leverage TDD to drive out thread-safe code? For example, say I have a factory method that utilizes lazy initialization to create only one instance of a class, and return it thereafter:
private TextLineEncoder textLineEncoder;
...
public ProtocolEncoder getEncoder() throws Exception {
    if (textLineEncoder == null)
        textLineEncoder = new TextLineEncoder();
    return textLineEncoder;
}
Now, I want to write a test in good TDD fashion that forces me to make this code thread-safe. Specifically, when two threads call this method at the same time, I don't want to create two instances and discard one. This is easily done, but how can I write a test that makes me do it?
I'm asking this in Java, but the answer should be more broadly applicable.
You could inject a "provider" (a really simple factory) that is responsible for just this line:
textLineEncoder = new TextLineEncoder();
Then your test would inject a really slow implementation of the provider, so that the two threads in the test are much more likely to collide. You could go as far as to have the first thread wait inside the provider on a Semaphore that is released by the second thread; the test then passes only if that wait times out, because a thread-safe factory never lets the second thread reach the provider. By giving the first thread a head start you can make sure it is already waiting before the second one arrives.
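A sketch of the simpler variant of that test (slow provider plus a check that only one instance was ever created). Encoder, EncoderProvider, and EncoderFactory are made-up names standing in for TextLineEncoder and your factory, and a real test would use JUnit assertions instead of a main method:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class LazyEncoderFactoryTest {

    static class Encoder {}                  // stand-in for TextLineEncoder

    interface EncoderProvider {              // the injected "provider" described above
        Encoder create();
    }

    static class EncoderFactory {            // simplified factory under test
        private final EncoderProvider provider;
        private Encoder encoder;

        EncoderFactory(EncoderProvider provider) {
            this.provider = provider;
        }

        // synchronized is the fix this test is meant to force; remove it and the
        // checks below start failing intermittently.
        synchronized Encoder getEncoder() {
            if (encoder == null) {
                encoder = provider.create();
            }
            return encoder;
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger creations = new AtomicInteger();
        // Deliberately slow provider: widens the window between the null check and the assignment.
        EncoderProvider slowProvider = () -> {
            creations.incrementAndGet();
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return new Encoder();
        };

        EncoderFactory factory = new EncoderFactory(slowProvider);
        CountDownLatch start = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Future<Encoder> first = pool.submit(() -> { start.await(); return factory.getEncoder(); });
        Future<Encoder> second = pool.submit(() -> { start.await(); return factory.getEncoder(); });
        start.countDown();                   // release both threads at (almost) the same time

        boolean sameInstance = first.get() == second.get();
        boolean createdOnce = creations.get() == 1;
        pool.shutdown();

        System.out.println("same instance: " + sameInstance + ", provider called once: " + createdOnce);
        if (!sameInstance || !createdOnce) {
            throw new AssertionError("getEncoder() is not thread-safe");
        }
    }
}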
It's hard, though possible - possibly harder than it's worth. Known solutions involve instrumenting the code under test. The discussion here, "Extreme Programming Challenge Fourteen" is worth sifting through.
In the book Clean Code there are some tips on how to test concurrent code. One tip that has helped me find concurrency bugs is to run more tests concurrently than the CPU has cores.
In my project, running the tests takes about 2 seconds on my quad core machine. When I want to test the concurrent parts (there are some tests for that), I hold down in IntelliJ IDEA the hotkey for running all tests, until I see in the status bar that 20, 50 or 100 test runs are in execution. I follow in Windows Task Manager the CPU and memory usage, to find out when all the test runs have finished executing (memory usage goes up by 1-2 GB when they all are running and then slowly goes back down).
Then I close one by one all the test run output dialogs, and check that there were no failures. Sometimes there are failed tests or tests which are in deadlock, and then I investigate them until I find the bug and have fixed it. That has helped me to find a couple of nasty concurrency bugs. The most important thing, when facing an exception/deadlock that should not have happened, is always assuming that the code is broken, and investigating the reason ruthlessly and fixing it. There are no cosmic rays which cause programs to crash randomly - bugs in code cause programs to crash.
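A rough way to reproduce that effect outside the IDE is to oversubscribe a thread pool with repeated runs of the same test body. A minimal sketch; runOneTestPass() is a hypothetical stand-in for whatever your concurrency test actually exercises:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RepeatRunner {

    // Hypothetical stand-in for the body of the concurrency test you want to stress.
    static void runOneTestPass() throws Exception {
        // ... exercise the code under test here ...
    }

    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        int threads = cores * 4;             // oversubscribe the CPU to force more context switches
        int passes = 100;                    // comparable to kicking off 100 test runs by hand

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> results = new ArrayList<>();
        for (int i = 0; i < passes; i++) {
            results.add(pool.submit(() -> { runOneTestPass(); return null; }));
        }
        for (Future<?> result : results) {
            result.get();                    // rethrows the first failure (a deadlock shows up as this call hanging)
        }
        pool.shutdown();
        System.out.println("All " + passes + " passes completed without failures");
    }
}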
There are also frameworks such as http://www.alphaworks.ibm.com/tech/contest which use bytecode manipulation to force the code to do more thread switching, thus increasing the probability of making concurrency bugs visible.
When I test-drove an implementation that needed to be thread-safe recently, I came up with the solution I provided as an answer to this question. Hope that helps even though there are no tests there. Hope the link is OK rather than duplicating the answer...
Chapter 12 of Java Concurrency in Practice is called "Testing Concurrent Programs". It documents testing for safety and liveness, but says this is a hard subject. I am not sure this problem is solvable by the tools of that chapter.
Just off the top of my head, could you compare the instances returned to see if they are indeed the same instance or if they are different? That's probably where I would start with C#; I would imagine you can do the same in Java.

Resources