I have the following setup (on the Felix OSGi framework 4.4.0):
a bundle B with a DS component C, which has a reference R (aQute.bnd.annotation.component.Reference) to a service (provided by some other bundle).
When B is started, a new Component C is instantiated and the reference R is injected properly...
Then I just stop bundle B, expecting that, if I start it again, either:
(A) a new Component C' is instantiated and R is injected in C' or
(B) the existing component C is reused and R is injected in C.
What happens is a mixture of (A) and (B), which doesn't work:
a new Component C' is instantiated but R is injected in C, not C'.
My questions are:
Should I expect (A) or (B) to happen?
Or: should something else happen?
Is it - potentially - a bug in the framework?
Or: did I get something completely wrong in the first place?
The thing is, my code is too complicated for a simple example, but I need someone to point me in the right direction... In this particular case, I have trouble interpreting the OSGi spec on Declarative Services. Is it even defined whether a new instance (of component C) has to be created, or whether the old one may be reused?
Thanks in advance for any hints!
The correct behavior is (A). It sounds like you need to file a bug against your DS implementation (not the framework).
When a bundle is stopped, all its components must be deactivated and then discarded, never to be reused. So when the bundle is restarted, the DS implementation must create new component instances.
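If it helps to verify this, here is a minimal sketch (SomeService is a hypothetical stand-in for the referenced service) using the same aQute.bnd annotations as the question; logging the identity hash code in the lifecycle methods makes it easy to see whether the DS implementation hands you a fresh instance after each restart:

import aQute.bnd.annotation.component.Activate;
import aQute.bnd.annotation.component.Component;
import aQute.bnd.annotation.component.Deactivate;
import aQute.bnd.annotation.component.Reference;

@Component
public class C {

    private SomeService r; // the reference R from the question (hypothetical type)

    @Reference
    void setR(SomeService r) { this.r = r; }

    @Activate
    void activate() {
        // Must be invoked on a brand-new instance every time bundle B starts.
        System.out.println("activated " + System.identityHashCode(this));
    }

    @Deactivate
    void deactivate() {
        // After this, the instance must be discarded, never reused.
        System.out.println("deactivated " + System.identityHashCode(this));
    }
}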
I am new to Haxe and only plan to deploy for the web.
I have a class A which has a method named doThis(). Class B inherits from class A, and I override doThis() in class B. When I check in the debugger, class A's doThis() is called and then class B's doThis() is called as well.
My intuition is that, since I have overridden the method explicitly, the only way the parent's version should run is via a super.doThis() call, but it seems to happen automatically. I only want B's version of doThis() to run, not A's.
Any thoughts on why it behaves like this? I think I am missing something here.
Thanks!
Without any further information, I'd bet good money that you have your debugger breakpoints on the definition of doThis, when you meant to put them in the invocation of doThis (inside the function body).
Other possible (but less likely) reasons:
A macro function is inserting a super.doThis() call
A modified Haxe compiler or JS generator is emitting a super.doThis() call.
Unless the function you are overriding is the constructor function new, calling the parent (super) function is optional. Furthermore, you can stipulate when the parent function is called by adjusting when you call super.doThis().
To illustrate, here is code that only runs the child class's function on try.haxe. It sounds like you may have already tried a similar approach, so ensure that you aren't missing some code that you may not be aware is calling the super function.
I think you're doing the right thing.
Perhaps add some println calls to the parent function and the child function to make sure that what you describe is what actually happens.
In other words, make sure the debugger is not just playing with you.
I know how to add a Bean to a CDI container during AfterBeanDiscovery. My problem is that what I really need to do is the equivalent of adding a new producer method with the equivalent of a particularly qualified parameter.
That is, I'd like to somehow programmatically create several of these:
@Produces
@SomeQualifier("x")
private Foo makeFoo(@SomeQualifier("x") final FooMaker fm) {
  return fm.makeFoo();
}
...where the domain over which SomeQualifier's value element ranges is known only at AfterBeanDiscovery time. In other words, some other portable extension has installed two FooMaker instances into the container: FooMaker qualified by @SomeQualifier("x") and FooMaker qualified by @SomeQualifier("y"). Now I need to do the equivalent of making two producer methods to "match" them.
@Nonbinding is not an option; I want this resolution to take place at container startup, not at injection time.
I am aware of BeanManager's getProducerFactory method, but the dozens if not hundreds of lines of gymnastics I'd have to go through to add the right qualifier annotation on each AnnotatedParameter "reachable" from the AnnotatedMethod I'd have to create by hand (to avoid generics issues) make me think I'm way off the beaten path here.
Update: So in my extension, I have created a private static method that returns a Foo and has a FooMaker parameter. I've wrapped this in a hand-tooled AnnotatedMethod that reports @SomeQualifier("x") etc. in its getAnnotations() method, and also reports @SomeQualifier("x") etc. from its AnnotatedParameter's getAnnotations() method. Then I got a ProducerFactory from the BeanManager and fed that into a new Bean that I create, where I use it to implement the create and destroy methods. Everything compiles and so forth just fine.
(However, Weld (in particular) blows up with this usage, which leads me to think that I'm doing Really Bad Things™.)
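For what it's worth, if CDI 2.0 is available, the BeanConfigurator API reached from AfterBeanDiscovery.addBean() sidesteps the hand-tooled AnnotatedMethod/ProducerFactory machinery entirely. A minimal sketch, assuming a hypothetical SomeQualifier.Literal helper (an AnnotationLiteral) alongside the question's own Foo, FooMaker and SomeQualifier types:

import java.lang.annotation.Annotation;
import java.util.Arrays;
import javax.enterprise.context.Dependent;
import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.AfterBeanDiscovery;
import javax.enterprise.inject.spi.Extension;

public class FooExtension implements Extension {

    void addFooBeans(@Observes AfterBeanDiscovery abd) {
        // The domain of qualifier values is only known here, at
        // AfterBeanDiscovery time.
        for (String value : Arrays.asList("x", "y")) {
            Annotation q = SomeQualifier.Literal.of(value); // hypothetical literal helper
            abd.addBean()
               .scope(Dependent.class)
               .types(Foo.class)
               .qualifiers(q)
               // Resolve the FooMaker carrying the same qualifier and
               // delegate to it, mirroring the producer method above.
               .produceWith(instance ->
                   instance.select(FooMaker.class, q).get().makeFoo());
        }
    }
}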
I have one method that calls another @Cacheable method, like this:
public ItemDO findMethod2(long itemId) {
this.findMethod1(itemId);
...
}
@Cacheable(value = "Item", key="#itemId", unless="#result == null")
public ItemDO findMethod1(long itemId) {
...
}
The cache works well if I call findMethod1() directly. However, when I call findMethod2(), the cache on findMethod1() is totally ignored.
Could it be a trick by the JVM, which inlines findMethod1() into findMethod2()?
Does anyone come across similar issue?
Thanks!
It's no JVM trick, i.e. findMethod1() is not being inlined into findMethod2() or anything of that nature.
The problem is that your code is bypassing the proxy that Spring creates around your application class (containing findMethod1()) for the @Cacheable annotation.
Like Spring's Transactional annotation and its underlying infrastructure, given an interface, Spring will by default create a JDK dynamic proxy (AOP style) to "intercept" the method call and apply the "advice" (as determined by the type of annotation, in this case caching). However, once the target object has been invoked from the interceptor (proxy) acting on its behalf, the thread is executing in the context of the target object, so any subsequent method invocations from within the target object occur directly on the target object itself.
It looks a little something like this...
caller -> Proxy -> findMethod2() -> findMethod1()
Ideally what you want is this...
caller -> Proxy -> findMethod2() -> Proxy -> findMethod1()
However, the Thread is already executing in the context of the "target" object once inside findMethod2(), so you end up with the first call stack.
The Spring doc explains it better here.
The document goes on to point out solutions to this problem; the most favorable is refactoring your code to ensure the caller goes through the proxy interceptor for the second method invocation (i.e. findMethod1()), as in the sketch below.
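For illustration, a minimal sketch of that refactoring (ItemFinder and ItemService are names I've made up): once the @Cacheable method lives in its own bean, every invocation crosses a proxy boundary and the caching advice applies.

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// In its own file: the cached lookup, now a separate Spring bean.
@Service
public class ItemFinder {

    @Cacheable(value = "Item", key = "#itemId", unless = "#result == null")
    public ItemDO findMethod1(long itemId) {
        return null; // placeholder - load the item from the underlying store here
    }
}

// In its own file: the caller now reaches findMethod1() through the proxy.
@Service
public class ItemService {

    private final ItemFinder finder;

    public ItemService(ItemFinder finder) {
        this.finder = finder;
    }

    public ItemDO findMethod2(long itemId) {
        return finder.findMethod1(itemId); // external call, so the advice applies
    }
}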
I also gather that another solution to this problem would be to use full-blown AspectJ, with a compiler and byte-code weaver in your application build process modifying the actual target class, so that invocations from within the target object are also intercepted and the advice applied accordingly.
See the Spring docs on the trade-offs between Spring AOP and full AspectJ, as well as how to use full AspectJ in your Spring applications.
Hope this helps.
Cheers!
Another solution I find handy is using @Resource and then invoking the target (findMethod1() in your case) through that resource reference, as described in https://stackoverflow.com/a/48867068/2488286
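A minimal sketch of that self-reference approach, following the linked answer (class and field names are illustrative): the bean injects its own proxy and calls the @Cacheable method through it rather than through "this".

import javax.annotation.Resource;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ItemService {

    @Resource
    private ItemService self; // Spring injects the proxy, not the raw target

    public ItemDO findMethod2(long itemId) {
        return self.findMethod1(itemId); // crosses the proxy, so caching applies
    }

    @Cacheable(value = "Item", key = "#itemId", unless = "#result == null")
    public ItemDO findMethod1(long itemId) {
        return null; // placeholder - load the item from the underlying store here
    }
}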
I'm in the process of trying to migrate a R# extension project from R# 6 to R# 8. (I've taken over a project that someone wrote, and I'm new to writing extensions.)
In the existing v6 project there is a class that derives from RenameWorkflow, whose constructor used to look like this:
public class RenameStepWorkflow : RenameWorkflow
{
public RenameStepWorkflow(ISolution Solution, string ActionId)
: base(Solution, ActionId)
{
}
This used to work in R# SDK v6, but in v8 RenameWorkflow no longer has a constructor that takes a solution and an action id. The new constructor signature looks like this:
public RenameWorkflow(
IShellLocks locks,
SearchDomainFactory searchDomainFactory,
RenameRefactoringService renameRefactoringService,
ISolution solution,
string actionId);
Now here's my problem that I need help with (I think).
I've copied the constructor, and now the constructor of this class has to satisfy these new dependencies. Through some digging I've managed to find a way to satisfy all the dependencies except for SearchDomainFactory. The closest I can come to instantiating via the updated constructor is as follows:
new RenameStepWorkflow(Solution.Locks, JetBrains.ReSharper.Psi.Search.SearchDomainFactory.Instance, RenameRefactoringService.Instance, this.Solution, null)
All looks good, except that JetBrains.ReSharper.Psi.Search.SearchDomainFactory.Instance is marked as obsolete and gives me a compile error that I cannot work around; even using #pragma does not allow me to compile the code. The exact error message I get when I compile is: Error 16 'JetBrains.ReSharper.Psi.Search.SearchDomainFactory.Instance' is obsolete: 'Inject me!'
Obvious next question... OK, how? How do I "inject you"? I cannot find any documentation on this new breaking change; in fact, I cannot find any documentation (or sample projects) that even mentions DrivenRefactoringWorkflow or RenameWorkflow (the classes that now require the new SearchDomainFactory), or any information on SearchDomainFactory.Instance suddenly being obsolete and how to satisfy the need to "inject" it.
Any help would be most appreciated! Thank you,
regards
Alan
ReSharper has its own IoC container, which is responsible for creating instances of classes, and "injecting" dependencies as constructor parameters. Classes marked with attributes such as [ShellComponent] or [SolutionComponent] are handled by the container, created when the application starts or a solution is loaded, respectively.
Dependencies should be injected as constructor parameters, rather than retrieved with methods like GetComponent<TDependency> or static Instance properties, as this allows the container to control dependency lifetime and ensure you're depending on appropriate components rather than creating leaks - a shell component cannot depend on a solution component, for instance, as the latter won't exist when the shell component is being created.
ReSharper introduced the IoC container a few releases ago, and a large proportion of the codebase has been updated to use it correctly, but there are a few hold-outs, where things are still done in a less than ideal manner - static Instance properties and calls to GetComponent. This is what you've encountered. You should be able to get an instance of SearchDomainFactory by putting it as a constructor parameter in your component.
You can find out more about the Component Model (the IoC container and related functionality) in the devguide: https://www.jetbrains.com/resharper/devguide/Platform/ComponentModel.html
I have two classes (class A and B) both marked with [Binding]. Currently I'm using a class per feature. Classes A and B both have a step that looks like this:
[Given(@"an employee (.*) (.*) is a (.*) at (.*)")]
public void GivenAnEmployeeIsAAt(string firstName, string lastName, string role, string businessUnitName)
When I run the scenario for the features defined in class A, and the test runner executes the step indicated above, the matching step in class B gets executed instead.
Are "Steps" global as well? I thought only the "hook" methods are global, i.e. BeforeScenario, AfterScenario. I do not want this behavior for "Given", "Then", and "When". Is there any way to fix this? I tried putting the two classes in different namespaces and this didn't work either.
Also, am I potentially misusing SpecFlow by wanting each "Given" to be independent if I put them in separate classes?
Yes, steps are (by default) global. So you will run into trouble if you define two attributes with regexes that match the same step, even if they are in separate classes.
Being in separate classes, or placed elsewhere (even in another assembly), has nothing to do with how SpecFlow groups them - it's just one big list of Givens, Whens and Thens that it tries to match each step against.
But there's a feature called Scoped Steps that solves this problem for you. Check it out here: https://github.com/techtalk/SpecFlow/blob/master/Tests/FeatureTests/ScopedSteps/ScopedSteps.feature
The idea is that you put another attribute (StepScope) on your step definition method, and SpecFlow will then honor that scoping. Like this, for example:
[Given(@"I have a step definition that is scoped to tag (?:.*)")]
[StepScope(Tag = "mytag")]
public void GivenIHaveAStepDefinitionThatIsScopedWithMyTag()
{
stepTracker.StepExecuted("Given I have a step definition that is scoped to tag 'mytag'");
}
... or to scope an entire step definition class to a single feature:
[Binding]
[StepScope(Feature = "turning on the light should make it bright")]
public class TurningOnTheLightSteps
{
// ...
}
This step definition is using a StepScope for a tag. You can scope your steps by:
Tag
Scenario title
Feature title
Great question! I hadn't fully understood what that was for until now ;)