Hi guys.
I have a question regarding the usage of loggers in log4net. When choosing between a logger per class (static readonly field) and a logger per instance (readonly field), which is the better approach? Personally, the only disadvantage I see with a logger per class is its instantiation:
log4net.LogManager.GetLogger(
    System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
It doesn't look very nice because of reflection.
If I create the logger in the following way:
log4net.LogManager.GetLogger(typeof(MyClass))
there is a chance that I will accidentally make a copy/paste error and supply typeof(SomeOtherClass) instead of typeof(MyClass), which is bad.
When using a logger per instance, I can use:
log4net.LogManager.GetLogger(this.GetType())
This approach doesn't use reflection and is free of copy/paste errors.
Are there any other thoughts on this?
Besides the fact that it would be better to use dependency injection, I think your approach is good. I have used this approach myself in the past.
Are there any creational design patterns that allow for completely new objects (as in newly written) to be instantiated without having to add a new statement somewhere in existing code?
The main problem to solve is how to identify the class or classes to instantiate. I know of and have used three general patterns for discovering classes, which I'll call registration, self-registration and discovery by type. I'm not aware of them having been written up in a formal pattern description.
Registration: Each class that wants to be discovered is registered somewhere where a framework can find it:
the class name is put in an environment variable or Java system property, in a file, etc.
some code adds the class or its name to a singleton list early in program execution
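For illustration, a minimal Java sketch of the first variant, where the implementation class is named in a system property and instantiated reflectively; the property name, the Plugin interface and the PluginLoader class are made-up names, not from any framework:

interface Plugin {
    void run();
}

final class PluginLoader {
    // Reads the implementation class name from a system property
    // (e.g. -Dapp.plugin.class=com.example.MyPlugin) and instantiates it reflectively.
    static Plugin load() throws ReflectiveOperationException {
        String className = System.getProperty("app.plugin.class");
        Class<?> clazz = Class.forName(className);
        return (Plugin) clazz.getDeclaredConstructor().newInstance();
    }
}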
Self-registration: Each class that wants to be discovered registers itself, probably in a singleton list. The trick is how the class knows when to do that.
The class might have to be explicitly referred to by some code early in the program (e.g. the old way of choosing a JDBC driver).
In Ruby, the code that defines a class can register the class somewhere (or do anything else) when the class is defined. It suffices to require the file that contains the class.
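A Java sketch of self-registration under the same made-up names: the class adds itself to a shared registry in a static initializer, so merely loading the class (for example via Class.forName) registers it, much like the old JDBC driver mechanism:

import java.util.ArrayList;
import java.util.List;

interface Plugin {
    void run();
}

final class PluginRegistry {
    static final List<Plugin> PLUGINS = new ArrayList<>();
}

class MyPlugin implements Plugin {
    // Runs when the class is loaded; loading the class is enough to register it.
    static {
        PluginRegistry.PLUGINS.add(new MyPlugin());
    }

    @Override
    public void run() {
        // plugin-specific work
    }
}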
Discovery by type: Each class that wants to be discovered extends or implements a type defined by the framework, or is named in a particular way. Spring autowiring class annotations are another version of this pattern.
There are several ways to find classes that descend from a given type in Java (here's one SO question, here's another) and Ruby. As with self-registration, in languages like those where classes are dynamically loaded, something has to be done to be sure the class is loaded before asking the runtime what classes are available.
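And a hedged Java sketch of discovery by type, using the third-party org.reflections library as one of those possible ways (the package name and the Plugin interface are again illustrative):

import java.util.Set;
import org.reflections.Reflections;

interface Plugin {
    void run();
}

final class PluginScanner {
    // Scans the given package for every class that implements Plugin.
    static Set<Class<? extends Plugin>> findPluginClasses() {
        Reflections reflections = new Reflections("com.example.plugins");
        return reflections.getSubTypesOf(Plugin.class);
    }
}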
One thing I think is important here is that someone needs to do a new on your newly written class. If you are not doing that yourself, then some framework needs to do it for you.
I remember doing something similar in one of my side projects using Java Spring. I had an interface and multiple implementations of it, and the project needed to iterate over all the implementations and do some processing. I was looking for a solution that would not require me to manually instantiate or explicitly wire each particular implementation. Spring let me do that through the @Autowired annotation, which injected all implementations of the interface on the fly. For example:
@Autowired
private List<IMyClass> myClassImplementations;
Now in the above example I can simply iterate over the injected list of implementations, and I don't have to instantiate my new implementation every time I write a new one.
But I think in most cases it would be difficult to use this approach (even though it fits in my case). In the general case I would rather go with a Factory pattern and use it for creating new instances, as in the sketch below. I know that still requires a new, but in my view engineering things so that objects are created and injected automatically is a bit of extra overhead.
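A minimal sketch of what I mean by the factory variant (the kind parameter and the implementation class names are illustrative; IMyClass is the interface from the example above):

interface IMyClass {
    void process();
}

class FirstImpl implements IMyClass {
    @Override
    public void process() { /* ... */ }
}

class SecondImpl implements IMyClass {
    @Override
    public void process() { /* ... */ }
}

final class MyClassFactory {
    // The `new` calls are centralized here, so adding an implementation means
    // touching only this factory rather than every call site.
    static IMyClass create(String kind) {
        switch (kind) {
            case "first":
                return new FirstImpl();
            case "second":
                return new SecondImpl();
            default:
                throw new IllegalArgumentException("Unknown kind: " + kind);
        }
    }
}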
I'm learning the repository pattern. I've implemented it in a sample project, but I don't know what the main benefit of the repository pattern is.
private IStudentRepository repository = null;

public StudentController()
{
    this.repository = new StudentRepository();
}

public StudentController(IStudentRepository repository)
{
    this.repository = repository;
}
The StudentRepository class's methods can also be reached by simply creating an object of the class directly:
StudentRepository obj = new StudentRepository();
Does anyone have a solid reason for using the pattern? The only one I know of is hiding data access.
The main reasons people cite for the repository pattern are testability and modularity. For testability, you can replace the concrete object with a mock one where the repository is used. For modularity, you can create a different repository that, for example, uses a different data store.
But I'm highly skeptical of the modularity argument, because repositories are often highly leaky abstractions and changing the backing data store is extremely rare. Something that should be as simple as creating a different instance turns into a complete rewrite, defeating the purpose of the repository.
There are other ways to achieve testability with your data store without worrying about leaky abstractions.
As for your code example: the first constructor is the "default" one, and the second is probably there for either IoC or testing with mocks. IMO there should be no "default" constructor, because having one defeats the purpose of actually using IoC.
The second constructor allows dependency injection. This means you can use an IoC container to inject the correct implementation.
So, for example, in your unit tests you could inject an in-memory database (see mocking), while your production code would use an implementation that hits the actual database.
I have a number of Spring integration tests that all somehow need to use data (from an in-memory database).
The tests all require subtly different data sets, so as of now I use plain Spring @Component helper classes (located in the test package hierarchy) that insert the data right from the test methods, as shown below:
@Autowired
private SomeHelper someHelper;

@Test
public void someIntegrationTest() {
    // Arrange
    someHelper.insertSomeData();
    ...
    // Act
    ...
    // Assert
    ...
}
I find this solution neither very clean nor very beautiful, and I am seeking to improve it or replace it with an alternative solution.
Would it be a good idea to implement a hierarchy of TestExecutionListeners, where the common required data is inserted by the base class and the data specific to individual tests is inserted by subclasses of that base class?
If relying on a TestExecutionListener to insert test data is not a good idea, then what would be a reliable and viable alternative?
Take a look at Spring Test DbUnit (and the related blog announcement).
I think it will satisfy your needs.
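For illustration, a rough sketch of how a test might declare its dataset with Spring Test DbUnit; the test class and the dataset file name are hypothetical, and the annotation and listener names should be checked against the project's documentation:

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
                          DbUnitTestExecutionListener.class })
public class SomeIntegrationTest {

    @Test
    @DatabaseSetup("someIntegrationTest-data.xml") // loaded into the in-memory DB before the test runs
    public void someIntegrationTest() {
        // Act and Assert against the data declared in the XML dataset
    }
}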
Regards,
Sam
I've got a class set up kind of like this:
class ParentClass{
// Some other fields
Set<ChildClass> children
}
I'm wanting to use groovy.sql.Sql to keep the related ChildClass objects appropriately persisted in relationship to the ParentClass. I've used ORM tools like Hibernate before, and I'd rather stick to just using groovy.sql.Sql if at all possible.
I'm wondering if groovy.sql.Sql has any sort of convenience helpers for keeping child collections synchronized? I don't mind writing closures and whatnot to compare the "currently persisted" set with the "newly persisted" set and decide what to add and remove, but I was kind of hoping Groovy already took care of that for me.
As far as I know, Groovy has no such mechanism. I suppose you could write your own DSL for that, but I see it as rather complicated (and prone to breaking on DB schema changes), and I don't know if the game is worth the candle.
If you don't like using ORM tools (I also always hesitate to use them), maybe try something that isn't an ORM tool but helps you avoid plain SQL in Groovy code: jOOQ (as far as I know there's no relationship handling in jOOQ). I haven't used it yet, but I still want to try it.
Part of my problem here is using the proper vocabulary, so I apologize in advance for what might be a simple matter of terminology.
Suppose I have a Person interface, and a PersonBean class that implements that interface.
Suppose further I have a producer method somewhere (annotated @Produces) that returns a Person. Internally it returns a new PersonBean, but that's neither here nor there.
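For concreteness, the producer method looks roughly like this (the enclosing class name is arbitrary):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

@ApplicationScoped
public class PersonProducer {

    // The producer method: CDI calls this to satisfy Person injection points.
    @Produces
    public Person producePerson() {
        return new PersonBean();
    }
}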
Finally, suppose I have another CDI bean somewhere with an injection point defined like this:
@Inject
private Person person;
Assuming I have all my beans.xml files in place etc. and have bootstrapped Weld or another CDI-1.0-compliant environment, as this all stands I will get an ambiguous definition error. This makes sense: Weld will find my PersonBean as a candidate for injection (it could just call the constructor) and will find the output of my producer method as a candidate for injection.
What I'd like to do is somehow force the production of Person instances in this application to always route through the producer method.
I understand I could invent some qualifier somewhere and make the producer method produce Person instances that are qualified by that qualifier. If I do that, and change my injection point to include the qualifier, then obviously there's only one source of these qualified injectables (namely my producer method), so voila, problem solved.
But suppose I don't want to invent some bogus qualifier. (I'm not saying this is the case; I'm just trying to understand the issues more deeply.) What are my options? Do I have any? I suppose I could put @Typed(Object.class) on the PersonBean so that it is not seen as a Person by CDI....
Any ideas welcomed, including pointers to documentation, or better ways to understand this. Thanks.
Annotate your PersonBean with @Alternative; then it will use the producer method.
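Roughly like this, assuming the alternative is left un-enabled in beans.xml so that the producer method remains the only candidate:

import javax.enterprise.inject.Alternative;

// Not selected in beans.xml, so CDI no longer considers this bean for
// Person injection points; the producer method wins.
@Alternative
public class PersonBean implements Person {
    // ...
}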
From digesting several different answers here and elsewhere, the solution I've adopted is to use the @Typed annotation with a value of Object.class on my bean. This means that it will only be eligible to be injected into fields that are declared like this:
@Inject
private Object something;
...which thankfully prove to be pretty much nonexistent. :-)
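In code the bean ends up looking roughly like this:

import javax.enterprise.inject.Typed;

// Restricting the bean types to Object means CDI no longer offers this bean
// for Person injection points; only the producer method remains.
@Typed(Object.class)
public class PersonBean implements Person {
    // ...
}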
"What I'd like to do is somehow force the production of Person instances in this application to always route through the producer method."
Seam Solder has a solution for this.
I'm not 100% sure how this will develop with the merge of Seam 3 and Deltaspike (the page is so 90s, but the content rocks :-), but putting Solder in your classpath is certainly a safe bet.
Oh, and as far as I know a comparable mechanism made it into the CDI 1.1 spec.