I'm taking my first steps in CodeIgniter 4.
Now I'm asking myself: what are the big differences between a "Model", where I handle all database-related things, a "Helper", where I define a set of functions, and a "Library"?
In which cases should I create my own library, helper, or model?
The CI4 documentation doesn't give me an answer to that, so I hope someone can explain it for me (and for others).
The documentation is pretty straightforward when it comes to Models; there are really no caveats there. A Model is a class that represents a single table in the database, and it provides a wide variety of related functionality: built-in CRUD methods, the ability to save Entities, transparent use of Query Builder methods, etc.
In general, you would typically have one Model class per database table that you're working with. That being said, you do not necessarily need Models in order to work with a database; however, if you expect to need the functionality a Model provides, using one can save you the work of building it yourself.
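For instance, a minimal Model might look like this (a sketch only; the table and field names here are hypothetical):

<?php

namespace App\Models;

use CodeIgniter\Model;

// Hypothetical Model for a `users` table.
class UserModel extends Model
{
    protected $table         = 'users';
    protected $primaryKey    = 'id';
    protected $allowedFields = ['name', 'email'];
}

With that in place, calls like $users = model(UserModel::class); and $users->find(1); come for free.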
The documentation is indeed far more opaque on the differences between Libraries and Helpers. As such, I've found the most objective measure of difference to be in how CodeIgniter itself utilizes them, and the distinction is rather subtle:
Libraries provide their functionality through methods that exist within the namespace of the class they're defined in.
Helpers provide their functionality through functions that exist within the namespace of the importing class.
(NB: In PHP, a method is simply the name for a function that's defined within a class)
In other words, Libraries are typically non-static classes that (like all non-static classes) must be 'constructed' before use. The methods defined within that class reside within the namespace of the class itself, not the class they're called from.
For example, to gain access to the current Session object, I retrieve an instance of the Session Library class: $session = session(); Using $session, I can then invoke methods provided by that Session class, like $session->destroy().
On the other hand, Helpers are typically just a collection of functions that are imported into the namespace of the importing class itself. They are called in the current context and their use is not predicated upon the use of an object.
For example, if I load the Array Helper (helper('array');), it grants me access to a handful of functions that I can call directly from the current context (e.g. $result = array_deep_search(...)). I don't have to create an object first; those functions are imported directly into my current namespace.
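Putting the two side by side (a sketch using the same examples as above):

// Library style: functionality lives on an object you obtain first.
$session = session();          // instance of the Session Library class
$session->set('user_id', 42);  // methods invoked on that object
$session->destroy();

// Helper style: functionality is plain functions in the current context.
helper('array');               // import the Array Helper's functions
$data   = ['user' => ['name' => 'Jo']];
$result = array_deep_search('name', $data);  // called directly, no object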
This distinction could matter for a couple of reasons, the biggest of which is probably naming conflicts. If you have two Helpers, each with an identically-named function, you can't import both of those functions at the same time. On the other hand, you could have one hundred different classes with the destroy() method defined, because each of those method definitions resides within the namespace of the defining class itself and is invoked through an instance of that specific class.
You may notice that all of CI's Helper functions are prefixed with the name of the Helper itself, e.g. 'array' or 'form'; this is to prevent the aforementioned naming conflict.
This probably doesn't answer when you want to use one or the other, and in truth that really comes down to your opinion. In the end, it's all just namespacing. You can put things wherever you want, as long as the code that needs it knows where to look for it. I personally try to follow CI's lead on things, so I tend to follow their distinction between the two.
I'm trying to follow DDD principles for my current project. Unfortunately, I have to use RFCs due to technical constraints, so no OData and no REST. It's quite a long question; I hope it's OK to ask this on Stack Overflow.
In any case, I have an entity class WorkOrder with a list of Operation objects.
I have a WorkOrderRepository class with a SAVE method that receives only a WorkOrder object and is able to save everything (header data, address, etc.) in one go, no matter whether it's a creation, an update, or a deletion. The repository hides the BAPI calls from the rest.
Now I want to implement the logic to add/update/remove Operations on the work order object, and I'm not even sure the names I give to the methods are correct. Maybe they should be insert/edit/delete... I'm quite confused by this, since every place I look uses different names.
But most important are my two specific doubts:
Should I have only one RFC that receives all the updates to a WorkOrder entity, including header and operations? Or should I create one RFC per operation that handles only one operation at a time? Bear in mind that the UI mockup expects the user to be able to add/delete multiple operations before clicking a SAVE button, that RFCs have an implicit commit, and that, to my knowledge, a DDD entity should always be updated in one call.
Option 1:
FUNCTION ZWORKORDER_HDR_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)      TYPE AUFNR
    VALUE(I_WORKORDER_HDR_CHG) TYPE ZWORKORDER_HDR_CHG
    VALUE(I_WORKORDER_HDR_UPD) TYPE ZWORKORDER_HDR_UPD "X structure for the BAPI
    VALUE(I_OPERATIONS_CHG)    TYPE ZOPERATIONS_CHG
    VALUE(I_OPERATIONS_UPD)    TYPE ZOPERATIONS_UPD
    VALUE(I_OPERATIONS_DEL)    TYPE ZOPERATIONS_DEL
  EXPORTING
    VALUE(E_ERRORS)            TYPE BAPIRET2_T.
Option 2:
FUNCTION ZWORKORDER_OPERATION_CRT
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION)    TYPE ZOPERATION_CHG
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)  TYPE AUFNR
    VALUE(I_OPERATION_CHG) TYPE ZOPERATION_CHG
    VALUE(I_OPERATION_UPD) TYPE ZOPERATION_UPD
  EXPORTING
    VALUE(E_ERRORS)        TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_DEL
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_ID) TYPE ZOPERATION_ID
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.
How should my WorkOrder methods look to handle this? I'm especially confused by the update method, since I'm not sure whether I should first get the existing operation and then update it, or let the parent class do it. But maybe my approaches are completely wrong from the start.
Option 1:
workorder->add_operation( i_operation ). "Pass flat structure from RFC? Or first create object?
workorder->update_operation( i_operation_chg
                             i_operation_upd ).
workorder->delete_operation( i_operation_id ).
Option 2:
workorder->add_operation( ).
operation = workorder->get_operation( i_operation_chg->get_id( ) ).
operation->update( i_operation_chg
                   i_operation_upd ).
operation->delete_operation( i_operation_id ).
The simplest solution is always the best (KISS and YAGNI principles). It doesn't really matter whether you create one or three RFC-enabled function modules, so if you can achieve your goal with one function module, do it with one.
I think you need two RFC-enabled function modules: one to validate the maintained operations (performing as many validations as possible) without saving anything to the database, and another, called after the user clicks the SAVE button, to save the whole "WorkOrder" including the maintained operations (at which point the complete validation runs again).
If you don't need to define an "operation" class for something else right now, then keep it simple; there's no need to instantiate an object. Note that you may still create an "operation" class with private static methods, made a friend of the "workorder" class (so only that class can use the operation class), just to organize your code better.
PS: although I don't know what "Domain-Driven Design" is, I don't see how your question is related to it; it just looks like plain program design.
We also work with DDD, although luckily not with RFCs but with Gateway/OData. Since REST is by definition stateless, we always commit in our application service layer.
What we do is have three "DDD" building blocks: app_service, domain_service, and repository (plus some data containers like aggregates). The app_service in your case would expose the methods create, update, and delete, and possibly also validate.
Then I would write four thin RFCs (CRUD + validate) that basically pass the data to the app service.
As for being able to handle multiple updates in each save: we always model our OData (your RFC) after the UI requirements; it's then the app_service's task to make sense of the data from a system point of view.
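To make that layering concrete, here is a rough sketch (in PHP purely for illustration; the real implementation would be ABAP classes, and all names here are invented):

<?php

class WorkOrder                  // the aggregate root: header + operations
{
    public function __construct(public array $header, public array $operations) {}
}

interface WorkOrderRepository    // hides the BAPI calls from everything else
{
    public function save(WorkOrder $workOrder): void;  // one commit for the whole aggregate
}

class WorkOrderAppService        // what each thin RFC delegates to
{
    public function __construct(private WorkOrderRepository $repository) {}

    // Called by the "validate" RFC: every check that doesn't need a commit.
    public function validate(WorkOrder $workOrder): array
    {
        return [];  // placeholder: return a list of errors, BAPIRET2-style
    }

    // Called by the "save" RFC, once, after the user presses SAVE.
    public function save(WorkOrder $workOrder): void
    {
        $this->repository->save($workOrder);
    }
}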
We make heavy use of ABAP Clean Code (it's even part of our ATC checks), and it clearly states that you should have separate methods, which is also OO best practice.
https://github.com/SAP/styleguides/blob/master/clean-abap/CleanABAP.md#split-methods-instead-of-adding-optional-parameters
Are there any creational design patterns that allow for completely new objects (as in newly written classes) to be instantiated without having to add a "new" statement somewhere in existing code?
The main problem to solve is how to identify the class or classes to instantiate. I know of and have used three general patterns for discovering classes, which I'll call registration, self-registration and discovery by type. I'm not aware of them having been written up in a formal pattern description.
Registration: Each class that wants to be discovered is registered somewhere where a framework can find it:
- the class name is put in an environment variable or Java system property, in a file, etc.
- some code adds the class or its name to a singleton list early in program execution
Self-registration: Each class that wants to be discovered registers itself, probably in a singleton list. The trick is how the class knows when to do that.
- The class might have to be explicitly referred to by some code early in the program (e.g. the old way of choosing a JDBC driver).
- In Ruby, the code that defines a class can register the class somewhere (or do anything else) when the class is defined. It suffices to require the file that contains the class.
Discovery by type: Each class that wants to be discovered extends or implements a type defined by the framework, or is named in a particular way. Spring autowiring class annotations are another version of this pattern.
There are several ways to find classes that descend from a given type in Java (here's one SO question, here's another) and Ruby. As with self-registration, in languages like those where classes are dynamically loaded, something has to be done to be sure the class is loaded before asking the runtime what classes are available.
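To make the first two variants concrete, here's a minimal sketch in PHP (all class and function names are invented for this example). Instantiating from a string class name (new $class()) is what lets the existing framework code stay untouched when new classes are written:

<?php

// A singleton-style registry owned by the framework.
final class HandlerRegistry
{
    private static array $classes = [];

    // Registration: some code adds the class name early in program execution.
    public static function register(string $class): void
    {
        self::$classes[] = $class;
    }

    // The framework instantiates whatever was registered --
    // no hard-coded `new SomeHandler()` anywhere in existing code.
    public static function createAll(): array
    {
        return array_map(fn (string $class) => new $class(), self::$classes);
    }
}

// Self-registration: a newly written class registers itself at the bottom
// of its own file, so merely loading the file makes it discoverable.
class CsvExportHandler
{
    public function handle(): void { /* ... */ }
}
HandlerRegistry::register(CsvExportHandler::class);

foreach (HandlerRegistry::createAll() as $handler) {
    $handler->handle();
}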
One thing to keep in mind here is that someone needs to do a "new" on your newly written class. If you are not doing it yourself, then some framework needs to do it.
I remember doing something similar in one of my side projects using Java Spring. I had an interface and multiple implementations of it. My project required iterating over all the implementations and doing some processing. I was looking for a solution that wouldn't force me to manually instantiate each particular class or wire it explicitly. Spring facilitated this through the @Autowired annotation, which injects all implementations of the interface on the fly. E.g.:
@Autowired
private List<IMyClass> myClassImplementations;
Now in the above example I can simply iterate over the injected list of implementations, and I don't have to instantiate a new implementation by hand every time I write one.
But I think in most cases it would be difficult to use this approach (even though it fits in my case). In the general case I would rather go with a Factory pattern and use that for creating new instances. I know that still requires a "new" somewhere, but in my view engineering things so that objects are automatically created and injected is a bit of extra overhead.
In many cases I need to write a lot of CRUD classes for some plain object, for example CRUD for the plain objects User, Book, and Tag.
I usually make a directory named models and put all the CRUD classes into that models folder.
But I feel that the word model doesn't show the essence. Is the word "model" well defined in computer science? Does it mean the plain User object, or the code that does CRUD on User?
I also use another name, services, for more complex logic; for example, UserService may require other models besides UserModel. But the word service often conflicts with other uses, like an online service or a backend service.
Are there any good names for the model and the service in my case? BTW, I mostly use Node.js, so ideally the names shouldn't conflict with the general conventions used in Node.js.
Ultimately, it comes down to what makes the code most understandable, both to you and to someone down the road who may have occasion to work on it. If "models" and "services" convey what lies within in a way that's obvious to anyone coming into the code, then they are probably fine. As far as standards go, I don't know of a "defined" set of names you have to use. In MVC, for example, you use "Models" as one of your folders to store all of the actual models you will be feeding your views, and it is understood in the MVC architecture that those names (Models, Views, Controllers) are the standard.
I agree with you that Model is a little ambiguous. Sometimes it is used to indicate domain objects such as User/Book/Tag, but sometimes it is used to indicate objects that deal with business logic, such as "Buying a book" or "Authenticating a user".
What's common to both uses is that the "Model" is clearly separated from the UI, which is handled entirely by the Views and the Controllers.
Another useful name is Entities. In Robert Martin's work on Object Oriented Design, he speaks of use-case-driven design, and distinguishes between three kinds of objects: Entity Objects, Interactor objects and Boundary objects.
Entity objects are useful in multiple use-cases. For example, in a book selling system, entities can be Book/User/Recommendation/Review.
Interactor objects implement use-cases, and they typically use multiple entity objects. For example, Purchase_Book/Login/Search_Books can be such objects.
Boundary objects are used for transferring data across module boundaries, and for building interfaces between parts of the system that should be decoupled from one another. For example, a controller may need to create a Purchase_Book object, and in order to create it, it needs to pass data about which book ID is to be purchased, by which user ID, etc. This data can be packed into a boundary object called Purchase_Request.
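A minimal sketch of the three kinds of objects (shown in PHP, though the idea is language-agnostic; all names here are illustrative):

<?php

// Entity: useful across many use cases.
class Book
{
    public function __construct(public string $id, public string $title) {}
}

// Boundary: a plain data carrier that crosses a module boundary.
class PurchaseRequest
{
    public function __construct(public string $bookId, public string $userId) {}
}

// Interactor: implements a single use case using the entities.
class PurchaseBook
{
    public function execute(PurchaseRequest $request): void
    {
        // look up the Book and User entities, record the purchase, etc.
    }
}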
While Interactor and Boundary require more explanation, I find that the word Entities is meaningful and can be grasped intuitively without reading any explanation.
For example, let's say we have a class called "Secretary" and another class called "Utils".
Utils has some functions that do general stuff, for example finding the maximum of 3 integers.
"Secretary" needs to call some of these functions and in this class these functions are called using the following notation:
Utils.function()
now my question is, what kind of association, if there is any, exists between these two classes?
Most likely Dependency. Associations are normally used to capture some relationship that has meaningful semantics in a domain. So, for example, Secretary 'works for' Manager. Your example is different: you're not capturing meaningful relationships among instances. Therefore Dependency is probably most appropriate.
More importantly though: what are you trying to illustrate? Remember to use UML like any other tool - make it work for you. So, for example, it's fine to show a binary association if (a) it helps you and/or (b) it helps you communicate with other team members. The fact that it doesn't comply with the intended UML usage doesn't matter - as long as you find it useful.
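In code terms, the difference looks roughly like this (a PHP sketch; Manager and the method names are invented to mirror the "works for" example):

<?php

final class Utils
{
    // General-purpose function: the maximum of 3 integers.
    public static function max3(int $a, int $b, int $c): int
    {
        return max($a, $b, $c);
    }
}

class Manager {}

class Secretary
{
    // Association: a structural link with domain meaning ("works for"),
    // held as part of the object's state.
    public function __construct(private Manager $manager) {}

    // Dependency: Secretary merely uses Utils here; nothing is stored.
    public function pickLargest(int $a, int $b, int $c): int
    {
        return Utils::max3($a, $b, $c);
    }
}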
hth.
So meta-programming: the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by @Sjoerd:
Both getters and setters follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow the application and reuse of that particular pattern. How a template works depends on the meta-programming/code-generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know that natively supports custom code directly as a template parameter.
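As a toy illustration of such a template (written in PHP here; generateAccessors and its $extra parameter are invented names for this sketch), the generator below emits getter/setter source code for any member name, and the optional code block is the variability:

<?php

// A tiny "program that writes programs": it emits getter/setter source
// for a class member, with an optional processing hook as the variability.
function generateAccessors(string $member, string $extra = ''): string
{
    $ucMember = ucfirst($member);
    return <<<PHP
    public function get{$ucMember}()
    {
        return \$this->{$member};
    }

    public function set{$ucMember}(\$value): void
    {
        {$extra}
        \$this->{$member} = \$value;
    }

    PHP;
}

// Plain accessors vs. accessors with extra processing injected:
echo generateAccessors('name');
echo generateAccessors('email', '$value = strtolower($value);');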
Meta-programming is not only adding methods at runtime; it can also be automatically creating code at compile time, i.e. code generating code. Two examples:
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be generated automatically for most properties.
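And for the runtime flavor, like the Grails findByName example from the question, here's a sketch using PHP's __call hook (a simplified stand-in for how such frameworks intercept calls to methods nobody ever wrote; all names are invented):

<?php

class UserFinder
{
    // A couple of rows standing in for a data store.
    private array $rows = [
        ['name' => 'Ada', 'email' => 'ada@example.com'],
        ['name' => 'Bob', 'email' => 'bob@example.com'],
    ];

    // Intercepts calls like findByName('Ada') or findByEmail(...):
    // no findByName method is ever written by hand.
    public function __call(string $method, array $args): ?array
    {
        if (str_starts_with($method, 'findBy')) {
            $property = strtolower(substr($method, 6));
            foreach ($this->rows as $row) {
                if (($row[$property] ?? null) === $args[0]) {
                    return $row;
                }
            }
            return null;
        }
        throw new BadMethodCallException($method);
    }
}

$finder = new UserFinder();
var_dump($finder->findByName('Ada'));  // dispatched dynamically at runtime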