Role of portal-kernel and portal-impl - liferay

In Liferay 7, what is the conceptual difference between portal-kernel and portal-impl?
Judging by the names alone, it sounds like kernel is a kind of foundation and impl builds around it with UI, etc. But kernel actually also contains UI code and code that looks rather peripheral.
Both have interfaces, implementations and unit tests.
portal-kernel has 500 directories and 4766 files.
portal-impl has 947 directories and 4252 files.
Where is the intended line between portal-kernel and portal-impl?

portal-kernel has public interfaces and implementations that you typically need when interfacing with Liferay. These can be utility classes or just service interfaces.
portal-impl is considered implementation detail - you shouldn't depend on it, and it's not intended for anybody to change. If you decide you really have to change anything in portal-impl, be aware that there is no guarantee the implementation stays stable from release to release - even the slightest edit may not survive the next release. Anything goes; no stability promise is given.
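To make that boundary concrete, here is a minimal sketch (not taken from the question; GreetingService is an invented class) of how a Liferay 7 OSGi module typically consumes portal-kernel: the module compiles only against the published interfaces, and the portal injects whatever implementation lives behind them.

import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.model.User;
import com.liferay.portal.kernel.service.UserLocalService;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical module code: it only ever sees portal-kernel types.
@Component(service = GreetingService.class)
public class GreetingService {

    // The implementing class lives inside the portal; we never reference it directly.
    @Reference
    private UserLocalService userLocalService;

    public String greet(long userId) throws PortalException {
        User user = userLocalService.getUser(userId);
        return "Hello, " + user.getFullName();
    }
}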

Related

PropsValues in liferay

Why should we not use any classes from portal-impl.jar inside a portlet?
In my case, how can I read PropsValues without adding portal-impl to my Maven dependencies?
I'm using Liferay 6.2
Thanks
#Origineil - in a comment to your question - gave you the alternative of what to do instead of using portal-impl.jar: e.g. use GetterUtil.getBoolean(PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND)) instead of PropsValues.SESSION_TIMEOUT_AUTO_EXTEND.
Why shouldn't you add portal-impl.jar to your project? Well, there are many reasons. First of all: it doesn't work. If you add portal-impl.jar to your plugin, there are quite a lot of Spring components in there that would re-initialize - and they'd assume they're in the portal context. They'd be missing other code they depend on, and you'd basically pull in a lot of Liferay's implementation and dependency code, making your plugin ridiculously big. And the initialization can't be done twice, so it won't work anyway.
Plus, in portal-impl.jar you'll only find Liferay's implementation details - none of this code is ever promised to be stable. Not only will nobody care if you're depending on it, it'll most likely break your assumptions even on minor upgrades. Of course some components in there are more stable than others, but the basic assumption is a good one.
Liferay's API (which you're encouraged to use) lives in portal-service.jar. This is automatically available to all plugins, and it contains the utility classes mentioned above (GetterUtil, PropsUtil, PropsKeys). Don't depend on someone's (Liferay's) internal implementation. Rather, depend on the published API. If this means you'll have to implement something again - so be it. It might be slightly less elegant, but a lot more future-proof. And if you compare the size of portal-impl.jar to the amount of code you'd duplicate in the case of PropsValues, you'll see that this slightly longer expression is actually a no-brainer. Don't pull in 30M of code just because you'd rather type 30 characters than 60.
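As a concrete illustration of that trade, here is a small sketch (the SessionSettings wrapper class is invented) of reading the same property from a Liferay 6.2 plugin using only portal-service.jar:

import com.liferay.portal.kernel.util.GetterUtil;
import com.liferay.portal.kernel.util.PropsKeys;
import com.liferay.portal.kernel.util.PropsUtil;

public class SessionSettings {

    // Same value PropsValues.SESSION_TIMEOUT_AUTO_EXTEND would give you,
    // but resolved through the published API instead of portal-impl.
    public static boolean isSessionTimeoutAutoExtend() {
        return GetterUtil.getBoolean(
            PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND));
    }
}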

Is there any advantage in building a business application with an Entity-Component System?

I understand the appeal of using the data-driven Entity-Component System for game development. Naturally I am trying to find other areas to apply this paradigm. As I am about to embark on developing a small business application, I've been wondering how well Entity-Component would fit in with it. However I cannot find any examples or discussions on using Entity-Component in anything besides games. Is there a reason? Would there be any advantages in using Entity-Component in software besides games?
I ended up taking a risk and trying to use ECS outside of the gaming domain (now as an indie, formerly a company employee), with results that astounded me. I wouldn't do things any other way now, and I have an easier-to-maintain system than ever before (not perfect, but so much better than the COM-style architectures we used to use in my industry). I took the plunge mainly because ECS seemed to provide answers for all the things my team and I had struggled with under a COM architecture, though I imagined that with such a risky move I might just end up exchanging one set of problems for another (I was willing to take the risk now that I was on my own). It turned out I didn't exchange one can of worms for another: ECS solved practically all those problems while barely introducing any new ones.
That said, I'm in the VFX domain and it's not that different from games. We still have to animate things like characters, emit particles, interact with meshes, textures, play sound clips, render the result, allow people to write plugins, scripts, etc.
To try to apply ECS in a business domain is far more ballsy. That said, I imagine it could really help create a maintainable system if you have relatively few systems processing a huge number of entity combinations.
Maintainability
What I found made ECS so much easier for me to maintain than previous object-oriented approaches, even within my personal projects, was that those approaches transferred the maintenance overhead away from the clients using the classes and onto the classes themselves. The catch was that there would then be dozens of interfaces and hundreds of subclasses, all inheriting different things and implementing different interfaces, each of which had to be maintained individually. Testing also becomes difficult with so many granular classes and the need to do mock testing.
My brain can only handle so much and hundreds of subclasses interacting with each other was far beyond the limit. Very quickly I found myself no longer able to reason about what was going on, let alone when or where, overwhelmed by complex interactions leading to complex side effects, and never so confident that I could sandwich new code somewhere in there without causing unwanted side effects.
The computing scientist's main challenge is not to get confused by the complexities of his own making. -- E. W. Dijkstra
This applied even for projects I exclusively authored myself. There came a breaking point, typically after a few hundred thousand LOC or so, where I could no longer even comprehend my own creation. I'd refactor here and there, pick up a little momentum, only to take a vacation, come back, and be lost all over again.
ECS removed that challenge, and I don't mean to the degree that I can take a 2-week vacation, come back to the codebase, look at some code, and get the vision of crystal clarity that I had when I was writing it in the first place. ECS doesn't improve things that much in this regard, and it still takes some time to reacquaint myself with code I haven't looked at in a good while. The reason ECS helped so much is that I didn't need to recall everything I wrote in order to extend and change the software. The systems are so decoupled from each other that it's not a huge deal if I forget exactly how one works. I can just concentrate on what I need to do, without worrying about complex side effects triggered through complex interactions of control flow, and without having to think much about anything else.
This applies even when introducing brand new core-level features integrated into the product. These days when I introduce a new central feature to the product, like a brand new audio system central to the product, the only thing I have to think much about is how to integrate it into the user interface. Integrating it into the architecture is relatively effortless compared to previous architectures I worked in.
Meanwhile with the ECS, I only have to maintain a couple dozen systems to provide no less functionality than the above. They do have some complex logic inside, but I don't have to maintain the hundreds of different entity combinations there are, since they just store components, and I don't have to maintain component types since they just store raw data and I rarely ever find the need to go back and change them (very close to never).
Extensibility
Being able to extend an ECS architecture in hindsight with central concepts is about the easiest thing I've encountered so far and requires the minimum amount of knowledge of how the existing codebase works.
As a very fresh example, I recently encountered a strong desire for scripters using my software to be able to access entities in the scene by a simple, global name. Before, they had to specify a full scene path like Scene.Lights.World.Sunlight rather than simply Sunlight.
Normally in the previous architectures I worked in, that would have ranged from a highly intrusive to moderately intrusive change. A COM-style system revolving around pure interfaces might require introducing a new interface or, worse, changing an existing one, and updating a few hundred subtypes to implement the new functions. If we had a central abstract base class that everything already inherited, we might be able to modify that one centrally to implement this new interface (or the new parts of an existing interface), but it would likely be monstrous if there was a central base class for everything that might want such a name, and require wading through a lot of delicate code.
With the ECS, all I had to do was introduce a new component, GlobalName, with a system that processes GlobalName components and can find an entity quickly through a specified name. It also handles making sure that no two GlobalName components have a matching name. Due to the nature of the ECS, it's also very easy to detect when a GlobalName component goes away (because its entity was destroyed or because the component was removed from it) and so keep the data structure used to accelerate searches by name (a trie) in sync.
After that I was just able to attach this GlobalName component to anything that scripters wanted to refer to by a global name. They can also attach it themselves and then refer to a given entity later through that name. Components also serialize themselves in ways that preserve backwards compatibility for the most part (e.g. previous versions of the software, which did not know what GlobalName was, will simply ignore it when loading scene data referring to it).
It was about as painless and as non-intrusive a change as I could imagine, considering that it was added very late, in hindsight, to 4-year-old software that did not anticipate the need for it whatsoever. And I managed to get it working just fine on the very first try. As a bonus, all the non-trivial code newly added to make this work lives isolated in its own space; it isn't jumbled up with anything else and doesn't add to the complexity of anything else, as would inevitably have been the case had I used abstract interfaces or base classes. I did not have to modify anything central to make this work, except a few lines of trivial script and some trivial GUI code to display these global names when available.
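For illustration only - this is not the author's actual engine, and all names are invented - a GlobalName feature along these lines might look roughly like this in Java, with the component holding nothing but data and the system owning the lookup index (a plain map here, where the answer mentions a trie):

import java.util.HashMap;
import java.util.Map;

// Component: raw data only, no behavior.
final class GlobalName {
    final String name;
    GlobalName(String name) { this.name = name; }
}

// System: the only code that touches GlobalName components, and the owner
// of the name -> entity index that keeps lookups fast.
final class GlobalNameSystem {
    private final Map<String, Integer> entityByName = new HashMap<>();

    void onComponentAdded(int entityId, GlobalName component) {
        // Enforce the "no two entities share a global name" invariant.
        if (entityByName.putIfAbsent(component.name, entityId) != null) {
            throw new IllegalStateException("Duplicate global name: " + component.name);
        }
    }

    void onComponentRemoved(GlobalName component) {
        // Fired when the component is removed or its entity is destroyed,
        // keeping the index in sync automatically.
        entityByName.remove(component.name);
    }

    // What scripts call: "Sunlight" instead of Scene.Lights.World.Sunlight.
    Integer findEntity(String globalName) {
        return entityByName.get(globalName);
    }
}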
"Inherit Anywhere"
Have you ever wished you could extend a class's functionality from anywhere in your code without actually modifying its code? For example:
// In some part of the system exists a complex beast of a class
// which is tricky to modify:
class Foo {...};
// In some other part of the system is a simple class that offers
// new behavior we'd like to have in 'Foo', with abstract functionality
// (virtual functions, i.e.) open to substitution:
class Bar {...};
// In some totally different part of the system, maybe even a script,
// make Foo inherit Bar's behavior on the fly, including its default
// constructor, copy constructor, and destructor behavior for Bar's state.
Foo.inherit(Bar);
The above leaves the question: where will the abstract functionality of Bar be implemented, since Foo doesn't provide such an implementation? That's where systems analogically kick in for ECS.
I think the temptation will be there for most of us who have had to wade through some existing class's complex code just to make it do a few new things, all while risking unwanted side effects, glitches, and toe-stepping. We may also have wished that a third-party library outside our control offered just a little more functionality that would be very useful throughout the code using it, if only it provided "this one thing". Or we might simply hate the idea of having to change our colleagues' existing code (we don't want to step on toes) even though we're tasked with providing new central behavior.
ECS offers you that kind of flexibility, although in a very different way from the above example (while giving you the analogous benefits). It allows you to extend anything's behavior/functionality/state from anywhere. As in the extensibility example above, I did not have to modify anything that already existed to provide the global-name searching functionality and state. I can extend the behavior of these entities from the outside, even from script, just by adding a new type of component to any entity I want, at which point any systems I write that are interested in such components can pick them up and process them, duck-typing style ("if it has a GlobalName component, it can be given a global name which can be used to find the matching component very quickly").
Associating Data
Similar to the above, have you ever faced the temptation to associate data with existing objects in the code? In such cases we might have to maintain parallel arrays or associative containers like dictionaries/maps, and such code can be tricky to write correctly, given that it has to stay in sync as new objects are added and removed.
ECS solves that problem at a central level, since now you can just attach components and remove components to/from any entity you want very efficiently. That becomes your means of associating new data on the fly. You no longer have to manually synchronize associative data structures.
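A minimal sketch of what such a central attach/detach facility could look like (invented names, deliberately naive storage); the point is that destroying an entity automatically drops every piece of data associated with it, so nothing needs manual synchronization:

import java.util.HashMap;
import java.util.Map;

final class ComponentStore {
    // componentType -> (entityId -> component instance)
    private final Map<Class<?>, Map<Integer, Object>> tables = new HashMap<>();

    <T> void attach(int entityId, T component) {
        tables.computeIfAbsent(component.getClass(), k -> new HashMap<>())
              .put(entityId, component);
    }

    <T> T get(int entityId, Class<T> type) {
        Map<Integer, Object> table = tables.get(type);
        return table == null ? null : type.cast(table.get(entityId));
    }

    void detach(int entityId, Class<?> type) {
        Map<Integer, Object> table = tables.get(type);
        if (table != null) {
            table.remove(entityId);
        }
    }

    // Removing an entity removes all data ever associated with it.
    void destroyEntity(int entityId) {
        for (Map<Integer, Object> table : tables.values()) {
            table.remove(entityId);
        }
    }
}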
Testing
Another issue for me just personally, and it may be because I never mastered the art of unit testing (though I did work with a colleague who really studied up on the subject), is that it never made me confident that a system was relatively bug-free. Integration tests gave me greater confidence in that regard. The problem for me was this: even if the unit test passes, how do you know the client will not misuse the interface? What if they use it at the wrong time? What if they try to use it from multiple threads when it's deliberately not designed to be thread-safe?
I get no huge sense of relief from seeing unit tests pass, since most of the bugs encountered had to do with what was going on between the interfaces being tested, and we had many bugs coming in despite all of the hundreds of unit tests we wrote passing. I love test-driven development, and I did find value in a unit test telling me that one unit was doing what it was supposed to do, which allowed me to use it more confidently throughout the codebase, but unit testing never gave me a huge sense of relief about the correctness of the codebase as a whole.
ECS solved that problem for me and made unit testing much more valuable even to someone like me who never mastered the art of testing since there are a handful of systems, they each do their hefty share of work (not granular little objects), and they're concrete. If we have to do anything resembling mock testing, it's simply to insert the components/entities necessary to run them and test them. It starts to feel like testing a system is closer to integration testing than unit testing, even though the system is the smallest testable unit.
Homogeneous Processing
To apply ECS requires embracing a more loopy kind of logic, with more homogeneous loops doing one thing at a time. A lot of OOP tends to encourage non-homogeneous control flows and complex interactions, causing many things to happen in any given phase/state of the system. This was the most difficult part for me initially, since I wanted to apply disparate tasks at one time to a given entity/set of components, and that temptation couldn't be satisfied so directly given decoupled systems which only perform one task at a time. So I had to learn how to defer processing, storing some state for the next system to use, and I also make minimal use of an event queue so that systems can trigger events that get processed by others.
Nevertheless, I found ways to program the equivalent of a complex interaction as a result of a series of simple loops doing one thing at a time. It never turned out to be as difficult as I imagined to force myself to work this way, applying one uniform task over one set of entities at one time. And after being forced to do this for a while and maintaining the results -- wow! I should have been doing that all along. It's actually kind of depressing reflecting back on a decade of maintaining architectures that were so much harder to maintain than they needed to be after getting the breath of fresh air that was the ECS architecture.
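To illustrate the shape of that control flow (all names are invented; a real engine would iterate components far more efficiently), each system runs one homogeneous loop per tick, and anything that would trigger work in another system is deferred rather than called directly:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

final class Position { double x, y; }
final class Velocity { double dx, dy; }

interface EcsSystem { void update(double dt); }

// One uniform task applied to one set of components at a time.
final class MovementSystem implements EcsSystem {
    final List<Position> positions = new ArrayList<>();
    final List<Velocity> velocities = new ArrayList<>();   // parallel by index

    @Override
    public void update(double dt) {
        for (int i = 0; i < positions.size(); i++) {
            positions.get(i).x += velocities.get(i).dx * dt;
            positions.get(i).y += velocities.get(i).dy * dt;
        }
    }
}

final class World {
    final List<EcsSystem> systems = new ArrayList<>();
    final Queue<Runnable> deferred = new ArrayDeque<>();   // minimal event queue

    void tick(double dt) {
        for (EcsSystem system : systems) {
            system.update(dt);            // simple, predictable control flow
        }
        while (!deferred.isEmpty()) {
            deferred.poll().run();        // deferred cross-system work runs last
        }
    }
}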
Interactions
This is a simplified "interaction" comparison (not necessarily indicative of direct coupling, since the coupled version would run from concrete objects to abstract interfaces) of before and after I adopted ECS. Before: every object type interacting with many others - and that's just between a small number of types (I was too lazy to draw hundreds). This was why I always struggled to maintain these things and felt tangled up in the code: the interactions really were a tangled mess, leading you to all sorts of remote functions in the system and causing side effects along the way. After (where the components are just raw data and contain no functionality of their own): a handful of systems, each reading and writing plain component data.
And the second version was so, so much easier to comprehend, so much easier to extend, so much easier to maintain, so much easier to reason about in terms of correctness, so much easier to test, etc. If your business architecture can effectively fit into the second type of model, I can't overstate how much it can simplify everything.
Invariants
One of the scariest parts to me when I started developing the ECS engine was the lack of information hiding. When components are just raw data, they're dangling what I thought should be their privates in the air for anyone to touch. This could be doubly scary in a business domain that might be more mission-critical in nature.
Yet I found invariants just as easy to maintain, if not more, due to the limited number of systems that access any given component (and typically if the data is modified, it only makes sense for one system in the entire codebase to do it), the extremely simple control flows, and the extremely predictable side effects that result. And it's pretty easy to test the codebase for correctness when you just have a handful of systems to worry about as far as functionality.
Conclusion
So if you are willing to take the risk, I think ECS could be applied very effectively in certain business domains. The main thing worth thinking about upfront is whether you can model the entirety of your software's needs as a handful of systems processing data stored in components, with each system still having a bulky but singular responsibility (the analogical equivalents of a RenderingSystem, GuiSystem, PhysicsSystem, InputSystem, etc.). Naturally the benefits of ECS diminish if you find you need hundreds of disparate systems to capture the business logic.
If you're interested, I can extend my answer in some later iterations and try to go over some of the minor struggles I faced with the ECS when I was completely wet behind the ears about it.
(Apologies for the necromancy)
Coming from an enterprise background, I have recently been considering this question. Entity-component systems are comparatively new, and represent a completely different design paradigm to what most business developers will have experience with.
Considering my own company's example, I have seen a few scenarios where an entity-component system would offer benefits.
For example, in our primary application, addresses are associated with contacts and organisations. (There are ContactAddress and OrganisationAddress joining tables in our database.) One client wishes to associate projects with addresses as well. There are many ways of achieving this, but an entity-component based approach would seem quite elegant to me - simply add an Addressable component to the Project entity, and the GUI should sort itself out.
Instead, we will likely be adding a new joining table and new data-input pages (albeit re-using common controls).
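To make the comparison concrete, a rough sketch of the component-based alternative might look like this (Entity, Addressable and AddressEditor are invented for illustration; the real application would presumably map these onto database tables):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class Address { String line1, city, postcode; }

// Plain-data component: anything that can carry addresses gets one of these.
final class Addressable {
    final List<Address> addresses = new ArrayList<>();
}

// Contact, Organisation, Project, ... all become entities with components.
final class Entity {
    private final Map<Class<?>, Object> components = new HashMap<>();
    <T> void add(Class<T> type, T component) { components.put(type, component); }
    <T> T get(Class<T> type) { return type.cast(components.get(type)); }
}

final class AddressEditor {
    // One generic screen serves every addressable entity, so adding
    // addresses to Project needs no new data-input pages.
    void open(Entity entity) {
        Addressable addressable = entity.get(Addressable.class);
        if (addressable != null) {
            // render / edit addressable.addresses here
        }
    }
}

Associating projects with addresses would then reduce to project.add(Addressable.class, new Addressable()) rather than a new joining table and new pages.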
The primary disadvantage, I would think, would be (initial) lack of developer awareness of the best ways of applying this paradigm to business software, precisely because it doesn't appear to have been done before. Once you start with such an approach, you are committed to it - if it proves frustrating once your project reaches a certain complexity, there's no way out without a significant rewrite.

Setting up Perforce depot for multiple projects

Summary: I want help figuring out how to set up the depot and my development environment so that I can support multiple, related projects.
Details:
Until now I've had a depot containing only one project: ProjectA, robot version A.
I am starting to work on a new version (ProjectB) which has some differences in HW - I/O port mappings and timers have changed. I would like to continue to develop code for both projects.
This means that ProjectB will share some files with ProjectA, and some files will be different.
Since the differences are HW-related, what I'm thinking of doing is creating a common area and then project-specific areas, where the common area holds device-independent code and the project-specific areas hold device-dependent code.
The differences are big enough that I don't want to do #ifdef within files. Some differences are simple - different I/O port mapping and some are completely new modules.
To make maintenance easier, I would like to be able to compare differences between the device-dependent code and propagate selected changes.
Finally, to minimize my burden during comparisons, I would like to mark differences that I know are okay so that in future comparisons they don't show up.
Help!
Your instincts are good -- you're trying to Not Duplicate Code. This is the core of good design & engineering.
As for the file layout, it's always annoying to have your directories too deep, but that's MUCH better than too shallow. Maybe:
<root>
    main/
        projects/
            robot1/...
            robot2/...
        shared1/
        shared2/
(Big repositories are much deeper than that, even.)
As for how you make shared code -- you could have different setup.h or constants.h that drive what the various shared libraries do. Alternatively, build your shared libraries so they are parameterized at runtime.
SetupDrivers(0x80020); // address of PIO registers
And lastly -- if the projects really are different, decide if sharing the code really is the right thing. Usually yes, but everything is a choice. If you hope to manually "diff" your files to look for differences, it's really up to you to keep the structures close enough to diff. The "different config.h file for each project" idea mentioned above would help.
If you roll your own diff tool (in python or whatever) you could use special comments to flag "expected different lines".

How to build Linux system from kernel to UI layer

I have been looking into the MeeGo, Maemo and Android architectures.
They all have a Linux kernel, build some libraries on it, then build middle-layer libraries [e.g. telephony, media, etc.].
Suppose I want to build my own system: a Linux kernel, some binaries like glibc and D-Bus, a UI toolkit like GTK+ and its binaries.
I want to compile every project from source to create my own customized Linux system for desktop, netbook and handheld devices. [starting from netbook first :)]
How can I build my own customized system from kernel to UI?
I apologize in advance for a very long winded answer to what you thought would be a very simple question. Unfortunately, piecing together an entire operating system from many different bits in a coherent and unified manner is not exactly a trivial task. I'm currently working on my own Xen based distribution, I'll share my experience thus far (beyond Linux From Scratch):
1 - Decide on a scope and stick to it
If you have any hope of actually completing this project, you need to write, in a single paragraph, an explanation of what your new OS will be and do once it's completed. Print that out and tape it to your wall, directly in front of you. Read it, chant it, practice saying it backwards and whatever else may help you keep it directly in front of any urge to succumb to feature creep.
2 - Decide on a package manager
This may be the single most important decision that you will make. You need to decide how you will maintain your operating system with regard to updates and new releases, even if you are the only subscriber. Anyone, including you, who uses the new OS will surely find a need to install something that was not included in the base distribution. Even if you are pushing out an OS to power a kiosk, it's critical for all deployments to keep themselves up to date in a sane and consistent manner.
I ended up going with apt-rpm because it offered the flexibility of the popular .rpm package format while leveraging apt's known sanity when it comes to dependencies. You may prefer using yum, apt with .deb packages, Slackware-style .tgz packages or your own format.
Decide on this quickly, because it's going to dictate how you structure your build. Keep track of dependencies in each component so that it's easy to roll packages later.
3 - Re-read your scope then configure your kernel
Avoid the kitchen sink syndrome when making a kernel. Look at what you want to accomplish and then decide what the kernel has to support. You will probably want full gadget support, compatibility with file systems from other popular operating systems, security hooks appropriate for people who do a lot of browsing, etc. You don't need to support crazy RAID configurations, advanced netfilter targets and minixfs, but wifi better work. You don't need 10GBE or infiniband support. Go through the kernel configuration carefully. If you can't justify including a module by its potential use, don't check it.
Avoid pulling in out-of-tree patches unless you absolutely need them. From time to time, people come up with new scheduling algorithms, experimental file systems, etc. It is very, very difficult to maintain a kernel that consumes anything but mainline.
There are exceptions, of course - if going out of tree is the only way to meet one of the goals stated in your scope. Just remain conscious of how much additional work you'll be making for yourself in the future.
4 - Re-read your scope then select your base userland
At the very minimum, you'll need a shell, the core utilities and an editor that works without a window manager. Paying attention to dependencies will tell you that you also need a C library and whatever else is needed to make the base commands work. As Eli answered, Linux From Scratch is a good resource to check. I also strongly suggest looking at the LSB (Linux Standard Base), a specification that lists common packages and components that are 'expected' to be included with any distribution. Don't follow the LSB as a standard; compare its suggestions against your scope. If the purpose of your OS does not necessitate inclusion of something and nothing you install will depend on it, don't include it.
5 - Re-read your scope and decide on a window system
Again, referring to the kitchen-sink syndrome, try to resist the urge to just slap a stock install of KDE or GNOME on top of your base OS and call it done. Another common pitfall is to install a full-blown version of either and work backwards by removing things that aren't needed. For the sake of sane dependencies, it's really better to work on this from the bottom up rather than the top down.
Decide quickly on the UI toolkit that your distribution is going to favor and get it (with supporting libraries) in place. Define UI consistency quickly and stick to it. Nothing is more annoying than having 10 windows open that behave completely differently as far as controls go. When I see this, I diagnose the OS with multiple personality disorder and want to medicate its developer. There was just an uproar regarding Ubuntu moving window controls around, and they were doing it consistently... the inconsistency was the behavior changing between versions. People get very upset if they can't immediately find a button or have to increase their mouse mileage.
6 - Re-read your scope and pick your applications
Avoid kitchen-sink syndrome here as well. Choose your applications not only based on your scope and their popularity, but on how easy they will be for you to maintain. It's very likely that you will be applying your own patches to them (even simple ones, like messengers updating a blinking light on the toolbar).
It's important to keep every architecture that you want to support in mind as you select what you want to include. For instance, if Valgrind is your best friend, be aware that you won't be able to use it to debug issues on certain ARM platforms.
Pretend you are a company and will be an employee there. Does your company pass the Joel test? Consider a continuous integration system like Hudson, as well. It will save you lots of hair pulling as you progress.
As you begin unifying all of these components, you'll naturally be establishing your own SDK. Document it as you go, and avoid breaking it on a whim (refer to your scope, always). It's perfectly acceptable to just let Linux be Linux, which turns your SDK more into formal guidelines than anything else.
In my case, I'm rather fortunate to be working on something that is designed strictly as a server OS. I don't have to deal with desktop caveats and I don't envy anyone who does.
7 - Additional suggestions
These are in random order, but noting them might save you some time:
Maintain patch sets for every line of upstream code that you modify, in numbered sequence. An example might be 00-make-bash-clairvoyant.patch. This allows you to maintain patches instead of entire forked repositories of upstream code. You'll thank yourself for this later.
If a component has a test suite, make sure you add tests for anything that you introduce. It's easy to just say "great, it works!" and leave it at that, but keep in mind that you'll likely be adding even more later, which may break what you added previously.
Use whatever version control system is in use by the authors when pulling in upstream code. This makes merging of new code much, much simpler and shaves hours off of re-basing your patches.
Even if you think upstream authors won't be interested in your changes, at least alert them to the fact that they exist. Coordination is essential, even if you simply learn that a feature you just put in is already in planning and will be implemented differently in the future.
You may be convinced that you will be the only person to ever use your OS. Design it as though millions will use it, you never know. This kind of thinking helps avoid kludges.
Don't pull in upstream alpha code, no matter what the temptation may be. Red Hat tried that; it did not work out well. Stick to stable releases unless you are pulling in bug fixes. Major bug fixes usually result in upstream releases, so make sure you watch and coordinate.
Remember that it's supposed to be fun.
Finally, realize that rolling an entire from-scratch distribution is exponentially more complex than forking an existing distribution and simply adding whatever you feel it lacks. You need to reward yourself often by booting your OS and actually using it productively. If you get too frustrated, consistently confused, or find yourself putting off work on it, consider making a lightweight fork of Debian or Ubuntu. You can then go back and duplicate it entirely from scratch. It's no different from prototyping an application in a simpler / rapid language first before writing it for real in something more difficult. If you want to go this route (first), gNewSense offers utilities to fork your own OS directly from Ubuntu. Note that, by default, their utilities will strip any non-free bits (including binary kernel blobs) from the resulting distro.
I strongly suggest going the completely-from-scratch route (first), because the experience you will gain is far greater than making yet another fork. However, it's also important that you actually complete your project. "Best" is subjective; do what works for you.
Good luck on your project, see you on distrowatch.
Check out Linux From Scratch:
Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own customized Linux system entirely from source.
Use Gentoo Linux. It is a compile-from-source distribution and very customizable. I like it a lot.

What is your definition of a 'Component' in a software Product?

We are working on standardizing our Bugzilla application across all projects. The default installation has bugs grouped into Products which have Components and Versions.
We are united in our definitions of Product and Version but there is still some confusion/discussion/argument about what constitutes a Component.
How would you define a Component as far as it relates to classifying bugs in a project?
For bug tracking purposes, I would use Component to represent high-level areas that different developers tend to handle. For me, Component [in bug tracking] == Area of Concern.
For example, for a fictional EventPlanner application, I would list my components as:
Calendar
User Interface
Printing
Note that this may be different from what I, as a developer, would typically consider a software component. For example, my EventPlanner app might use a Calendar API, but "User Interface" and "Printing" themselves are not software components.
A component is a subdivision of a product and it provides a subset of the functionality of the product.
For example, if Stack Overflow is the product here are some potential components:
Questions
Tags
Badges
Profiles
A bit of glue logic should piece the components together to form the project.
I would define a Component similarly to a branch of code, like a project within Visual Studio, which could be a class library, console application, Windows application, or website/web application.
A good definition that will serve your purpose.
Anything that's a "helper" object, something that does something small to support the larger functionality of the application, and can be reused in other Products.
For example, an EmailSender class, or an ErrorLogger, etc.
Whatever it is you're most interested in breaking down your Products into for purposes of bug tracking. It's more important that the distinction is useful to you than it be "correct" according to some concept that's not specific to the problem of bug tracking.
To me, any part of the code for which you can change the implementation without breaking other parts could qualify as a component. Something you can definitively point to and say, "This bug is there, and nowhere else."
If your product is a giant spaghetti code ball, you might not get any benefit from components. If you've decoupled very well, and you have a very large project, you might have hundreds of things that could qualify as "components", and you'll probably have to semantically group them to prevent the distinction from being more trouble than it's worth.
EDIT: In response to a comment on another answer, this model works best if the person filing the bugs has the necessary knowledge of the problem to do the filing. We always review (triage) reported bugs and file them ourselves. If you're leaving component selection up to the users, this idea won't work very well for you.
Dr. A. Richard Newton of the University of California, Berkeley, who teaches a course called "Component-Based Software: State-of-the-Art and Lessons Learned", defines a software component as
A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties.
I don't think there is a better definition. So a bug here could be at the component level or at the interface level, i.e. the composition level.
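As a small, invented illustration of that definition: the interfaces below are the contractually specified surface, the constructor parameter is the explicit context dependency, and because nothing else is reached for implicitly, a third party can compose or replace the component independently.

// Contractually specified interface of the component.
interface ErrorLogger {
    void log(String message, Throwable cause);
}

// Explicit context dependency: the only thing the component needs from its
// environment, stated up front rather than reached for globally.
interface Clock {
    long now();
}

final class ConsoleErrorLogger implements ErrorLogger {
    private final Clock clock;

    ConsoleErrorLogger(Clock clock) {
        this.clock = clock;
    }

    @Override
    public void log(String message, Throwable cause) {
        System.err.println(clock.now() + " " + message + ": " + cause);
    }
}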
