PropsValues in Liferay

Why shouldn't we use any classes from portal-impl.jar inside a portlet?
In my case, how can I read PropsValues without adding portal-impl to my Maven dependencies?
I'm using Liferay 6.2.
Thanks

Origineil, in a comment on your question, already gave you the alternative to using portal-impl.jar: use GetterUtil.getBoolean(PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND)) instead of PropsValues.SESSION_TIMEOUT_AUTO_EXTEND.
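To make that concrete, here is a minimal sketch of reading a portal property through the public API in portal-service.jar (the wrapping class is just illustration; the property key is the one from the example above):

    // Reads a portal property via the public API in portal-service.jar;
    // no portal-impl dependency is needed.
    import com.liferay.portal.kernel.util.GetterUtil;
    import com.liferay.portal.kernel.util.PropsKeys;
    import com.liferay.portal.kernel.util.PropsUtil;

    public class SessionSettings {

        public static boolean isAutoExtendEnabled() {
            // PropsUtil.get() returns the raw String value from
            // portal.properties; GetterUtil converts it to a boolean.
            return GetterUtil.getBoolean(
                PropsUtil.get(PropsKeys.SESSION_TIMEOUT_AUTO_EXTEND));
        }
    }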
Why shouldn't you add portal-impl.jar to your project? Well, there are many reasons. First of all: it doesn't work. portal-impl.jar contains quite a lot of Spring components that would re-initialize in your plugin and assume they're running in the portal context. They'd be missing other code they depend on, and you'd basically pull in a lot of Liferay's implementation and dependency code, making your plugin ridiculously big. And since that initialization can't be done twice, it won't work anyway.
Plus, portal-impl.jar contains only Liferay's implementation details, and none of this code is ever promised to be stable. Not only will nobody care if you're depending on it, it will most likely break your assumptions even on minor upgrades. Of course some components in there are more stable than others, but the basic assumption is a good one.
Liferay's API (which you're encouraged to use) lives in portal-service.jar. This is automatically available to all plugins, and it contains the utility classes used in the alternative above. Don't depend on somebody's (Liferay's) internal implementation; depend on the published API instead. If this means you have to implement something again, so be it. It might be slightly less elegant, but a lot more future-proof. And if you compare the size of portal-impl.jar to the amount of code you'd duplicate in the case of PropsValues, you'll see that this trade-off is actually a no-brainer. Don't pull in 30 MB of code just because you'd rather type 30 characters than 60.

Related

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a Gherkin specification, using Elixir. One way to do this is an ExUnit add-on called Cabbage.
ExUnit provides a setup hook, which runs before each single test, and a setup_all hook, which runs once before the whole suite.
When I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it turns out that the persistence is purged before each step definition is executed. But a Gherkin scenario almost always needs multiple steps, which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, resets the persistence once per feature file. But a feature file in Gherkin almost always contains multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to let me isolate single steps (which I consider pointless) or whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps may be something to look into. Sadly, that change was mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So currently that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags, which I think haven't been published in a release yet but are in master. They work with a callback per tag; you can take a closer look at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

Catel application initialization

I'm looking into Catel. I started following along in the Getting Started for WPF Developers guide, created the initial project using the template, and ran it. All well and good.
Then I take a detailed look at the generated source files: I see references to DataWindow, StyleHelper, and ViewModelBase. I also run it in the debugger and watch the Catel debug output, stepping so that I can see when things happen.
And it is all magical.
The view manager somehow runs and registers the MainWindow. And the ViewModelFactory is invoked to create MainWindowViewModel, and the MainWindow DataContext gets set.
How does this all happen? I am missing documentation that lays out the sequence of events when an application starts. I am reluctant to take it on faith, and reluctant to dive into the giant code base without an inkling of where to start. I have read the CodeProject articles and the intro part of the documentation.
Is this driven off of the behaviors some way? How are they invoked? I just can't find the thread that starts me on my way.
Aside: I look at Catel because I found myself implementing a ton of plumbing for a significant MVVM application, and decided that someone else had already solved this problem.
Thanks for any leads. (And thanks, Geert. This is a significant work.)
-reilly.
If I understand correctly, you are looking for advanced information of the inner workings. I think this part of the documentation might be of interest for you.
It might not provide all information you are looking for, but it should provide some.
To answer some basic questions:
1) The startup window is defined in App.xaml (that's standard WPF).
2) Since it derives from DataWindow, it uses WindowLogic => LogicBase. The LogicBase uses the IViewModelLocator to find the right view model based on naming conventions (all documented).
3) Then the IViewModelFactory instantiates the VM (using dependency injection) and returns it to the logic, which sets it as the DataContext.
Note that, as the advanced documentation tells you, Catel injects an additional layer to distinguish the outside DataContext from the VM DataContext (of a window or user control content).
P.S. I really recommend switching to the latest prereleases via NuGet. Catel 4.0 (to be released very soon) is nearly feature complete and will save you from a lot of breaking changes you would otherwise have to go through (and it is of course much better :-)).

Check Facelets files integrity during build or deployment

Is there any way to check your Facelets files for errors during the build or deployment process?
I am not looking for a solution that simply validates the Facelets files against their schema, but also one that verifies that EL expressions are valid. For example, if the name of a property or method is misspelled in an EL expression (e.g. value="#{controller.nme}" instead of value="#{controller.name}"), this will be discovered only during testing at run time.
I am using JBoss 7.1.
Theoretically, Eclipse plugins like WTP and JBoss Tools can do this, but as of today they only work in the full IDE, not as a separate command-line tool that can be invoked by Ant or Maven.
Worse, those tools have never been perfect. They always report tons of false positives, and as a rule of thumb their validation algorithms are usually years behind. The current version of WTP probably just barely validates everything from Java EE 5 (and maybe it still misses some obscure features).
As a result, if you fail your build based on this validation you'll probably never be able to deploy anything. Even in the most carefully coded and fully correct web apps, WTP and JBoss tools find it necessary to report hundreds or in large projects thousands of warnings and errors. It's IMHO completely useless to depend upon.
This is a sort of a chicken/egg problem. As you said yourself, many EL expressions can only be evaluated at run time.
Keep in mind that EL includes a whole lot more than simple property and method names: it has different implicit objects (params, facesContext, session, etc.) which are available in different contexts, and you can also add your own objects through many different means (other Facelets templates, beans which may or may not be registered in faces-config, and even plain Java code inserting objects into the view).
All these things make it very hard to build tooling that does this type of checking for you. I think the closest thing to what you want would be to create your own JSF unit tests for each page using JSFUnit and Arquillian and integrate them into your build. As you are targeting JBoss 7, that should be feasible.
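As a rough sketch of that idea, a JSFUnit test can open a page through the full JSF lifecycle and evaluate EL against the live view; the page path and bean names below are hypothetical placeholders:

    // Hedged sketch of a JSFUnit test; /controller.jsf and the
    // controller bean are hypothetical and must match your app.
    import org.jboss.jsfunit.jsfsession.JSFServerSession;
    import org.jboss.jsfunit.jsfsession.JSFSession;

    public class ControllerPageTest {

        public void testNamePropertyResolves() throws Exception {
            // Runs the page through the full JSF lifecycle on the server.
            JSFSession jsfSession = new JSFSession("/controller.jsf");
            JSFServerSession server = jsfSession.getJSFServerSession();

            // A typo such as #{controller.nme} fails here during the
            // test run instead of during manual testing.
            Object name = server.getManagedBeanValue("#{controller.name}");
            if (name == null) {
                throw new AssertionError("#{controller.name} did not resolve");
            }
        }
    }

Wired into the build (e.g. via Arquillian and Surefire), a misspelled property then fails the build rather than surfacing at run time.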

Adding extra code at runtime in Java

I am developing a library and I need to add extra code to some methods of my objects at run time. There are two points here. First, the program I want to add extra code to was written by somebody else, and I don't want to edit it. Second, what I need is very similar to adding an aspect before a method call.
After searching and reading on the internet, I found many frameworks, such as AspectJ, AspectWerkz, etc., that could do this job. But the problem with AspectJ (when used in a Spring context), for example, is that it doesn't provide any API to do your weaving at runtime.
I also found libraries like ASM, Javassist, etc., but they are very general and hard to learn, and my job is much closer to aspects.
So what do you suggest? Is there any good library out there? Am I wrong about the libraries I mentioned above? Please help!
With AspectJ you can apply your aspects when classes are loaded at runtime; see the Load-Time Weaving documentation. Alternatively, you can apply the aspects at compile time, which still doesn't require changing the old code.
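For illustration, a minimal aspect could look like the sketch below; the target package com.example.legacy is hypothetical, and load-time weaving additionally needs the aspect registered in META-INF/aop.xml and the weaving agent on the command line (java -javaagent:aspectjweaver.jar ...):

    // Minimal AspectJ aspect that runs extra code before methods in
    // third-party code, without editing that code.
    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    @Aspect
    public class TracingAspect {

        // Matches every public method of every type in the
        // (hypothetical) legacy package and its subpackages.
        @Before("execution(public * com.example.legacy..*.*(..))")
        public void beforeMethod(JoinPoint joinPoint) {
            System.out.println("Entering " + joinPoint.getSignature());
        }
    }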

Security of the Scala runtime

I'm a developer of the Robocode engine. We would like to make Robocode multilingual, and Scala seems to be a good match. We have a Scala plugin prototype here.
The problem:
Because users are creative programmers, they may try to win the battle in different ways. Robots are also downloaded from an online database where anyone can upload one, so a gap in security could open a security hole on the user's computer. Robots written in Java run in a restricted sandbox: almost everything is prohibited [network, GUI, disk (limited), threads (limited), classloaders and reflection]. The sandbox is similar to a browser applet's. We use a SecurityManager, a custom ClassLoader per robot, etc.
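(For readers unfamiliar with that mechanism, a much-simplified sketch of the idea follows; RobotClassLoader stands in for the per-robot loader, and the real Robocode implementation is far more elaborate:)

    // Simplified sketch only: deny a few sensitive permissions to code
    // loaded by the per-robot class loader and allow everything else.
    // A real sandbox would default-deny and use a policy as well.
    import java.security.Permission;

    // Hypothetical per-robot class loader; each robot gets its own instance.
    class RobotClassLoader extends ClassLoader {
    }

    public class RobotSecurityManager extends SecurityManager {

        @Override
        public void checkPermission(Permission perm) {
            if (isRobotCode() && perm instanceof RuntimePermission) {
                String name = perm.getName();
                if (name.startsWith("createClassLoader")
                        || name.startsWith("modifyThread")
                        || name.startsWith("accessDeclaredMembers")) {
                    throw new SecurityException("Denied for robot: " + name);
                }
            }
        }

        private boolean isRobotCode() {
            // Walk the current call stack and check whether any frame
            // belongs to a class loaded by the per-robot loader.
            for (Class<?> clazz : getClassContext()) {
                if (clazz.getClassLoader() instanceof RobotClassLoader) {
                    return true;
                }
            }
            return false;
        }
    }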
There are a few ways to host the Scala runtime in Robocode:
1) Load it together with the robot inside the sandbox. Pretty safe for us, and the preferred solution. But it will cripple the Scala runtime's abilities, because the runtime uses reflection. Maybe it generates classes at runtime? Uses threads to do some internal cleanup? Accesses JVM internals? (I would not like to limit the abilities of the language.)
2) Use the Scala runtime as trusted code, outside the sandbox, with security on the same level as the JDK and visibility to (malicious) robots. Are the Scala runtime APIs safe? Do its methods have security guards? Is there any safe mode? Is there any singleton in the Scala runtime which could be abused to communicate between robots? Any concurrency/threadpool/messaging which could simulate threads? (Is there any security audit for the Scala runtime?)
3) Something in between: some classes of the runtime inside the sandbox and some outside. Which classes/packages must be visible to the robot, and which are just private implementation? (This seems to be the future solution.)
The question:
Is it possible to enumerate and isolate the parts of the runtime which must run in the trusted scope from the rest? Specific packages and classes? Or is there a better idea?
I'm looking for a specific answer which will lead to a secure solution. Random thoughts are welcome, but won't be awarded. There is an ongoing discussion on the Scala mailing list; no specific answer yet.
I think #1 is your best bet and even that is a moving target. As brought up on the mailing list, structural types use reflection. I don't think structural types are common in the standard library, but I don't think anyone keeps track of where they are.
There's also always the possibility that there are other features using reflection behind the scenes. For example, for a while in the 2.8 branch some array functionality was using reflection. I think that's been changed after benchmarking, but there's always the possibility that there's some problem where someone said "Aha! I will use reflection to solve this."
The Scala standard library is filled with singletons. Most of them are immutable, but I know that the Scheduler object in the actors library could be abused for communication because it is essentially a proxy for an actual scheduler so you can plug your own custom scheduler into it.
At this time I don't think Scala requires using a custom class loader, and all of its classes are produced at compile time instead of at runtime, but then again that's probably a moving target. Scala generates a lot of class files, and there is always talk of making it generate some of them at runtime when they are needed instead of at compile time.
So, in short, I do not think it's possible (within reasonable constraints on effort) to enumerate and isolate the pieces of Scala that can (and should) be trusted.
Since you mentioned other J* language implementations, which may all make use of reflection, this would amount to a ban on all of those languages as long as reflection is off limits.
I guess the problem is really the JVM's: it has no way to partition the scope of the reflection API such that you could "sandbox" the part of the code that can be reflected on.
