NoAutoGeneratedIdViewHandler and production? (JSF)

OmniFaces 2.0 introduced the NoAutoGeneratedIdViewHandler. This is a great feature, but surely it should be off in production mode?
Even after reading the docs and the source, I am not sure whether it's development-mode only or not. Crossing my fingers :-)

It is indeed primarily designed as a development aid.
If you have used it from the beginning when developing a new JSF webapp, or if it was only installed later but you have tests covering every single page of the webapp, then it shouldn't do any harm when run in production as well (i.e. it shouldn't throw the ISE).
I can, however, imagine that it's unnecessary overhead to keep checking all component IDs in the production stage, as they are already checked during the development stage and don't ever change in production. Hence, it has been altered to run in the development stage only. It's available as per today's latest snapshot; the documentation will be regenerated later.
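For illustration, here is a minimal sketch of how such stage-sensitive behavior can be implemented in a custom view handler. This is not OmniFaces' actual source; the class name StageAwareViewHandler and the stubbed-out check are hypothetical.

    import java.io.IOException;

    import javax.faces.FacesException;
    import javax.faces.application.ProjectStage;
    import javax.faces.application.ViewHandler;
    import javax.faces.application.ViewHandlerWrapper;
    import javax.faces.component.UIViewRoot;
    import javax.faces.context.FacesContext;

    // Hypothetical wrapper that only performs its extra work in Development
    // stage and otherwise delegates straight to the wrapped handler.
    public class StageAwareViewHandler extends ViewHandlerWrapper {

        private final ViewHandler wrapped;

        public StageAwareViewHandler(ViewHandler wrapped) {
            this.wrapped = wrapped;
        }

        @Override
        public ViewHandler getWrapped() {
            return wrapped;
        }

        @Override
        public void renderView(FacesContext context, UIViewRoot view)
                throws IOException, FacesException {
            if (context.isProjectStage(ProjectStage.Development)) {
                // ... perform the development-only ID check here ...
            }
            getWrapped().renderView(context, view);
        }
    }

Such a handler is registered via <view-handler> in faces-config.xml, and the stage itself is driven by the javax.faces.PROJECT_STAGE context parameter in web.xml.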

Choosing an Azure Functions runtime version to reduce errors

What is the best practice for choosing a runtime version, given that new runtime releases introduce breaking changes, while pinning a specific version also causes issues because old runtimes are removed regularly?
https://github.com/Azure/app-service-announcements-discussions/issues/90
Please let me explain below:
Scenario 1:
When the following is used,
FUNCTIONS_EXTENSION_VERSION = ~2
our code broke with the latest runtime, because ~2 means the latest 2.x version is used.
https://github.com/Azure/azure-functions-host/issues/4203
Scenario 2:
However, when the following is used,
FUNCTIONS_EXTENSION_VERSION = <specific version>
our code broke again with the latest runtime, because the specified runtime was removed by Azure Functions and the latest runtime, with its breaking changes, was used instead.
https://github.com/Azure/app-service-announcements-discussions/issues/90
Again, what is the best way to reduce these errors?
Updates
In terms of time frame, how does a new runtime release flow from being publicly downloadable to being rolled out on Azure Functions? For example, how far in advance is a runtime available before it is rolled out to Azure Functions?
How long is an old runtime kept on Azure Functions after the latest runtime rolls out? What factors determine when an old runtime is deleted?
The best and recommended practice is to use the latest. It is a rare occurrence, but unfortunately, a regression was introduced with a new release impacting your app.
If you want to perform validation on new versions, the recommendation is to:
Subscribe to new release notifications at https://github.com/Azure/app-service-announcements/issues
Pin yourself to the current release you've validated against
As a new version is introduced, update a test environment to adopt that new version (or have a test environment that auto updates, using ~2). If you have a test environment set with auto-updates and automated tests, this makes the process significantly simpler.
Once validated, update the production environment to that new version
If you find an issue, reporting it allows us to ensure we don't remove the version that works.
We always maintain the newly deployed version and the previous release, and, aside from hotfixes and small ad-hoc deployments, the release cadence is ~2 weeks. Anything that has been flagged as a version that needs to be kept due to issues introduced by a release (forcing customers to pin) is also kept.
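As a concrete sketch of the pin-and-validate flow described above, the setting can be managed with the Azure CLI. The app names, resource group, and the pinned version number below are all placeholders:

    # Production: pin to the exact release you have validated
    az functionapp config appsettings set \
      --name my-func-prod --resource-group my-rg \
      --settings FUNCTIONS_EXTENSION_VERSION=2.0.12493.5

    # Test environment: track the latest 2.x release so regressions surface early
    az functionapp config appsettings set \
      --name my-func-test --resource-group my-rg \
      --settings FUNCTIONS_EXTENSION_VERSION=~2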

Is updating npm dependencies not recommended on a production application?

I recently started exploring npm and installed the GitHub repo yoonic/nicistore.
But when I try to build it, it fails.
My question is: if I start building things on top of Node, which I see has very many dependencies from different vendors, am I completely at the mercy of the respective package developers?
I have seen that most Node-based GitHub repos fail to build on the first try. If I update one of the modules by running a console command, is it likely to break the whole application?
And if it does, doesn't that prove Node.js to be an unreliable and unstable development platform?
Think of it as the opposite of most other languages.
You are writing an app in Java.
You want to use LibA, LibB and LibC.
So you try to use LibC 2.4, and as soon as you do, your dependency manager throws all kinds of errors at you.
Why?
Because LibB is using LibC 1.9
So now what are your options?
Strip out all of the calls to all of the new API for LibC that you wanted to use...
...or hope that LibB is open-source, and you can contribute an update for a new version of it, so that you can use the latest version of LibC (and hope it doesn't update).
So now you've done that... but now you've broken your LibA, because it wants the old LibB.
You didn't even want LibA, you just had to have it for your app to be happy with your framework, and the libs that you did actually want to use (B and C). LibA is closed-source, and isn't maintained, anymore. Tough luck. Go back to your old ways, and forget about how much better life could be, if you could only use your framework with the new version of LibC. Or start praying that your framework does a major rewrite, to get rid of the LibA dependency... but then figure out what new hell you have to deal with, just to get LibC working.
Is this really better than Node?
What Node allows you to do is install a dependency at one version while your own dependencies use the same library at a different version.
Not that you can't do that with Java, too... but the entire community has decided that it's just never going to try to do that, and thus outlaws it at the tool level.
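To make that concrete, here is a hypothetical node_modules layout for the LibA/LibB/LibC example above, with two versions of LibC coexisting:

    node_modules/
        libA/
        libB/
            node_modules/
                libC/    <- 1.9, private to LibB
        libC/            <- 2.4, the version your app asked for

Each module resolves libC by walking up from its own directory, so LibB keeps its 1.9 while your code gets 2.4.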
Next, you see too many things which leave you at the mercy of too many vendors...
Going back to Java (or C++, or nearly any mainstream language): looking at Java itself, how many of its libraries are made by Sun Microsystems, or by James Gosling?
Moreover, if you want to boil it down and suggest using only, say, one huge, overarching framework (like Spring MVC) and no other libraries of any kind (like JodaTime), then how many libraries does Spring itself lean on, and why are they of no concern to you, even if you're just using the compiled VM bytecode?
In fact, a strong argument could be made to be more wary of compiled binaries, in languages where it was traditional to see strong, copy-left licensing like that of the GNU GPL... in that realm, you open yourself up to craziness.
Most of the Node stuff, by comparison, is dirt-simple freeware. And even if it's not, it's quickly replaceable as most are micro-libraries.
I would suggest that updating a Node package your server depends on via the CLI is less hazardous than doing the same to a full-fledged Java project, if your goal is to see your project compile again some time in the next week, but with the newer fixes/features...
...but if you're talking about a full-scale, production application, you also want to be cognizant of what it is you're doing, with regards to your codebase, regardless.
As to why things don't build for you on the first try, assuming that you're on a non-Windows platform, and your environment is up to date, I don't know.
Most C/C++ projects I clone don't build for me, first try, either. I usually forget something, or there was something poorly documented, or the actual project was set up to make unfair assumptions about the system it would operate in.
Does that mean that C++ is an unreliable/unstable development platform?
Or the hours/days spent on getting Eclipse set up in an enterprise environment, with all kinds of crazy, company-specific projects and project settings?
It sounds like a case of bad design, more than anything.
Then again, most of my projects these days are wrapped in Docker containers. They all run in the same environment, whether they're running in Windows, on a Mac, or on the server. That tends to take the sting out of building projects, regardless of what language the code is in, or what VM / processor they're running on.
You should also be using npm shrinkwrap files or Yarn lockfiles to preserve the build configuration with the known-working versions of libraries. And you should have unit and integration tests to ensure that changing library versions has no discernible impact on your system.
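A minimal sketch of that workflow (the package name and versions below are placeholders; npm ci requires npm 5.7+):

    # Record the exact, known-working dependency tree
    npm shrinkwrap        # older npm: writes npm-shrinkwrap.json
    # (npm 5+ writes package-lock.json automatically; Yarn writes yarn.lock)

    # Reproduce exactly that tree on another machine or in CI
    npm ci

    # Upgrade one package deliberately, then let your test suite judge it
    npm install left-pad@1.3.0 --save-exact
    npm test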

Testing an Elixir release build with exrm

I am building a Phoenix application with exrm.
Good practice suggests that I should run tests against the same binary I'll be pushing to production.
Exrm gives me the ability to deploy Phoenix on machines that don't have Erlang or Elixir installed, which makes pulling Docker images faster.
Is there a way to run mix test against the binary built by exrm?
It should be noted that releases aren't a binary file. Sure, they are packaged into a tarball, but that is just to ease deployment; what a release contains is effectively the compiled .beam files generated with MIX_ENV=prod mix compile, plus ERTS (if you are bundling it), the Erlang/Elixir .beam files, and the boot scripts/config files for starting the application, etc.
So in short your code will behave identically in a release as it would when running with MIX_ENV=prod (assuming you ran MIX_ENV=prod mix release). The only practical difference is whether or not you've correctly configured your application for being packaged in a release, and testing this boils down to doing a test deployment to /tmp/<app> and booting it to make sure you didn't forget to add dependencies to applications in mix.exs.
The other element you'd need to test is hot upgrades/downgrades, if you are doing them with your application. In that case you need to do test deploys locally to make sure the upgrade/downgrade is applied as expected, since exrm generates default .appup files for you, which may not always do the correct thing, or everything you need them to do; edit them as appropriate. I do this by deploying to /tmp/<app>, starting up the old version, then deploying the upgrade tarball to /tmp/<app>/releases/<new version>/<app>.tar.gz, running /tmp/<app>/bin/<app> upgrade <version>, and testing that the application was upgraded as expected; then I run the downgrade command for the previous version to see if it rolls back properly. The nature of the testing varies depending on the code changes you've made, but that's the gist of it.
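Sketched as shell commands, with myapp and the version numbers as placeholders (paths assume exrm's default rel/ output directory), the procedure looks roughly like this:

    # Build the release
    MIX_ENV=prod mix compile
    MIX_ENV=prod mix release

    # Test deployment: unpack and boot the old version
    mkdir -p /tmp/myapp
    tar -xzf rel/myapp/releases/0.1.0/myapp.tar.gz -C /tmp/myapp
    /tmp/myapp/bin/myapp start

    # Hot upgrade: put the new tarball where the release handler expects it
    mkdir -p /tmp/myapp/releases/0.2.0
    cp rel/myapp/releases/0.2.0/myapp.tar.gz /tmp/myapp/releases/0.2.0/
    /tmp/myapp/bin/myapp upgrade 0.2.0

    # Exercise the running app, then verify the rollback path
    /tmp/myapp/bin/myapp downgrade 0.1.0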
Hopefully that helps answer your question!

Subversion upgrade 1.6 -> 1.7: hooks infrastructure incompatibility

I'm going to upgrade my company's subversion server from version 1.6 to 1.7. The server runs on linux (Ubuntu AFAIK).
I've read all those:
Subversion 1.7 release notes
I've also read those posts:
subversion-client-version-confusion
how-to-upgrade-svn-server-from-1-6-to-1-7
Here and now, I know how to perform the upgrade itself. It's not a big deal. What concerns me the most is the current hooks infrastructure: there are several scripts in Bash and Perl.
So far I've found no information about changes to the hooks infrastructure, but maybe there are known issues I've missed? Is there anything against the upgrade I should know about?
PS: The "try and see what happens" method is absolutely unavailable. I'd like the upgrade to be as smooth as possible. Repository users shouldn't even notice any changes. I can't allow myself any failure in that matter.
The Subversion compatibility guarantees promise that your hook scripts are called in exactly the same way in 1.7 as in 1.6. In 1.7 (and future versions) more arguments can be passed to the scripts, but the old arguments still match the old behavior. So if you created your scripts like the templates, ignoring 'extra' arguments, you shouldn't see a difference.
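For example, a start-commit hook written in the style of the templates keeps working across versions because it reads its arguments positionally and simply ignores any extras. A minimal sketch, with an illustrative capability check:

    #!/bin/sh
    # start-commit: Subversion passes REPOS, USER and (since 1.5) CAPABILITIES;
    # arguments added by later versions are simply ignored.
    REPOS="$1"
    USER="$2"
    CAPABILITIES="$3"   # colon-separated list; empty on pre-1.5 servers

    # Illustrative policy: require clients that report the mergeinfo capability
    case ":$CAPABILITIES:" in
        *:mergeinfo:*) exit 0 ;;
        *) echo "Please upgrade your Subversion client." 1>&2
           exit 1 ;;
    esac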
Subversion 1.7 didn't change the repository format since 1.6, so you can even (accidentally) use the svnlook from 1.6 to access the repository after upgrading.
The "try and see what happens" method is absolutely unavailable...
Yes, the "try and see what happens" method is available. You build a copy of your Subversion 1.6 environment, make the Subversion 1.7 changes, and test until everything is correct.
I don't see how you can accomplish your goal of a quiet upgrade unless you copy and test.
I guess it depends what you do with your hooks...
If your hooks are using svnlook, you should have no issues. If you're using an API (like the Python API), you probably are also okay as long as you're doing svnlook type of stuff.
Where you might start heading into problems is if you poked and prodded where you weren't supposed to poke and prod. For example, instead of using svnlook, you use svn. There are a couple of places where the parameters have changed. Also, if you did an svn checkout (an absolute no-no in a hook) and then looked in the .svn directories, you'd get a surprise. Follow the rules, color inside the lines, and your hooks won't have any issues.
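A minimal sketch of that "follow the rules" style, using only svnlook against the in-flight transaction:

    #!/bin/sh
    # pre-commit: inspect the transaction with svnlook only; never run `svn`
    # or touch a working copy from inside a hook.
    REPOS="$1"
    TXN="$2"

    # Illustrative check: reject commits with an empty log message
    LOG=$(svnlook log -t "$TXN" "$REPOS")
    if [ -z "$LOG" ]; then
        echo "Commit blocked: please provide a log message." 1>&2
        exit 1
    fi
    exit 0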
I don't know of any issues from Subversion 1.1 to 1.7 that should affect well-behaved hooks, and I suspect that you will not have any issues as long as we are still in Subversion 1.x. When Subversion 2.x comes out, all bets are off.
Yes, there have been some changes in how hooks work. The start-commit hook has an extra field that wasn't there in versions 1.4 and earlier (the capabilities field), but nothing that would affect current hooks. And, since either Subversion 1.5 or 1.6, users can set revision properties when doing a commit. These don't affect current hooks, but they might be features you want to incorporate into your current hooks.
The upgrade has been performed and succeeded. The Subversion server was updated without issues. The hooks were designed without any hacks, respecting the rules and common sense. It was risky but promising, and it came out profitable (checkouts are light-speed now).
Just for the sake of completeness: there was a subsequent centrally managed client upgrade, and there were issues, although non-critical and predictable. After the client transition from 1.6 to 1.7.7, the working copy format changed, so every existing working copy had to be manually upgraded (or wiped out and checked out clean again).
The server upgrade is safe, though.

Java EE - form authentication - how do I bypass the login during development?

I'm building a Java EE web application using JSF, NetBeans and GlassFish. I just built a standard form login for my application. The problem is that now, every time I deploy the project, which is very frequent, it clears the authentication and I have to log in again.
I am new to Java EE, so it is possible that this is a configuration problem, but from what I've read this is normal behavior.
During the development cycle, what methods are there to handle this? I could disable the authentication during development, but that just doesn't seem like a "good" solution.
Thanks
I had an epiphany about my problem. The simple answer is Test Driven Development.
I found myself developing by making a change to my application, then Clean, Compile, and Deploy to GlassFish. I was finding this increasingly frustrating because of how slow that process is. In addition, I'm new to Java EE development, so I am working in small incremental steps.
The epiphany came when I saw and remembered reading about Test Driven Development from Kent Beck. I only made it to Chapter 6 but shelved it as other books took higher priority at the time. Now it's time to read it.
I highly recommend reading the book. Basically, the process is to build your unit tests first and use them to build the application.
Here is a link to the book I'm reading: http://www.amazon.com/Test-Driven-Development-Kent-Beck/dp/0321146530/ref=sr_1_1?ie=UTF8&qid=1324507172&sr=8-1
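To illustrate the payoff: a plain unit test like the following runs in seconds, with no GlassFish deployment at all (JUnit 4 here; PriceCalculator is a hypothetical class under test):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class PriceCalculatorTest {

        // Hypothetical class under test: plain Java, no container required.
        static class PriceCalculator {
            double discounted(double price, double rate) {
                return price * (1 - rate);
            }
        }

        @Test
        public void appliesTenPercentDiscount() {
            assertEquals(90.0, new PriceCalculator().discounted(100.0, 0.10), 0.001);
        }
    }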
