How to process wro4j at startup?

I have to use the wro4j runtime solution. However, the first request to the server for the processed CSS file is very slow.
In production mode, I would like wro4j to generate its files at application startup, to avoid that slow first request.
Here is my scenario, in case someone can advise me on an alternative approach:
I have a Maven project which is built once (say generic.war) but customized for each hosted client (client1.war, client2.war, etc.).
For each client, the appearance of the app can be overridden at different levels.
So I have a generic Maven project, and then another routine that unpacks the war (generic.war), customizes it by simply overwriting the desired files, and repacks it for a specific client (i.e. client1.war).
This approach of generating specific wars by overwriting files is already in place and used all the time.
But now I want to use wro4j with this system. The first idea is to do the above: overwrite the .less files from the generic build and rely on runtime wro4j to do the final processing in the specific wars (client1.war, client2.war, etc.).
But I don't want the first request to hang; I want the groups to already be in the cache for the first request.
I saw this post, but it's a bit old now, and I couldn't find how to apply the recommended solution (there is no example, and the part on how to trigger the processing from a ServletContextListener is not clear to me).
Thanks in advance :)
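One way to trigger the processing at startup is simply to request each wro4j group URL once when the servlet context comes up, so the processed output is cached before any real visitor arrives. The sketch below is a hypothetical helper, not wro4j's own API: the class name, the group names, the port, and the /wro/* mapping are all assumptions you would adjust to match your wro.xml and web.xml.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// Hypothetical warm-up helper: call WroCacheWarmer.warm(...) from a
// ServletContextListener's contextInitialized(...) method (ideally in
// a background thread, so application startup is not blocked).
public class WroCacheWarmer {

    // Builds one URL per wro4j group; the /wro/* mapping and the group
    // names are examples, use the ones from your web.xml and wro.xml.
    static List<String> groupUrls(String baseUrl, String[] groups) {
        List<String> urls = new ArrayList<>();
        for (String group : groups) {
            urls.add(baseUrl + "/wro/" + group);
        }
        return urls;
    }

    // Requests each group once so wro4j processes and caches it
    // before the first real request arrives.
    static void warm(String baseUrl, String[] groups) {
        for (String url : groupUrls(baseUrl, groups)) {
            try {
                HttpURLConnection con =
                        (HttpURLConnection) new URL(url).openConnection();
                con.getResponseCode(); // forces processing + caching
                con.disconnect();
            } catch (IOException e) {
                // warming is best-effort: log and continue
            }
        }
    }
}
```

A listener registered in web.xml (or via @WebListener) could then call, for example, WroCacheWarmer.warm("http://localhost:8080/client1", new String[]{"all.css", "all.js"}); since the listener ships in generic.war, it works unchanged in every customized war.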

Related

Can SVG-Edit be made to work in a standalone/offline context?

Because SVG-Edit is such a unique and appealing program, I've been searching for an answer to this question for years, but have come up dry.
After a major struggle, I was able to get it to work by installing Windows IIS, then setting up a web server, etc. However, this is far from ideal.
Is there some reason why it won't (or shouldn't) run in a fully standalone/offline mode? Specifically, what I'd like to do is extract the GitHub zip file to a local folder and open "svg-editor.html" in a browser. In general, this produces either a blank window or (in some previous versions) a window with various missing items.
There was a race condition that caused svgedit to fail, evident in Chrome when loading from file:// URLs, and it is now fixed in the master branch on GitHub.
You still won't be able to load svg-editor-es.html locally from a file:// URL. That file is the original source, which relies on ES6 Modules to load its files; these are not permitted to load locally, causing origin errors to show in the console. But the svg-editor.html file (the backward-compatible way to use svgedit) appears to be working now after the fix, at least for basic functionality like making drawings.
Some functionality may still not work, however, due to the limited permissions granted to file:// URLs, e.g., loading some images. (I seem to recall browsers previously preventing locally loaded files from accessing anything outside their own directory and its children, but this restriction does not seem to apply now, though I do see warnings about Ajax being unable to load some of the images svgedit attempts to load.)
As such, even with the above-mentioned recent fix, it might not be possible to work fully offline, unless perhaps you opt to disable your browser's security restrictions, something one should not do lightly. But it does appear to work, at least for some basic drawings.
While I figure this may address your direct question about why it doesn't work without a server, there is also another approach to working "offline" which, though it would need a server to initially serve the files, may allow svgedit to store the application files to work completely offline the next time you visit that URL in the browser--and not run into problems with browser security restrictions. Browsers nowadays can work offline even when served from a server (done by something called "service workers"--see https://caniuse.com/#feat=serviceworkers for the browsers that support this).
Service workers are, however, not all that easy to put together, and no one is currently undertaking the implementation, though you should be able to track any future progress by subscribing to the issue at https://github.com/SVG-Edit/svgedit/issues/243 (it is already a requested feature). Hopefully someone will be inspired to implement it.
By the way, if you install svgedit using npm (a tool which becomes available when you install Node), svgedit has a start script which you can invoke from the command line with npm start from within the svgedit folder. That will run a local (Node) server for you, specifically a simple static file server, which lets you load svgedit from http URLs (i.e., http://localhost:8000/editor/svg-editor.html or http://127.0.0.1:8000/editor/svg-editor.html; on a modern browser you can also use the ES6 Modules file: http://localhost:8000/editor/svg-editor-es.html ), without your needing to install any other server.

Get all files located on a server?

I'm trying to find all of the (javascript) resources located on a specific site.
What would be an efficient way of finding them?
Everything I could think of is bruteforcing every possible name and check whether there's a file with this name at the server, although this isn't exactly that efficient.
Yes, you can do this. What you actually want is web directory traversal.
This is a class of web vulnerability that webmasters usually guard against, so you will typically get a 403 Forbidden or 404 Not Found error. Manual exploitation is certainly possible on a trial-and-error basis if you get to know which directory contains the .js files. For automation, you can use Python or Perl for ease of use. I am personally working on a project with the same objective, using PHP and cURL. At present I cannot share any source code, but I will post it once I can.
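For what it's worth, the brute-force check the question describes can be sketched in a few lines. The class name, base URL, and wordlist below are placeholders, and you should only probe servers you are authorized to test.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the brute-force approach from the question:
// try each candidate filename and keep the ones the server serves.
public class JsFileProbe {

    // Builds the full URL for each candidate filename.
    static List<String> candidateUrls(String baseUrl, List<String> names) {
        List<String> urls = new ArrayList<>();
        for (String name : names) {
            urls.add(baseUrl + "/" + name);
        }
        return urls;
    }

    // Issues a HEAD request per candidate; 200 means the file exists,
    // while 403/404 (or a network error) is treated as absent.
    static List<String> probe(String baseUrl, List<String> names) {
        List<String> found = new ArrayList<>();
        for (String url : candidateUrls(baseUrl, names)) {
            try {
                HttpURLConnection con =
                        (HttpURLConnection) new URL(url).openConnection();
                con.setRequestMethod("HEAD");
                if (con.getResponseCode() == 200) {
                    found.add(url);
                }
                con.disconnect();
            } catch (IOException ignored) {
                // unreachable or rejected: skip this candidate
            }
        }
        return found;
    }
}
```

As the question notes, this is inefficient: a smarter start is to parse the HTML pages you can reach and collect the script src attributes they reference, falling back to brute force only for unlinked files.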

In IIS, how should environment/site-specific WEB.CONFIG settings be preserved, when using MSDeploy?

Background
I work in QA at a software company.
We have about half a dozen different web applications, each of which may require, at any given site, some customised settings added to its web.config file.
These can range from which Oracle database/schema(s) the app connects to, to how many search results to cache, to which hierarchy to use when sorting items on a web page.
We make use of Microsoft's Deploy package to get new releases installed/updated on client sites.
When we put out a new release, some of these customised settings may have been added to or removed from the given web app's web.config file, but using Deploy to import the new release over the top of the old one will clobber any customisations that may have been made.
Alternatives
There are ways of handling this manually, such as merging via a plain text comparison of the old and new web.config files, but these are cumbersome and prone to human error.
I was reading about transformations and thought they could be of some use.
There is also the capability to use external files (tip #8) which seems like a good way to go.
Improvement?
Should our programmers be providing some sort of semi-automated merge facility for this web.config file? Does the Deploy package provide this somehow?
Should we be making use of the external config files, as a best practice?
Is the concept of customising this web.config file at each site so fundamentally flawed that the whole thing needs to be re-thought?
Microsoft provides Web.config transformations as the de facto way to do this. You can create different deployment configurations within Visual Studio and the web projects. Then, when you (or your build server) build with a particular configuration, the web.config is transformed to contain the settings you want to see.
View more about web.config transforms here.
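For illustration, a transform file might look like the sketch below. The connection-string name, value, and app setting are placeholders; the xdt:Transform and xdt:Locator attributes are the actual transform syntax.

```xml
<?xml version="1.0"?>
<!-- Web.Release.config: applied over Web.config when publishing
     with the Release build configuration. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the attributes of the entry whose name matches. -->
    <add name="OracleDb"
         connectionString="Data Source=PROD;User Id=app;Password=secret;"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <appSettings>
    <!-- Override a single site-specific setting. -->
    <add key="SearchResultsCacheSize" value="500"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```

One transform file per environment (Web.Staging.config, Web.ClientX.config, etc.) keeps each site's customisations in source control instead of in hand-merged files.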

How to generate a javadoc in XPages

When I try to generate a javadoc, using the menu command Project\Generate Javadoc, the following warnings and error are produced for my custom classes in XPages:
javadoc: warning - No source files for package net.focul.utilties
javadoc: warning - No source files for package net.focul.workflow
javadoc: error - No public or protected classes found to document.
The packages are in the WebContent/WEB-INF/src folder which is configured in the build path and are selectable in the Generate Javadoc wizard. The classes are public with public methods.
Javadocs are generated for all of the XPage and Custom Control classes if I select these.
You're experiencing this behavior because javadoc doesn't understand the Designer VFS (Virtual File System). It assumes that your project consists of a bunch of separate files in some folder structure on your local hard drive, not self-contained inside a single NSF. On the whole, the Designer VFS successfully tricks Eclipse into believing it's interacting with local files by intercepting read/write requests for project resources and importing/exporting DXL or CD records, etc. But apparently they haven't applied this sleight of hand to javadoc as well.
The Java source files corresponding to each XPage and Custom Control are processed successfully because, ironically, they are never stored in the NSF. During every project build, Designer discards any of these it has already generated and re-creates them based on the current contents of the various .xsp files. It then compiles those Java files into .class files, which are stored as design notes inside the NSF. At runtime, it's these files that are extracted from the VFS and executed... the source code no longer matters at this point, so there's no reason to bother including the .java files in the NSF; they're just kept on the hard drive. One indication of this behavior is that the folder is named "Local" when viewed in Package Explorer / Navigator.
If you're using the built in (as of 8.5.3) version control integration (see this article for a great explanation of how to use this feature), you can tweak the Build Path to include the copy of the src folder stored in the on-disk project as a "linked source folder". This causes javadoc to consider the duplicate copies valid source files, and therefore includes them in the generated documentation. On the downside, it also causes Designer to consider them valid source files, which causes compilation errors due to the duplication. So this approach is only viable if you only need to generate the documentation on an infrequent basis, and can therefore break the Build Path temporarily just to run javadoc, then revert to the usual settings.
An alternative is to actually maintain your custom Java code this way on an ongoing basis: instead of creating the folder in WEB-INF inside the NSF, just create a folder on your hard drive that stores the source, then include that location as a linked source folder indefinitely. That way Designer can still find the source, but so can javadoc. NOTE: if you go this route, then you definitely need to use SCM. Because your source code no longer lives inside the NSF, providing the convenient container we're used to for getting the source code to other developers and ensuring inclusion in whatever backup schedule you use, the only place your source code now lives is on your local hard drive. So make sure you're regularly committing those files to Git / Subversion / Mercurial, etc., or, at the very least, storing them on some file server that is backed up regularly and, if applicable, accessible to all other members of the project team.
When you expand net.focul.utilties in Designer, you will see all the methods and properties. But when you click on one of the methods, you will see no source code.
This is where javadoc fails to generate the documentation. I guess the author of the application has not provided you with the source code. If you have the source somewhere, you can attach it, and then javadoc will be able to generate the documentation.
I ran into the same situation, and I have found the most straightforward method is to export the source to an external folder and then use regular Eclipse to generate the Javadoc. I'm not sure my process is any less of a hassle than Tim's suggestions, but for me it just feels less risky than trying to deal with the VFS vagaries.

How do I move ExpressionEngine (EE) to another server?

What are the best steps to take to prevent bugs and/or data loss in moving servers?
EDIT: Solved, but I should specify I mean in the typical shared hosting environment e.g. DreamHost or GoDaddy.
Bootstrap config is the smartest method (Newism has a free bootstrap config module). I think it works best on fresh installs myself, but YMMV.
If you've been given an existing EE system and need to move it, there are a few simple tools that can help:
REElocate: all the EE 2.x path and config options, in one place. Swap one URL for another in setup, check what's being set and push the button.
Greenery: Again, one module to rule them all. I've not used this but it's got a good rating.
So install, set permissions, move the files and DB, and then use either free module. If you find that not all the images or CSS instantly come back online, check your template base paths (in template prefs) and permissions.
I'm also presuming you have access to the old DB. If not, and you can't add something simple like PHPMyAdmin to back it up, try:
Backup Pro(ish): A free backup module for files and DB. Easy enough that you should introduce it to the site users (most never consider backups). All done through the EE CP, and the zipped output can easily be moved to the new server.
The EE User Guide offers a reasonably extensive guide to Moving ExpressionEngine to Another Server and if you follow all of these steps then you will have everything you need to try again if any bugs or data loss occur.
Verify Server Compatibility
Synchronize Templates
Back-up Database and Files
Prepare the New Database
Copy Files and Folders
Verify File Permissions
Update database.php
Verify index.php and admin.php
Log In and Update Paths
Clear Caches
As suggested by Bitmanic, a dynamic config.php file helps with moving environments tremendously. Check out Leevi Graham's Config Bootstrap for a quick and simple solution. This is helpful for dev/staging/prod environments too!
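The idea behind such a bootstrap is simple: a single config.php switches settings based on the host the site is served from, so the same codebase runs unchanged on dev, staging, and production. A rough sketch follows; the hostnames and config keys are placeholders, and you should check which overrides your EE version honors (the linked modules handle the details for you).

```php
<?php
// Hypothetical bootstrap fragment for config.php: choose overrides
// based on the current hostname instead of hard-coding one server.
switch ($_SERVER['HTTP_HOST']) {
    case 'dev.example.com':
        $config['base_url'] = 'http://dev.example.com/';
        break;
    case 'staging.example.com':
        $config['base_url'] = 'http://staging.example.com/';
        break;
    default:
        $config['base_url'] = 'http://www.example.com/';
        break;
}
```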
I'd say the answer is the same as for any other system: export your entire database, and download all of your files (both system files and anything uploaded by users, such as images). Then mirror this process by importing/uploading to the new server.
Before I run my export, I like to use the Deeploy Helper module to change all of my file paths in EE to the new server's settings.
Preventing data loss primarily revolves around the database and upload directories.
Does your website allow users to interact with the database? If so, at some point you'll need to turn off EE to prevent DB changes. If not, you don't have too much to worry about, as you can track any changes on the database end between the old and new servers.
Both Philip and Derek offer good advice for migrating EE. I've also found that having a bootstrap config file helps tremendously - especially since you can configure your file upload directories directly via config values now (as of EE2.4, I think).
For related information, please check out the answers to this similar Stack Overflow question.
