JavaFX 2 - Self-Contained Applications and their preferences, database, etc

Let's say I have a cross-platform runnable application.
This application creates, then reads/writes, some data and preferences in external files.
The bundle hierarchy is as follows:
ApplicationFolder/application.jar
ApplicationFolder/database.odb
ApplicationFolder/config.xml
Whether it's on a Mac, Windows or Linux, the application knows that everything is next to it (i.e. /database.odb or /config.xml).
Now comes the Self-Contained Application feature provided by JavaFX 2.
The application is embedded in a .exe on Windows and a .app on Mac (I don't know yet about Linux...).
As a Mac user, I've tested it on Mac and saw that database.odb and config.xml are now created at the user's root path.
I thus agree that I should think of a cross-platform mechanism to save/read my application preferences depending on the operating system.
But I'm not quite sure what to do or how to do it (and I can't find any help by googling either...).
On Windows, the .exe is installed in a folder, so I guess I can keep the same behavior.
On Mac, the .app is a folder, so I should keep everything inside it (how do I get the .app path?!).
Isn't there a built-in mechanism in Java/JavaFX?
Thanks a lot for any comment, advice, documentation or anything else that you could give me.
Badisi

There are many ways to do this. I have listed some of them here in no particular order. The recommended approach depends on the type of data being stored.
Java provides a couple of mechanisms (e.g. the properties API and the preferences API) for maintaining application preferences.
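For example, here is a minimal sketch of the preferences API; the node and key names are only illustrative:
import java.util.prefs.Preferences;

public class AppPreferences {
    public static void main(String[] args) {
        // Stored per user in a platform-specific backing store
        // (registry on Windows, plist files on Mac, ~/.java on Linux).
        Preferences prefs = Preferences.userNodeForPackage(AppPreferences.class);

        // Write a value, then read it back with a default
        prefs.put("lastOpenedFile", "database.odb");
        String lastFile = prefs.get("lastOpenedFile", "none");
        System.out.println("Last opened file: " + lastFile);
    }
}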
If your application is sophisticated enough to benefit from a database, then you might want to use Java EE or Spring, both of which have their own configuration mechanisms.
For read-only configuration, you can bundle the relevant files inside your application jar.
To store customized application configuration files or client-application-wide databases relative to the application jar, write the required files at runtime. See How do I get the directory that the currently executing jar file is in?.
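A commonly used sketch for locating that directory (this assumes the class really is loaded from a jar on disk; the layout of a self-contained package may differ):
import java.io.File;
import java.net.URISyntaxException;

public class JarLocator {
    // Directory containing the jar this class was loaded from
    public static File getJarDirectory() throws URISyntaxException {
        File jar = new File(JarLocator.class.getProtectionDomain()
                .getCodeSource().getLocation().toURI());
        return jar.getParentFile();
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(new File(getJarDirectory(), "config.xml"));
    }
}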
For user-specific configuration, use System.getProperty("user.home") to retrieve the user's home directory, then create a subdirectory for your preference storage (for example "${user.home}/.myapp") with hidden file attributes so that it doesn't show up in a standard directory listing.
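A rough sketch of that approach (the directory name ".myapp" is just a placeholder):
import java.io.IOException;
import java.nio.file.*;

public class UserConfigDir {
    public static Path createConfigDir() throws IOException {
        // The "." prefix hides the directory on Mac and Linux
        Path dir = Paths.get(System.getProperty("user.home"), ".myapp");
        Files.createDirectories(dir);
        // On Windows, also set the DOS hidden attribute where supported
        if (Files.getFileStore(dir).supportsFileAttributeView("dos")) {
            Files.setAttribute(dir, "dos:hidden", true);
        }
        return dir;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Preferences stored under: " + createConfigDir());
    }
}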
If your app relies on internet connectivity, then you can store some of this information server side rather than on the client, and make use of it from the client using internet protocols. An advantage of this approach is that user configuration and data are automatically available across client machines.

Related

Is there a Node.js documentation explaining how to setup an application environment?

I'm looking for documentation that describes the standard way of setting up a Node.js server. I'm wondering whether there is such a thing, actually.
I'm writing a server for Linux (Ubuntu), and administrators of a standard server would expect to find the application's settings under:
# Admin editable settings
/etc/<app-name>/<app-name>.conf
# Read-only files used by the server
/usr/lib/<app-name>/...
# Read-Write files used by the server
/var/lib/<app-name>/...
The <app-name>.conf file could be used to change the other paths.
Does the default Linux organization sound completely out of whack for a Node.js application?
IMPORTANT: I'm not in any way interested in how to read the .conf or what format it should be in. That part already works exactly as I want it to. I'm only interested in where those files are expected to be installed when someone installs your Node.js service.

Standard log locations for a cross platform application

I'm developing a cross-platform desktop application for Mac, Linux and Windows. The application will create a plain-text log file to help with debugging, amongst other things. What are people's recommendations for a sensible place to store the log on each of the platforms?
Here is my guess so far, based on web searches:
Mac: ~/Library/Logs/MY-APP-NAME/system.log
Linux: ~/.MY-APP-NAME/logs/system.log
Windows: %APPDATA%\MY-APP-NAME\logs\system.log
For Linux, the XDG Base Directory Specification is followed by some applications. Log files are not specifically called out in the specification. You can put them either into a subdirectory of the data directory ($XDG_DATA_HOME or $HOME/.local/share), where they will not be deleted automatically, or into a subdirectory of the cache directory ($XDG_CACHE_HOME or $HOME/.cache). In the latter case, the files could be automatically expired after some time.
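As an illustration only (the question doesn't specify a language, so this sketch uses Java to match the rest of this page, and MY-APP-NAME is a placeholder), the locations above could be resolved like this:
import java.nio.file.*;

public class LogDirectory {
    public static Path resolve() {
        String os = System.getProperty("os.name").toLowerCase();
        String home = System.getProperty("user.home");
        if (os.contains("win")) {
            return Paths.get(System.getenv("APPDATA"), "MY-APP-NAME", "logs");
        } else if (os.contains("mac")) {
            return Paths.get(home, "Library", "Logs", "MY-APP-NAME");
        } else {
            // Linux: prefer $XDG_DATA_HOME, fall back to ~/.local/share
            String xdg = System.getenv("XDG_DATA_HOME");
            Path base = (xdg != null && !xdg.isEmpty())
                    ? Paths.get(xdg) : Paths.get(home, ".local", "share");
            return base.resolve("MY-APP-NAME").resolve("logs");
        }
    }

    public static void main(String[] args) {
        System.out.println("Log directory: " + resolve());
    }
}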

Using DirectXTK to save screenshots in Windows Store app (Metro)

I'm working on a C++ Windows Store DirectX app and I'm trying to save screenshots to disk every so often.
I am using the DirectX Tool Kit (DirectXTK) and the function SaveDDSTextureToFile which returns an HRESULT.
The problem is that the returned HRESULT is always:
E_ACCESSDENIED General access denied error.
I assume this is some permissions/capabilities issue (it being a Windows Store app), but I can't find what permission I need to ask for to be able to save files to disk.
The DirectX Tool Kit says it supports Windows Store applications as well as desktop applications, but I can't find any information on its CodePlex page either.
Does anyone know what I need to have permission to do for this to work?
Thanks for your time.
Windows Store apps are sandboxed and have fewer permissions than desktop apps, especially when it comes to file access. By default, apps only have access to write to the local storage directory, which isn't easily accessible from the shell. If you want to save to the Pictures or Documents library, you will need to specify this access in the package manifest. Additionally, you will need to use the WinRT file APIs to write the DDS files. To do this, use SaveDDSTextureToMemory, then write the resulting raw DDS data to the StorageFile. Check out the File access sample for more info on the WinRT APIs involved in writing this data as a file.
I've managed to find a way to do it. Basically, as MooseBoys says, you cannot save just anywhere because the app is sandboxed.
You can, however, save to the TempState folder of your app's package in AppData, which is all I need because I'm using this feature for debugging.
So the line I call is:
DirectX::SaveWICTextureToFile(deviceContext, texture2D, GUID_ContainerFormatPng, L"C:\\Users\\USERNAME\\AppData\\Local\\Packages\\PACKAGENAME\\TempState\\test.png");
And this works great.

Based on my requirements, should I use NSIS or jprofiler/install4j

We have a web application that we need to make easier to deploy for our clients.
The current workflow for a fresh install:
Ensure there is a JRE on machine (32 or 64bit)
Install Tomcat (32 or 64bit)
Create a database in Oracle or SQL Server (we provide SQL scripts for this)
Write some values into our settings table, like hostname. (We can get the user to verify these, but we don't want the user to have to type them in.)
Create a connections properties file (we provide a mini JAR app to help with this) that will sit under Tomcat.
We have two WAR files for our actual web application. These can be split across two machines, but for now, let's assume they both get dumped under Tomcat.
Start Tomcat so that it deploys the WARs
This is a tedious process for our users.
I want to encapsulate it in an installer and have been looking at doing this with NSIS, which seems to have a large community, but I also stumbled across install4j, which, although it seems to be less well known, is more specific to Java-based applications.
I just wanted to get some feedback from more experienced users out there on the best choice of platform.
I do not want to get halfway in and then realise I have chosen the wrong installer platform.
Disclaimer: My company develops install4j.
First of all, install4j is a commercial tool, so that's a considerable difference from NSIS. Other major differences are:
install4j is a multi-platform installer builder for Windows, Mac OS X and all POSIX-compatible Linux and Unix platforms.
install4j's main focus is on installing Java-based applications; for example, it handles the creation of launchers and services and provides several strategies for bundling JREs. Many things that you need for a Java application will work out of the box.
install4j provides its own IDE, which focuses on ease of use.
Scripting is done in Java. The IDE provides a built-in editor with code-completion and error analysis. Actions, screens and form components have a wide range of "script properties" that allow you to customize the behavior of the installer.
For install4j, I can address your single requirements:
Ensure there is a JRE on machine (32 or 64bit)
In the media wizard, select a JRE bundle. If you select the "dynamic bundle" option, it will only be downloaded if no suitable JRE is found.
Install Tomcat (32 or 64bit)
I would recommend simply adding the root directory of an existing Tomcat installation to your distribution tree.
As for the service, you can either use the Tomcat service launcher from the Tomcat distribution or create a service launcher in install4j. In both cases you can use the "Install a service" action in order to install the service.
Generated services have the advantage that an update installer knows that they are running and automatically shuts them down before installing any new files.
Create a database in Oracle or SQL Server (we provide SQL scripts for this)
Use the "Run executable or batch file" action in order to run these scripts.
Write some values into our settings table, like hostname. (We can get the user to verify these,
but we don't want the user to have to type them in.)
Any kind of user interaction is done with configurable forms. With a couple of text field form components you can query your settings.
This also works transparently in the console installer, and the automatically generated response file allows you to automate installations in unattended mode based on a single execution of the GUI installer.
Create a connections properties file (we provide a mini JAR app to help with this) that
will sit under Tomcat.
If you already have a JAR file which does that, just add it under Installer->Custom Code & Resources and add a "Run script" action to your installer to use the classes in your JAR file.
Any user input from form components that has been saved to installer variables can be accessed with calls like
context.getVariable("greetingOption")
in the script property of the "Run script" action (or any other script in install4j).
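As a rough sketch only (the variable names, the target path and the use of context.getInstallationDirectory() are assumptions for illustration, not taken from the install4j documentation), such a script body could write a connection properties file along these lines:
// Sketch of a "Run script" action body; install4j supplies the "context" object.
// Variable names and the target path below are illustrative assumptions.
java.util.Properties props = new java.util.Properties();
props.setProperty("db.host", String.valueOf(context.getVariable("dbHost")));
props.setProperty("db.port", String.valueOf(context.getVariable("dbPort")));
java.io.File target = new java.io.File(context.getInstallationDirectory(),
        "tomcat/conf/connection.properties");
try (java.io.FileOutputStream out = new java.io.FileOutputStream(target)) {
    props.store(out, "Written by the installer");
} catch (java.io.IOException e) {
    return false; // fail the action if the file cannot be written
}
return true;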
We have two WAR files for our actual web application. These can be split across two
machines, but for now, lets assume they both get dumped under Tomcat.
If you just add the Tomcat directory structure to your distribution tree, you can have these WAR files pre-deployed. Otherwise you can use "Copy file" actions to place the WAR files anywhere.
Start Tomcat so that it deploys the WARs
That's done with the "Start a service" action.

cgi-bin directory contents: What else can be stored there, apart from the CGI scripts/executables?

What files should/should not be stored in the cgi-bin folder/directory on a web server?
Obviously, executable scripts/files that make up a web application, called from a web browser, can be stored there.
But is there a common industry opinion about what else can be stored there?
Is there a very strong reason why nothing other than the scripts/executables is allowed there?
My preference is to store all files belonging to an application under the cgi-bin directory/folder, in a subfolder of it for each application.
For example directory cgi-bin/myapplication would contain:
the cgi scripts/executables
datafiles
configuration files
This simplifies installation and also simplifies the steps needed to run different versions of an application in parallel, e.g. for trialling a new version.
Concerns about security access to non-script files can be addressed by using the correct user permissions and also Apache .htaccess to control access to the directory and files.
It would seem that popular free applications are in favour of this everything-under-one-directory approach: recent versions of Bugzilla, the free defect and feature tracking tool (e.g. 3.4.4), are offered in this structure, while earlier versions (e.g. 2.x) installed Bugzilla components into at least three folders.
Drupal, the powerful and popular free content management system, also takes this everything-under-one-directory approach; it doesn't use the cgi-bin folder, but the approach is the same.
What are your thoughts?
There is nothing special about the cgi-bin folder. It is like any publicly accessible web folder that has the "allow-script" flag set (or the equivalent for your web server), something that has become almost meaningless in the world of PHP/JSP and the like.
You should only store files that you wish to be public in any folder under your webroot. You probably don't want your data and configuration to be downloadable by any user on the internet, so don't keep them in /cgi-bin.
Certain servers may try to execute any file in /cgi-bin if it is requested. This could cause problems, especially if text or data files are executed as shell scripts.
Applications like Drupal are intended to be easy for anyone to install, regardless of what permissions they may have on their web host. This is the main reason they keep everything together. If you have the ability to put files where you want, it is always good practice to keep non-public files outside of the webroot. If you must keep them under the webroot, then ensure that you use your server's configuration to deny public access to the non-public files.

Resources