Default Configuration Files Hadoop Accelerator - gridgain

I just downloaded the Hadoop Accelerator from the gridgain.org website (gridgain-hadoop-os-6.2.0-nix.zip). The documentation at http://doc.gridgain.org/latest/Hadoop+Accelerator+Installation mentions the following in the "Configuring GGFS" section:
Configuration files for GGFS client and data nodes are located in GridGain config folder:
• config/ggfs/default-ggfs-base.xml - contains common configuration for GGFS client and data nodes
• config/ggfs/default-ggfs-client.xml - contains configuration for GGFS client nodes
• config/ggfs/default-config.xml - contains configuration for GGFS data nodes
You can get started with default configuration and then change individual properties as you progress.
The config directory only contains the default-config.xml.
Where can I get the other 2 files or example configs?
Thanks,
Keith

The downloaded installation should have a "docs/hadoop_readme.pdf" file with up-to-date installation instructions. The instructions on the Wiki will be updated soon.

Related

How to configure Hypersonic DB for Liferay Portal 7?

I used the configs/dev/local/portal-ext.properties file and this contains:
# Hypersonic
#
jdbc.default.driverClassName=org.hsqldb.jdbcDriver
jdbc.default.url=jdbc:hsqldb:c:/data/hsql/lportal
jdbc.default.username=
jdbc.default.password=
Still, when I add content to the site, none of the lportal files in the c:/data/hsql/ folder have changed.
The default configuration uses Hypersonic and creates the lportal files inside the LIFERAY_HOME/data path; see portal.properties.
But in some Liferay 7 testing I realized that the lportal files are not written to disk until you stop Liferay.
So try stopping Liferay and check the files after that.
A last warning: Hypersonic is not recommended for production use. My advice: never use Hypersonic in production; install PostgreSQL or MySQL and configure Liferay to connect to it.
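For reference, a hedged sketch of the PostgreSQL variant of those portal-ext.properties lines; the host, database name, user and password are placeholders to adjust for your environment:
#
# PostgreSQL (example values only - adjust host, database, user and password)
#
jdbc.default.driverClassName=org.postgresql.Driver
jdbc.default.url=jdbc:postgresql://localhost:5432/lportal
jdbc.default.username=liferay
jdbc.default.password=changeme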

How to measure Cassandra performance in JMeter

I want to use JMeter to measure Cassandra's response time for some queries and under multiple simultaneous accesses.
I'm trying to follow the instructions on this page:
https://github.com/slowenthal/jmeter-cassandra
First, I unpacked the archive into the JMeter install directory. However, when I try to access the Cassandra plugin through JMeter, I cannot find it.
Does anyone know if I am following the appropriate instructions, or what I can do to make everything work properly?
If you unpacked the archive into the correct location, it should be enough to restart JMeter.
If you're trying to manually copy jars from the Cassandra release bundle, the best locations are:
for jmeter-cassandra-x.x.x.jar - JMeter's /lib/ext folder
for all other jars - JMeter's /lib folder
A third option is to set the user.classpath property in the user.properties file to point to the folder with the Cassandra jars (see the sketch at the end of this answer). Again, a JMeter restart will be required to pick up the property value.
And finally, you can use the "Add directory or jar to classpath" section of the Test Plan.
The latest JMeter 2.13 (r1665067) seems to work fine with jmeter-cassandra-0.9.2.
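For the user.properties option, a minimal sketch (the path is only an example; point it at the folder that actually holds the Cassandra jars and restart JMeter afterwards):
# user.properties - extra classpath entries for JMeter (example path, adjust to your setup)
user.classpath=/opt/jmeter-cassandra/libs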

Play! 2.3.x Customizing build.sbt for Linux (ubuntu server) production distribution

In a previous question I asked how to prevent conf files from being packaged inside the zip distribution file (actually, inside the app jar itself), since the same set of conf files is also packed as <app zip>/conf. When the package is installed (unzipped) on the server, the conf files are both visible and accessible to be modified, but they are shadowed by the packed copies inside the jar.
There is a simple way to address these exposed conf files, by passing the -Dconfig.resource command-line param (e.g., -Dconfig.resource=conf/application.conf), but I think the duplication of conf files as mentioned is confusing.
So, I'm looking for some build.sbt customization to accomplish:
prevent including conf files in the app jar (unless someone can explain to me why this is the default behaviour; maybe I'm missing something here) - a rough sketch of the kind of tweak I mean follows this list
prevent including certain files in the assets jar (e.g., public/javascript/exclude-me/*), but include them in the zip file so they are accessible for customization on the production server (when unzipped, just like the conf files mentioned above)
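For illustration, something along these lines in build.sbt is the direction I imagine for the first point (just a sketch, not something I have working; the filter predicate is only an example):
// Sketch only: drop conf-style files from the jar built by packageBin,
// while the dist/zip packaging still ships the conf/ directory separately.
mappings in (Compile, packageBin) ~= { entries =>
  entries.filterNot { case (_, path) =>
    path.endsWith(".conf") || path.endsWith(".xml")
  }
}
The assets jar would presumably need a similar filter on whichever task packages the public/ files.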

Standard location for external web (Grails) application config files on linux

Is there a standard location on Linux (Ubuntu) to place external config files that a web application (Grails) uses?
UPDATE: Apparently, there is some confusion to my question. The way Grails handles config files is fine. I just want to know if there is a standard location on linux to place configuration files. Similar to how there is a standard for log files (/var/log). If it matters, I'm talking about a production system.
Linux configuration files typically reside in /etc. For example, Apache configuration files live in /etc/httpd. Configuration files not associated with standard system packages often live in /usr/local/etc.
So I'd suggest /usr/local/etc/my-grails-app-name/. Beware that this means you can't run two different configurations of the same app on the same server.
I don't believe there is a standard location. You usually define the location of your external config files via the grails.config.locations property in Config.groovy.
EDIT
After reading your comment, I suppose the standard locations would be:
Somewhere on the classpath
OR
In the .grails folder in your home directory.
As these are the defaults in the Config.groovy file:
grails.config.locations = [ "classpath:${appName}-config.properties",
"classpath:${appName}-config.groovy",
"file:${userHome}/.grails/${appName}-config.properties",
"file:${userHome}/.grails/${appName}-config.groovy"]
There's a plugin, "Standardized external configuration for your app", which you might find useful if the grails.config.locations parameter is insufficient.
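Putting the two answers together, a hedged Config.groovy sketch that also checks a /usr/local/etc location (the directory name is just an example, not a Grails convention):
// Example only: look for an external override under /usr/local/etc as well as the defaults
grails.config.locations = [ "classpath:${appName}-config.groovy",
                            "file:${userHome}/.grails/${appName}-config.groovy",
                            "file:/usr/local/etc/${appName}/${appName}-config.groovy" ]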

Rendering Spark views in Windows Azure ASP.NET MVC3 web app

I've built a web application in ASP.NET MVC3 with Spark 1.5 view engine - works fine running on my local development machine, but when hosted on Windows Azure it can't find the Spark Views. I get the following standard error screen:
The view 'Logon' or its master was not found or no view engine supports the searched locations. The following locations were searched:
~/Views/Account/Logon.aspx
~/Views/Account/Logon.ascx
~/Views/Shared/Logon.aspx
~/Views/Shared/Logon.ascx
~/Views/Account/Logon.cshtml
~/Views/Account/Logon.vbhtml
~/Views/Shared/Logon.cshtml
~/Views/Shared/Logon.vbhtml
Account\Logon.spark
Shared\Logon.spark
Seems to me that Spark is not searching the same folders as WebForms/Razor (since no ~/Views prefix), but I can't find where this is configured in Spark.
I've tried adding the following to the startup code:
settings.AddViewFolder(ViewFolderType.VirtualPathProvider, new Dictionary<string, string> { { "virtualBaseDir", "~/Views/" } });
...but no change. Can't help feeling there's something blindingly obvious I'm missing.
You shouldn't need to add a ~/Views/ virtual path provider; that happens automatically by convention, and the search paths above are just the output of the two view engines (Razor and Spark) formatted slightly differently. Spark already has a root view path of Views, so when it says Account\Logon.spark it is already looking inside the Views folder.
I have a feeling that your Spark views are not actually getting copied up to Azure when you package and deploy. It's similar to the MVC3 DLLs before they were available up there: you had to set them to copy locally to ensure Azure had access to them.
Rename the Azure package to a .zip file and open it up to see whether the views have been included as part of the content. If not, try highlighting one of the Spark files in Solution Explorer and checking its Properties. Set Copy to Output Directory to Copy Always, then build and repackage your Azure project.
Your project's local bin folder should now also contain a Views folder with the Spark views in it, which you can use for verification.
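If you prefer to set that flag in the project file rather than through the Properties window, it corresponds to an MSBuild item along these lines (the view path is just an example, and the element sits inside an ItemGroup):
<!-- Example only: ship a Spark view as Content and always copy it to the output directory -->
<Content Include="Views\Account\Logon.spark">
  <CopyToOutputDirectory>Always</CopyToOutputDirectory>
</Content>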
Try uploading that package and see if it does the trick.
Hope that helps,
Rob
