How to load a txt file in a Haxe JS project? - haxe

I've started a Haxe JS project in FlashDevelop and I need to load a local file. Is this possible, and how do I do so?

The simple answer is to use "resources". You add a path and an identifier to your hxml:
-resource hello_message.txt#welcome
And you use it in your code like this:
var welcome = haxe.Resource.getString("welcome");
Note that the operation is performed at compile time, so there is no runtime overhead. It is essentially equivalent to embedding the file content in a quoted string.
The complex answer is to use a macro. With macros you can load, parse, process and do any manipulation you might need. Quite commonly you will see macros used to load a config file (say JSON or YAML) and make it part of your application (again at compile time and not at runtime).

You could grab files with an XMLHttpRequest as long as you keep them somewhere public (if you're putting it online) and accessible to the script.
Here's a quick example of grabbing a text file from the location assets/test.txt.
This is the sort of thing I usually do in the JS games I make; I find it a bit more flexible than just embedding files with -resource.
If it's not exactly what you're looking for then Franco's answer should see you through.
package;

import js.html.XMLHttpRequest;
import js.html.Event;

class Start {
    static function main() {
        var request = new XMLHttpRequest();

        // using the GET method, get the file at this location, asynchronously
        request.open("GET", "assets/test.txt", true);

        // when loaded, get the response and trace it out
        request.onload = function(e:Event) {
            trace(request.response);
        };

        // if there's an error, handle it
        request.onerror = function(e:Event) {
            trace("error :(");
        };

        // send the actual request to the server
        request.send();
    }
}

Related

Where should I put custom errors in sails.js?

I was wondering what the best practice is and whether I should:
create a directory in which I statically declare all the errors my application uses, like api/errors/custom1Error
declare them directly inside the files that use them
put the files directly inside the directory that needs that error, like api/controller/error/formInvalidError
or something else entirely?
A neat way of going about this would be to simply add the errors as custom responses under api/responses. This way even the invocation becomes pretty neat. Although the doc says you should add them directly in the responses directory, I'm sure there must be a way to nest them under, say, responses/errors. I'll try that out and post an update in a bit.
Alright, after a quick search I couldn't find any way to nest the responses, but you can use a small workaround that's not quite as neat:
Create the responses/errors directory with all the custom error response handlers. Create a custom response and name it something like custom.js. Then specify the response name while calling res.custom().
I'm adding a short snippet just for illustration:
api/responses/custom.js:
var customErrors = {
    customError1: require('./errors/customError1'),
    customError2: require('./errors/customError2')
};

module.exports = function custom (errorName, data) {
    var req = this.req;
    var res = this.res;

    if (customErrors[errorName]) return customErrors[errorName](req, res, data);
    else return res.negotiate();
};
From the controller:
res.custom('authError', data);
If you don't need logical processing for different errors, you can do away with the whole errors/ directory and directly invoke the respective views from custom.js:
module.exports = function custom (viewName, data) {
    var req = this.req;
    var res = this.res;

    // assuming you have error views in views/errors
    return res.view('errors/' + viewName, data);
};
(You should first check if the view exists. Find out how on the linked page.)
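As a rough illustration, here is the same custom.js with such a check added. The views/errors location and the .ejs extension are assumptions on my part; adjust them to whatever view engine and layout you actually use:
var fs = require('fs');
var path = require('path');

module.exports = function custom (viewName, data) {
    var res = this.res;

    // Hypothetical check: fall back to the default negotiation if the error view is missing
    var viewPath = path.join(__dirname, '..', '..', 'views', 'errors', viewName + '.ejs');
    if (!fs.existsSync(viewPath)) return res.negotiate();

    return res.view('errors/' + viewName, data);
};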
Although I'm using something like this for certain purposes (dividing routes and so on), there definitely should be a way to include response handlers defined in different directories. (Perhaps by reconfiguring some grunt task?) I'll try to find that out and update if I find any success.
Good luck!
Update
Okay, so I found that the responses hook adds all files to res without checking whether they are directories, so adding a directory under responses results in a TypeError from lodash. I may be reading this wrong, but I think it's reasonable to conclude that it's currently not possible to add a directory there, so you'll have to stick to one of the solutions above.

Dart Redstone web application

Let's say I configure Redstone as follows:
@app.Route("/raw/user/:id", methods: const [app.GET])
getRawUser(int id) => json_about_user_id;
When I run the server and go to /raw/user/10 I get raw JSON data in the form of a string.
Now I would like to be able to go to, say, /user/10 and get a nice representation of the JSON I get from /raw/user/10.
Solutions that come to my mind are as follows:
First
create web/user/user.html and web/user/user.dart, and configure the latter to run when index.html is accessed
in user.dart, monitor the query parameters (user.dart?id=10), make the appropriate requests and present everything in user.html, i.e.
var uri = Uri.parse(window.location.href);
String id = uri.queryParameters['id'];
HttpRequest.getString(new Uri.http(SERVER, '/raw/user/${id}').toString()).then(presentation);
A downside of this solution is that I do not get /user/10-like URLs at all.
Another way is to additionally configure Redstone as follows:
#app.Route("/user/:id", methods: const [app.GET])
getUser(int id) => app.redirect('/person/index.html?id=${id}');
in this case at least urls like "/user/10" are allowed, but this simply does not work.
How would I do this correctly? The example of a web app in Redstone's Git repository is, to my mind, cryptic and involved.
I am not sure whether this has to be explained in connection with Redstone or with Dart only, but I cannot find anything related.
I guess you are trying to generate HTML files on the server with a template engine. Redstone was designed mainly to build services, so it doesn't have a built-in template engine, but you can use any engine available on pub, such as mustache. However, if you use Polymer, AngularDart or another framework which implements a client-side template system, you don't need to generate HTML files on the server.
Moreover, if you want to reuse other services, you can just call them directly, for example:
#app.Route("/raw/user/:id")
getRawUser(int id) => json_about_user_id;
#app.Route("/user/:id")
getUser(int id) {
var json = getRawUser();
...
}
Redstone v0.6 (still in alpha) also includes a new forward() function, which you can use to dispatch a request internally. The response is received as a shelf.Response object, though, so you have to read it:
#app.Route("/user/:id")
getUser(int id) async {
var resp = await chain.forward("/raw/user/$id");
var json = await resp.readAsString();
...
}
Edit:
To serve static files, like html files and dart scripts which are executed in the browser, you can use the shelf_static middleware. See here for a complete Redstone + Polymer example (shelf_static is configured in the bin/server.dart file).

Best way to reuse a large translation file within Node / Express

I'm new to Node but I figured I'd jump right in and start converting a PHP app into Node/Express. It's a bilingual app that uses gettext with PO/MO files. I found a Node module called node-gettext. I'd rather not convert the PO files into another format right now, so it seems this library is my only option.
So my concern is that right now, before every page render, I'm doing something like this:
exports.home_index = function(req, res)
{
    var gettext = require('node-gettext'),
        gt = new gettext();
    var fs = require('fs');

    gt.textdomain('de');
    var fileContents = fs.readFileSync('./locale/de.mo');
    gt.addTextdomain('de', fileContents);

    res.render(
        'home/index.ejs',
        { gt: gt }
    );
};
I'll also be using the translations in classes, so with the current setup I'd have to load the entire translation file again every time I want to translate something somewhere else.
The translation file is about 50k and I really don't like having to do file operations like this on every page load. In Node/Express, what would be the most efficient way to handle this (aside from a database)? Usually a user won't even be changing their language after the first time (if they're changing it from English).
EDIT:
Ok, I have no idea if this is a good approach, but it at least lets me reuse the translation file in other parts of the app without reloading it everywhere I need to get translated text.
In app.js:
var express = require('express'),
    app = express(),
    ...
    gettext = require('node-gettext'),
    gt = new gettext();
Then, also in app.js, I create the variable app.locals.gt to contain the gettext/translation object, and I include my middleware function:
app.locals.gt = gt;
app.use(locale());
In my middleware file I have this:
module.exports = function locale()
{
    return function(req, res, next)
    {
        // do stuff here to populate lang variable

        var fs = require('fs');

        req.app.locals.gt.textdomain(lang);
        var fileContents = fs.readFileSync('./locales/' + lang + '.mo');
        req.app.locals.gt.addTextdomain(lang, fileContents);

        next();
    };
};
It doesn't seem like a good idea to assign the loaded translation file to app, since depending on the current request that file will be in one of two languages. If I assigned the loaded translation file to app instead of to a request variable, could that mix up users' languages?
Anyway, I know there's got to be a better way of doing this.
The simplest option would be to do the following:
Add this in app.js:
app.locals.languageDomains = {};
Then modify your middleware:
module.exports = function locale()
{
    return function(req, res, next)
    {
        // do stuff here to populate lang variable

        if ( !req.app.locals.languageDomains[lang] ) {
            var fs = require('fs');
            var fileContents = fs.readFileSync('./locales/' + lang + '.mo');
            req.app.locals.languageDomains[lang] = true;
            req.app.locals.gt.addTextdomain(lang, fileContents);
        }

        req.textdomain = req.app.locals.gt.textdomain(lang);

        next();
    };
};
By checking whether the file has already been loaded you are preventing the action from happening multiple times, and the domain data will stay resident in the server's memory. The downside to the simplicity of this solution is that if you ever change the contents of your .mo files whilst the server is running, the changes won't be taken into account. However, this code could be extended to keep an eye on the mtime of the files and reload accordingly, or to make use of fs.watchFile, if required:
if ( !req.app.locals.languageDomains[lang] ) {
    var fs = require('fs'), filename = './locales/' + lang + '.mo';
    var fileContents = fs.readFileSync(filename);

    fs.watchFile(filename, function (curr, prev) {
        req.app.locals.gt.addTextdomain(lang, fs.readFileSync(filename));
    });

    req.app.locals.languageDomains[lang] = true;
    req.app.locals.gt.addTextdomain(lang, fileContents);
}
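If you prefer the mtime route, here is a minimal sketch of that idea; the lastModified cache and the loadDomainIfChanged helper are just illustrative names, and it still uses sync calls for brevity (see the warning below):
var fs = require('fs');
var lastModified = {};

function loadDomainIfChanged(gt, lang) {
    var filename = './locales/' + lang + '.mo';
    var mtime = fs.statSync(filename).mtime.getTime();

    // Re-read and re-register the domain only when the file has changed on disk
    if (lastModified[filename] !== mtime) {
        lastModified[filename] = mtime;
        gt.addTextdomain(lang, fs.readFileSync(filename));
    }
}
The middleware would then call loadDomainIfChanged(req.app.locals.gt, lang) instead of doing the readFileSync itself.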
Warning: It should also be noted that using sync versions of functions outside of server initialisation is not a good idea because it can freeze the thread. You'd be better off changing your sync loading to the async equivalent.
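For instance, the loading step inside the middleware could be rewritten with the async fs.readFile along these lines (error handling kept to a bare next(err)):
if ( !req.app.locals.languageDomains[lang] ) {
    var fs = require('fs');
    var filename = './locales/' + lang + '.mo';

    // Non-blocking read: the rest of the request carries on inside the callback,
    // so the thread stays free to serve other requests while the file is loading
    return fs.readFile(filename, function (err, fileContents) {
        if (err) return next(err);
        req.app.locals.languageDomains[lang] = true;
        req.app.locals.gt.addTextdomain(lang, fileContents);
        req.textdomain = req.app.locals.gt.textdomain(lang);
        next();
    });
}

req.textdomain = req.app.locals.gt.textdomain(lang);
next();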
After the above changes, rather than passing gt to your template, you should be able to use req.textdomain instead. It seems that the gettext library supports a number of requests directly on each domain object, which means you hopefully don't need to refer to the global gt object on a per-request basis (which would be changing its default domain on each request):
Each domain supports:
getTranslation
getComment
setComment
setTranslation
deleteTranslation
compilePO
compileMO
Taken from here:
https://github.com/andris9/node-gettext/blob/e193c67fdee439ab9710441ffd9dd96d027317b9/lib/domain.js
Update
A little bit of further clarity.
Once the server has loaded the file into memory the first time, it should remain there for all subsequent connections it receives (for any visitor/request), because it is stored globally and won't be garbage collected unless you remove all references to the data, which would mean gettext would need to have some kind of unload/forget-domain method.
Node is different to PHP in that its environment is shared and wraps its own HTTP server (if you are using something like Express), which means it is very easy to remember data globally as it has a constant environment that all the code is executed within. PHP is always executed after the HTTP server has received and dealt with the request (e.g. Apache). Each PHP response is then executed in its own separate run-time, which means you have to rely on databases, sessions and cache stores to share even simple information and most resources.
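A tiny illustration of that difference (the counter and the module name are hypothetical): a module-level value in Node lives for as long as the process does and is shared by every request, whereas in PHP the equivalent variable would start from scratch on each request.
// counter.js: this value is created once, when the module is first required,
// and is then shared by every request this server process handles
var requestCount = 0;

module.exports = function countRequests(req, res, next) {
    requestCount++;
    console.log('requests served so far by this process:', requestCount);
    next();
};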
Further optimisations
Obviously with the above you are still running translations on each page load, which means the gettext library will still be working through the translation data resident in memory, and that takes up processing time. To get around this, it would be best to make sure your URLs have something that makes them unique for each different language, i.e. my-page/en/ or my.page.fr or even jp.domain.co.uk/my-page, and then enable some kind of full-page caching using something like memcached or express-view-cache. However, once you start caching pages you need to make certain there aren't any regions that are user specific; if there are, you need to start implementing more complicated systems that are sensitive to these areas.
Remember the golden rule of optimisation: don't do it before you need to. Basically, I wouldn't worry about page caching until you know it's going to be an issue, but it is always worth bearing in mind what your options are, as they should shape your code design.
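If you do reach that point, here is a very rough illustration of the idea using a naive in-memory object rather than memcached or express-view-cache; the pageCache name and the cache key are assumptions:
var pageCache = {};

module.exports = function cachePages() {
    return function (req, res, next) {
        // Assumes the language is part of the URL, so each language is cached separately
        var key = req.originalUrl;

        if (pageCache[key]) return res.send(pageCache[key]);

        // Wrap res.send so the first rendered body for this URL gets stored
        var originalSend = res.send;
        res.send = function (body) {
            pageCache[key] = body;
            return originalSend.call(this, body);
        };

        next();
    };
};
Because res.render ends up calling res.send internally, this captures rendered pages as well as plain responses; a real setup would also need cache invalidation.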
Update 2
Just to illustrate a bit further how a server running in JavaScript behaves: the global behaviour is not just a property of req.app, but of any object that is further up the scope chain.
So, as an example, instead of storing languageDomains on app.locals in app.js, you could declare it as a plain variable further up the scope of wherever your middleware is placed. It's best to keep your global entities in one place, however, so app.js is the better place, but this is just for illustration.
var languageDomains = {};

module.exports = function locale()
{
    // you can still access languageDomains here, and it will behave
    // globally for the entire server
    languageDomains[lang]
}
So basically, whereas with PHP the entire code base is re-executed on each request (so languageDomains would be instantiated anew each time), in Node the only part of the code to be re-executed is the code within locale() (because it is triggered as part of a new request). This function will still have a reference to the already existing and defined languageDomains via the scope chain. Because languageDomains is never reset (on a per-request basis) it will behave globally.
Concurrent users
Node.js is single threaded. This means that in order for it to be concurrent i.e. handle multiple requests at the "same" time, you have to code your app in such a way that each little part can be executed very quickly and then slip into a waiting state, whilst another part of another request is dealt with.
This is the reason for the asynchronous and callback nature of Node, and the reason to avoid Sync calls whilst your app is running. Any one Sync request could halt or freeze execution of the thread and delay handling for all other requests. The reason why I state this is to give you a better idea of how multiple users might interact with your code (and global objects).
Basically, once a request is being dealt with by your server, it is its only focus until that particular execution cycle ends, i.e. your request handler stops calling other code that needs to run synchronously. Once that happens the next queued item is dealt with (a callback or something); this could be part of another request, or it could be the next part of the current request.
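A tiny way to see this single-threaded behaviour for yourself (the timings are just for illustration):
var start = Date.now();

// Queued work (think: a callback belonging to another request), due in ~10 ms
setTimeout(function () {
    console.log('queued callback ran after', Date.now() - start, 'ms');
}, 10);

// Simulate a handler hogging the single thread with ~200 ms of synchronous work
while (Date.now() - start < 200) { /* busy wait */ }

// The timeout only fires once the synchronous work above has finished,
// so it logs roughly 200 ms rather than 10 ms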

JavaScript Library XPages

I currently have JavaScript dotted around my XPages - I would like to create a JavaScript library containing all this code and then just call the individual functions, e.g. on the press of a button. Here is an example of the code I would like in the JavaScript library:
// get the user document for that person
var myView:NotesView = database.getView("xpBenutzerInnen");
var query = new java.util.Vector();
query.addElement(sessionScope.notesName);
var myDoc:NotesDocument = myView.getDocumentByKey(query, true);
When I place this code in the library I get the error:
Syntax error on token ":", ; expected
I assume that the "var name:type" declaration is specific to XPages (this is what I found on creating JavaScript vars: http://www.w3schools.com/js/js_variables.asp) - I could just remove the type declaration and the code seems to run without any problems, but I would like to keep defining the variable types.
Is there any way that I can move the code out of the XPage but still keep my typing?
Thanking you in advance
Ursus
You need to distinguish between client-side JavaScript and server-side JavaScript. Your code is server-side JS and should work in a script library just as it does inside an XPage. Have you accidentally created a client-side JS library?
A few pointers: I try to make functions in a script library independent of global objects, mostly the database object. This function works in a library just fine:
function getUserDocument(username:string, db:database) {
    var myView:NotesView = db.getView("xpBenutzerInnen");
    var query = new java.util.Vector();
    query.addElement(username);
    var myDoc:NotesDocument = myView.getDocumentByKey(query, true);
    myView.recycle();
    return myDoc;
}
Let me know how it goes

Which nodejs library should I use to write into HDFS?

I have a Node.js application and I want to write data into the Hadoop HDFS file system. I have seen two main Node.js libraries that can do it: node-hdfs and node-webhdfs. Has anyone tried them? Any hints? Which one should I use in production?
I am inclined to use node-webhdfs since it uses the WebHDFS REST API. node-hdfs seems to be a C++ binding.
Any help will be greatly appreciated.
You may want to check out the webhdfs library. It provides a nice and straightforward interface (similar to the fs module API) for WebHDFS REST API calls.
Writing to the remote file:
var WebHDFS = require('webhdfs');
var fs = require('fs');

var hdfs = WebHDFS.createClient();

var localFileStream = fs.createReadStream('/path/to/local/file');
var remoteFileStream = hdfs.createWriteStream('/path/to/remote/file');

localFileStream.pipe(remoteFileStream);

remoteFileStream.on('error', function onError (err) {
    // Do something with the error
});

remoteFileStream.on('finish', function onFinish () {
    // Upload is done
});
Reading from the remote file:
var WebHDFS = require('webhdfs');
var hdfs = WebHDFS.createClient();

var remoteFileStream = hdfs.createReadStream('/path/to/remote/file');

remoteFileStream.on('error', function onError (err) {
    // Do something with the error
});

remoteFileStream.on('data', function onChunk (chunk) {
    // Do something with the data chunk
});

remoteFileStream.on('finish', function onFinish () {
    // Reading is done
});
Not good news!!!
Do not use node-hdfs. Although it seems promising, it is now two years out of date. I've tried to compile it but it does not match the symbols of the current libhdfs. If you want to use something like that, you'll have to make your own Node.js binding.
You can use node-webhdfs, but IMHO there's not much advantage in that. It is better to use an HTTP Node.js library and make your own requests. The hardest part here is dealing with the very async nature of Node.js: you might first want to create a folder, then, after successfully creating it, create a file, and then, at last, write or append data. Everything goes through HTTP requests that you must send and whose answers you must wait for before moving on.
At least node-webhdfs might be a good reference to take a look at when starting your own code.
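For illustration, here is a rough sketch of that raw-HTTP approach against the WebHDFS REST API. The namenode host, the classic 50070 HTTP port, simple user.name authentication and the target paths are all assumptions, and error/status handling is left out; the point is the chain of requests: MKDIRS, then CREATE (which answers with a redirect to a datanode), then the actual write.
var http = require('http');
var url = require('url');

// Hypothetical namenode address; adjust host, port and user.name to your cluster
var NAMENODE_HOST = 'namenode.example.com';
var NAMENODE_PORT = 50070;
var USER = 'hdfs';

// Step 1: create the target folder (PUT ...?op=MKDIRS)
var mkdirs = http.request({
    host: NAMENODE_HOST,
    port: NAMENODE_PORT,
    method: 'PUT',
    path: '/webhdfs/v1/tmp/myapp?op=MKDIRS&user.name=' + USER
}, function (res) {
    res.resume();

    // Step 2: ask the namenode to create the file; it answers with a 307
    // redirect whose Location header points at the datanode to write to
    var create = http.request({
        host: NAMENODE_HOST,
        port: NAMENODE_PORT,
        method: 'PUT',
        path: '/webhdfs/v1/tmp/myapp/data.txt?op=CREATE&user.name=' + USER
    }, function (res2) {
        res2.resume();
        var location = url.parse(res2.headers.location);

        // Step 3: send the actual file contents to the datanode
        var put = http.request({
            host: location.hostname,
            port: location.port,
            method: 'PUT',
            path: location.path
        }, function (res3) {
            console.log('write finished with status', res3.statusCode);
        });
        put.end('hello from node\n');
    });
    create.end();
});
mkdirs.end();
The nested callbacks show exactly the "create, then create, then write" chaining described above; in real code you would flatten this with promises or a small helper.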
Br,
Fabio Moreira
