I'm currently trying to create an application like a Virtual Agent, using Watson Dialog. I have to use Node.js with this Watson service, but I have never used it before, so I'm taking my time.
For now, I can use Java to call the Dialog service to simulate a user. But I want to use Node.js to call Java to simulate the Agent.
In Watson Dialog, the Agent has a number of sentences written in a file like dialog.xml. But instead I want my Agent to ask specific questions according to the user's profile.
That's why I'm using a BRMS tool, written in Java. I created a .jar and want to call it in /public/demo.js to fill the variable response:
var texts = dialog.conversation.response;
var response = texts.join('<br/>');
I tried this in /public/demo.js:
var exec = require('child_process').exec;

var child = exec('java -jar C:\\PATH\\Example.jar',
    function (error, stdout, stderr) {
        response += stdout;
        if (error !== null) {
            console.log("Error -> " + error);
        }
    });
Using that code in another application, it works without any problem; I can run my .jar, I'm sure of it. But once it's in my Bluemix app, the first line makes it crash. Am I missing something in the manifest.yml file? Do I need to change a config? Or maybe it comes from the demo.js file?
Thank you for helping.
I hope I'm explaining this correctly. What I'm trying to do is write to a JSON file using fs.writeFile.
I can get it to work from the command line, but what I want is to call a function, maybe on a button click, to update the JSON file.
I figure I would need some kind of call to the Node server, which is on local port 8080. I was researching and saw somebody mention using .post, but I still can't wrap my head around how to write the logic.
$(".button").on("click", function(event) {
    fs.writeFile("./updateme.json", "{test: 1}", function(err) {
        if (err) {
            return console.log(err);
        }
        console.log("The file was saved!");
    });
});
Using jQuery along with fs? Wow, that would be great! Unfortunately, it is not as simple as that.
Let me introduce you to server-side vs. client-side JavaScript. Actually, there are a lot of resources on the net about this: just google it, or check the answers to this other StackOverflow question. Basically, JavaScript can run either in a browser (Chrome, Firefox...) or as a standalone program (usually a server written in NodeJS), and while the language is (almost) the same, the two platforms don't offer the same features.
The script that you're showing has to run in a browser, because it uses jQuery and interacts with buttons and other page elements (a.k.a. the DOM). Can you imagine what a mess it would be if that script could interact with the file system? Any page you visit would be able to crawl around in your holiday pictures and other personal files you keep on your computer. Bad idea! That is why libraries like fs are not available in the browser.
Similarly, libraries like jQuery are not available (or simply useless) on the server, because there is no HTML and no user interaction, only headless programs running.
So, what can I do to write a JSON file after a user clicks on a button?
You can:
- Set up a NodeJS server that will write the JSON file
- Make jQuery call that server with the data to be written when the user clicks the button (a rough sketch of both pieces follows)
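The sketch below assumes an Express server listening on your port 8080; the /save route name and the data being sent are just examples, not something taken from your code.

// server.js - run with: node server.js (assumes Express is installed)
var express = require('express');
var fs = require('fs');
var app = express();

app.use(express.static('public'));                // serves the page with the button
app.use(express.urlencoded({ extended: false })); // parses the POSTed form data

app.post('/save', function (req, res) {
    // req.body contains whatever jQuery sent, e.g. { test: '1' }
    fs.writeFile('./updateme.json', JSON.stringify(req.body), function (err) {
        if (err) {
            return res.status(500).send(err.message);
        }
        res.send('The file was saved!');
    });
});

app.listen(8080);

// In the browser (e.g. public/script.js) - jQuery never touches fs,
// it only sends the data to the server:
$('.button').on('click', function () {
    $.post('/save', { test: 1 }, function (msg) {
        console.log(msg);
    });
});

The key point is that only the server-side part ever touches the file system; the browser just tells it what to write.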
If you want further guidelines on this, tell me in the comments! I'll be ready to edit my answer so as to include instructions on setting up such an environment.
I've just started a PoC on Service Fabric for a project and I'm stuck on something that should have been straightforward but has become a real pain...
What I need to do is host a console app that returns some values in Service Fabric and be able to call it on demand and get back its return value.
So I've created a simple exe that looks like this, hosted as a guest executable:
static void Main(string[] args)
{
    Console.WriteLine("123");
    File.AppendAllText("hello.txt" + DateTime.Now.ToString(), "Hello world!");
    //Console.ReadLine();
}
First, if the ReadLine is commented out, Service Fabric Explorer shows the guest executable as failing and the file is not created.
If I uncomment it, the guest executable appears as healthy in the explorer, the console app gets started on its own (even though I didn't ask for it), the file is created, and I still can't call it.
(screenshot of the cluster in Service Fabric Explorer)
I've tried just hitting a few things in my browser: the endpoint above, http://localhost:19081/PoC9/GuestExe?cmd=instance, and the same URL without the cmd argument.
If anyone has any idea of how that should be done, please help :)
I am developing an Alexa Skill, and I am struggling a bit to understand whether I have set everything up in the best possible way to debug while developing.
Right now I am developing locally using Node.js, uploading to the cloud when ready, and testing all the responses to intents using the Service Simulator in the Test section of the developer console.
I find the process a bit slow, but it works... Still, I have two questions:
1) Is there a way of avoiding the process of uploading to the cloud?
And most importantly, 2) How do I test advanced interactions, for example multi-step ones, in the console? How, for example, do I test triggering the response to an intent and then asking the user for confirmation (Yes/No)? Right now the only way of doing it is using the actual device.
Any suggestions for improvement are highly appreciated.
Like @Tom suggested - take a look at bespoken.tools for testing skills locally.
Also, the Alexa Command Line Interface was recently released and it has some command line options you might look into.
For example, the 'api invoke-skill' command lets you invoke the skill locally via the command line (or script) so you don't have to use the service simulator. Like this...
$ ask api invoke-skill -s $SKILL_ID -f $JSON --endpoint-region $REGION --debug
Here is a quick video I did that introduces the ASK CLI. It doesn't specifically cover testing but it will provide a quick intro.
https://youtu.be/p-zlSdixCZ4
Hope that helps.
EDIT: Had another thought for testing locally. If you're using node and Lambda functions, you can call the index.js file from another local .js file (example: test.js) and pass in the event data and context. Here is an example:
// path to the Lambda index.js file
var lambdaFunction = require('../lambda/custom/index.js');

// JSON representing the event - just copy it from the service simulator
var event = require('./events/GetUpdateByName.json');

var context = {
    'succeed': function (data) {
        console.log(JSON.stringify(data, null, '\t'));
    },
    'fail': function (err) {
        console.log('context.fail occurred');
        console.log(JSON.stringify(err, null, '\t'));
    }
};

function callback(error, data) {
    if (error) {
        console.log('error: ' + error);
    } else {
        console.log(data);
    }
}

// call the lambda function
lambdaFunction.handler(event, context, callback);
Here's how I'm testing multi-step interactions locally.
I'm using a 3rd party, free, tool called BSTAlexa:
http://docs.bespoken.tools/en/latest/api/classes/bstalexa.html
It emulates Amazon's role in accepting requests, feeding them to your skill, and maintaining the state of the interactions.
So I start my test script by configuring BSTAlexa - pointing it to my skill config (e.g. intents) and to a local instance of my skill (in my case I'm giving it a local URL).
Then I feed BSTAlexa a sequence of textual requests and verify that I'm getting back the expected responses. And I put all this in a Mocha script.
It works quite well.
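For a flavour of what that looks like, here is a rough Mocha sketch. The method names come from the BSTAlexa docs linked above, but the local URL, the file paths and the utterances are assumptions about a hypothetical skill, not taken from a real project.

// test/interaction.test.js - run with: mocha (assumes bespoken-tools and mocha are installed)
var assert = require('assert');
var bst = require('bespoken-tools');

describe('multi-step interaction', function () {
    var alexa;

    beforeEach(function (done) {
        // Point the emulator at the interaction model and at a locally
        // running instance of the skill (paths and URL are placeholders).
        alexa = new bst.BSTAlexa('http://localhost:10000',
            './speechAssets/IntentSchema.json',
            './speechAssets/SampleUtterances.txt');
        alexa.start(done);
    });

    afterEach(function (done) {
        alexa.stop(done);
    });

    it('asks for confirmation and handles "yes"', function (done) {
        // First utterance triggers the intent (hypothetical phrasing)...
        alexa.spoken('Delete my reminder', function (error, response) {
            assert.strictEqual(response.response.shouldEndSession, false);
            // ...then answer the confirmation prompt within the same session.
            alexa.spoken('Yes', function (error, response) {
                assert.ok(response.response.outputSpeech);
                done();
            });
        });
    });
});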
Please find my answers below (answering in reverse order).
You can test multi-step interactions using a simulator (echosim.io), but each time you have to press and hold the mic button (or hold down the space bar). For example, if you first ask Alexa something through echosim and Alexa responds asking you to confirm with yes/no, you have to press and hold the mic button again to give the confirmation.
You can automate the Lambda deployment process. Please see this link:
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
It would also be good to write complete unit tests so that you can check your logic before uploading to Lambda; this also helps reduce the number of Lambda deployments.
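As an illustration of that last point, here is a minimal sketch of such a unit test. It reuses the handler-calling pattern from the test.js example earlier in this thread; the file paths, the sample event and the intent name are assumptions.

// test/handler.test.js - run with: mocha test/handler.test.js
var assert = require('assert');

// Same paths as in the test.js example above; adjust them to your project.
var lambdaFunction = require('../lambda/custom/index.js');
var event = require('./events/GetUpdateByName.json');

describe('GetUpdateByName intent', function () {
    it('returns a speech response', function (done) {
        // Works for handlers that use the callback; if your handler calls
        // context.succeed instead, pass a context object like the one in the
        // test.js example and put the assertions inside its succeed function.
        lambdaFunction.handler(event, {}, function (error, data) {
            assert.ifError(error);
            assert.ok(data.response.outputSpeech); // an Alexa response carries outputSpeech
            done();
        });
    });
});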
I am new to Sails/Node.js. I am trying to create a system that will automatically run a .sh script after a video is uploaded to the server (local file system). I understand from the Sails documentation that file uploading is done in a controller file, and what I want is to trigger a .sh script after that. I also need to continuously read the output from the .sh script and return it to the backend.
Anyone has any idea how to implement this?
Thanks!
You can always use https://www.npmjs.com/package/exec
Simple implementation:
var exec = require('exec');
exec('PATH TO YOUR .SH FILE', function(err, out, code) {
// Process to the next action within the callback function.
});
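Since you mention needing to read the output continuously: exec buffers everything and only hands it back once the script has finished. Node's built-in child_process.spawn streams the output instead. A small sketch, where the script path and the video path are placeholders for your own values:

var spawn = require('child_process').spawn;

// Placeholder paths - point these at your real script and uploaded file.
var child = spawn('/path/to/your-script.sh', ['/path/to/uploaded-video.mp4']);

child.stdout.on('data', function (chunk) {
    // Fired repeatedly as the script writes output, so you can forward
    // each piece to your backend (log it, push it over a socket, ...).
    console.log('script output: ' + chunk.toString());
});

child.stderr.on('data', function (chunk) {
    console.error('script error output: ' + chunk.toString());
});

child.on('close', function (code) {
    console.log('script exited with code ' + code);
});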
In our Node.js project, several pages of debug info are logged into one default console window for each HTTP request. It is very difficult to find the info that is logged by me.
I am not at liberty to edit other parts of the code in this project. I'd like to create a new terminal window and log only my own debug info to it.
Question:
Is it possible to do this using console.log alone? If not, is there any way to launch a new terminal window and log my info there? I think Node.js can access the local file system and call OS functions, so there must be a way.
Thanks!
You can create custom Console objects:
var Console = require('console').Console;
var fs = require('fs');

// Everything logged through this console goes to the file, not to stdout.
var stream = fs.createWriteStream('/tmp/mylog');
var console = new Console(stream, stream);

var i = 0;
(function log () {
    console.log(i++);
    setTimeout(log, 1000);
} ())
Then just do tail -f /tmp/mylog
Use a different logger. Winston supports tons of log formats as well as multiple logging channels (transports). Instead of sending your debug info to the console, send it to a file. You can read it later or tail -f it in real time.
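A minimal sketch of that idea, assuming Winston 3.x (the file name is just an example):

var winston = require('winston');

// A private logger that writes only my own debug messages to a file.
var myLogger = winston.createLogger({
    level: 'debug',
    transports: [
        new winston.transports.File({ filename: 'my-debug.log' })
    ]
});

myLogger.debug('this ends up in my-debug.log, not in the shared console');

Running tail -f my-debug.log in a separate terminal then gives you the dedicated window you were after.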