So, I have text-to-speech turned on, plus "enable beta features and API" enabled. On the Dialogflow web page where you can add intents and test them, the feature is working and I get a small audio control where I can hear the audio corresponding to the fulfillment text.
But when I try to get the audio via the Java API, I'm not getting it. The code below produces the following output:
2018-07-30 02:26:53 mmcsrv.agent.SpeechIntentDetector: response_id
2018-07-30 02:26:53 mmcsrv.agent.SpeechIntentDetector: query_result
2018-07-30 02:26:53 mmcsrv.agent.SpeechIntentDetector: webhook_status
I'd expect to find an output_audio field in there, but there isn't one. So where is the audio?
My Maven dependency for this module:
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-dialogflow</artifactId>
    <version>0.53.0-alpha</version>
</dependency>
I tried 0.55.1-alpha but Maven says it doesn't exist. Not sure if not using the latest version would matter anyway.
Can someone help me? If I can't get this to work, I'll have to send the text back to Google Cloud Text-to-Speech, which I'm guessing will take more time than having the audio data right there in Dialogflow's response.
Thanks.
// Details omitted for brevity...
// Build the DetectIntentRequest
DetectIntentRequest request = DetectIntentRequest.newBuilder()
        .setSession(session.toString())
        .setQueryInput(queryInput)
        .setInputAudio(wav)
        .build();

// Perform the detect intent request
DetectIntentResponse resp = sessionsClient.detectIntent(request);

// List the fields actually present on the response
List<FieldDescriptor> fields = resp.getDescriptorForType().getFields();
for (FieldDescriptor field : fields)
    log.trace(field.getName());
To answer my own question,
Dialogflow does send the audio, but you need the right protocol buffer classes (the generated "proxy") to be able to get it. If you're using Maven like me, version 0.53.0-alpha of the google-cloud-dialogflow artifact pulls in version 0.18.0 of proto-google-cloud-dialogflow-v2beta1, which does not yet have text-to-speech support.
You need version 0.20.1 or above, which you can get by adding this snippet to your pom file:
<dependency>
    <groupId>com.google.api.grpc</groupId>
    <artifactId>proto-google-cloud-dialogflow-v2beta1</artifactId>
    <version>0.20.1</version>
</dependency>
Once you do that, the DetectIntentResponse class will have a getOutputAudio() method that gives you the audio data.
I have it working now.
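For reference, here's a minimal sketch of how you might pull the audio out of the response once the upgraded proxy is on the classpath; the output file name is just an example:

import com.google.protobuf.ByteString;
import java.io.FileOutputStream;
import java.io.IOException;

// 'resp' is the DetectIntentResponse from the code above; with
// proto-google-cloud-dialogflow-v2beta1 0.20.1+ it exposes getOutputAudio()
ByteString audio = resp.getOutputAudio();
try (FileOutputStream out = new FileOutputStream("output.wav")) { // example path
    out.write(audio.toByteArray());
} catch (IOException e) {
    log.error("Failed to write output audio", e);
}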
Related
Just getting started with nlp.js, and I'd like to be able to test out some ideas with their Express API server package.
As far as I can tell, there's no way to "intervene" in the QnA bot exchange, for instance to format the output to contain the user's name or a time or whatever.
Say my corpus was a TSV file with:
some question \t welcome, #name
How would I swap out that #name tag? Right now, I just get that string back exactly as is.
In the conf.json:
"api-server": {
"port": 3000,
"serveBot": true
}
Maybe there's pipeline logic to do that? I can't seem to find much reference material on available events in the pipeline or on how to intercede in the WebChat flow; a sketch of what I mean follows.
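To make that concrete, here's a rough sketch of the post-processing I'm after, using @nlpjs/basic's dockStart (which reads conf.json); the replacement value is obviously made up:

const { dockStart } = require('@nlpjs/basic');

(async () => {
  const dock = await dockStart(); // picks up conf.json
  const nlp = dock.get('nlp');
  await nlp.train();
  const result = await nlp.process('en', 'some question');
  // result.answer comes back as "welcome, #name", straight from the corpus.
  // I'd like a pipeline hook that lets me do this substitution instead:
  const answer = (result.answer || '').replace('#name', 'Alice'); // made-up value
  console.log(answer);
})();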
I am a Python developer, but the circumstances of a project I am working on oblige me to find a solution in Node.js.
This is the simple Python code to send mail, but is there a Google App Engine way to do this in Node.js without using an intermediary service like MailJet or SendGrid?
import logging
from google.appengine.api import mail

def send(recipient, sender, subject, body):
    isHTML = True
    print("recep: " + recipient)
    logging.debug(u'Sending mail {} to {}'.format(
        subject, unicode(recipient)).encode(u'utf-8'))
    message = mail.EmailMessage(
        sender=sender,
        subject=subject,
        to=recipient
    )
    if isHTML:
        message.html = body
    else:
        message.body = body
    message.check_initialized()
    message.send()
Thanks for your understanding and help.
The simple example you posted uses the App Engine-specific Mail API, which is available only in the first generation standard environment (Python 2.7, Java 8, PHP 5.5 and Go 1.9; see the tabs in the referenced documentation page).
Node.js support was added only in the second generation standard environment, which has no such API available.
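In other words, in the Node.js runtime something external has to do the delivery. As a rough illustration only (nodemailer is just one common choice, and the relay host and credentials below are placeholders), the equivalent could look like:

const nodemailer = require('nodemailer');

// Placeholder SMTP relay and credentials; the Node.js runtime has no
// built-in Mail API, so an external relay or mail service must be used.
async function send(recipient, sender, subject, body, isHTML = true) {
  const transporter = nodemailer.createTransport({
    host: 'smtp.example.com', // placeholder
    port: 587,
    auth: { user: 'username', pass: 'password' }, // placeholder
  });
  await transporter.sendMail({
    from: sender,
    to: recipient,
    subject: subject,
    [isHTML ? 'html' : 'text']: body, // mirrors the Python html/body switch
  });
}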
I'm a programmer who is just getting started with Groovy in Jira in order to automate some tasks.
I'm trying to write a custom listener script using the inline editor in Jira, but haven't gotten past trying to get a Hello World program to work.
I don't know if the script is running, and I can't see any output. I really need help figuring out how to debug the script, preferably through output to some kind of console (or even just by reading the Jira logs if necessary), so that I can actually start learning how to use this tool.
I'm working with the information HERE as a general guideline to start learning to work with the inline editor.
For a little more context, you can see another related question that I asked HERE.
I've set the debug level to DEBUG for the event to which I'm attaching the listener, as shown in this screenshot, based on the information found HERE:
Here is a screenshot of the inline editor I'm working with in Jira. In this screenshot, I'm just trying to output 'Hello', and have just clicked the 'Preview' button:
As you can see, in the 'Result' tab at the bottom of the screen, there is nothing of interest. The 'Logs' tab is also empty, and the 'Timing' tab just says 'Elapsed: 0 ms CPU time: 0 ms', so it seems like nothing is happening.
If I check the log on the server (in the file catalina.2017-10-13.txt), I see the following output:
13-Oct-2017 07:01:50.942 WARNING [http-nio-8080-exec-6] com.sun.jersey.spi.container.servlet.WebComponent.filterFormParameters A servlet request, to the URI http://somevmserver:8080/rest/scriptrunner-jira/latest/listeners/com.onresolve.scriptrunner.canned.jira.workflow.listeners.CustomListener/params, contains form parameters in the request body but the request body has been consumed by the servlet or a servlet filter accessing the request parameters. Only resource methods using #FormParam will work as expected. Resource methods consuming the request body by other means will not work as expected.
13-Oct-2017 07:02:26.740 WARNING [http-nio-8080-exec-12] com.sun.jersey.spi.container.servlet.WebComponent.filterFormParameters A servlet request, to the URI http://somevmserver:8080/rest/scriptrunner/latest/canned/com.onresolve.scriptrunner.canned.common.StaticCompilationChecker, contains form parameters in the request body but the request body has been consumed by the servlet or a servlet filter accessing the request parameters. Only resource methods using #FormParam will work as expected. Resource methods consuming the request body by other means will not work as expected.
13-Oct-2017 07:02:26.974 WARNING [http-nio-8080-exec-1] com.sun.jersey.spi.container.servlet.WebComponent.filterFormParameters A servlet request, to the URI http://somevmserver:8080/rest/scriptrunner-jira/latest/listeners/com.onresolve.scriptrunner.canned.jira.workflow.listeners.CustomListener/preview, contains form parameters in the request body but the request body has been consumed by the servlet or a servlet filter accessing the request parameters. Only resource methods using #FormParam will work as expected. Resource methods consuming the request body by other means will not work as expected.
This output doesn't mean a whole lot to me, but it seems apparent that it's being populated as a result of trying to preview the script.
I'm not getting any errors in the inline editor, and it's really simple code, so I don't think it's that.
The only other information I can include that I think is pertinent is that this is a test instance of Jira cloned from our production environment, and its base URL is still set to the URL of the prod environment. Not sure if that has any bearing, but I'm not really a Jira admin, just the programmer tasked with doing this, so I don't want to go fiddling around where I don't need to.
Thanks!
When using ScriptRunner within Jira, you'll need to import the logger to use the debugger or to output to the console. This can be done with the following:
// Enable debugger
import org.apache.log4j.Logger
import org.apache.log4j.Level
def log = Logger.getLogger("com.acme.CreateSubtask")
log.setLevel(Level.DEBUG)
You'll then be able to log information with log.debug "hello" and see it in the Jira log.
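Putting it together, a minimal custom listener body might look like the following; the logger name is arbitrary, and event is the binding ScriptRunner makes available to custom listeners:

import org.apache.log4j.Logger
import org.apache.log4j.Level

// Arbitrary logger name; pick something you can search for in the log file
def log = Logger.getLogger("com.acme.CreateSubtask")
log.setLevel(Level.DEBUG)

// 'event' is the object ScriptRunner binds in custom listeners
log.debug "Hello from the listener, issue: ${event.issue?.key}"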
To see your debug message "Hello" in the log, you must update an issue in your selected project. The Result, Logs and Timing tabs at the bottom are useless in this view. Just trigger the listener with an issue update in your selected project and search for your debug message in the atlassian-jira.log file.
Hint: to view the log in the browser, you can use this Jira app: https://marketplace.atlassian.com/plugins/com.cps.lastLog/server/overview
I am using the actions-on-google ApiAiAssistant node.js library with API.ai to design my chatbot.
I have created a German API.ai agent specifically for it, so I need to get the locale value from the webhook request to know which locale the request is coming from.
I have seen a method, something like ApiAiAssistant.getLocale, for getting the locale information from the request, but I am not able to find it in the documentation.
Is the method deprecated? And how can I get the locale information from the API.ai webhook request?
You're probably looking for the getUserLocale() method: https://developers.google.com/actions/reference/nodejs/AssistantApp#getUserLocale
For example:
const app = new ApiAiApp({request, response});
const locale = app.getUserLocale();
It returns the language/locale combination (such as "en-AU").
If you're just using the JSON object and not the API, you can find the value at originalRequest.data.user.locale. This is the same value returned by the method.
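For illustration, the relevant fragment of the webhook request body looks something like this (the locale value here is just an example):

{
  "originalRequest": {
    "data": {
      "user": {
        "locale": "de-DE"
      }
    }
  }
}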
If you just want the non-standard language field that API.AI returns, you can use the lang field. This isn't available through the API, only by reading the JSON directly, and it contains language information only, not locale information. On the other hand, lang is available if you're using API.AI for multiple platforms, not just Actions on Google. (But if you're targeting other platforms, you probably don't want to be using the actions-on-google node.js library anyway.)
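For comparison, the lang field sits at the top level of the API.AI webhook body, along these lines (other fields elided):

{
  "lang": "de",
  "result": { ... }
}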
Hi, I have done the following steps to implement Universal Links for iOS.
1. My subdomain is npd.nowconfer.com, and my apple-app-site-association file contains:
{
    "applinks": {
        "apps": [],
        "details": [
            {
                "appID": "R3UDJNSN2P.com.sampleUniversal.teledna",
                "paths": ["*"]
            }
        ]
    }
}
This file is uploaded to my subdomain npd.nowconfer.com and is serving over HTTPS.
2. I tested using the AASA validator at https://branch.io/resources/aasa-validator/#resultsbox and all tests pass; you can see the attached screenshot.
3. On the app side, my colleague did the following configuration:
Added the domains to Capabilities, i.e. applinks:nowconfer.com and applinks:npd.nowconfer.com
Handled Universal Links in the app, i.e. in the app delegate, like this:
- (BOOL)application:(UIApplication *)application continueUserActivity:(NSUserActivity *)userActivity restorationHandler:(void (^)(NSArray *))restorationHandler {
    if ([userActivity.activityType isEqualToString:NSUserActivityTypeBrowsingWeb]) {
        NSURL *url = userActivity.webpageURL;
        // handle url
    }
    return YES;
}
4. My universal link is https://npd.nowconfer.com:5000/calendar/deeplink?url=nowconfer. When I click on this link from an email, my app does not open; instead it redirects to the App Store (because the server-side handling redirects to the App Store if the app is not installed on the device).
But when I tested with the universal link validator at https://search.developer.apple.com/appsearch-validation-tool, I got this error:
Link to Application : Error no apps with domain entitlements
The entitlement data used to verify deep link dual authentication is from the current released version of your app. This data may take 48 hours to update.
I have seen lots of tutorials, but nothing has worked for me. Can you guys help me figure out what is happening here?
Universal Links have to be standard http:// or https:// links. This means they need to use the standard web ports (80 or 443), of which 5000 is not one. That is why your link is not working: it's not actually a valid Universal Link. A link like https://npd.nowconfer.com/calendar/deeplink?url=nowconfer (without the :5000) would qualify.
The Apple validator checks for some additional things, and is also somewhat unreliable. This particular error message is confusing, but it has nothing to do with whether your Universal Linking configuration is correct. What it actually means is that Apple can't detect applinks: entitlements and 'proper' handling of passed-in link values in the version of your app that is currently live in the App Store. This is expected if you are just implementing Universal Links for the first time. You don't need to worry about it: a number of large and successful apps with working Universal Links implementations fail this step too.