How can I specify waypoints when starting the navigation intent? - google-gdk

I can start the navigation to a latitude/longitude point with no problems like this:
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setData(Uri.parse("google.navigation:q=40.774472,-73.970304&mode=w"));
startActivity(intent);
How can I add waypoints? Or how can I guide the user through the exact route I need them to go?
What other parameters besides q can I use?
What maps interface is being used here? Is it documented anywhere?
I've tried adding some standard maps arguments and they don't work. Thanks.

You might want to use the Google Maps APIs for this. Here is a link directly to the navigation-with-waypoints documentation (you can go back up a level to see all the documentation for the Maps APIs).
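If you go the API route, here is a minimal sketch of what a Directions web service request with waypoints might look like (Python; the destination, waypoint coordinates, and API key are placeholders of mine, and you would still need to present the resulting route to the user yourself):

import requests

# Ask the documented Directions web service for a walking route
# that passes through intermediate waypoints (pipe-separated).
params = {
    "origin": "40.774472,-73.970304",
    "destination": "40.748817,-73.985428",  # placeholder
    "waypoints": "40.764362,-73.973972|40.758896,-73.985130",  # placeholders
    "mode": "walking",
    "key": "YOUR_API_KEY",  # placeholder
}
resp = requests.get("https://maps.googleapis.com/maps/api/directions/json", params=params)
route = resp.json()["routes"][0]  # first suggested route
for leg in route["legs"]:
    print(leg["start_address"], "->", leg["end_address"])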

Related

How can I track detail view opens in Bixby?

I need to track which detail pages are opened and how often in result sets.
To do my own logging, I would need to send a JavaScript PUT request when the user opens the detail page.
How can I do that?
This is not possible right now in a straightforward way.
If you're feeling adventurous, you can try using lazy-source:
https://bixbydevelopers.com/dev/docs/reference/type/structure.property.lazy-source
Make sure that only the detail view of your concept uses the lazy-source property, so it isn't loaded in the list of summaries, since, as stated in the docs,
Bixby calls the lazy source when the property is referenced either in a layout or dialog.

Dialogflow - Repeat last sentence (voice) for Social Robot Elderly

I am using actions-on-google and Dialogflow to build a social robot for the elderly.
I was wondering how I can easily repeat the last sentence when the user asks for it ("repeat please"), since seniors often don't hear the sentence the first time.
One way would be to have repeat follow-up intents in Dialogflow, but this is quite heavy since:
you need to add one after each intent, and I have many
in a multi-user environment you need to keep track of the last sentence for every user...
Another way would be to take advantage of Dialogflow Contexts. As you send the message, you can also add it to a context (for example, you can call it "last_message"). You can then have another Intent that takes the "last_message" context as an input context and, if triggered, uses the value saved in the context to repeat it.
However, I still have the problem that I need to add a context to every intent I have, which are many.
Does anyone know how to accomplish this in a quicker way? I found this package, but it is in JS and I need it in Python: https://github.com/SysCoder/VoiceRepeater/pulls.
How do I implement this VoiceRepeater library? Do I put the code under the fulfillment function 'repeat' I have made, mapped to an intent called 'repeat' that responds to utterances such as 'Sorry, could you repeat that'? Also, where do I install the VoiceRepeater library (npm install voice-repeater --save)?
Using Followup Intents is probably the wrong way to do this. As you note, it is way too heavy for more than a few Intents. It may be useful in certain circumstances if you want the "repeated" message to clarify the response in a different way, but in general, it isn't very useful. (It should also be noted that Followup Intents use Contexts, but in a different way than discussed below.)
You don't need to add the Context to every Intent in the UI as an Outgoing Context - you can set it as part of your fulfillment. It would include a parameter that either contains exactly what you said, or the information you need to recreate what you said (possibly in a different form, if appropriate). In your "repeat" Intent, you'd read the value that you had saved in this Context and send it as the output again. If you're using SSML, you may wish to change the speed or volume, if that is appropriate.
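To make that concrete, here is a rough sketch of the fulfillment side as a Dialogflow v2 webhook response built in Python (the context name "last_message" follows the question; the session path, lifespan, and function name are just illustrative):

def build_response(session, text_to_speech):
    # session looks like "projects/<project>/agent/sessions/<session-id>"
    # Send the reply and simultaneously save it into a context
    # so a later "repeat" Intent can read it back.
    return {
        "fulfillmentText": text_to_speech,
        "outputContexts": [{
            "name": session + "/contexts/last_message",
            "lifespanCount": 5,  # keep it alive for a few conversational turns
            "parameters": {"last_message": text_to_speech},
        }],
    }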
Update based on new questions
The readme for VoiceRepeater has the basics of what you need to do to use it, but it does assume a little familiarity with Node. In general, though, yes: you install it the way you describe, set up an Intent that captures requests to repeat, and register a handler function (repeatLastStatement(app) in the readme) that handles the Intent by sending a reply through voiceRepeater.lastPromptWithPrefix().
It also may assume you're using version 1 of the actions-on-google library. I haven't dug too deeply into the code, but it looks like it replaces the library's ask function with its own, and I'm not sure how well that works with version 2 of the actions-on-google library.
Unlike VoiceRepeater, multivocal doesn't require you to register handlers specifically, since it tries to hide as much boilerplate as possible under the covers. You just need to define the replies that you might want it to use. It uses the Context scheme I outlined above to store responses and make them available when the user asks for a repeat.
There aren't any videos on using multivocal, but the simple example does include configuration illustrating how to set up responses for the "multivocal.repeat" Intent. While VoiceRepeater works alongside the actions-on-google library, multivocal is a complete replacement, offering a more template-based approach to building fulfillment.
However, neither of these directly helps you if you want to implement it in Python. But if you look at the source for VoiceRepeater, you can get a sense of how to implement it yourself in Python.
The key bit is on line 47, where it saves the reply in a context. (It also saves the reply with a prefix message.) It then calls the original function that would send the reply:
app.setContext("last_prompt", 100,
{
"last_prompt": textToSpeech,
"prefixed_last_prompt": repeatPrefix + lastStatement,
});
originalAsk(response);
Later, in the call to lastPromptWithPrefix(), it uses the contents of the Context to send a reply.
lastPromptWithPrefix() {
    return this.app.getContext("last_prompt") !== null
        ? this.app.getContextArgument("last_prompt", "prefixed_last_prompt").value
        : "um....I don't remember what I said!";
}
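If you want to roll this yourself in Python, as asked above, the reading side might translate to something like this (a sketch that assumes a Dialogflow v2 webhook request and that the reply was saved into a "last_prompt" context as in the snippet above; the function name is mine):

def last_prompt_with_prefix(request_json):
    # Dialogflow echoes active contexts back in queryResult.outputContexts.
    contexts = request_json["queryResult"].get("outputContexts", [])
    for context in contexts:
        if context["name"].endswith("/contexts/last_prompt"):
            return context["parameters"]["prefixed_last_prompt"]
    return "um....I don't remember what I said!"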

Flutter Search Delegate Architecture (code structure best practices)

I am relatively new to Flutter. I was wondering about the proper approach to implementing an AppBar search using the SearchDelegate in Flutter. I have read various articles on how to do that; however, with the dummy data used in the examples and no real-world scenarios (code structure), there is no hassle.
My use case consists of
AppBar (home widget - where the search button exists)
One tab (another widget, making a service call to the DB at init)
Another tab (another widget, making a service call to the DB at init)
My issue is that I want the search to operate on the results of, let's say, the first tab. So somehow I have to pass the values returned from the service up to my Home widget and then on to the SearchDelegate.
I do not know which is the proper way to do that.
InheritedModel/InheritedWidget?
Passing values via constructors from one widget to another? (Then I would have tight widget coupling, and I do not want that.)
Some other way, using services?
Any other solution?
I want the solution to be scalable (as much as possible), in order to make adjustments in the near future or add new functionality.
Thanks in advance for your time.
Update
I tried InheritedModel/InheritedWidget. For some reason, when I tried to access the data from the build method in the delegate, I was receiving a null inherited object. Probably I was doing something wrong... I will keep trying...
Adding an image in order to clarify the problem with my app structure...
Do you already know this tutorial?
https://www.youtube.com/watch?v=Wm3OiFBZ2xI
I think it is exactly what you need. My app uses data from Cloud Firestore, and the search function is implemented like the search function in this tutorial. It works very well.

Can I associate custom metadata with an ALAsset?

I'm building an iPhone app that, among other things, allows the user to take and store photographs associated with locations. I am currently using the ALAssetLibrary to allow the photographs to be stored in Photos and be accessible outside the app (on a computer, for instance, via the built-in mechanisms). There is not a lot of technical content out there for working with the ALAssetLibrary, but from what there is I have managed to cobble together a working version of this. I have had to resort to storing a dictionary of photo URLs in my app and manually detecting whether each photo still exists when displaying lists of them, because there does not seem to be a way to add custom metadata to an ALAsset.
What I would really like to do is add two custom metadata fields to each asset to provide it with a title and a custom id value that I can use to filter on when enumerating the asset library.
As a secondary task, I'd like the user to be able to update the title metadata.
Can it be done? At this point, I really don't think it can because the API really doesn't seem to provide the necessary methods to get/set custom metadata. I'm hoping against all odds that there is some other aspect to the AssetLibrary framework that I have not yet discovered.
At a minimum, if someone can authoritatively say "NO" then at least others might find this breadcrumb on their own trail of hope and change tack more quickly!
And, having 0 reputation I can't tag it with AssetLibrary :( wow, this day is just going downhill. FML
I've been looking over the documentation and I don't think it is possible to tack additional fields onto the ALAsset object. You can create your own object or extend theirs, but that won't help you when you're pulling back assets, because you'd need to init yours and populate it at that point.
Look, I know this falls short of a really good answer, but I had to try.
The ALAsset class documentation describes a property, customMetadata. This is documented to be an NSDictionary of whatever custom tags you want. Currently, however, it is not implemented in the class (I've raised a bug on Apple's developer site to bring the issue up).

Suggest list in google maps search input

We need to create a search input field like the one on http://maps.google.com.
The key functionality is the suggest list with appropriate results. We have not found this feature in the API.
Analyzing maps.google.com, we see that the suggest list is received from a GET request to this URL:
https://maps-api-ssl.google.com/maps/suggest?q=%D0%BC%D0%BE%D1%81&cp=...
There are many parameters, including data from the search field. This GET request returns the suggest list.
Is there a possibility to use this URL for our needs, with our data? Or how can we do it in some other way?
Similar to our needs: http://cdn.michaelhart.me/mh/instant/maps/
Check this out:
http://tech.cibul.net/geocode-with-google-maps-api-v3/
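For the "some other way" part, here is a minimal sketch of querying the documented Geocoding web service instead of the private suggest endpoint (Python; the address and API key are placeholders, and you would still have to build the suggest-style UI around the results yourself):

import requests

# Resolve free-form user input through the documented Geocoding web service
# rather than the private /maps/suggest endpoint quoted below.
params = {"address": "Moscow", "key": "YOUR_API_KEY"}  # placeholders
resp = requests.get("https://maps.googleapis.com/maps/api/geocode/json", params=params)
for result in resp.json().get("results", []):
    print(result["formatted_address"])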
Theoretically you shouldn't use maps-api-ssl.google.com/maps/suggest, as it might not be legal. I found this quote from a Google employee:
'Endpoints like this that are used by Google Maps but not documented as part of the Maps API should be considered private interfaces. Consequently use of those end points is a breach of the Terms of Service. In addition any existing API credentials you may have are completely unrelated to these end points because they are not served by API infrastructure'
