Is there any function to act as a magnet trigger in the Google Cardboard API? - google-cardboard

I looked into the Cardboard SDK for Android but have not found any function that can generate a magnet trigger. However, I did find Bluetooth devices that can act as a magnet trigger, so they are likely using some API to do that. So, is it possible to write an app that generates a magnet trigger? Thanks in advance.

I think the solution is to write an accessibility service that receives the Bluetooth events and generates a TYPE_VIEW_CLICKED event.

Related

Mouse control using Twilio's screen share session

Here is my situation. I have used Twilio on my portal for "creating a meeting + chat + screen share" and want to add hand-over of mouse control as in join.me, Zoom, or TeamViewer. Is there any API, SDK, or other way to achieve that while using Twilio for screen sharing? As I have already paid for Twilio, I cannot opt for other meeting integrations. Or is it possible to leverage the facilities of another application alongside Twilio?
Thanks
The nearest thing is to make use of the Twilio Programmable Video DataTracks API.
Announcing Programmable Video DataTracks API
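DataTracks only carry arbitrary messages between participants; the mouse-control semantics are up to you. A minimal sketch of how the control messages could be serialized, assuming the viewer sends them over a Twilio `LocalDataTrack` and the screen-sharer applies them (the event shape and helper names here are illustrative assumptions, not part of the Twilio API):

```javascript
// Serialize mouse-control events so they can travel over a Twilio
// Programmable Video DataTrack. Coordinates are normalized (0..1) so the
// controller and the shared screen may have different resolutions.
function encodeMouseEvent(type, x, y) {
  return JSON.stringify({ kind: 'mouse', type, x, y, ts: Date.now() });
}

function decodeMouseEvent(payload) {
  const msg = JSON.parse(payload);
  if (msg.kind !== 'mouse') return null; // ignore unrelated data messages
  return msg;
}

// Controlling side (assumes `localDataTrack` is a published
// Twilio LocalDataTrack):
//   localDataTrack.send(encodeMouseEvent('move', 0.5, 0.25));
// Screen-sharing side:
//   remoteDataTrack.on('message', p => applyToCursor(decodeMouseEvent(p)));
```

Note that the DataTrack only delivers the message; actually moving the remote cursor requires a native helper on the sharing machine, which is outside what Twilio provides.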

Is it possible to make a receiver app for chrome tab

I'm posting here because I didn't find any satisfying answer anywhere.
The question is quite simple. I see a lot of applications implementing the cast feature on Android. The issue is that even though I have a brand-new smart TV, it doesn't actually support the cast feature of the majority of my apps.
For example, my TV has a YouTube app, so I can cast YouTube videos from the YouTube app on my phone to my TV.
Now I would like to cast my favorite streaming app to my TV, but my TV is not found. So I'm thinking: okay, let's try to make an app for my TV that will receive that kind of command.
I know that I can make an app for my TV. Before starting that ambitious project, I want to be sure that the Google Cast SDK will allow me to write such a receiver app.
Do you think this is possible? Or do we really need one receiver app for every sender app?
YouTube uses its own discovery protocol beyond what the Cast SDK supports. Apps need to integrate the Cast SDK in their senders and implement receivers that support their own authentication and DRM, so a single generic receiver that works for every sender app is not feasible.

Launching app from Google assistant via Dialogflow

I'm using Dialogflow to interact with my users (they can ask questions, ask to receive reports, etc.) and I would like to launch an Android application when they invoke one of the intents I created. Is there a way to do that?
Short answer: Not really.
Longer answer: While you can't have one of your Actions trigger any Android Intent directly, you do have a few options to strongly suggest to a user that they do so. For example:
You can use something like Firebase Cloud Messaging (FCM) to trigger a notification/event.
If you're relying on the screen of the Android device, you can send a card that includes a URL, and that URL can deep link into your application if you've configured it.
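The deep-link-via-card option can be sketched as a webhook response containing a basic card whose button URL points at a page your app handles as an Android App Link. The field names below follow the Actions on Google rich-response JSON, and the URL is a placeholder; verify both against the current documentation:

```javascript
// Hedged sketch: a webhook response with a card button that deep links into
// an Android app. Assumes https://example.com/open is configured as an
// App Link so tapping the button opens the app instead of the browser.
function buildDeepLinkResponse(speech) {
  return {
    expectUserResponse: false,
    richResponse: {
      items: [
        { simpleResponse: { textToSpeech: speech } },
        {
          basicCard: {
            title: 'Open the report in the app',
            buttons: [
              {
                title: 'Open app',
                openUrlAction: { url: 'https://example.com/open' },
              },
            ],
          },
        },
      ],
    },
  };
}
```

The card (and the button) only appears on surfaces with a screen, so this path works on phones but not on voice-only devices like Google Home.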

Creating action on google | Audio playback

I'm new to Actions on Google and right now doing R&D. I've created an audio skill on Alexa and now want the same for Google Assistant as well. But I have a few questions:
1. Can we return audio in a response? My audio files are about 1 hour long, so can we play them in our action? In Alexa, we have the AudioPlayer. Is there anything like that in the Assistant?
2. I didn't find any SDK, but devs are talking about one, so there must be something. Kindly share the link.
Thanks in anticipation.
Update:
I believe the SDK is actions-on-google. I've not explored it yet, but it's the SDK I found for creating actions with Node.js.
Link: actions-on-google
Actions support SSML, which provides playback of audio files: https://developers.google.com/actions/reference/ssml#support_for_ssml_elements
At the moment there is a 120-second maximum duration for all the supported audio formats, but you can break longer audio up and play the segments in sequence.
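Splitting an hour-long file to fit that cap could look like the sketch below, assuming the audio has been pre-split server-side into segments under 120 seconds; the URL scheme and helper names are illustrative, not part of the Actions SDK:

```javascript
// Build one SSML response per pre-split segment, e.g.
// https://example.com/audio/part-0.mp3, part-1.mp3, ...
// The fallback text inside <audio> is read aloud if the file cannot play.
function ssmlForSegment(baseUrl, index) {
  return `<speak><audio src="${baseUrl}/part-${index}.mp3">` +
         `Sorry, the audio could not be played.</audio></speak>`;
}

// Track playback across conversation turns: returns the next segment to
// play, or -1 when the last segment has finished.
function nextSegmentIndex(current, total) {
  return current + 1 < total ? current + 1 : -1;
}
```

You would store the current segment index in the conversation state between turns, and prompt the user (e.g. "Continue?") before playing the next chunk.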
If you have your own NLU, you can use the Actions SDK. If you don't have your own NLU, then you can use API.AI to create an action.
A node.js client library is available for either of these options: https://github.com/actions-on-google/actions-on-google-nodejs
For any other developer questions, you should look at the actions documentation: https://developers.google.com/actions/develop/conversation

Possible to return an image in the Google Actions webhook response?

From the docs it seems like SpeechResponse is the only documented type of response you can return:
https://developers.google.com/actions/reference/conversation#SpeechResponse
Is it possible to load an image or some other type of media into the Assistant conversation via API.AI or the Actions SDK? It seems like this is supported with API.AI for FB and other messengers:
https://docs.api.ai/docs/rich-messages#image
Thanks!
As of today, the Google Actions SDK supports Conversation Actions, building a better voice UI integrated with Google Home.
The API.AI integrations with Google Actions can be checked out here, which shows there is currently no support for images in the response.
Once they provide integration with Google Allo, they might start supporting images, videos, etc. in that messaging interface.
That feature seems to be present now. You can look it up in the docs at https://developers.google.com/actions/assistant/responses
Note: images are supported only on devices with a visual output, so Google Home obviously would not be able to display them, but devices with a screen do support a card with an image.
Pro tip: yes, you can.
What you want to do is represent your image/video as a URL within API.AI and render the URL as an image/video within your app.
see working example
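As a sketch of that URL-based approach, an API.AI (v1-era) webhook can return a rich "image" message alongside the spoken text; the client then renders the URL. The message shape follows the rich-messages docs linked above (message type 3 is an image), but treat the exact fields and the placeholder URL as assumptions to check against the docs:

```javascript
// API.AI v1 rich message of type 3 (image), targeted at a messenger
// platform; the client renders imageUrl rather than receiving image bytes.
function imageMessage(imageUrl) {
  return {
    type: 3,              // 3 = image in API.AI v1 rich messages
    platform: 'facebook', // or another supported messenger platform
    imageUrl: imageUrl,
  };
}

// Full webhook response: spoken text plus the rich image message.
function webhookResponse(speech, imageUrl) {
  return {
    speech: speech,
    displayText: speech,
    messages: [imageMessage(imageUrl)],
  };
}
```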
