How do I interface with SmartThings from a capsule? - bixby

I know how to create Bixby capsules and have seen the viv library, but I see no way to interface with SmartThings.
I've looked through the Bixby developer docs.
I can't show any code yet, since I haven't found a way to start.

Related

Adaptive cards extension button style

I'm writing a custom Adaptive Card in TypeScript, and I need styled buttons. Changing the font and background color would be enough for me, but I don't fully understand how to implement a custom renderer. Can you help me with this?
I found a similar post, but I don't fully understand it and need more detailed information:
AdaptiveCards - How to customize the color and fonts for Actions on iOS?
Thanks!
Just to have a proper answer here: as has been mentioned in other questions before, it is not possible to change the layout of anything AdaptiveCard-related unless you are the host doing the rendering yourself.
The purpose of Adaptive Cards is that the host defines the look and feel, so cards always feel as if they're part of the host's UI, while you define the content of the card.
If you want to customize the card layout, that is only possible if you host the card in your own website, web app, etc., which is entirely doable.
In any Microsoft app (Teams, Power Apps, etc.) you cannot change the look and feel of Adaptive Cards beyond what the cards come with.
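If you do control the host, for example by rendering cards yourself with the adaptivecards JavaScript SDK in your own web app, styling lives in the host config rather than in the card payload. A minimal TypeScript sketch, assuming the adaptivecards npm package and a page element with id card-root (both illustrative):

```typescript
import * as AdaptiveCards from "adaptivecards";

const card = new AdaptiveCards.AdaptiveCard();

// The host config is where fonts, colors, spacing and action styling are defined.
card.hostConfig = new AdaptiveCards.HostConfig({
  fontFamily: "Segoe UI, Helvetica Neue, sans-serif",
  actions: {
    buttonSpacing: 8,
    actionsOrientation: "horizontal",
    actionAlignment: "left",
  },
});

// The card payload only describes content; the host decides how it looks.
card.parse({
  type: "AdaptiveCard",
  version: "1.3",
  body: [{ type: "TextBlock", text: "Hello from my own host" }],
  actions: [{ type: "Action.Submit", title: "Styled button" }],
});

const rendered = card.render(); // returns an HTMLElement (or undefined)
if (rendered) {
  document.getElementById("card-root")?.appendChild(rendered);
}
```

For finer control, such as a button background color, the usual approach is to style the CSS classes the renderer emits (e.g. .ac-pushButton) from your own stylesheet.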

Cannot create intent with certain trigger phrases

It seems Google Assistant is unable to handle certain trigger phrases in the intent. The ones I have come across are the following:
Send message to scott
chat with q
Send text to felix
It seems to work fine inside the Dialogflow simulator. However, it doesn't work at all in the Actions Console simulator or on a real device like a Google Home Mini. The Actions Console simulator says "You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices", and a real device gives the error "I am sorry, i cannot help you .." and exits completely, leaving the device in a funky state. It doesn't seem to trigger any fallback intent. I have tried adding an input context, but it makes no difference.
It's very easy to reproduce. Just create a demo action with an intent for the above phrases along with "communicate with penny", invoke your app and then try the above phrases after the welcome message. It will only work if you say "communicate with ..".
Is this a known issue/limitation? Is there a list of phrases that we cannot use to trigger an intent?
The Actions on Google libraries (Node.js, Java) include a limited feature set that allows third-party developers to build actions for the Google Assistant.
Standard features available in the Google Assistant (like "text Mary 'Hello, world!'") won't be available in your action until you build that feature, using fulfillment.
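As a rough sketch of what "building that feature with fulfillment" can look like, assuming the Node.js actions-on-google client library (v2), a Dialogflow intent named SendMessage with a recipient parameter, and Firebase Cloud Functions for hosting (all of these names are illustrative):

```typescript
import { dialogflow } from "actions-on-google";
import * as functions from "firebase-functions";

const app = dialogflow();

// Hypothetical intent that re-implements a "send a message" feature inside the action.
app.intent("SendMessage", (conv, params) => {
  // Assumes the Dialogflow intent defines a parameter named "recipient".
  const recipient = params.recipient as string;
  conv.ask(`Okay, what should I tell ${recipient}?`);
});

// Exposed as the Dialogflow fulfillment webhook; Cloud Functions is just one hosting option.
export const fulfillment = functions.https.onRequest(app);
```

Until your action handles such an intent itself, phrases like "send message to scott" collide with the Assistant's standard messaging feature rather than reaching your fulfillment, which matches the simulator's warning above.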
Rather than looking for a list of phrases you can't use, review the documentation for invocation to see what you can use. Third-party actions for the Google Assistant are invoked explicitly, along the lines of "Hey Google, talk to <your invocation name>".
To learn how to get started building for the Google Assistant, check out Google's codelabs at https://codelabs.developers.google.com/codelabs/actions-1/#0
If you've already reviewed Google's Actions on Google Codelabs through level three, consider updating your question to include a list of your intents and a code sample of your fulfillment, so other Stack Overflow users can understand how your project behaves.
See also What topics can I ask about here? and How do I ask a good question?

How can I get intent confidence in the IBM Watson Assistant user interface?

I can get the intent confidence via the JSON response object in back-end languages like Node or Python, but I can't get it in the browser-based IBM Watson Assistant user interface. Is there a way to do that?
You can use <? intents ?> to get the information on the intents.
The object is readable and writable in dialog.
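For example, a dialog node response along these lines (a small sketch using Watson Assistant's dialog expression syntax; the wording is illustrative) echoes the top intent and its confidence directly into the "Try it out" panel:

```
Detected intent: <? intents[0].intent ?> (confidence: <? intents[0].confidence ?>)
```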
The Watson Assistant online tool for editing workspaces and dialog elements has the "Try it out" panel. However, it does not yet have the capability to explore the JSON structure returned by the message API.
What I use is this tool, which lets you test a conversation, see and edit the context, and inspect the confidence levels. There is also another, browser-based tool you could use. Neither of them is official.
You can also use your browser's network tools to see the JSON of the whole message being passed around.
Typically you right-click anywhere, choose "Inspect Element", and then go to the Network tab.

Possible to return an image in the Google Actions webhook response?

From the docs it seems like SpeechResponse is the only documented type of response you can return:
https://developers.google.com/actions/reference/conversation#SpeechResponse
Is it possible to load an image or some other type of media in the Assistant conversation via API.AI or the Actions SDK? It seems like this is supported in api.ai for FB and other messengers:
https://docs.api.ai/docs/rich-messages#image
Thanks!
As of today, the Google Actions SDK supports Conversation Actions for building a voice UI, which is integrated with Google Home.
Even the API.AI integration with Google Actions can be checked out here, and it currently shows no support for images in the response.
When they provide an integration with Google Allo, then in that messaging interface they might start supporting images, videos, etc.
That feature seems to be present now. You can look it up in the docs at https://developers.google.com/actions/assistant/responses
Note: images are only supported on devices with a visual output, so Google Home obviously would not be able to display them, but devices with a screen do support a card with an image.
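As an illustration of what that can look like with the Node.js actions-on-google client library (v2), using a hypothetical ShowPicture intent and a placeholder image URL:

```typescript
import { dialogflow, BasicCard, Image } from "actions-on-google";

const app = dialogflow();

// Hypothetical intent that answers with speech plus an image card.
app.intent("ShowPicture", (conv) => {
  // A simple spoken/text response must accompany the rich response.
  conv.ask("Here is the picture you asked for.");
  conv.ask(
    new BasicCard({
      title: "Example picture",
      image: new Image({
        url: "https://example.com/picture.png", // placeholder URL
        alt: "Example picture",
      }),
    })
  );
});
```

On a voice-only surface such as Google Home, only the spoken part is played; the card with the image appears on devices that have a screen.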
Pro tip: yes, you can.
What you want to do is represent your image/video as a URL within API.AI and render that URL as an image/video within your app.
See the working example.

Can we develop Google Contextual Gadgets?

I am trying to develop a contextual gadget, but I am not finding any documentation for it. Google provides a document that is very old and has not been updated for a long time, and the process it explains for developing a gadget is deprecated.
Please, if anyone has a solution, help me.
You may want to check the full documentation for Gmail Contextual Gadgets, which was last updated June 29, 2016.
To develop a Gmail Contextual Gadget, you may want to first check the implementation parts discussed in the documentation. Then you can go through this summary of steps:
Use jQuery, or write JavaScript that conforms to ECMAScript 5 strict mode.
Note: You need to be using the correct development frameworks to provide an extra layer of protection between your gadget's potential vulnerabilities and your end users. To find out why, see Using the right frameworks for security.
Choose one or more pre-canned extractors. This determines which type of content will trigger your gadget.
Write a manifest for the gadget.
Write the gadget spec. This determines what the gadget will do when it is triggered.
Publish the gadget spec to a location that is accessible on the public Internet. An intranet will not work, and your hard drive will not work. (Why? Google's servers need to download the gadget; if they can't reach it, Gmail can't display it.)
Install the gadget.
Test the gadget by sending yourself some email. The gadget should appear in Gmail whenever you read an email that contains the right sort of content. For more tips on testing gadgets, see Publishing Your Gadget in the gadgets API site.
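To make steps 3 to 5 above more concrete, here is a rough, illustrative sketch of what a minimal gadget spec might look like; the title, the chosen extractor, and the markup are placeholder assumptions, so check the documentation for the exact elements you need:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="Example contextual gadget">
    <!-- The pre-canned extractor decides which email content triggers the gadget. -->
    <Require feature="google.contentmatch">
      <Param name="extractors">google.com:EmailAddressExtractor</Param>
    </Require>
  </ModulePrefs>
  <Content type="html" view="card">
    <![CDATA[
      <div id="matches"></div>
      <script type="text/javascript">
        "use strict";
        // Read the content matches Gmail extracted from the open message.
        var matches = google.contentmatch.getContentMatches();
        document.getElementById("matches").textContent = JSON.stringify(matches);
      </script>
    ]]>
  </Content>
</Module>
```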
It will really help if you go through the documentation, as there are best practices, limitations, and important details that you should note.
This related SO post might also help.
