Access Dialogflow context variables in Voximplant

I have some parameters set in the Dialogflow agent and I want to access them in the VoxEngine scenario (dialogflow-es).

My question is: how can I access the parameters that are sent by Dialogflow over the course of the conversation?
For example, when a live agent handoff is done, I want to transfer the call to specific phone numbers that come from Dialogflow.
In short, how can I access a parameter in VoxEngine for the Dialogflow CX integration?

The liveAgentHandoff value is included in the response to Voximplant, and your parameters will be available in its metadata field:
https://cloud.google.com/dialogflow/cx/docs/reference/rpc/google.cloud.dialogflow.cx.v3#google.cloud.dialogflow.cx.v3.ResponseMessage.LiveAgentHandoff
Here's a code sample showing how to recognize a liveAgentHandoff request in a Voximplant scenario:
let transfer = false;
let number;

conversationParticipant.addEventListener(CCAI.Events.Participant.Response, (e) => {
  if (e.response.automatedAgentReply?.responseMessages) {
    e.response.automatedAgentReply.responseMessages.forEach((response) => {
      if (response.liveAgentHandoff) {
        transfer = true;
        number = response.liveAgentHandoff.metadata.phoneNumber;
        Logger.write('###### LiveAgentHandoff being triggered: ' + JSON.stringify(response));
      }
    });
  }
});
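Once the handoff is flagged, you can use the saved number to dial out and bridge the call. A minimal sketch, assuming an inboundCall variable holds the original call and the caller ID is a placeholder:

// Called from the Response handler above once liveAgentHandoff is seen
function transferToAgent(inboundCall, number) {
  const agentCall = VoxEngine.callPSTN(number, '<caller id>'); // caller ID is a placeholder
  agentCall.addEventListener(CallEvents.Connected, () => {
    // Bridge audio between the original call and the live agent
    VoxEngine.sendMediaBetween(inboundCall, agentCall);
  });
}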

Related

How to add Get Started button in the typing bar using bot builder sdk for node.js

I am using the Bot Builder SDK for Node.js to create a chatbot and have connected it to the Facebook channel. I am using the following code to greet the user:
var bot = new builder.UniversalBot(connector, [
  (session, result, next) => {
    let text = '';
    switch (session.message.address.channelId) {
      case 'facebook':
        text = 'Hi ' + session.message.user.name + ' !';
        break;
      default:
        text = 'Hi !';
    }
    session.sendTyping();
    session.say(text);
    next();
  },
  (session, say) => {
    // next waterfall step
  }
]);
The above code works fine, but I want to add a "Get Started" button in the typing bar to invoke the above code. Note that this button appears only once. Please find an image of the typing bar below:
Is there a way to achieve this using the Bot Builder SDK for Node.js?
Thanks
Although one can certainly add a button to start any activity with the bot, that would limit the bot's potential to only one customizable channel, i.e. WebChat.
I think there are two better alternative ways to get the desired functionality, both of which work across many channels.
First
I would suggest adding a conversationUpdate event handler. The code goes in the Bot Builder middleware. Here is a sample from the docs:
bot.on('conversationUpdate', function (message) {
  if (message.membersAdded && message.membersAdded.length > 0) {
    // Say hello
    var txt = "Send me a Hi";
    var reply = new builder.Message()
      .address(message.address)
      .text(txt);
    bot.send(reply);
  }
});
What this does is make the bot send the message "Send me a Hi" to the user if it determines this is a first-time visitor. This gives the visitor enough of a cue to send the bot a Hi by typing it. Although they can enter whatever they want, this will result in the invocation of the first dialog configured, which in this case is the dialog you posted in the question.
Second
You can mark a dialog to be invoked automatically if your bot has never encountered this visitor before. Here is the sample code:
var bot = new builder.UniversalBot(connector);
bot.dialog('firstRun', function (session) {
  session.userData.firstRun = true;
  session.send("Hello...").endDialog();
}).triggerAction({
  onFindAction: function (context, callback) {
    // Only trigger if we've never seen the user before
    if (!context.userData.firstRun) {
      // Return a score of 1.1 to ensure the first run dialog wins
      callback(null, 1.1);
    } else {
      callback(null, 0.0);
    }
  }
});
Here we have split the bot creation and dialog registration into two steps. While registering the firstRun dialog, we provided it with a triggerAction so that, if the visitor is new, this dialog is triggered.
Both of these approaches avoid adding extra buttons; it is up to the bot either to educate the visitor about sending some message (which in turn starts the first dialog) or to directly start a dialog.
For more info on the conversationUpdate event you can refer to this page.
I tried the above options, but they didn't seem to work for Facebook Messenger. However, I found a solution to add the Get Started button to the typing bar of Messenger. For that we need to use the Facebook Graph API rather than the Bot Builder SDK:
POST https://graph.facebook.com/v2.6/me/messenger_profile?access_token=<PAGE_ACCESS_TOKEN>
{
  "get_started": {
    "payload": "Get Started"
  }
}
The above API call will add the button for you to get the conversation started.
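If it helps, here is the same call as a sketch in Node (assuming Node 18+, where fetch is global; the page access token is a placeholder):

const PAGE_ACCESS_TOKEN = '<PAGE_ACCESS_TOKEN>'; // placeholder

// Register the Get Started button via the Messenger Profile API
fetch('https://graph.facebook.com/v2.6/me/messenger_profile?access_token=' + PAGE_ACCESS_TOKEN, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ get_started: { payload: 'Get Started' } })
})
  .then((res) => res.json())
  .then((json) => console.log(json)); // should log { result: 'success' } on success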
Thanks all for the help!!

args not returning expected LUIS result after implementing BotAuth

I have been creating a chat bot with the MS Bot Framework in Node.js and LUIS. Recently I have been trying to get certain information from the MS Graph API, and have (sort of) successfully implemented BotAuth and am able to get the information I want.
The issue I am facing now is that, for the dialog that implements BotAuth, I am not able to get the usual args that come with LUIS-intent-triggered dialogs. Thus, I am not able to get any entities that the user might have entered. Other dialogs that do not implement BotAuth have no issues with this.
What I am getting now from args is:
{ response: undefined, resumed: 4 }
I am guessing that the issue lies with the [].concat part in this section:
bot.dialog('refreshSchDialog-oauth', [].concat(
  ba.authenticate("aadv2"),
  (session, args, skip) => {
    let user = ba.profile(session, "aadv2");
    session.endDialog(user.displayName);
    session.userData.accessToken = user.accessToken;
    session.userData.refreshToken = user.refreshToken;
    console.log('args');
    console.log(args);
    if (user.accessToken) {
      session.send('got leh');
      // valid access token, check if luis has any entities (MV name)
      // if there is, store conversationData and move to next dialog
      if (args.entities) {
        for (i = 0; i < args.entities.length; i++) {
          if (args.entities[i].type == 'dbName') {
            session.conversationData.mvName = args.entities[i].entity;
            session.send(args.entities[i].entity);
          }
        }
      }
      session.beginDialog('refreshSchDialog');
    } else {
      // no valid access token
      // TODO error message
    }
  }))
  .triggerAction({
    matches: 'refreshSchema',
    intentThreshold: 0.3
  });
May I know why the args is not returning the information from LUIS?
Looking at the BotAuth code, it appears that the auth dialog returns the user if properly authenticated, or false if the dialog failed. It doesn't copy over the args from LUIS. I would change your code so that the first function in your waterfall stores the LUIS data in session.dialogData, then call ba.authenticate, and then use both results in your last waterfall step.
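A sketch of that restructuring, keeping the asker's names (whether the entities arrive on args.entities or args.intent.entities depends on how the dialog is invoked, so check both):

bot.dialog('refreshSchDialog-oauth', [].concat(
  (session, args, next) => {
    // Stash the LUIS recognizer result before ba.authenticate overwrites args
    session.dialogData.luisArgs = args;
    next();
  },
  ba.authenticate('aadv2'),
  (session, args) => {
    // args now holds the BotAuth result; the LUIS data is in dialogData
    const user = ba.profile(session, 'aadv2');
    const luis = session.dialogData.luisArgs;
    const entities = luis && (luis.entities || (luis.intent && luis.intent.entities));
    if (user.accessToken && entities) {
      // ... look for the 'dbName' entity here, as in the original code
    }
  }
)).triggerAction({
  matches: 'refreshSchema',
  intentThreshold: 0.3
});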

GoogleActions Account not linked yet error

I'm trying to implement OAuth2 authentication in my Node.js Google Assistant app, developed using Dialogflow (API.AI) and Actions on Google.
So I followed this answer, but I'm always getting an "It looks like your test oauth account is not linked yet." error. When I try to open the URL shown on the debug tab, it shows a 500 broken URL error.
Dialogflow fulfillment
index.js
'use strict';

const functions = require('firebase-functions'); // Cloud Functions for Firebase library
const DialogflowApp = require('actions-on-google').DialogflowApp; // Google Assistant helper library

const googleAssistantRequest = 'google'; // Constant to identify Google Assistant requests

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  console.log('Request headers: ' + JSON.stringify(request.headers));
  console.log('Request body: ' + JSON.stringify(request.body));
  // An action is a string used to identify what needs to be done in fulfillment
  let action = request.body.result.action; // https://dialogflow.com/docs/actions-and-parameters
  // Parameters are any entities that Dialogflow has extracted from the request.
  const parameters = request.body.result.parameters; // https://dialogflow.com/docs/actions-and-parameters
  // Contexts are objects used to track and store conversation state
  const inputContexts = request.body.result.contexts; // https://dialogflow.com/docs/contexts
  // Get the request source (Google Assistant, Slack, API, etc) and initialize DialogflowApp
  const requestSource = (request.body.originalRequest) ? request.body.originalRequest.source : undefined;
  const app = new DialogflowApp({request: request, response: response});
  // Create handlers for Dialogflow actions as well as a 'default' handler
  const actionHandlers = {
    // The default welcome intent has been matched, welcome the user (https://dialogflow.com/docs/events#default_welcome_intent)
    'input.welcome': () => {
      // Use the Actions on Google lib to respond to Google requests; for other requests use JSON
      //+app.getUser().authToken
      if (requestSource === googleAssistantRequest) {
        sendGoogleResponse('Hello, Welcome to my Dialogflow agent!'); // Send simple response to user
      } else {
        sendResponse('Hello, Welcome to my Dialogflow agent!'); // Send simple response to user
      }
    },
    // The default fallback intent has been matched, try to recover (https://dialogflow.com/docs/intents#fallback_intents)
    'input.unknown': () => {
      // Use the Actions on Google lib to respond to Google requests; for other requests use JSON
      if (requestSource === googleAssistantRequest) {
        sendGoogleResponse('I\'m having trouble, can you try that again?'); // Send simple response to user
      } else {
        sendResponse('I\'m having trouble, can you try that again?'); // Send simple response to user
      }
    },
    // Default handler for unknown or undefined actions
    'default': () => {
      // Use the Actions on Google lib to respond to Google requests; for other requests use JSON
      if (requestSource === googleAssistantRequest) {
        let responseToUser = {
          //googleRichResponse: googleRichResponse, // Optional, uncomment to enable
          //googleOutputContexts: ['weather', 2, { ['city']: 'rome' }], // Optional, uncomment to enable
          speech: 'This message is from Dialogflow\'s Cloud Functions for Firebase editor!', // spoken response
          displayText: 'This is from Dialogflow\'s Cloud Functions for Firebase editor! :-)' // displayed response
        };
        sendGoogleResponse(responseToUser);
      } else {
        let responseToUser = {
          //richResponses: richResponses, // Optional, uncomment to enable
          //outputContexts: [{'name': 'weather', 'lifespan': 2, 'parameters': {'city': 'Rome'}}], // Optional, uncomment to enable
          speech: 'This message is from Dialogflow\'s Cloud Functions for Firebase editor!', // spoken response
          displayText: 'This is from Dialogflow\'s Cloud Functions for Firebase editor! :-)' // displayed response
        };
        sendResponse(responseToUser);
      }
    }
  };
  // If undefined or unknown action, use the default handler
  if (!actionHandlers[action]) {
    action = 'default';
  }
  // Run the proper handler function to handle the request from Dialogflow
  actionHandlers[action]();
  // Function to send correctly formatted Google Assistant responses to Dialogflow which are then sent to the user
  function sendGoogleResponse (responseToUser) {
    if (typeof responseToUser === 'string') {
      app.ask(responseToUser); // Google Assistant response
    } else {
      // If speech or displayText is defined use it to respond
      let googleResponse = app.buildRichResponse().addSimpleResponse({
        speech: responseToUser.speech || responseToUser.displayText,
        displayText: responseToUser.displayText || responseToUser.speech
      });
      // Optional: Overwrite previous response with rich response
      if (responseToUser.googleRichResponse) {
        googleResponse = responseToUser.googleRichResponse;
      }
      // Optional: add contexts (https://dialogflow.com/docs/contexts)
      if (responseToUser.googleOutputContexts) {
        app.setContext(...responseToUser.googleOutputContexts);
      }
      app.ask(googleResponse); // Send response to Dialogflow and Google Assistant
    }
  }
  // Function to send correctly formatted responses to Dialogflow which are then sent to the user
  function sendResponse (responseToUser) {
    // If the response is a string, send it as a response to the user
    if (typeof responseToUser === 'string') {
      let responseJson = {};
      responseJson.speech = responseToUser; // spoken response
      responseJson.displayText = responseToUser; // displayed response
      response.json(responseJson); // Send response to Dialogflow
    } else {
      // If the response to the user includes rich responses or contexts, send them to Dialogflow
      let responseJson = {};
      // If speech or displayText is defined, use it to respond (if one isn't defined, use the other's value)
      responseJson.speech = responseToUser.speech || responseToUser.displayText;
      responseJson.displayText = responseToUser.displayText || responseToUser.speech;
      // Optional: add rich messages for integrations (https://dialogflow.com/docs/rich-messages)
      responseJson.data = responseToUser.richResponses;
      // Optional: add contexts (https://dialogflow.com/docs/contexts)
      responseJson.contextOut = responseToUser.outputContexts;
      response.json(responseJson); // Send response to Dialogflow
    }
  }
});
// Construct rich response for Google Assistant
const app = new DialogflowApp();
const googleRichResponse = app.buildRichResponse()
  .addSimpleResponse('This is the first simple response for Google Assistant')
  .addSuggestions(['Suggestion Chip', 'Another Suggestion Chip'])
  // Create a basic card and add it to the rich response
  .addBasicCard(app.buildBasicCard(`This is a basic card.  Text in a
    basic card can include "quotes" and most other unicode characters
    including emoji 📱.  Basic cards also support some markdown
    formatting like *emphasis* or _italics_, **strong** or __bold__,
    and ***bold italic*** or ___strong emphasis___ as well as other things
    like line  \nbreaks`) // Note the two spaces before '\n' required for a line break to be rendered in the card
    .setSubtitle('This is a subtitle')
    .setTitle('Title: this is a title')
    .addButton('This is a button', 'https://assistant.google.com/')
    .setImage('https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
      'Image alternate text'))
  .addSimpleResponse({
    speech: 'This is another simple response',
    displayText: 'This is another simple response 💁'
  });
// Rich responses for both Slack and Facebook
const richResponses = {
  'slack': {
    'text': 'This is a text response for Slack.',
    'attachments': [
      {
        'title': 'Title: this is a title',
        'title_link': 'https://assistant.google.com/',
        'text': 'This is an attachment. Text in attachments can include \'quotes\' and most other unicode characters including emoji 📱. Attachments also support line\nbreaks.',
        'image_url': 'https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
        'fallback': 'This is a fallback.'
      }
    ]
  },
  'facebook': {
    'attachment': {
      'type': 'template',
      'payload': {
        'template_type': 'generic',
        'elements': [
          {
            'title': 'Title: this is a title',
            'image_url': 'https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
            'subtitle': 'This is a subtitle',
            'default_action': {
              'type': 'web_url',
              'url': 'https://assistant.google.com/'
            },
            'buttons': [
              {
                'type': 'web_url',
                'url': 'https://assistant.google.com/',
                'title': 'This is a button'
              }
            ]
          }
        ]
      }
    }
  }
};
Actually, I deployed the code that exists in the Dialogflow inline editor. But I don't know how to implement an OAuth endpoint, whether it should be a separate cloud function or whether it has to be included within the existing one. I am also confused about how the OAuth authorization code flow will actually work. Let's assume we are in the Assistant app: once the user says "talk to foo app", does it automatically open a web browser for the OAuth code exchange process?
The answer you referenced had an update posted on October 25th indicating they had taken action to prevent you from entering a google.com endpoint as your auth provider for Account Linking. It seems possible that they may have taken other actions to prevent using Google's auth servers in this way.
If you're using your own auth server, the error 500 would indicate an error on your oauth server, and you should check your oauth server for errors.
Update to answer some of your other questions.
But I don't know how to implement an OAuth endpoint
Google provides guidance (but not code) on what you need to do for a minimal OAuth service, either using the Implicit Flow or the Authorization Code Flow, and how to test it.
whether it should be a separate cloud function or it has to be included within the existing one
It should be separate - it is even arguable that it must be separate. In both the Implicit Flow and the Authorization Code Flow, you need to provide a URL endpoint where users will be redirected to log into your service. For the Authorization Code Flow, you'll also need an additional webhook that the Assistant will use to exchange tokens.
The function behind these needs to be very different from what you're doing for the Dialogflow webhook. While someone could probably make a single function that handles all of the different tasks, there is no need to. You'll be providing the OAuth URLs separately.
However, your Dialogflow webhook does have some relationship with your OAuth server. In particular, the tokens that the OAuth server hands to the Assistant will be handed back to the Dialogflow webhook, so Dialogflow needs some way to get the user's information based on that token. There are many ways to do this, but to list just a few:
The token could be a JWT and contain the user information as claims in the body. The Dialogflow webhook should use the public key to verify the token is valid and needs to know the format of the claims.
The OAuth server and the Dialogflow webhook could use a shared account database; the OAuth server could store the token as a key to the user account and delete expired keys, and the Dialogflow webhook could then use the token it gets as a key to look up the user.
The OAuth server might have a(nother) webhook where Dialogflow could request user information, passing the key as an Authorization header and getting a reply. (This is what Google does, for example.)
The exact solution depends on your needs and what resources you have available to you.
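For instance, a minimal sketch of the first option, verifying a JWT in the Dialogflow webhook (this assumes the jsonwebtoken npm package, RS256-signed tokens, and claim names that your OAuth server would have chosen):

const jwt = require('jsonwebtoken');

// Public key of your OAuth server (assumption: tokens are RS256-signed JWTs)
const PUBLIC_KEY = process.env.OAUTH_PUBLIC_KEY;

function getUserFromToken(accessToken) {
  // Throws if the signature is invalid or the token has expired
  const claims = jwt.verify(accessToken, PUBLIC_KEY, { algorithms: ['RS256'] });
  // The claim names are whatever your OAuth server put in the token body
  return { id: claims.sub, email: claims.email };
}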
I am also confused about how the OAuth authorization code flow will actually work. Let's assume we are in the Assistant app: once the user says "talk to foo app", does it automatically open a web browser for the OAuth code exchange process?
Broadly speaking - yes. The details vary (and can change), but don't get too fixated on the details.
If you're using the Assistant on a speaker, you'll be prompted to open the Home app which should be showing a card saying what Action wants permission. Clicking on the card will open a browser or webview to the Actions website to begin the flow.
If you're using the Assistant on a mobile device, it prompts you directly and then opens a browser or webview to the Actions website to begin the flow.
The auth flow basically involves:
Having the user authenticate themselves, if necessary.
Having the user authorize the Assistant to access your resources on the user's behalf.
It then redirects to Google's servers with a one-time code.
Google's servers then take the code... and close the window. That's the extent of what the user sees.
Behind the scenes, Google takes this code and, since you're using the Authorization Code Flow, exchanges it for an auth token and a refresh token at the token exchange URL.
Then, whenever the user uses your Action, it will send an auth token along with the rest of the request to your server.
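In a Dialogflow v1 webhook, for example, that token arrives on the incoming request; a sketch using the same originalRequest shape as the fulfillment code above:

// Inside the fulfillment handler: the token Google obtained from your OAuth server
const accessToken = request.body.originalRequest &&
  request.body.originalRequest.data.user.accessToken;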
Please suggest the necessary package for OAuth2 configuration
That I can't do. For starters - it completely depends on your other resources and requirements. (And this is why StackOverflow doesn't like people asking for suggestions like this.)
There are packages out there (you can search for them) that let you setup an OAuth2 server. I'm sure someone out there provides OAuth-as-a-service, although I don't know any offhand. Finally, as noted above, you can write a minimal OAuth2 server using the guidance from Google.
Trying to create a proxy for Google's OAuth is... probably possible... not as easy as it first seems... likely not as secure as anyone would be happy with... and possibly (but not necessarily, IANAL) a violation of Google's Terms of Service.
can't we store the user's email address by this approach?
Well, you can store whatever you want in the user's account. But this is the user's account for your Action.
You can, for example, access Google APIs on behalf of your user to get their email address or whatever else they have authorized you to do with Google. The user account that you have will likely store the OAuth tokens that you use to access Google's server. But you should logically think of that as separate from the code that the Assistant uses to access your server.
My implementation of a minimal OAuth2 server (it works for the Implicit Flow, but doesn't store the user session), taken from https://developers.google.com/identity/protocols/OAuth2UserAgent:
function oauth2SignIn() {
  // Google's OAuth 2.0 endpoint for requesting an access token
  var oauth2Endpoint = 'https://accounts.google.com/o/oauth2/v2/auth';

  // Create element to open OAuth 2.0 endpoint in new window.
  var form = document.createElement('form');
  form.setAttribute('method', 'GET'); // Send as a GET request.
  form.setAttribute('action', oauth2Endpoint);

  // Get the state and redirect_uri parameters from the request
  var searchParams = new URLSearchParams(window.location.search);
  var state = searchParams.get("state");
  var redirect_uri = searchParams.get("redirect_uri");
  //var client_id = searchParams.get("client_id");

  // Parameters to pass to OAuth 2.0 endpoint.
  var params = {
    'client_id': YOUR_CLIENT_ID,
    'redirect_uri': redirect_uri,
    'scope': 'email',
    'state': state,
    'response_type': 'token',
    'include_granted_scopes': 'true'
  };

  // Add form parameters as hidden input values.
  for (var p in params) {
    var input = document.createElement('input');
    input.setAttribute('type', 'hidden');
    input.setAttribute('name', p);
    input.setAttribute('value', params[p]);
    form.appendChild(input);
  }

  // Add form to page and submit it to open the OAuth 2.0 endpoint.
  document.body.appendChild(form);
  form.submit();
}
This implementation isn't very secure, but it's the only code I've gotten to work as an OAuth server for the Assistant.
I was able to make it work after a long time. We have to enable the webhook first; you can see how to enable the webhook in the Dialogflow fulfillment docs. If you are going to use Google Assistant, then you have to enable the Google Assistant integration in the Integrations section first. Then follow the steps below for account linking in Actions on Google:
Go to Google Cloud Console -> APIs and Services -> Credentials -> OAuth 2.0 client IDs -> Web client -> note the client ID and client secret there -> download the JSON and note down the project_id, auth_uri, and token_uri from it -> Authorised redirect URIs -> whitelist your app's URL; the fixed part of this URL is https://oauth-redirect.googleusercontent.com/r/ and you append the project ID to it -> save the changes.
Then, in Actions on Google -> Account linking setup:
1. Grant type = Authorization code.
2. Client info: fill in the client ID, client secret, auth_uri, and token_uri. Enter the auth_uri as https://www.googleapis.com/auth and the token_uri as https://www.googleapis.com/token.
3. Save and run.
4. It will show an error while running on Google Assistant, but don't worry.
5. Come back to the account linking section in the Assistant settings and enter the auth_uri as https://accounts.google.com/o/oauth2/auth and the token_uri as https://accounts.google.com/o/oauth2/token.
6. Set the scopes to https://www.googleapis.com/auth/userinfo.profile and https://www.googleapis.com/auth/userinfo.email, and we are good to go.
7. Save the changes.
In the hosting server (Heroku) logs, we can see the access token value, and with the access token we can get the details of the email address.
Append the access token to the link https://www.googleapis.com/oauth2/v1/userinfo?access_token= and you can get the required details from the resulting JSON page:
accessToken = req.get("originalRequest").get("data").get("user").get("accessToken")
link = "https://www.googleapis.com/oauth2/v1/userinfo?access_token=" + accessToken
r = requests.get(link)
print("Email Id= " + r.json()["email"])
print("Name= " + r.json()["name"])

use msal to connect to Azure B2C - state parameter

I am using the sample from https://github.com/Azure-Samples/active-directory-b2c-javascript-msal-singlepageapp as a base to implement B2C signup.
How do I pass the state parameter in the example? I saw there was an issue about state, so I guess it is possible to use state in the example, but I can't figure out how to use it and how to retrieve it after the token is returned.
I use state in my loginRedirect() method, so I'll post my code here, which should help you make this work. I'm using MSAL in Angular, but the methods I call should be the same.
In this example the user clicks on a login button which calls a login method:
login() {
  const args: AuthenticationParameters = {
    state: "some string" // set state parameter (type: string)
  };
  this.msalService.loginRedirect(args);
}
This code will redirect the user to log in, and then back to your website (your redirect URI). On that page you should implement the handleRedirectCallback method, which will be triggered after the user is redirected. In this callback you will also get a response (or error) from the login process, which will include your state string.
this.msalService.handleRedirectCallback((authError, response) => {
  if (response) {
    const state = this.msalService.getAccountState(response.accountState);
    // (you may not even need getAccountState() here)
    // ... do something with state; I use it to redirect back to the initial page URL
  }
});
In reviewing the source code for MSAL.js, I don't see how you can control the value of state. AuthenticationRequestParameters is not exposed and the value of state is set to a new guid when AuthenticationRequestParameters is constructed.
Example:
In the following code of MSAL.js, we have no control over the authenticationRequest variable.
loginRedirect(scopes?: Array<string>, extraQueryParameters?: string): void {
  ...
  this.authorityInstance.ResolveEndpointsAsync()
    .then(() => {
      const authenticationRequest = new AuthenticationRequestParameters(this.authorityInstance, this.clientId, scopes, ResponseTypes.id_token, this._redirectUri);
      ...
    });
  ...
}
You can send the state parameter on the loginRequest:
const loginRequest = {
  ...
  scopes: "your scopes",
  state: "my custom state value"
}
Then you capture it in the response on accountState, like this:
clientApplication.loginPopup(loginRequest).then(function (loginResponse) {
  if (loginResponse.account) {
    // will print: my custom state value
    console.log(loginResponse.accountState);
    // ...
  }
});
You can find the documentation here: https://learn.microsoft.com/en-us/azure/active-directory/develop/msal-js-pass-custom-state-authentication-request

Windows Azure node.js Push notification for Windows store 8.1 - How to use 'createRawTemplateRegistration' template?

Please explain with an example, as I am getting "Error: 400 - The specified resource description is invalid."
Basically, I want to update the badge value, but there is no template for badge registration in the WnsService API documentation (http://azure.github.io/azure-sdk-for-node/azure-sb/latest/WnsService.html). So I am trying the "createRawTemplateRegistration" template to update the badge value.
Please help me with this.
You can directly use the sendBadge() function to push a badge value to client devices.
Please try the following code:
var azure = require('azure');
var notificationHubService = azure.createNotificationHubService('<hubname>', '<connectionstring>');
notificationHubService.wns.sendBadge(null, 99, function (error, response) {
  if (error) console.log(error);
  console.log(response);
});
If you have any further concerns, please feel free to let me know.
Update
Do you mean that you want only one template to handle all types of notifications, including Raw, Toast, and Badge? If so, I think the answer is negative. According to the description at http://azure.github.io/azure-sdk-for-node/azure-sb/latest/WnsService.html#createRawTemplateRegistration:
Remember that you have to specify the X-WNS-Type header
So the header option is required. The underlying REST API invoked by this Node.js API is Create Registration, where we can find the description:
The BodyTemplate element is mandatory, as is the X-WNS-Type header.
So we should specify the notification type for the template.
Update 1
This code sample works fine on my side:
var channel = '<devicetoken>';
var templateMessage = { text1: '$(message)' };
notificationHubService.wns.createRawTemplateRegistration(channel,'tag',JSON.stringify(templateMessage), {headers: { 'X-WNS-Type': 'wns/raw' }},
function (e, r) {
if (e) {
console.log(e);
} else {
console.log({
id: r.RegistrationId,
deviceToken: r.DeviceToken,
expires: r.ExpirationTime
});
}
}
)
