I'm relatively new to AWS Lambda, and I'm trying to integrate Twilio Programmable Voice into a Lambda function. The code in the function is the following:
'use strict';

module.exports.hello = async event => {
  console.info("Program Started");
  const accountSid = 'AAAAAA';
  const authToken = 'BBBBBB';
  const client = require('twilio')(accountSid, authToken);
  client.calls
    .create({
      twiml: '<Response><Say>Ahoy, World!</Say></Response>',
      to: '+1XXXXXXXXXX',
      from: '+1YYYYYYYYY'
    })
    .then(call => console.log(call.sid));
  console.info("Program Ended");
};
The accountSid and authToken are correct in the implementation. Twilio is inside a Lambda Layer and the test is able to find the dependency. The logging shows both "Program Started" and "Program Ended", so the code is being called, but no actual call is placed when testing. Any suggestions?
You are not returning the promise from your function, so Lambda has no way to know when your execution has completed. The last console line runs before client.calls finishes, because that call is asynchronous. You have two choices here.
Either change it to return the promise, like this:
'use strict';

module.exports.hello = async event => {
  console.info("Program Started");
  const accountSid = 'AAAAAA';
  const authToken = 'BBBBBB';
  const client = require('twilio')(accountSid, authToken);
  return client.calls
    .create({
      twiml: '<Response><Say>Ahoy, World!</Say></Response>',
      to: '+1XXXXXXXXXX',
      from: '+1YYYYYYYYY'
    })
    .then(call => console.log(call.sid))
    .then(() => console.info("Program Ended"));
};
Or change it to use the await style:
'use strict';

module.exports.hello = async event => {
  console.info("Program Started");
  const accountSid = 'AAAAAA';
  const authToken = 'BBBBBB';
  const client = require('twilio')(accountSid, authToken);
  const call = await client.calls
    .create({
      twiml: '<Response><Say>Ahoy, World!</Say></Response>',
      to: '+1XXXXXXXXXX',
      from: '+1YYYYYYYYY'
    });
  console.log(call.sid);
  console.info("Program Ended");
};
Related
I want to send a WhatsApp message to a particular number when a Firebase Realtime Database write is triggered, but I'm unable to implement the code properly:
const accountSid = 'AC565656563389214ace8531';
const authToken = '[AuthToken]';
const client = require('twilio')(accountSid, authToken);

client.messages
  .create({
    body: 'Your appointment is coming up on July 21 at 3PM',
    from: 'whatsapp:+1415454545386',
    to: 'whatsapp:+9196456454566'
  })
  .then(message => console.log(message.sid))
  .done();
I want to use this code in a Firebase Function triggered by the Realtime Database.
The following should do the trick (untested):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Use one of the options detailed in the doc (https://firebase.google.com/docs/functions/config-env)
// for storing the two following secrets
const accountSid = ...
const authToken = ...
const client = require('twilio')(accountSid, authToken);

exports.sendMessage = functions.database.ref('/.../{pushId}') // Adapt the path
  .onCreate((snapshot, context) => { // Adapt the trigger? E.g. onWrite?
    return client.messages
      .create({
        body: 'Your appointment is coming up on July 21 at 3PM',
        from: 'whatsapp:+1415454545386',
        to: 'whatsapp:+9196456454566'
      })
      .then(message => {
        console.log(message.sid);
        return null;
      });
  });
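For illustration, one way to load those two secrets is the runtime config option mentioned in the comment above. This is only a sketch: the twilio.sid / twilio.token key names are made up here, not something the answer prescribes.

// Set the values once from the CLI, for example:
//   firebase functions:config:set twilio.sid="ACxxxxxxxx" twilio.token="your_auth_token"
const accountSid = functions.config().twilio.sid;
const authToken = functions.config().twilio.token;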
I want to send text from my client (Angular v12) to the backend through a REST API so that I get the audio back; then, in the client, I want to use it with new Audio(...) and be able to play the sound on user click.
My backend looks like this:
const express = require("express");
const cors = require("cors");
const textToSpeech = require('@google-cloud/text-to-speech');
const stream = require("stream");

const app = express();

app.get('/api/tts', async (req, res) => {
  const txt = req.query.txt;
  console.log('txt', txt);
  const client = new textToSpeech.TextToSpeechClient();
  const request = {
    input: {text: txt},
    voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
    audioConfig: {audioEncoding: 'MP3'},
  };
  const [response] = await client.synthesizeSpeech(request);
  const readStream = new stream.PassThrough();
  readStream.end(response.audioContent);
  res.set("Content-disposition", 'attachment; filename=' + 'audio.mp3');
  res.set("Content-Type", "audio/mpeg");
  readStream.pipe(res);
});
Now in my client I just created a button to test, and on click I send an HTTP request like so:
public textToSpeech(txt: string) {
  let httpParams: HttpParams = new HttpParams()
    .set('txt', txt);
  return this.http.get('//localhost:3030/api/tts', { params: httpParams, responseType: 'text' });
}
I do get a 200 OK code and a long string as a response.
In my component:
onButtonClick() {
  this.speechService.textToSpeech('testing')
    .subscribe(res => {
      this.audio = new Audio(res);
      this.audio.play();
    });
}
but I get the following errors:
GET http://localhost:4200/��D�
Uncaught (in promise) DOMException: The media resource indicated by the src attribute or assigned media provider object was not suitable.
Okay, so I solved it with a different approach.
On the backend, I use fs to write an MP3 file into the public folder, and then on the frontend I point the audio source at that file, like so:
Backend:
const fs = require('fs');
const util = require('util');

app.get('/api/tts', async (req, res) => {
  const {text} = req.query;
  const client = new textToSpeech.TextToSpeechClient();
  const request = {
    input: {text},
    voice: {languageCode: 'en-US', ssmlGender: 'FEMALE'},
    audioConfig: {audioEncoding: 'MP3'},
  };
  const [response] = await client.synthesizeSpeech(request);
  const writeFile = util.promisify(fs.writeFile);
  await writeFile(`./public/audio/${text}.mp3`, response.audioContent, 'binary');
  res.end();
});
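For the frontend below to be able to fetch //localhost:3030/audio/hello.mp3, the public folder also has to be served by the backend. A minimal sketch, assuming the ./public directory sits next to the server entry point:

// Serve everything under ./public (including ./public/audio) as static files.
app.use(express.static('./public'));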
Frontend:
onButtonClick() {
  this.speechService.textToSpeech('hello')
    .subscribe(res => {
      this.audio = new Audio(`//localhost:3030/audio/hello.mp3`);
      this.audio.play();
    });
}
It's hardcoded right now, but I'm going to make it dynamic, just wanted to test.
I don't know if this is the best approach but I got it to work the way I wanted.
The entry point of my server is publisher.js. Its code is the following:
'use strict';

const publishingRoute = require('./routes/enqueue');
. . . .
. . . .
const PublishingServicePort = process.env.PUB_PORT || 4001;
const RabbitMQHostURL = process.env.RABBITMQ_AMQP_URL;
const RabbitMQQueueName = process.env.QUEUE;
const RabbitMQRoutingKey = process.env.ROUTING_KEY;
const RabbitMQExchange = process.env.EXCHANGE;

let rbmqChannel;
let rbmqConnection;

const app = express();
app.use(bodyParser.json());

// initConnection() creates the TCP connection and confirm channel with the broker. No error here.
(async function initConnection() {
  rbmqConnection = await rbmqConnectionManager.connect(amqp, RabbitMQHostURL);
  rbmqChannel = await rbmqConnectionManager.createConfirmChannel(rbmqConnection);
})();

// Error here: none of the parameters is passed to the callback function.
app.use('/enqueue', publishingRoute.enqueueUser(rbmqChannel, RabbitMQQueueName, RabbitMQRoutingKey, RabbitMQExchange));

app.listen(PublishingServicePort, async () => {
  console.log(`Server listening on ${PublishingServicePort}`);
});
../routes/ contains the modules that handle the routes for publisher.js. One of those modules is enqueue.js. It enqueues users' JabberIDs (JSON) into the RabbitMQ queue through the exchange. The following code is from the enqueue.js module.
'use strict';

const router = require('express').Router();
const PublisherService = require('../services/rbmq-publisher');

export function enqueueUser(amqpChannel, queueName, routingKey, exchange) {
  // Error: no parameters reach this point. amqpChannel, queueName, routingKey, and exchange are all undefined, and I need all of them here.
  router.post('/', async (req, res) => {
    const content = req.body.payload;
    await PublisherService.PublishPayload(amqpChannel, exchange, routingKey, content);
    res.status(200);
    res.send({
      "Message": "OK",
      "payloadEnqueued": content
    });
  });
  return router;
}
rbmq-publisher.js is the controller (module) that handles publishing/enqueuing into the RabbitMQ exchange. The following code is from rbmq-publisher.js:
'use strict';

const PublishPayload = async function(channel, exchange, routingKey, content) {
  // Error: none of the parameters reaches this point. All of them are undefined.
  console.info(`[AMQP] : Publishing Message ...`);
  try {
    await channel.publish(exchange, routingKey, Buffer.from(JSON.stringify(content)));
    console.log(`[AMQP SUCCESS]: JabberID Published`);
  }
  catch(exception) {
    // This is what is shown in the console window. I have written the error below this code snippet.
    console.error(`[AMQP ERROR]: ${exception}`);
  }
};

export {
  PublishPayload
};
The console window shows the following error:
[AMQP ERROR]: TypeError: Cannot read property 'publish' of undefined
None of the arguments passed in app.use() reaches the callback function. Where did I go wrong?
I'm having some issues stubbing this dependency. I know there is an aws-sdk-mock module, but my goal is to stub it with Sinon and Chai.
Here is my code.
Test code
const chai = require('chai');
const sinon = require('sinon');
const chaiHttp = require('chai-http');
const app = require('./app');

chai.use(chaiHttp);

const queryMock = sinon.stub();
const dynamoMock = {
  DocumentClient: sinon.fake.returns({
    query: queryMock
  })
};
let awsDynamoMock;

describe.only('Integration test for activation', () => {
  beforeEach(() => {
    awsDynamoMock = sinon.stub(require('aws-sdk'), 'DynamoDB');
    awsDynamoMock.returns(dynamoMock);
  });

  afterEach(() => {
    awsDynamoMock.restore();
  });

  it('Request /endpoint returns HTTP 200 with {} when the user exists and all tasks are done', (done) => {
    const params = {
      TableName: 'table',
      KeyConditionExpression: `client_id= :i`,
      ExpressionAttributeValues: {
        ':i': '23424234'
      },
      ConsistentRead: true
    };
    const userWithNoPendingsMock = {
      Items: [
        {
          client_id: "23424234",
        },
      ],
      Count: 1,
      ScannedCount: 1,
    };

    queryMock.withArgs(params).returns({
      promise: () => sinon.fake.resolves(userWithNoPendingsMock)
    });

    chai
      .request(app)
      .post("/endpoint")
      .end((err, res) => {
        chai.expect(res.status).to.be.eql(200);
        chai.expect(res.body).to.eql({});
        done();
      });
  });
});
The DynamoDB connection I want to stub:
const AWS = require('aws-sdk');
AWS.config.update({region: 'REGION'});

let docClient = false;

const getDynamoSingleton = async () => {
  if (docClient) return docClient;
  docClient = new AWS.DynamoDB.DocumentClient();
  console.log(docClient);
  return docClient;
};

module.exports = getDynamoSingleton;
Example of how I use DynamoDB:
const getElementById = async (TableName, key, id) => {
  const docClient = await getDynamoSingleton();
  // Make query params.
  const params = {
    TableName,
    KeyConditionExpression: `${key} = :i`,
    ExpressionAttributeValues: {
      ':i': id
    },
    ConsistentRead: true
  };
  // Run query as promise.
  return docClient.query(params).promise();
};
I'm really stuck on this problem, so any help would be useful. I know the problem has something to do with the DocumentClient.
Thanks for the help.
I realize this is an old question, but you can set up a resolvable object with a little trickery. Some inspiration from this answer.
const sandbox = require('sinon').createSandbox();
const AWS = require('aws-sdk');

describe('...', () => {
  it('...', (done) => {
    // Create a dummy resolver.
    const dummy = {func: () => {}};
    sandbox.stub(dummy, 'func').resolves({some: 'fake response'});

    // Mock docClient.query. Binding to .prototype should make this apply to any `new AWS.DynamoDB.DocumentClient()` calls.
    sandbox.stub(AWS.DynamoDB.DocumentClient.prototype, 'query').returns({promise: dummy.func});

    // Run your tests here.
  });
});
This is cut down to remove a lot of the extra configuration you are doing (and probably need). We create a dummy object with a function func that returns a Sinon promise.
Next, we stub the AWS.DynamoDB.DocumentClient prototype so that new AWS.DynamoDB.DocumentClient() will receive our sinon stub.
Third, we configure our DocumentClient prototype stub to return a plain javascript object with a property called promise. This property points to the first dummy object's promise-returning func method.
Now calls to docClient.query(params).promise() receive a mocked promise: docClient.query(params) hits the stub on AWS.DynamoDB.DocumentClient.prototype, and .promise() resolves through {promise: dummy.func} to the dummy resolver.
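As a rough illustration of what that wiring buys you (assuming the prototype stub from the snippet above is in place), any code that creates its own DocumentClient ends up resolving to the fake response:

// Hypothetical usage inside the test body, after the two sandbox.stub(...) calls:
const docClient = new AWS.DynamoDB.DocumentClient();
docClient.query({ TableName: 'table' }) // hits the prototype stub
  .promise()                            // i.e. dummy.func(), the resolving stub
  .then(data => console.log(data));     // logs { some: 'fake response' }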
Here is what I'm trying to achieve:
if the user exists in Firestore
    show the data
else
    add the user to Firestore
And the following is my code:
// See https://github.com/dialogflow/dialogflow-fulfillment-nodejs
// for Dialogflow fulfillment library docs, samples, and to report issues
'use strict';

const functions = require('firebase-functions');
const {WebhookClient} = require('dialogflow-fulfillment');
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault()
});

var db = admin.firestore();
const settings = {timestampsInSnapshots: true};
db.settings(settings);

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });

  function save(agent) {
    const usersRef = db.collection('users').doc('someid');
    usersRef.get().then(function(doc) {
      if (doc.exists) {
        let existingUser = doc.data();
        console.log("Document is already exists " + existingUser.userName);
        agent.add('Hello ');
      } else {
        console.log("Document creation is started");
        usersRef.set({
          userName: 'somename'
        });
        agent.add('Welcome ');
      }
    }).catch(function(error) {
      console.error("Error writing document: ", error);
      agent.add('Failed to login!');
    });
  }

  let intentMap = new Map();
  intentMap.set('dialogflow-intent-name', save);
  agent.handleRequest(intentMap);
});
But when the above code executes, the Cloud Function starts and terminates first, and my chatbot doesn't get any response. After execution, the log looks like this:
Function execution started
Function execution took 404 ms, finished with status code: 200
"Document is already exists someusername"
DocumentReference.set returns a promise, and you are not waiting for it to finish. So it should work if you change your code from:
usersRef.set({
  userName: 'somename'
});
... rest of the code

to

usersRef.set({
  userName: 'somename'
}).then(result => {
  ... rest of the code
})
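Applied to the save handler from the question, the whole chain might look like the sketch below. Note that it also returns the outer promise from the handler so the caller can wait for the full chain; that part is an assumption about how you want to wire it, not something the original code did.

function save(agent) {
  const usersRef = db.collection('users').doc('someid');
  // Return the whole chain so the caller can wait for it.
  return usersRef.get().then(doc => {
    if (doc.exists) {
      console.log("Document is already exists " + doc.data().userName);
      agent.add('Hello ');
      return null;
    }
    console.log("Document creation is started");
    // Wait for the write to finish before responding.
    return usersRef.set({ userName: 'somename' }).then(() => {
      agent.add('Welcome ');
      return null;
    });
  }).catch(error => {
    console.error("Error writing document: ", error);
    agent.add('Failed to login!');
  });
}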