Using the watson-developer-cloud node SDK directly on the client? - node.js

I have a React client that I bundle with webpack 2. But the moment I import/require const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1'); I run into trouble. After fixing it so it no longer breaks the build, it still throws warnings like:
Module not found: Error: Can't resolve '../build/Release/validation' in '/Users/denbox/Desktop/schedulebot/web-interface/node_modules/websocket/lib'
# ./~/websocket/lib/Validation.js 9:21-59
# ./~/websocket/lib/WebSocketConnection.js
# ./~/websocket/lib/websocket.js
# ./~/websocket/index.js
# ./~/watson-developer-cloud/speech-to-text/recognize_stream.js
# ./~/watson-developer-cloud/speech-to-text/v1.js
# ./src/components/chat.jsx
# ./src/components/chat-page.js
# ./src/index.js
# multi (webpack)-dev-server/client?http://localhost:8080 ./src/index.js
Is it even possible to use the watson-developer-cloud node sdk for the speech-to-text service on the client or only directly on the nodejs server? Thank you.

The Watson Node.js SDK has growing compatibility for client-side usage, but it's not all the way there yet. However, for speech services, there is a separate SDK targeted at client-side usage: https://www.npmjs.com/package/watson-speech
I just added a Webpack example and confirmed that it works: https://github.com/watson-developer-cloud/speech-javascript-sdk/blob/master/examples/webpack.config.js
Update: I also went and added a Webpack example to the Node.js SDK - with the configuration there, the entire library builds, and a subset of the modules actually works, as documented: https://github.com/watson-developer-cloud/node-sdk/tree/master/examples/webpack

Only in Node.js. The mechanism for using Speech to Text from the browser is to use WebSockets, but to do that you need a token, which requires a server-side request. Once you have the token you can use the WebSocket interface.

With the answers above I found a solution to my problem, and it might help others who want to get started with the API:
import axios from 'axios';
import recognizeMicrophone from 'watson-speech/speech-to-text/recognize-microphone';

axios.get(`${BACKEND_ROOT_URL}/watsoncloud/stt/token`)
  .then((res) => {
    console.log('res:', res.data);
    const stream = recognizeMicrophone({
      token: res.data.token,
      continuous: false, // false = automatically stop transcription the first time a pause is detected
    });
    stream.setEncoding('utf8');
    stream.on('error', (err) => {
      console.log(err);
    });
    stream.on('data', (msg) => {
      console.log('message:', msg);
    });
  })
  .catch((err) => {
    console.log(`The following gUM error occurred: ${err}`);
  });
On the backend I create a proxy service that gets a token for the Watson Speech to Text service, so I don't have to store my credentials on the client:
const watson = require('watson-developer-cloud');
const express = require('express');
const cors = require('cors');

const app = express(); // not shown in the original snippet
const port = process.env.PORT || 3000; // port value assumed; not shown in the original snippet

app.use(cors());

const stt = new watson.SpeechToTextV1({
  // If left undefined, username and password fall back to the SPEECH_TO_TEXT_USERNAME and
  // SPEECH_TO_TEXT_PASSWORD environment properties, and then to VCAP_SERVICES (on Bluemix)
  username: process.env.STT_SERVICE_USER,
  password: process.env.STT_SERVICE_PW,
});

const authService = new watson.AuthorizationV1(stt.getCredentials());

// Endpoint to retrieve a Watson Speech to Text API token
// Get the token using your credentials
app.get('/watsoncloud/stt/token', (req, res, next) => {
  // TODO check jwt at the auth service
  authService.getToken((err, token) => {
    if (err) {
      next(err);
    } else {
      res.send({ token });
    }
  });
});

app.listen(port, (err) => {
  if (err) {
    console.log(`Error: ${err}`);
  }
});

Related

PayFast integration in NodeJS / ReactJS

I am trying to integrate PayFast into my React / NodeJS app. Using Express, my NodeJS successfully retrieves a payment uuid from the PayFast endpoint (I see this uuid in my console log) -
app.get("/api", async (req, res) => {
paymentData["signature"] = generateSignature(paymentData, phrase);
console.log(paymentData["signature"])
const str = dataToString(paymentData)
const id = await getPaymentId(str)
res.json({uuid: id})
})
However, in my front end (ReactJS) I am getting an undefined response & possible CORS issue from my backend API end point when trying to retrieve this uuid -
My custom fetch hook:
export default function useFetch(baseUrl) {
  const [loading, setLoading] = useState(true);
  function get() {
    return new Promise((resolve, reject) => {
      fetch(baseUrl)
        .then(res => {
          console.log(res)
          res.json()
        })
        .then(data => {
          console.log(data);
          if (!data) {
            setLoading(false);
            return reject(data);
          }
          setLoading(false);
          resolve(data);
        })
        .catch(error => {
          setLoading(false);
          reject(error);
        });
    });
  }
  return { get, loading };
};
The error:
Response {type: 'cors', url: 'http://localhost:3001/api', redirected: false, status: 200, ok: true, …}
undefined
If I test my NodeJS endpoint from my browser, it successfully comes back with my payment uuid. Anyone have any ideas why my React app is acting up?
Update your CORS config to accept connections from the React app host.
app.use(cors({
  origin: 'http://localhost:3000',
}));
Open the package.json of your React app and add a line at the bottom of the JSON file:
"proxy":"http://localhost:3001"
3001 is the port that your Node HTTP server is running on locally; if it's another port, just change it accordingly.
This will redirect all HTTP traffic from your webpack dev server running on port 3000 to your Node server running on port 3001.
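With the proxy in place, the front end can call the endpoint with a relative URL, so the browser only ever talks to the dev server on port 3000 and no cross-origin request is made. A minimal sketch (the /api path comes from the question; the rest is illustrative):
// Hypothetical client-side call through the dev-server proxy
fetch('/api')
  .then(res => res.json()) // parse and return the JSON body
  .then(data => console.log(data.uuid))
  .catch(err => console.error(err));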
For others who might encounter a similar issue, I have attached a blog post with the method I used to solve the CORS issue, as well as to integrate with the PayFast API.
https://codersconcepts.blogspot.com/2022/04/nodejs-payfast-integration.html

How to dynamically add CORS sites to Google Cloud App Engine Node API

I am new to API deployment.
I have a Node Express API that has CORS enabled in the root app.js, for the API and a socket.io implementation:
var app = express();
app.use(cors({
  origin: ["http://localhost:8080", "http://localhost:8081"],
  credentials: true
}));
and
const httpServer = createServer(app);
const io = new Server(httpServer, {
  cors: {
    origin: ["http://localhost:8080", "http://localhost:8081"],
    credentials: true,
    methods: ["GET"]
  }
});
I will set up a sales website that allows a customer to pay for a license to use the API with their site, i.e. https://www.customersite.com
My question is how can I dynamically add the customer's website (say after they submit a form from another site) to the CORS list? Ideally it would be via an API call. The only option which I can think of (that is not automated) is to manually maintain a global js file (i.e. config.js) with the cors list from within the Google platform using the file explorer / editor, and to iterate over it as an array similar to process.env.customerList. This will not work for me as I need to have this step happen automatically.
Any and all suggestions are appreciated.
Solution: Use a process manager like pm2 to 'reload' the API gracefully with close to no downtime.
PM2 reloads can be triggered programmatically. I made a PUT endpoint for modifying CORS list /cors/modify that sent a programmatic pm2 message when a successful modification was done.
Note: on Windows you must use programmatic messaging:
pm2.list(function(err, list) {
  if (err) {
    console.log(err);
    pm2.disconnect(); // Disconnects from PM2
    return;
  }
  pm2.sendDataToProcessId(list[0].pm2_env.pm_id,
    {
      type: 'process:msg',
      data: {
        msg: 'shutdown'
      },
      topic: true
    },
    function(err, res) {
      console.log(err);
      pm2.disconnect(); // Disconnects from PM2
    }
  );
});
which can then be caught with
process.on('message', async function(msg) {
  if (msg == "shutdown" || msg.data.msg == 'shutdown') {
    console.log("Disconnecting from DB...");
    mongoose.disconnect((e) => {
      if (e) {
        process.exit(1);
      } else {
        console.log("Mongoose connection removed");
        httpServer.close((err) => {
          if (err) {
            console.error(err);
            process.exit(1);
          }
          process.exit(0);
        });
      }
    });
  }
});
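For reference, here is a rough sketch of what the /cors/modify endpoint described above could look like. The file path, pm2 process name, and request shape are assumptions rather than the original implementation, and it assumes app.use(express.json()) so req.body is parsed:
const fs = require('fs');
const pm2 = require('pm2');

// PUT /cors/modify  { "origin": "https://www.customersite.com" }
app.put('/cors/modify', (req, res) => {
  const { origin } = req.body;
  if (!origin) return res.status(400).send('origin is required');

  // Persist the new origin so it survives the reload (hypothetical file name)
  const list = JSON.parse(fs.readFileSync('./cors-list.json', 'utf8'));
  if (!list.includes(origin)) {
    list.push(origin);
    fs.writeFileSync('./cors-list.json', JSON.stringify(list, null, 2));
  }

  // Trigger a graceful pm2 reload so the CORS middleware re-reads the list
  pm2.connect((err) => {
    if (err) return res.status(500).send(err.message);
    pm2.reload('api', (reloadErr) => { // 'api' is an assumed process name
      pm2.disconnect();
      if (reloadErr) return res.status(500).send(reloadErr.message);
      res.send({ added: origin });
    });
  });
});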

How to request apikey from backend API using node js?

I was learning to build a weather app using Node (Express) + React. I successfully fetched weather data from open weather API.
However, I was directly using the OpenWeather API key in my React app like this: const weatherURL = 'http://api.openweathermap.org/data/2.5/weather?q=london,uk&APPID=1234567qwerty';. Obviously this is not safe, as it exposes the API key to the client. I thought about storing the API key in a .env file, but according to [this answer][1], I should never store an API key in a front-end .env file or rely on .gitignore to keep it safe. The right way is to make a request to a backend API, have the backend make the API call, and send the data back. I could not figure out how to do that. Can anyone help?
Following is my Node.js code:
const express = require('express');
const cors = require('cors');
const app = express();

const SELECT_ALL_QUERY = 'SELECT * FROM `mySchema`.`myTable`;';

app.use(cors());

app.get('/', (req, res) => {
  res.send('go to /myTable to see content');
});

const pool = require('./awsPool');
pool.getConnection((err, connection) => {
  if (err) {
    return console.log('ERROR! ', err);
  }
  if (!connection) {
    return console.log('No connection was found');
  }
  app.get('/myTable', (req, res) => {
    console.log(connection);
    connection.query(SELECT_ALL_QUERY, (err, results) => {
      if (err) {
        return res.send(err);
      } else {
        return res.json({
          data: results
        });
      }
    });
  });
});

let port = process.env.PORT || 4000;
app.listen(port, () => {
  console.log(`App running on port ${port}`);
});
[1]: https://stackoverflow.com/a/57103663/8720421
What the linked answer was suggesting is to create a route in your Node/Express backend API that will make the call to the weather API for you, instead of the front end. This way the request and your API key are not public-facing whenever your front end makes a call.
The method for doing this would essentially be the same as what you have done in React, making an HTTP request using a built-in or 3rd party library. This resource I just found has some information on how to do both.
The simplest pure HTTP request in Node looks like this:
const http = require('http')
const url = 'http://api.openweathermap.org/data/'

http.request(url, callback).end()

function callback (weatherResponse) {
  let jsonString = ''
  weatherResponse.on('data', chunk => {
    jsonString += chunk
  })
  weatherResponse.on('end', () => {
    // Now you have the complete response and can do whatever you want with it
    // like return it to your user `res.send(jsonString)`
    console.log(jsonString)
  })
}
Many people find it bulky to have to handle chunks and the whole asynchronous thing, so there are many popular npm modules, like: https://www.npmjs.com/package/axios. (And here's a list of other contenders: https://github.com/request/request/issues/3143).
Also, it is normal to store API-keys in environment variables on the backend. It makes things easy if you ever try to dockerize your app, or just scale up to using two backend servers instead of one.
I found a solution based on @ippi's answer; add the following part to the original code:
const request = require('request');
const url = 'http://api.openweathermap.org/data/2.5/weather?q=london,uk&APPID=1234567';

app.get('/weather', (req, res) => {
  request(url, (error, response, body) => {
    if (!error && response.statusCode == 200) {
      var info = JSON.parse(body);
      res.send(info);
    }
  });
});
The URL can be stored in a .env file and passed into the above code. The returned weather data can be viewed in JSON format at http://localhost:4000/weather. In React, the weather data can be fetched via this localhost URL.
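For illustration, a minimal sketch of reading the key from a .env file with the dotenv package; the variable name OPENWEATHER_API_KEY is an assumption, not a fixed convention:
// .env (kept out of version control): OPENWEATHER_API_KEY=1234567qwerty
require('dotenv').config();
const url = `http://api.openweathermap.org/data/2.5/weather?q=london,uk&APPID=${process.env.OPENWEATHER_API_KEY}`;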
EDIT: request is deprecated, so here is a solution using axios
const axios = require('axios'); // axios needs to be required (not shown in the original snippet)

app.get('/weather', (req, res) => {
  axios.get(url)
    .then(response => { res.json(response.data); })
    .catch(error => {
      console.log(error);
    });
});
Use the Passport middleware for Node.js/Express. It provides a passport-headerapikey strategy with which you can create and authorize API keys. http://www.passportjs.org/packages/passport-headerapikey/
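A rough sketch of what that could look like; the header name, prefix, and key lookup follow the passport-headerapikey README but are assumptions here, not a drop-in implementation:
const passport = require('passport');
const HeaderAPIKeyStrategy = require('passport-headerapikey').HeaderAPIKeyStrategy;

app.use(passport.initialize());

passport.use(new HeaderAPIKeyStrategy(
  { header: 'Authorization', prefix: 'Api-Key ' },
  false,
  (apikey, done) => {
    // Look the key up in your own store (database, env var, ...)
    if (apikey === process.env.CLIENT_API_KEY) return done(null, { name: 'client' });
    return done(null, false);
  }
));

// Protect the weather route so only callers with a valid key can use it
app.get('/weather',
  passport.authenticate('headerapikey', { session: false }),
  (req, res) => {
    // ...call the weather API as shown above...
    res.json({ ok: true });
  }
);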

This is a general expressjs running on node.js inside a docker container and on the cloud question

I have built two docker images. One with nginx that serves my angular web app and another with node.js that serves a basic express app. I have tried to access the express app from my browser in two different tabs at the same time.
In one tab the angular dev server (ng serve) serves up the web page. In the other tab the docker nginx container serves up the web page.
While accessing the Node.js Express app at the same time from both tabs, the data starts to mix and mingle and the results returned to both tabs are a mishmash of the two requests (one from each browser tab)...
I'll try to make this simpler by showing my Express app code here...but to answer this question you may not even need to know what the code is at all...so maybe check the question as stated below the code first.
'use strict';

/***********************************
 GOOGLE GMAIL AND OAUTH SETUP
***********************************/
const fs = require('fs');
const {google} = require('googleapis');
const gmail = google.gmail('v1');

const clientSecretJson = JSON.parse(fs.readFileSync('./client_secret.json'));
const oauth2Client = new google.auth.OAuth2(
  clientSecretJson.web.client_id,
  clientSecretJson.web.client_secret,
  'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
);

/***********************************
 EXPRESS WITH CORS SETUP
***********************************/
const PORT = 8000;
const HOST = '0.0.0.0';

const express = require('express');
const cors = require('cors');
const cookieParser = require('cookie-parser');
const bodyParser = require('body-parser');

const whiteList = [
  'http://localhost:4200',
  'http://localhost:80',
  'http://localhost',
];

const googleApi = express();
googleApi.use(
  cors({
    origin: whiteList
  }),
  cookieParser(),
  bodyParser()
);

function getPageOfThreads(pageToken, userId, labelIds) {
  return new Promise((resolve, reject) => {
    gmail.users.threads.list(
      {
        'auth': oauth2Client,
        'userId': userId,
        'labelIds': labelIds,
        'pageToken': pageToken
      },
      (error, response) => {
        if (error) {
          console.error(error);
          reject(error);
        }
        resolve(response.data);
      }
    );
  });
}

async function getPages(nextPageToken, userId, labelIds, result) {
  while (nextPageToken) {
    let pageOfThreads = await getPageOfThreads(nextPageToken, userId, labelIds);
    console.log(pageOfThreads.nextPageToken);
    pageOfThreads.threads.forEach((thread) => {
      result = result.concat(thread.id);
    });
    nextPageToken = pageOfThreads.nextPageToken;
  }
  return result;
}

googleApi.post('/threads', (req, res) => {
  console.log(req.body);
  let threadIds = [];
  oauth2Client.credentials = req.body.token;
  let getAllThreadIds = new Promise((resolve, reject) => {
    gmail.users.threads.list(
      { 'auth': oauth2Client, 'userId': 'me', 'maxResults': 500 },
      (err, response) => {
        if (err) {
          console.error(err);
          reject(err);
        }
        if (response.data.threads) {
          response.data.threads.forEach((thread) => {
            threadIds = threadIds.concat(thread.id);
          });
        }
        if (response.data.nextPageToken) {
          getPages(response.data.nextPageToken, 'me', ['INBOX'], threadIds).then(result => {
            resolve(result);
          }).catch((err) => {
            console.error(err);
            reject(err);
          });
        } else {
          resolve(threadIds);
        }
      }
    );
  });
  getAllThreadIds
    .then((result) => {
      res.send({ threadIds: result });
    })
    .catch((error) => {
      res.status(500).send({ error: 'Request failed with error: ' + error });
    });
});

googleApi.get('/', (req, res) => res.send('Hello World!'));

googleApi.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
The angular app makes a simple request to the express app and waits for the reply...which it properly receives...but when I try to make two requests at the exact same time data starts to get mixed together and results are given back to each browser tab from different accounts...
...and the question is... When running containers in the cloud is this kind of thing an issue? Does one need to spin up a new container for each client that wants to actively connect to the express service so that their data doesn't get mixed?
...or is this an issue I am seeing because the Express app is being accessed locally from inside my machine? If two machines with two different IP addresses tried to access this Express server at the same time, would this sort of data mixing still be an issue, or would each get back its own set of results?
Is this why people use CaaS instead of IaaS solutions?
FYI: this is demo code and the data will not be actually going back to the consumer directly...plans are to have it placed into a database and then re-extracted from the database to download all of the metadata headers for each email.
-Thank you for your time
I can only clear up a small part of this question:
When running containers in the cloud is this kind of thing an issue?
No. Docker is not causing any of the quirky behaviour that you are describing.
Does one need to spin up a new container for each client?
A Docker container can generally serve as many users as the application inside it can. So as long as your application can handle a lot of users (and it should), you don't have to start the same application in multiple containers. That said, when you expect a very large number of customers, there are Docker tools like Docker Compose, Docker Swarm and a lot of alternatives that will enable you to scale up later. For now, you don't need to worry about this at all.
I think I may have found out the issue with my code...and this is actually very important if you are using the node.js googleapis client library...
It is entirely necessary to create a new oauth2Client for each request that comes in
const oauth2Client = new google.auth.OAuth2(
  clientSecretJson.web.client_id,
  clientSecretJson.web.client_secret,
  'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
);
Problem:
When this oauth2Client is shared, it is shared by every person that connects at the same time... so it is necessary to create a new one every time a user hits my /threads endpoint, so that they do not share the same memory space (i.e. access_token, etc.) while the processing is done.
Setting the client secret etc. and creating the oauth2Client just once at the top and then simply resetting the token for each request leads to the conflicts mentioned above.
Solution:
For now simply moving the creation of this oauth2Client into each and every request that comes in makes this work properly.
Each client that connects to the service NEEDS to have their own newly created oauth2Client instance or these types of conflicts will occur...
...it's kind of a no-brainer, but I still find it odd that there is nothing about this in the docs, and their own examples (https://github.com/googleapis/google-api-nodejs-client) seem to show only one instance being created for the whole of the app...but those examples are snippets, so...
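A minimal sketch of the per-request pattern described above, reusing the values from the code in the question:
googleApi.post('/threads', (req, res) => {
  // Create a fresh OAuth2 client for every request so credentials are never
  // shared between two users who hit the endpoint at the same time
  const oauth2Client = new google.auth.OAuth2(
    clientSecretJson.web.client_id,
    clientSecretJson.web.client_secret,
    'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
  );
  oauth2Client.credentials = req.body.token;
  // ...then pass this per-request client as 'auth' to gmail.users.threads.list as before
});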

watson Text to speech using socket

How do I implement Watson Text to Speech while running a chatbot on localhost using Node.js?
My chatbot is already running on localhost... I want to embed the Watson Text to Speech service. I have read that it could be done through the WebSocket interface, but I don't have any idea how to do that.
Assuming that you have the conversation-simple example built by IBM developers using Node.js and the Conversation service, you can simply have your app submit an HTTP REST request by following the WebSocket tutorial, or you can leverage a language-specific SDK; I'll paste the links below.
A few months ago, @kane built an example that integrates the conversation-simple example with Text to Speech; you can easily find it at this link.
You can check this commit to see the changes and follow the logic to implement Text to Speech in your application. You'll see the code below calling the Text to Speech service with the service credentials from the .env file, as described in the comments in the code:
const TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');

const textToSpeech = new TextToSpeechV1({
  // If unspecified here, the TEXT_TO_SPEECH_USERNAME and
  // TEXT_TO_SPEECH_PASSWORD env properties will be checked
  // After that, the SDK will fall back to the bluemix-provided VCAP_SERVICES environment property
  // username: '<username>',
  // password: '<password>',
});

app.get('/api/synthesize', (req, res, next) => {
  const transcript = textToSpeech.synthesize(req.query);
  transcript.on('response', (response) => {
    if (req.query.download) {
      if (req.query.accept && req.query.accept === 'audio/wav') {
        response.headers['content-disposition'] = 'attachment; filename=transcript.wav';
      } else {
        response.headers['content-disposition'] = 'attachment; filename=transcript.ogg';
      }
    }
  });
  transcript.on('error', next);
  transcript.pipe(res);
});

// Return the list of voices
app.get('/api/voices', (req, res, next) => {
  textToSpeech.voices(null, (error, voices) => {
    if (error) {
      return next(error);
    }
    res.json(voices);
  });
});
Note: I suggest you look at the commit and follow the same logic to make the changes in your app.
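On the browser side, the audio returned by that endpoint can be played by pointing an audio element at it. A small sketch; the query parameters mirror the synthesize call above, and the voice name is just an example:
// Client-side: play speech synthesized by the /api/synthesize route
const params = new URLSearchParams({
  text: 'Hello, how can I help you?',
  voice: 'en-US_AllisonVoice', // example voice
  accept: 'audio/ogg;codecs=opus'
});
const audio = new Audio(`/api/synthesize?${params.toString()}`);
audio.play();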
Node SDK for Watson Services.
API Reference for using Text to Speech with Node.js
