Create RESTful service using Node.js - node.js

I would like to build a Node application using REST. I need to read data from a ready-made API, store it temporarily in a class, and then save it to MySQL. Does anyone have an idea how to approach this?

This is a very simple job.
Let's consider TypeScript, but you can achieve the same result with JavaScript. I'll be using node-fetch as the HTTP client for the REST API calls. Do note that the code might not be syntactically correct.
First: Create interfaces/classes that reflect the data you will receive from the REST API
interface Food {
  id: number,
  name: string,
  // ...other fields from the API
}
Second: Create a Repository
Create a class Repository which you will use to communicate with the REST API:
import fetch, { Response } from 'node-fetch';

class Repository {
  async getFoods(): Promise<Food[]> {
    // node-fetch takes the URL first; parse the JSON body into Food objects
    const response = await fetch("url");
    return await response.json() as Food[];
  }

  async addFood(food: Food): Promise<Response> {
    return await fetch("url-to-add-food", {
      method: "post",
      body: JSON.stringify(food)
    });
  }
}
Third: Use the repository to fetch the data and use conventional methods to save it to a MySQL database:
const foods = await repository.getFoods();
foods.forEach(food => {
  // mysql expands 'SET ?' into the object's column/value pairs
  connection.query('INSERT INTO foods SET ?', food,
    function (err, resp) {
      if (err) throw err;
    }
  );
});
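Putting the pieces together, a minimal sketch of the whole flow, assuming the mysql package and the Repository class above (the connection settings are placeholders):

import mysql from 'mysql';

// Placeholder credentials -- substitute your own MySQL settings
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'food_db'
});

async function syncFoods() {
  const repository = new Repository();
  const foods = await repository.getFoods();
  for (const food of foods) {
    connection.query('INSERT INTO foods SET ?', food, (err) => {
      if (err) throw err;
    });
  }
}

syncFoods().catch(console.error);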


How can one upload an image to a KeystoneJS GraphQL endpoint?

I'm using TinyMCE in a custom field for the KeystoneJS AdminUI, which is a React app. I'd like to upload images from the React front end to the KeystoneJS GraphQL back end. I can upload the images using a REST endpoint I added to the Keystone server -- passing TinyMCE an images_upload_handler callback -- but I'd like to take advantage of Keystone's already-built GraphQL endpoint for an Image list/type I've created.
I first tried the approach detailed in this article, using axios to upload the image:
const getGQL = (theFile) => {
  const query = gql`
    mutation upload($file: Upload!) {
      createImage(file: $file) {
        id
        file {
          path
          filename
        }
      }
    }
  `;
  // The operation contains the mutation itself as "query"
  // and the variables that are associated with the arguments
  // The file variable is null because we can only pass text
  // in operation variables
  const operation = {
    query,
    variables: {
      file: null
    }
  };
  // This map is used to associate the file saved in the body
  // of the request under "0" with the operation variable "variables.file"
  const map = {
    '0': ['variables.file']
  };
  // This is the body of the request
  // the FormData constructor builds a multipart/form-data request body
  // Here we add the operation, map, and file to upload
  const body = new FormData();
  body.append('operations', JSON.stringify(operation));
  body.append('map', JSON.stringify(map));
  body.append('0', theFile);
  // Create the options of our POST request
  const opts = {
    method: 'post',
    url: 'http://localhost:4545/admin/api',
    body
  };
  // @ts-ignore
  return axios(opts);
};
but I'm not sure what to pass as theFile -- TinyMCE's images_upload_handler, from which I need to call the image upload, accepts a blobInfo object that contains functions giving me the file name and the blob itself.
Neither the file name nor the blob works -- both give me a 500 server error -- and the error message isn't any more specific.
I would prefer to use a GraphQL client to upload the image -- another SO article suggests using apollo-upload-client. However, I'm operating within the KeystoneJS environment, and Apollo-upload-client says
Apollo Client can only have 1 “terminating” Apollo Link that sends the GraphQL requests; if one such as apollo-link-http is already setup, remove it.
I believe Keystone has already set up Apollo-link-http (it comes up multiple times on search), so I don't think I can use Apollo-upload-client.
The UploadLink is just a drop-in replacement for HttpLink. There's no reason you shouldn't be able to use it. There's a demo KeystoneJS app here that shows the Apollo Client configuration, including using createUploadLink.
Actual usage of the mutation with the Upload scalar is shown here.
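For reference, a minimal sketch of that client configuration, assuming the current @apollo/client package (older Keystone apps may use apollo-client and apollo-cache-inmemory instead) and the endpoint URL from the question:

import { ApolloClient, InMemoryCache } from '@apollo/client';
import { createUploadLink } from 'apollo-upload-client';

// createUploadLink replaces createHttpLink as the single terminating link
const apolloClient = new ApolloClient({
  link: createUploadLink({ uri: 'http://localhost:4545/admin/api' }),
  cache: new InMemoryCache()
});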
Looking at the source code, you should be able to use a custom image handler and call blob on the provided blobInfo object. Something like this:
tinymce.init({
  images_upload_handler: async function (blobInfo, success, failure) {
    const image = blobInfo.blob()
    try {
      // mutate takes a single options object with the mutation and variables
      await apolloClient.mutate({
        mutation: gql` mutation($image: Upload!) { ... } `,
        variables: { image }
      })
      success()
    } catch (e) {
      failure(e)
    }
  }
})
I used to have the same problem and solved it with Apollo upload link. When the app got into the production phase, I realized that Apollo Client took up a third of the gzipped build files, so I created a minimal GraphQL client just for Keystone use, with automatic image upload. The package is available on npm: https://www.npmjs.com/package/@sylchi/keystone-graphql-client
Usage example that will upload the GitHub logo to the user profile, if there is a user with an avatar field set as a file:
import { mutate } from '@sylchi/keystone-graphql-client'

const getFile = () => fetch('https://github.githubassets.com/images/modules/logos_page/GitHub-Mark.png',
  {
    mode: "cors",
    cache: "no-cache"
  })
  .then(response => response.blob())
  .then(blob => {
    return new File([blob], "file.png", { type: "image/png" })
  });
getFile().then(file => {
  const options = {
    mutation: `
      mutation($id: ID!, $data: UserUpdateInput!){
        updateUser(id: $id, data: $data){
          id
        }
      }
    `,
    variables: {
      id: "5f5a7f712a64d9db72b30602", // replace with user id
      data: {
        avatar: file
      }
    }
  };
  mutate(options).then(result => console.log(result));
});
The whole package is just 50 LOC with one dependency :)
The easiest way for me was to use graphql-request. The advantage is that you don't need to set any header props manually, and it uses the variables you get from images_upload_handler, as the docs describe.
I did it this way:
const { request, gql } = require('graphql-request')

const query = gql`
  mutation IMAGE ($file: Upload!) {
    createImage (data: {
      file: $file,
    }) {
      id
      file {
        publicUrl
      }
    }
  }
`
images_upload_handler = (blobInfo, success) => {
  //                      ^ variables you get from tinymce
  const variables = {
    file: blobInfo.blob()
  }
  request(GRAPHQL_API_URL, query, variables)
    .then(data => {
      console.log(data)
      // the mutation above selects file { publicUrl }
      success(data.createImage.file.publicUrl)
    })
}
For Keystone 5, editorConfig would strip out functions, so I cloned the field and set the function in the views/Field.js file.
Good luck ( ^_^)/*

Get all Items using the Query API through AWS Amplify

How can we get all the items by invoking dynamodb.query?
The documentation states that we need to look for the presence of LastEvaluatedKey. Just wondering how we could aggregate all the Items in an efficient way?
app.get(path, function (req, res) {
  var allItems = [];
  var params = {
    TableName: tableName,
    "IndexName": "status-index",
    "KeyConditionExpression": "#attrib_name = :attrib_value",
    "ExpressionAttributeNames": { "#attrib_name": "status" },
    "ExpressionAttributeValues": { ":attrib_value": req.query.status },
    "ScanIndexForward": false
  };
  dynamodb.query(params, onQuery);
  function onQuery(err, data) {
    if (err) {
      res.json({ error: 'Could not load items: ' + err });
    } else {
      // Should I be aggregating all the items like this?
      allItems = allItems.concat(data.Items);
      // Then should I set it to res like this to return all the items?
      res.json(allItems);
      if (typeof data.LastEvaluatedKey != 'undefined') {
        params.ExclusiveStartKey = data.LastEvaluatedKey;
        dynamodb.query(params, onQuery);
      }
    }
  }
});
Please look at the comments within the code; that is where I think the aggregation of all the items and the final response need to happen.
I haven't found a way to debug this yet, as I'm fairly new to DynamoDB and AWS Amplify. Also, let me know if there is an easier way to debug this in an AWS Amplify backed GET API.
This is not a direct answer to your question, but a suggestion. I wrote an article, "How To Use AWS AppSync in Lambda Functions".
The TL;DR of it is:
Create a Lambda function which uses the AppSync client to perform GraphQL operations. Use polyfills and install all necessary dependencies.
Ensure the Lambda function has the right execution policy.
Use AppSync's multi-auth to allow both requests that are signed by Amazon Cognito User Pools and requests that are signed using Amazon's IAM. This way, both the client and the server (aka the Lambda function) will be authenticated and can have different CRUD permissions.
If I were you and wanted to access my database through a Lambda function, I would follow that tutorial and do it using AppSync. One of the advantages that matters to you is that you don't have to care about LastEvaluatedKey; you can instead use AppSync's nextToken, which is much safer.
Query returns paginated results - if you want all data then you need to keep querying and aggregating until your LastEvaluatedKey is empty.
Refer: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html
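A minimal sketch of that aggregation, reworked from the question's code (assuming the same dynamodb client and params; the key change is that res.json runs only once, after the last page has been read):

function queryAll(params, callback) {
  var allItems = [];
  dynamodb.query(params, onQuery);
  function onQuery(err, data) {
    if (err) return callback(err);
    // Collect this page of results
    allItems = allItems.concat(data.Items);
    if (typeof data.LastEvaluatedKey !== 'undefined') {
      // More pages remain: resume the query where the last one stopped
      params.ExclusiveStartKey = data.LastEvaluatedKey;
      dynamodb.query(params, onQuery);
    } else {
      // Last page reached: hand back everything at once
      callback(null, allItems);
    }
  }
}

// In the route handler, respond only after the loop finishes:
// queryAll(params, function (err, items) {
//   if (err) return res.json({ error: 'Could not load items: ' + err });
//   res.json(items);
// });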

Sharing DB queries in Node.js methods

What's the best practice for sharing DB query code between multiple Node.js Express controller methods? I’ve searched but the samples I’ve found don’t really get into this.
For example, I have this getUser method (using Knex for MySQL) that makes a call to get user info. I want to use it in other methods but I don't need all the surrounding stuff like the response object.
export let getUser = async (req: Request, res: Response, next: NextFunction) => {
  try {
    // await inside try so that query errors are actually caught below
    const dbResults = await knex.select()
      .where('email', req.params.email)
      .table('users');
    const results: IUser = dbResults[0];
    res
      .status(200)
      .set({ 'Content-Type': 'application/json', 'Connection': 'close' })
      .send(results);
  } catch (err) {
    res.send({ error: "Error getting person " + req.params.email });
    return next(err);
  }
};
It seems wrong to repeat the query code somewhere else where I need to get the user. Should I turn my DB query code into async functions like this example and then call them from within the controller methods that use the query? Is there a simpler way?
/**
 * @param {string} email
 */
async function getUserId(email: string) {
  try {
    return await knex.select('id')
      .where('email', email)
      .table('users');
  } catch (err) {
    return err;
  }
}
You can, for example, create "service" modules which contain helpers for certain types of queries. Or you could use an ORM and implement special queries in each model, the so-called "fat model" design. Pretty much anything goes, as long as you remember not to create a new knex instance in every helper module: pass knex (containing its connection pool) to the helper methods, so that all queries share the same connection pool.
ORMs like objection.js also provide a way to extend the query builder API, so you can inherit a custom query builder with any special query helpers you need.
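A minimal sketch of the service-module approach, with hypothetical names (the key point is that the knex instance is passed in rather than created inside the module, so every helper shares one pool):

// userService.ts -- hypothetical helper module
import { Knex } from 'knex';

export const makeUserService = (knex: Knex) => ({
  // Resolves to the first user row matching the email, or undefined
  findByEmail: (email: string) =>
    knex.select().table('users').where('email', email).first()
});

// In a controller:
// const users = makeUserService(knex);
// const user = await users.findByEmail(req.params.email);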

How to make a shipping request to chronopost using soap and nodejs

I've had trouble making a simple request with node-soap and the Chronopost (shipping platform) SOAP API.
The first thing I did was follow the basic node-soap example, but it just fails miserably without any real USEFUL error from Chronopost.
Here's what I have:
const soap = require('soap')
const client = await soap.createClientAsync(
  'https://ws.chronopost.fr/shipping-cxf/ShippingServiceWS?wsdl'
)
client.shippingV6(...somedata, (err, result) => {
  if (err) {
    return handleErr(); // it always fails
  }
  handleResult();
})
After multiple attempts, it seems like the Chronopost API uses special root attributes (who knows why), and you need to craft options for node-soap that actually fit their needs (yay..)
Here's what works for me
const createClientShippingServiceWS = async () => {
  const wsdlOptions = {
    envelopeKey: 'soapenv',
    overrideRootElement: {
      namespace: 'cxf',
      xmlnsAttributes: [
        {
          name: 'xmlns:cxf',
          value: 'http://cxf.shipping.soap.chronopost.fr/'
        }
      ]
    }
  }
  return await soap.createClientAsync(
    'https://ws.chronopost.fr/shipping-cxf/ShippingServiceWS?wsdl',
    wsdlOptions
  )
}
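With that client, the call becomes something like the sketch below; the payload fields are placeholders, since the real shippingV6 input structure comes from Chronopost's WSDL (node-soap also generates promise-returning variants of each method with an Async suffix):

const client = await createClientShippingServiceWS()

// shippingV6Async resolves to [result, rawResponse, soapHeader, rawRequest]
const [result] = await client.shippingV6Async({
  headerValue: { /* account number, password, ... */ },
  shipperValue: { /* sender address fields */ }
  // ...remaining shippingV6 fields as defined in the WSDL
})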
Also, what's the point of fetching the WSDL if node-soap can't even figure out how to build the request from it?
Thanks, Chronopost, for being stuck in 2008

Angular UI Client Side Pagination

I would like to enable pagination, and I'm torn between client-side and server-side pagination. In the long term (more data) it is probably better to do server-side pagination, but I haven't found a good tutorial on it.
I use Angular/Express/Mongo. I have Bootstrap UI in use and would like to use its pagination directive. I have read some articles on how to sort of do it, but they are outdated and I cannot get them to work: http://fdietz.github.io/recipes-with-angular-js/common-user-interface-patterns/paginating-through-client-side-data.html
Could anybody help me get that example to work with Bootstrap UI for Angular?
If you have a set number of items per page, you could do it this way:
Define an Angular service to query the data on your server:
.factory('YourPaginationService', ['$resource',
  function($resource) {
    return $resource('baseUrl/page/:pageNo', {
      pageNo: '@pageNo'
    });
  }
]);
Call it via the angular controller. Don't forget to inject your service, either globally or in the controller.
$scope.paginationController = function($scope, YourPaginationService) {
  $scope.currentPage = 1;
  $scope.setPage = function (pageNo) {
    $scope.currentPage = pageNo;
    YourPaginationService.query({
      pageNo: $scope.currentPage // pass the value itself, not a string
    });
  };
};
On Express 4 (if you have it), set up your route:
app.route('/articles/page/:pageNo')
  .get(data.listWithPagination) // random function name
Then you need to wire that function to the desired Mongo query in your Node controller. If you use Mongoose, it works like this:
exports.listWithPagination = function(req, res) {
  var pageLimit = x; // Your hardcoded page limit
  // Page numbers start at 1, so skip only the pages before the requested one
  var skipValue = (req.params.pageNo - 1) * pageLimit;
  YourModel.find() // Your Mongoose model here, if you use Mongoose.
    .skip(skipValue)
    .limit(pageLimit)
    .exec(function(err, data) {
      if (err) {
        return res.status(400).send({
          message: getErrorMessage(err)
        });
      } else {
        res.jsonp(data);
      }
    });
};
That's how I would do it on a typical MEAN stack. If you're working with different libraries/technologies, you might need to adapt a few things.
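On the Angular side, the Bootstrap UI pagination directive then only needs the current page and a total count from the server. A sketch of the markup, per the UI Bootstrap docs (in versions before 0.14 the directive is named pagination rather than uib-pagination, and totalItems would have to come from something like a count endpoint):

<uib-pagination ng-model="currentPage"
                total-items="totalItems"
                items-per-page="pageLimit"
                ng-change="setPage(currentPage)">
</uib-pagination>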
