Allowing multiple routes for a single Backbone Model - node.js

I am fairly new to Backbone and am creating a basic API for a site. However, I came across a problem that I have yet to find a solution to.
On my front end I have a Backbone Model called Item that has a urlRoot: "/item". Now this urlRoot is used by Backbone to send different HTTP requests to the server, correct? So if my Backbone model uses Item.fetch() it will send a GET request, and an Item.save() may send a POST request.
My backend then has a bunch of listener functions to handle different cases like "/createItem", "/updateItem", "/deleteItem", etc. Can all of these be handled using the basic urlRoot that is provided? Or do I have to specify each route explicitly?

If you want to follow the default way of doing it, your backend should not use different names for each of the CRUD operations. It should use the URL you specified via urlRoot plus the /id of the model, and should handle HTTP POST, GET, PUT, and DELETE for that single URL (with the exception that the POST URL has no /id attached).
See: http://backbonejs.org/#Sync
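For illustration, a minimal sketch of the endpoints the default sync expects for urlRoot: "/item" (assuming an Express backend; the handlers are stubs just to show the routing):

var express = require('express');
var app = express();

app.post('/item', function(req, res) { res.json({}); });       // model.save() on a new model
app.get('/item/:id', function(req, res) { res.json({}); });    // model.fetch()
app.put('/item/:id', function(req, res) { res.json({}); });    // model.save() on an existing model
app.delete('/item/:id', function(req, res) { res.json({}); }); // model.destroy()

app.listen(3000);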

Since you are using an unconventional set of REST endpoints, you will need to provide a custom sync method for your model:
sync: function(method, model, options) {
  // Point each sync method at its matching endpoint, then let the
  // default sync perform the actual request (Backbone.sync uses
  // options.url when it is set):
  if (method === 'read') {
    options.url = '/item';
  } else if (method === 'create') {
    options.url = '/createItem';
  } // ...and likewise for 'update' and 'delete'
  return Backbone.sync.apply(this, arguments);
}
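With that in place, fetch(), save(), and destroy() keep working unchanged; only the URLs they hit differ from the defaults.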

Related

Efficient way to use api request header in controller, models in Node Express Js

I am pretty much new to Node JS. We have a requirement to use a request header from the router down to the model class.
Let us assume a simple router:
router.ts
router.delete(
  '/sample/:id',
  validateRequest(),
  async function (req: Request, res: Response, next: NextFunction) {
    try {
      const solution: string = req.header('Some Header Value') || '';
      await Controller.someMethods(req.params.id, solution);
      return res.json(new HttpResponse('SUCCESS', {}, {}));
    } catch (err) {
      return next(err);
    }
  },
);
This is our router. Here we should be able to read "solution" in the controller, service, and model classes. Right now we pass it as an argument to the different components. Is there a better approach to read the header value within the current request scope?
Something similar to components in the Spring framework, or session management, or any approach better than passing the header value as an argument at each component level.
regards
Eresh
TL;DR: there is no other way than mapping it manually.
Express is quite minimalistic, so we don't have abstractions like Spring's; the truth is that Node.js is simply different from Java. In Java a thread is spawned per request, so every request owns a single thread, whereas Node.js is async and single-threaded: multiple requests share the same thread, and you need to pass values down your calls because there is no out-of-the-box mechanism for storing request-scoped globals.
If you want access to the headers somewhere deep inside the application, you could build a system that does so. The first step is a middleware that stores the headers in a service under a unique ID and passes that ID down the road. Then whenever you know the ID you can ask the service for the data, though you still have to pass the ID along. I think that is okay and you should not fight it. I would refactor the code so that only controller methods access req and res; all the logic of working with those objects is encapsulated there, whereas the service layer expects raw data and knows nothing about the transport layer the controllers operate with. That way services can call other services, because they know nothing about the request and response.
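A minimal sketch of that first step, assuming Express (headerStore, captureHeaders, and getHeader are hypothetical names):

const { randomUUID } = require('crypto');

const headerStore = new Map(); // requestId -> headers for that request

// Middleware: file the headers under a fresh ID and pass the ID down.
function captureHeaders(req, res, next) {
  const requestId = randomUUID();
  headerStore.set(requestId, req.headers);
  req.requestId = requestId; // whatever you call next receives this ID
  res.on('finish', () => headerStore.delete(requestId)); // avoid leaks
  next();
}

// Any layer that knows the ID can read a header back:
function getHeader(requestId, name) {
  const headers = headerStore.get(requestId);
  return headers ? headers[name.toLowerCase()] : undefined;
}

module.exports = { captureHeaders, getHeader };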
FWIW, if you need a richer framework, use Nest.js; it is great and advanced, and it uses decorators (similar to annotations in Spring). For instance, you can inject a header value as a method argument like this: @Headers('some-header') solution: string.
Best regards.

Downsides of an API which neglects http method and path

I'm wondering what the downsides would be for a production server whose API is totally ignorant of the HTTP request path. For example, an API which is fully determined by query parameters, or even fully determined by the HTTP body.
let server = require('http').createServer(async (req, res) => {
  let { headers, method, path, query, body } = await parseRequest(req);
  // `headers` is an Object representing headers
  // `method` is 'get', 'post', etc.
  // `path` could look like /api/v2/people
  // `query` could look like { filter: 'age>17&age<35', page: 7 }
  // `body` could be some (potentially large) http body

  // MOST apis would use all these values to determine a response...
  // let response = determineResponse(headers, method, path, query, body);

  // But THIS api completely ignores everything except `query` and `body`
  let response = determineResponse(query, body);
  doSendResponse(res, response); // Sets response headers, etc, sends response
});
The above server's API is quite strange: it completely ignores the path, method, and headers. While most APIs primarily consider method and path, and look like this...
method  path                description
GET     /api                Metadata about api
GET     /api/v1             Old version of api
GET     /api/v2             Current api
GET     /api/v2/people      Make "people" db queries
POST    /api/v2/people      Insert a new person into db
GET     /api/v2/vehicles    Make "vehicle" db queries
POST    /api/v2/vehicles    Insert a new vehicle into db
...
This API only considers url query, and looks very different:
url query                                   description
<empty>                                     Metadata about api
apiVersion=1                                Old version of api
apiVersion=2                                Current api
apiVersion=2&table=people&action=query      Make "people" db queries
apiVersion=2&table=people&action=insert     Add new people to db
...
Implementing this kind of API, and ensuring clients use the correct API schema, is not necessarily an issue. I am instead wondering what other issues could arise for my app due to writing an API with this kind of schema.
Would this be detrimental for SEO?
Would this be detrimental to performance? (caching?)
Are there additional issues that occur when an api is ignorant of method and url path?
That's indeed very unusual, but it's basically how an RPC web API would work.
There would not be any SEO issue as far as I know.
Performance/caching should be the same, as the full "path" is composed of the same parameters in the end.
It however would be complicated to use with anything that doesn't expect it (express router, fancy http clients, etc.).
The only fundamental difference I see is that browsers treat POST requests as special (e.g. a POST will never be issued just by following a link), whereas your API would expose deletion/creation of data via a plain link. That's more or less dangerous depending on your scenario.
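To illustrate, a toy version of such a method-ignorant server (the URL shape and the db call are hypothetical): the handler below behaves identically for GET and POST, so following a link like /api?apiVersion=2&table=people&action=insert&name=Eve mutates data.

const http = require('http');

http.createServer((req, res) => {
  // The method is never consulted, only the querystring:
  const query = new URL(req.url, 'http://localhost').searchParams;
  if (query.get('action') === 'insert') {
    // ...db insert would go here: reachable from a plain <a href>,
    // an <img src>, a prefetching browser, or a crawler following links.
  }
  res.end(JSON.stringify({ action: query.get('action') }));
}).listen(3000);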
My advice would be: don't do that, stick to standards unless you have a very good reason not to.

At what point are request and response objects populated in express app

I'm always coding backend APIs and I don't really get how express does its bidding with my code. I know what the request and response objects offer; I just don't understand how they come to be.
This simplified code for instance:
exports.getBlurts = function() {
  return function(req, res) {
    // build query…
    qry.exec(function(err, results) {
      res.json(results);
    });
  };
};
Then I’d call in one of my routes:
app.get('/getblurts/', middleware.requireUser, routes.api.blurtapi.getBlurts());
I get that the function is called upon the route request. It's very abstract to me, though, and I don't understand the when, where, or how as it pertains to the req/res params being injected.
For instance, I use a CMS that modifies the request object by adding a user property, which is then available on all requests made, whether ajax or otherwise, making it easy at all times to determine if a user is logged in.
Are the req and res objects just pre-cooked by express, but with the freedom to modify them to your needs? When are they actually 'built'?
At its heart express is actually using node's built-in http module and passing the express application as the callback to the http.createServer function. The request and response objects are populated at that point, i.e. by node itself, for every incoming connection. See the Node.js documentation for more details regarding node's http module and what req/res are.
You might want to check out express' source code which shows how the express application is passed as a callback to http.createServer.
https://github.com/expressjs/express/blob/master/lib/request.js and https://github.com/expressjs/express/blob/master/lib/response.js show how node's request/response are extended by express specific functions.
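To make that concrete, a minimal sketch of the equivalence (an express app is itself a request handler):

const http = require('http');
const express = require('express');

const app = express(); // `app` is a function of (req, res) under the hood

// app.listen(3000) is essentially shorthand for:
http.createServer(app).listen(3000);
// For every connection, node creates req/res and hands them to `app`,
// which extends them with express' request/response prototypes before
// running your middleware and routes.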

Cloudfront cache with GraphQL?

At my company we're using GraphQL for production apps, but only for private resources.
For now our public APIs are REST APIs with CloudFront in front for caching. We want to turn them into GraphQL APIs, but the question is: how do we handle caching properly with GraphQL?
We thought about using a GET graphql endpoint and caching on the querystring, but we are a bit afraid of the size of the requested URLs (we support IE9+ and sell to schools, sometimes behind really dumb proxies and firewalls).
So we would like to use a POST graphQL endpoint, but... CloudFront cannot cache a request based on its body.
Does anyone have an idea / best practice to share?
Thanks
The two best options today are:
Use a specialized caching solution, like FastQL.io
Use persisted queries with GET, where some queries are saved on your server and accessed by name via GET
*Full disclosure: I started FastQL after running into these issues without a good solution.
I am not sure if it has a specific name, but I've seen a pattern in the wild where the GraphQL queries themselves are hosted on the backend with a specific id.
It's much less flexible, as it requires pre-defined queries baked in.
The client just sends the arguments/params and the ID of the pre-defined query to use, and that becomes your cache key. It is similar to how HTTP caching would work with an authenticated request to /my-profile, with CloudFront serving different responses based on the auth token in the headers.
How the client sends it depends on your backend's implementation of GraphQL.
You could pass it either as a whitelisted header or as a query string.
So if the backend has defined a query that looks like
(Using pseudo code)
const MyQuery = gql`
  query HeroNameAndFriends($episode: Int) {
    hero(episode: $episode) {
      name
      friends {
        name
      }
    }
  }
`
Then your request would be to something like api.app.com/graphQL/MyQuery?episode=3.
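Server-side, the lookup can be a thin layer; here is a hedged sketch using Express and graphql-js (all names, including the toy schema, are illustrative):

const express = require('express');
const { graphql, buildSchema } = require('graphql');

// Toy schema so the sketch is runnable end to end.
const schema = buildSchema('type Query { hero(episode: Int): String }');
const rootValue = { hero: ({ episode }) => 'hero of episode ' + episode };

// Pre-defined queries, keyed by name; clients never send query text.
const persisted = {
  MyQuery: 'query HeroName($episode: Int) { hero(episode: $episode) }',
};

const app = express();
app.get('/graphQL/:name', async (req, res) => {
  const source = persisted[req.params.name];
  if (!source) return res.status(404).json({ error: 'unknown query' });
  // Coerce querystring values to the types the query expects:
  const variableValues = { episode: Number(req.query.episode) };
  const result = await graphql({ schema, source, rootValue, variableValues });
  res.json(result); // CloudFront can now cache on path + querystring
});
app.listen(3000);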
That being said, have you actually measured that your queries wouldn't fit in a GET request? I'd say go with GET requests if CDN caching is what you need, and use the approach mentioned above for the requests that don't fit the limits.
Edit: Seems it has a name: Automatic Persisted Queries. https://www.apollographql.com/docs/apollo-server/performance/apq/
Another alternative, if you want to stay with POST requests, is to use Lambda@Edge on your CloudFront distribution together with DynamoDB tables to store your cache, similar to how Cloudflare Workers do it:
// (This snippet is Cloudflare Workers syntax, shown for comparison; a
// Lambda@Edge version would read/write DynamoDB instead of caches.default.)
async function handleRequest(event) {
  let cache = caches.default
  let response = await cache.match(event.request)
  if (!response) {
    response = await fetch(event.request)
    if (response.ok) {
      event.waitUntil(cache.put(event.request, response.clone()))
    }
  }
  return response
}
Some reading material on that
https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
https://aws.amazon.com/blogs/networking-and-content-delivery/leveraging-external-data-in-lambdaedge/
An option I've explored on paper but not yet implemented is to use Lambda@Edge in request-trigger mode to transform a client POST into a GET, which can then result in a cache hit.
This way clients can still use POST to send GQL requests, and you're working with a small number of controlled services within AWS when trying to work out the max URL length for the converted GET request (and these limits are generally quite high).
There will still be a length limit, but once you have 16kB+ GQL requests, it's probably time to take the other suggestion of using predefined queries on the server and just referencing them by name.
It does have the disadvantage that request trigger Lambdas run on every request, even a cache hit, so will generate some cost, although the lambda itself should be very fast/simple.
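A rough, untested sketch of such a viewer-request trigger, assuming "include body" is enabled on the trigger and that rewriting request.method is permitted (worth verifying against the CloudFront docs before relying on it):

'use strict';

exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;

  if (request.method === 'POST' && request.body && request.body.data) {
    // The body arrives base64-encoded in viewer-request triggers.
    const query = Buffer.from(request.body.data, 'base64').toString('utf8');

    // Rewrite to a GET so CloudFront can look it up in (and store it to) cache.
    request.method = 'GET';
    request.querystring = 'query=' + encodeURIComponent(query);
    delete request.body;
  }

  callback(null, request);
};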

How to avoid fat models in a node.js + mongoose app?

The app is using express 3. Here is a barebones example of a route that fetches data from the database:
var Post = mongoose.model('Post')

app.get('/post/:id/loompas', function(req, res) {
  Post.findById(req.params.id, function(err, post) {
    post.getLoompas(function(err, data) {
      res.render('x', data)
    })
  })
})
Where Post.getLoompas is defined as an instance method in /models/post.js, and sometimes accesses external APIs:
PostSchema.method('getLoompas', function(callback) {
  var post = this
  API.get('y', function(x) {
    post.set(x)   // `this` would not be the post inside this callback
    post.save()
    callback(x)
  })
})
This is starting to smell, and doesn't look like it belongs alongside the schema definition. The collection of methods could grow quite large.
What design patterns are recommended to separate these concerns and avoid extremely fat models? A service layer for external API calls? Any interesting solutions out there?
This does indeed smell a little bit.
I would use the approach of considering your web app merely as a view of your application.
The best way to ensure this is to never use your mongoose models from your webapp. You could have your webapp living in one process and your model-specific logic in another. The job of that second process would be to take care of your business logic and persistence layer (MongoDB), making it the M in MVC.
Accessing external APIs would take place in that model layer, where you can separate it from your persistence implementation.
There's a way of communicating between node processes that I like: dnode. Once set up, it looks like you are communicating with objects and callbacks within your own process. I would have the webapp and the business app communicate through this to get data. The webapp needn't manipulate the actual data; instead it sends messages to the model layer (as described by the MVC pattern).
This ensures complete separation between controller/view (webapp) and model+persistence.
One side effect of this organization is that you can easily write other clients of your application, for example a CLI client or a RESTful API.
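If you go that route, a minimal dnode sketch of the split might look like this (file and method names are made up):

// model-process.js — owns mongoose and the business logic
var dnode = require('dnode');

dnode({
  getLoompas: function (postId, cb) {
    // ...load the post, hit the external API, persist, then:
    cb(null, { loompas: [] }); // raw data, no req/res in sight
  }
}).listen(5004);

// webapp.js — the express app; no mongoose models in this process
var dnode = require('dnode');
var app = require('express')();

dnode.connect(5004).on('remote', function (model) {
  app.get('/post/:id/loompas', function (req, res) {
    model.getLoompas(req.params.id, function (err, data) {
      res.render('x', data);
    });
  });
  app.listen(3000);
});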
Are you trying to get id and somedata from the url (post/:id/:somedata) to construct the schema?
Ideally one should use:
app.post('/reg', function(request, response) {
  console.log(request.body.name);
  console.log(request.body.email);
  // ...
});
which runs when the form on the 'reg' HTML page is submitted, and where you can read all the variables (name, email) from the body object. In app.post you can get the schema definition from the request itself without having to scan through the url for variables.
If you still want to know how to get the variables from the url then do this in app.get:
var vars = request.url.split('/');
// vars contains all the variables you have to use.
// use vars to create the schema
After you get/create the schema, pass it directly to the function, or iterate through the object's elements calling that function.
