
How to increase default maximum request body size in loopback 4 framework?
I understand Express is used internally by LoopBack 4; what I need is the equivalent of setting the limit option for the body-parser Express middleware.
Any ideas?
Thanks

You can bind the request body parser options on the REST server:
server.bind(RestBindings.REQUEST_BODY_PARSER_OPTIONS).to({
  limit: '4MB',
});
or
server.bind(RestBindings.REQUEST_BODY_PARSER_OPTIONS).to({
  json: {limit: '4MB'},
  text: {limit: '1MB'},
});
The full list of options can be found in the body-parser module.
By default, the limit is 1MB. Any request with a body length exceeding the limit is rejected with HTTP status code 413 (Request Entity Too Large).

I updated index.ts to add the REST request body parser configuration to the application options variable, increasing the limit to 6MB as shown below:
import {ApiServerApplication} from './application';
import {ApplicationConfig} from '@loopback/core';

export {ApiServerApplication};

export async function main(options: ApplicationConfig = {}) {
  options.rest = {requestBodyParser: {json: {limit: '6MB'}}};
  const app = new ApiServerApplication(options);
  await app.boot();
  await app.start();

  const url = app.restServer.url;
  console.log(`Server is running at ${url}`);
  return app;
}

Related

How to Connect Reactivesearch to an external Elasticsearch cluster?

I am trying to connect my Reactivesearch application to an external Elasticsearch provider (not AWS). They don't allow making changes to the Elasticsearch cluster and also use nginx in front of the cluster.
As per the Reactivesearch documentation I have cloned the proxy code and only made changes to the target and the authentication settings (as per the code below).
https://github.com/appbaseio-apps/reactivesearch-proxy-server/blob/master/index.js
The proxy starts successfully and is able to connect to the remote cluster. However, when I connect the Reactivesearch app through the proxy I get the following error:
Access to XMLHttpRequest at 'http://localhost:7777/testing/_msearch?' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource
I repeated the same steps with my local Elasticsearch cluster using the same proxy code and get the same error.
I was just wondering whether we need to make any extra changes to make sure the proxy is sending the right request to the Elasticsearch cluster. I am using the code below for the proxy.
const express = require('express');
const proxy = require('http-proxy-middleware');
const btoa = require('btoa');
const bodyParser = require('body-parser');

const app = express();

/* This is where we specify options for the http-proxy-middleware.
 * We set the target to the appbase.io backend here. You can also
 * add your own backend url here. */
const options = {
  target: 'http://my_elasticsearch_cluster_address:9200/',
  changeOrigin: true,
  onProxyReq: (proxyReq, req) => {
    proxyReq.setHeader(
      'Authorization',
      `Basic ${btoa('username:password')}`
    );
    /* transform the req body back from text */
    const { body } = req;
    if (body) {
      if (typeof body === 'object') {
        proxyReq.write(JSON.stringify(body));
      } else {
        proxyReq.write(body);
      }
    }
  }
};

/* Parse the ndjson as text */
app.use(bodyParser.text({ type: 'application/x-ndjson' }));

/* This is how we can extend this logic to do extra stuff before
 * sending requests to our backend, for example doing verification
 * of access tokens or performing some other task */
app.use((req, res, next) => {
  const { body } = req;
  console.log('Verifying requests ✔', body);
  /* After this we call next to tell express to proceed
   * to the next middleware function, which happens to be our
   * proxy middleware */
  next();
});

/* Here we proxy all the requests from reactivesearch to our backend */
app.use('*', proxy(options));

app.listen(7777, () => console.log('Server running at http://localhost:7777 🚀'));
Regards
Yep, you need to apply CORS settings to your local elasticsearch.yml as well as at your ES service provider.
Are you using Elastic Cloud by any chance? They do allow you to modify Elasticsearch settings.
If so:
Login to your Elastic Cloud control panel
Navigate to the Deployment Edit page for your cluster
Scroll to your '[Elasticsearch] Data' deployment configuration
Click the User setting overrides text at the bottom of the box to expand the settings editor.
There are some example ES CORS settings about halfway down the reactivebase page that provide a great starting point.
https://opensource.appbase.io/reactive-manual/getting-started/reactivebase.html
You'll need to update the provided http.cors.allow-origin: setting based on your needs.
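For reference, typical overrides look something like this (a sketch only; adjust http.cors.allow-origin to wherever your app is actually served from):
http.cors.enabled: true
http.cors.allow-origin: "http://localhost:3000"
http.cors.allow-headers: X-Requested-With, X-Auth-Token, Content-Type, Content-Length, Authorization
http.cors.allow-credentials: true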

Angular how to show live value of json object

In Angular, I am trying to display one JSON object in the client HTML, using the route below on the server side.
const express = require('express');
const jsonRoute = express.Router();

jsonRoute.route('/json').get(function (req, res) {
  var JsonObj = { rank: 73 };
  res.end(JSON.stringify(JsonObj));
});

setInterval(function () {
  JsonObj.rank = parseInt(Math.random() * 100);
}, 1000); // this interval may be anything, from ms to minutes

module.exports = jsonRoute;
This works on http://localhost:4000/json and displays:
{"rank":73}
But it does not show the values changed in setInterval. I am using the same route in an Angular service (via HttpClient):
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';

@Injectable({
  providedIn: 'root'
})
export class getjsonService {
  uri = "http://localhost:4000/json";

  constructor(private http: HttpClient) { }

  jsondata() {
    return this.http.get(`${this.uri}`);
  }
}
I display this value in the component HTML page. The problem is that it does not show the updated value of the JSON. Please suggest how I can show the live JSON value in Angular. Note that in production my JSON object is going to be big, around 100 keys and values, and I want to show live values for all of them. Also, the update interval may not be fixed at one second; it may be milliseconds as well.
Thanks
By default HTTP will not keep the connection open; that's a limitation of the HTTP protocol, not of Angular. If you want to show the value in real time, you need to use WebSockets.
There are a lot of libraries out there that will help with real-time data connections. https://socket.io/ is very popular; check it out.
Tutorial: https://alligator.io/angular/socket-io/
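For instance, a minimal server-side sketch with socket.io (assuming socket.io is installed; the 'json-update' event name is a placeholder the client would listen for):
const http = require('http');
const httpServer = http.createServer();
const io = require('socket.io')(httpServer);

var JsonObj = { rank: 73 };
setInterval(function () {
  JsonObj.rank = parseInt(Math.random() * 100);
  io.emit('json-update', JsonObj); // push the new value to every connected client
}, 1000);

httpServer.listen(4000);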
Your problem is a structural one with how RESTful architecture works: your server only sends data to your Angular app when the app asks for it, not when the server detects a change. One option is to add an interval in your Angular project that calls the server for new data every few seconds (getjsonService is assumed to be injected into your component):
setInterval(() => {
  this.getjsonService.jsondata().subscribe(data => { this.jsonData = data; });
}, 3000); // this refetches the data every 3 seconds; lower the number to refresh more often
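In context, that polling loop would live in a component along these lines (a sketch; the selector, template, and property names are placeholders):
import { Component, OnInit } from '@angular/core';
import { getjsonService } from './getjson.service'; // import path is an assumption

@Component({
  selector: 'app-rank',
  template: `<pre>{{ jsonData | json }}</pre>` // re-renders whenever jsonData changes
})
export class RankComponent implements OnInit {
  jsonData: any;

  constructor(private getjsonService: getjsonService) { }

  ngOnInit() {
    setInterval(() => {
      this.getjsonService.jsondata().subscribe(data => { this.jsonData = data; });
    }, 3000);
  }
}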
The other option is to rewrite your server to use WebSockets, as Ravin pointed out.
In your Node.js code, JsonObj is declared inside the request handler, so it is re-initialized on every request and the setInterval callback can't see it at all. You should store the value in a module-level variable instead:
const express = require('express');
const jsonRoute = express.Router();

var JsonObj = { rank: 73 };

jsonRoute.route('/json').get(function (req, res) {
  res.json(JsonObj);
});

setInterval(function () {
  JsonObj.rank = parseInt(Math.random() * 100);
}, 1000); // this interval may be anything, from ms to minutes

module.exports = jsonRoute;

How to remove authentication for introspection query in Graphql

This may be a very basic question, so please bear with me. Let me explain what I am doing and what I really need.
EXPLANATION
I have created a GraphQL server using Apollo GraphQL (the apollo-server-express npm module).
Here is a code snippet to give you an idea.
api.js
import express from 'express'
import { ApolloServer } from 'apollo-server-express'
import rootSchema from './root-schema'
.... // some extra code

const app = express.Router()
app.use(jwtAuthenticator) // --> this code authenticates the Authorization header
.... // some more middlewares added

const graphQLServer = new ApolloServer({
  schema: rootSchema, // --> this is the root schema object
  context: context => context,
  introspection: true,
})
graphQLServer.applyMiddleware({ app, path: '/graphql' })
server.js
import http from 'http'
import express from 'express'
import apiRouter from './api' // --> the above file
const app = express()
app.use([some middlewares])
app.use('/', apiRouter)
....
....
export async function init () {
  try {
    const httpServer = http.createServer(app)
    httpServer
      .listen(PORT)
      .on('error', (err) => { setTimeout(() => process.exit(1), 5000) })
  } catch (err) {
    setTimeout(() => process.exit(1), 5000)
  }
  console.log('Server started --- ', PORT)
}

export default app
index.js
require('babel-core')
require('babel-polyfill')
require = require('esm')(module/* , options */)
const server = require('./server.js') // --> the above file
server.init()
PROBLEM STATEMENT
I start the app with node index.js. The app expects the Authorization header (a JWT token) to be present at all times, even for the introspection query. But this is not what I want: I want the introspection query to be resolvable even without a token, so that anyone can see the documentation.
Please shed some light on the best approach to do this. Happy coding :)
.startsWith('query Introspection') is insecure because any query can be named Introspection.
The better approach is to check the whole query.
First, import graphql and prepare the introspection query string:
const { parse, print, getIntrospectionQuery } = require('graphql');
// format the introspection query the same way apollo tooling does
const introspectionQuery = print(parse(getIntrospectionQuery()));
Then, in the Apollo Server configuration, check the query:
context: ({ req }) => {
  // allow introspection query
  if (req.body.query === introspectionQuery) {
    return {};
  }
  // continue
}
There are a ton of different ways to handle authorization in GraphQL, as illustrated in the docs:
Adding middleware for express (or some other framework like hapi or koa)
Checking for authorization inside individual resolvers
Checking for authorization inside your data models
Utilizing custom directives
Adding express middleware is great for preventing unauthorized access to your entire schema. If you want to allow unauthenticated access to some fields but not others, it's generally recommended you move your authorization logic from the framework layer to the GraphQL or data model layer using one of the methods above.
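For example, a resolver-level check might look like this (a sketch; context.user is assumed to be whatever your authentication middleware attaches to the context):
const resolvers = {
  Query: {
    me: (parent, args, context) => {
      // this field requires authentication, while introspection
      // and any public fields remain open
      if (!context.user) {
        throw new Error('Not authenticated');
      }
      return context.user;
    },
  },
};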
So finally I found the solution and here is what I did.
Let me first tell you that there were two middlewares added on the base path, like this:
app // --> this is express.Router()
  .use(jwtMw) // ---> these are middlewares
  .use(otherMw)
The jwtMw is the one that checks the authentication of the user, and since even the introspection query goes through this middleware, it used to authenticate that as well. So, after some research I found this solution:
jwtMw.js
function addJWTMeta (req, res, next) {
  // we could first check for null/undefined, and match the introspection
  // query with a better condition, e.g. ignoring case
  if (req.body.query.trim().startsWith('query Introspection')) {
    req.isIntrospection = true
    return next()
  }
  ...
  ...
  // ---> extra code to do authentication of the USER based on the Authorization header
}
export default addJWTMeta
otherMw.js
function otherMw (req, res, next) {
  if (req.isIntrospection) return next()
  ...
  ...
  // ---> extra code to do some other context creation
}
export default otherMw
So here in jwtMw.js we check whether the query is introspection; if it is, we just add a flag to the req object and move forward. Any middleware after jwtMw.js that wants to check for the introspection query simply checks that flag (isIntrospection, in this case) and, if it is present and true, moves on. This pattern scales to every middleware: if req.isIntrospection is set, just carry on, otherwise do the actual processing.
Happy coding :)

How to properly use dataloaders across multiple users?

In the DataLoader documentation on caching per request, the following example is given to show how to use data loaders with Express.
function createLoaders(authToken) {
  return {
    users: new DataLoader(ids => genUsers(authToken, ids)),
  }
}

var app = express()

app.get('/', function(req, res) {
  var authToken = authenticateUser(req)
  var loaders = createLoaders(authToken)
  res.send(renderPage(req, loaders))
})

app.listen()
I'm confused about passing authToken to the genUsers batch function. How should a batch function be composed so that it uses authToken and returns the results corresponding to each requested user?
What the example is saying is that genUsers should use the credentials of the current request's user (identified by their auth token) to ensure they can only fetch data that they're allowed to see. Essentially, the loader gets initialised at the start of the request, is discarded at the end, and is never recycled between requests.
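A minimal sketch of such a batch function (fetchUsersByIds is a hypothetical call to your own backend; the re-ordering step is required by DataLoader's contract that results line up with the input ids):
async function genUsers(authToken, ids) {
  // one backend call for the whole batch, authorized as the current user;
  // fetchUsersByIds is a placeholder for your own data source
  const users = await fetchUsersByIds(authToken, ids);
  // re-order the results to match the input ids, using null
  // for any id this user is not allowed to see
  const byId = new Map(users.map(user => [user.id, user]));
  return ids.map(id => byId.get(id) || null);
}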

Error: request entity too large in graphql services of node

I am working on a Node-based GraphQL project and am trying to send a base64-format image to the server.
I am using the body-parser module, configured as below.
app.use(bodyparser.text({type: 'application/graphql'}));
app.use(bodyparser.json({limit: '50mb'}));
app.use(bodyparser.urlencoded({limit: '50mb', extended: true}));
I am able to send base64 images through direct Node services, but when it comes to the GraphQL services, it throws this error:
Error: request entity too large
For the sake of helping others who are using the graphql-yoga npm package: the syntax for adding body-parser options is quite different, and I spent hours debugging and trying to make it work, so I am adding the code here for reference:
const { GraphQLServer } = require("graphql-yoga");

const server = new GraphQLServer({
  schema, // your schema
  context, // your context
});

const options = {
  port: 1234,
  bodyParserOptions: { limit: "10mb", type: "application/json" },
};

server.start(options, () =>
  console.log("Server is running \\m/")
);
Are you using the express-graphql npm package?
If so, then the issue is likely this line:
https://github.com/graphql/express-graphql/blob/master/src/parseBody.js#L112
As you can see, this sets a max size of 100kb for the request.
We ran into the same issue and fixed it by forking the repository and manually increasing the max request size.
This might be a nice opportunity to contribute to the package, though!
It would be nice if the limit were configurable.
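As a possible alternative to forking (this relies on express-graphql reusing a req.body that earlier middleware has already parsed; treat it as a sketch to verify against your version), you can parse the JSON yourself with a higher limit before the GraphQL handler runs:
const express = require('express');
const bodyParser = require('body-parser');
const graphqlHTTP = require('express-graphql');

const app = express();
// body-parser populates req.body with the larger limit, so express-graphql
// picks that up instead of applying its own 100kb cap
app.use('/graphql', bodyParser.json({ limit: '50mb' }), graphqlHTTP({ schema: mySchema })); // mySchema is a placeholder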
For me, I specified the uploads option and it worked:
const options = {
  uploads: {
    maxFieldSize: 1000000000,
    maxFileSize: 1000000000
  }
}
body-parser only works on req.body, and in your case you're handling a multipart form, which it doesn't parse.
