Problems making batch requests in SPFx - SharePoint

Cheers guys and gals.
Having some problems with $batching requests to SharePoint from SPFx.
Some background: the SharePoint structure has one site collection with lots of subsites, and each subsite has a list with the same name. I need to access all of those lists.
A normal SPHttpClient call gives me the URLs of all of the sites. So far so good.
The plan was then to $batch the calls that get the data from the lists. Unfortunately, I only get an answer from one of the calls; the rest of the batched calls give me "InvalidClientQueryException". If I change the order of the calls, it seems like only the first call succeeds.
const spBatchCreationOptions: ISPHttpClientBatchCreationOptions = {
  webUrl: absoluteUrl
};
const spBatch: SPHttpClientBatch = spHttpClient.beginBatch(spBatchCreationOptions);

// Add three calls to the batch
const dan1 = spBatch.get("<endpoint1>", SPHttpClientBatch.configurations.v1);
const dan2 = spBatch.get("<endpoint2>", SPHttpClientBatch.configurations.v1);
const dan3 = spBatch.get("<endpoint3>", SPHttpClientBatch.configurations.v1);

// Execute the batch
spBatch.execute().then(() => {
  dan1.then((res1) => {
    return res1.json().then((res10) => {
      console.log(res10);
    });
  });
  dan2.then((res2) => {
    return res2.json().then((res20) => {
      console.log(res20);
    });
  });
  dan3.then((res3) => {
    return res3.json().then((res30) => {
      console.log(res30);
    });
  });
});
So in this case only the dan1 call succeeds. However, if I change the second call to use an endpoint identical to the first call's, they both succeed.
I can't really wrap my head around this, so if someone has any input it would be much appreciated.
//Dan

Make sure that all endpoints in a single batch target the same site: you cannot mix different sites within one batch. If you do, only the first call(s), i.e. the ones from the same site as the batch, will succeed.
To overcome that you might switch to a search call if you only need to retrieve information, since search can be queried over one and the same site URL.
See my blog post on that for further information.
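A minimal sketch of what such a search call could look like from SPFx, assuming SPHttpClient is imported from @microsoft/sp-http; the KQL query text, the selected managed properties and the row limit are placeholders for illustration, not part of the original answer:
// Hypothetical: query list items across all subsites through the Search REST API,
// so every call stays on the same site URL and no batch is needed.
const searchUrl = `${absoluteUrl}/_api/search/query?` +
  `querytext='contentclass:STS_ListItem'&selectproperties='Title,Path,SPWebUrl'&rowlimit=500`;

spHttpClient.get(searchUrl, SPHttpClient.configurations.v1)
  .then((response) => response.json())
  .then((result) => {
    // With the default SPHttpClient headers the rows are usually found here;
    // verify the exact shape of the response for your tenant.
    const rows = result.PrimaryQueryResult.RelevantResults.Table.Rows;
    console.log(rows);
  });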

Related

Chain of endpoints in Node and Express: how to prevent one of them from stalling the whole series?

On one page I have to get information from 8 different endpoints. 2 of them are outside of my application and sometimes they cause a delay in displaying data. The web browser waits until all the data is processed. Since they're outside of my app I can't refactor them to make them faster, but I still need to show the information they provide. In addition, sometimes one of them returns nothing; in that case I show default data to the user. The wait is a problem from the user experience perspective.
I'm using promises to call these endpoints. Below is part of the code snippet that I am using.
The code is working fine. The issue is the delay.
First, here is the array that contains all the services I need to call:
var requests = [{
  // 0
  url: urlLocalApi + '/endpointURL_1/',
  headers: {
    'headers': 'apitoken'
  },
}, {
  // 1
  url: urlLocalApi + '/endpointURL_2/',
  headers: {
    'headers': 'apitoken'
  },
}];
The array is built by this method:
const requests = homePageFunctions.createRequest();
Now, here is how the data is processed. I am using both 'request-promise' and 'bluebird', plus a custom logger to check that everything goes fine.
const Promise = require("bluebird");
const request = require('request-promise');

var viewsHelper = {
  getPageData: function (requests) {
    return Promise.map(requests, function (obj) {
      return request(obj).then(function (body) {
        AppLogger.log(`Endpoint parsed`, statusLogger.infodate);
        return JSON.parse(body);
      });
    });
  }
};

module.exports = viewsHelper;
How do I call this?
viewsHelper.getPageData(requests)
  .then(results => {
    var output = [];
    for (var i = 0; i < results.length; i++) {
      output.push(results[i]);
    }
    // render data
    res.render('homepage/index', output);
    AppLogger.log(`PageData is rendered`, statusLogger.infodate);
  })
  .catch(err => {
    console.log(err);
  });
Note that each index of the "output" array holds the data returned by one endpoint.
The problem here is:
If any of the endpoints takes long, the entire chain waits, even though the other requests have already completed. The web page stays blank in the meantime.
How to prevent this behavior?
That is an interesting question, but I have some questions of my own in order to answer it effectively.
You have a Node server and a client (HTML/JS).
You have 8 endpoints, 2 of which are slow because you don't have control over them.
Is the client (page) aware of the 8 endpoints, i.e. does it make 8 calls every time you reload the page?
OR
Does the page make one request to your Node.js server, which then calls the 8 endpoints itself?
If it is 1, then lazy loading will work easily for you, since the page is making the requests.
If it is 2, lazy loading will only work on the server side; the client will still be blocked because it doesn't know (or care) how you load your data. The page made one request and it is blocked waiting for that request.
Obviously each approach has pros and cons.
One way you can solve this is to asynchronously call those endpoints on the Node side and cache the results, so that when the page makes its single request you already have the data ready, as sketched below.
Again, we know very little about the situation and there are many ways to solve this.
Hope this helps
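One hedged sketch of that caching idea, reusing the requests array and request-promise from the question; the refresh interval, the fallback value and the route are illustrative assumptions:
// Hypothetical: prefetch the endpoints in the background and serve the cached copy,
// so the page request never waits on the slow external services.
const request = require('request-promise');

let cache = []; // last successful result per endpoint

function refreshCache(requests) {
  return Promise.all(requests.map((obj, i) =>
    request(obj)
      .then(body => JSON.parse(body))
      .catch(() => cache[i] || {}) // keep the previous value, or a default, on failure
  )).then(results => { cache = results; });
}

// Warm the cache at startup, then refresh it periodically (interval chosen arbitrarily).
refreshCache(requests);
setInterval(() => refreshCache(requests), 60 * 1000);

app.get('/', (req, res) => {
  // answer immediately from the cache instead of calling the endpoints per request
  res.render('homepage/index', cache);
});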

What is the reason for using GET instead of POST in this instance?

I'm walking through the JavaScript demos of pg-promise-demo and I have a question about the route /api/users/:name.
Running this locally works: the user is entered into the database. But is there a reason this wouldn't be a POST? Is there some sort of advantage to creating a user in the database using GET?
// index.js
// --------
app.get('/api/users/:name', async (req, res) => {
  try {
    const data = await db.task('add-user', async (t) => {
      const user = await t.users.findByName(req.params.name);
      return user || t.users.add(req.params.name);
    });
    res.json(data); // respond with the existing or newly added user
  } catch (err) {
    // do something with error
  }
});
For brevity I'll omit the code for t.users.findByName(name) and t.users.add(name) but they use QueryFile to execute a SQL command.
EDIT: Update link to pg-promise-demo.
The reason is explained right at the top of that file:
IMPORTANT:
Do not re-use the HTTP-service part of the code from here!
It is an over-simplified HTTP service with just GET handlers, because:
This demo is to be tested by typing URL-s manually in the browser;
The focus here is on a proper database layer only, not an HTTP service.
I think it is pretty clear that you are not supposed to follow the HTTP implementation of the demo, only its database layer. The demo's purpose is to teach you how to organize the database layer in a large application, not how to develop HTTP services.
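For completeness, a hedged sketch of how the same operation might look as a POST route in a real service; the route path, status codes and the JSON body parser are illustrative assumptions, not part of the demo:
// Hypothetical POST version: the user name travels in the request body, not the URL.
app.use(express.json()); // needed so req.body is populated

app.post('/api/users', async (req, res) => {
  try {
    const data = await db.task('add-user', async (t) => {
      const user = await t.users.findByName(req.body.name);
      return user || t.users.add(req.body.name);
    });
    res.status(201).json(data);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});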

Node.js proxy request coalescing

I'm running into an issue with my http-proxy-middleware setup. I'm using it to proxy requests to another service which, for example, might resize images.
The problem is that multiple clients might call the same method at the same time and thus create a stampede on the original service. I'm now looking for a solution (what some services such as Varnish call request coalescing) that would call the service once, wait for the response, 'queue' the incoming requests with the same signature until the first is done, and then answer them all in one go. This is different from caching results, because I want to prevent calling the backend multiple times simultaneously, not necessarily cache the results.
I'm trying to find out whether something like that goes by a different name, or whether I'm missing something that others have already solved somehow... but I can't find anything...
As the use case seems pretty 'basic' for a reverse-proxy setup, I would have expected a lot of hits on my searches, but since the problem space is pretty generic I'm not getting anything...
Thanks!
A colleague of mine helped me hack together my own answer. It's currently used as an (Express) middleware for specific GET endpoints: it hashes the request into a map and starts a single separate backend request. Concurrent incoming requests with the same hash are collected and answered in the callback of that single request, so it is reused. This also means that if the first response is particularly slow, all coalesced requests are too.
This seemed easier than hacking it into http-proxy-middleware, but oh well, it got the job done :)
const axios = require('axios');
const responses = {};

module.exports = (req, res) => {
  const queryHash = `${req.path}/${JSON.stringify(req.query)}`;
  if (responses[queryHash]) {
    console.log('re-using request', queryHash);
    responses[queryHash].push(res);
    return;
  }
  console.log('new request', queryHash);
  const axiosConfig = {
    method: req.method,
    url: `[the original backend url]${req.path}`,
    params: req.query,
    headers: {}
  };
  if (req.headers.cookie) {
    axiosConfig.headers.Cookie = req.headers.cookie;
  }
  responses[queryHash] = [res];
  axios.request(axiosConfig).then((axiosRes) => {
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.json(axiosRes.data);
    });
    responses[queryHash] = undefined;
  }).catch((err) => {
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.status(500).json(false);
    });
    responses[queryHash] = undefined;
  });
};

Should I return an array or data one by one in Mongoose

I have a simple iOS app, a questionnaire app: whenever the user taps play, it sends a request to a Node.js/Express server.
After the user picks an answer, the app moves on to the next question.
I'm not sure which method to use to fetch the question(s):
Fetch all the data at once and present it to the user - an array of questions.
Fetch the data one by one as the user progresses to the next question - one question per call.
API examples
// Fetch all the data at once
app.get('/api/questions', (req, res, next) => {
  Question.find({}, (err, questions) => {
    res.json(questions);
  });
});

// Fetch the data one by one
app.get('/api/questions/:id', (req, res, next) => {
  Question.findOne({ _id: req.params.id }, (err, question) => {
    res.json(question);
  });
});
The problem with approach number 1 is that, say there are 200 questions, wouldn't it be slow for MongoDB to fetch them all at once, and possibly slow over the network as well?
The problem with approach number 2 is that I can't quite picture how to do it: every question is independent, so triggering the next API call feels awkward unless there is a counter or level stored with the questions in MongoDB.
Just for the sake of clarity, this is the question database design in Mongoose
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const QuestionSchema = new Schema({
  question: String,
  choice_1: String,
  choice_2: String,
  choice_3: String,
  choice_4: String,
  answer: String
});
I'd use Dave's approach, but I'll go a bit more into detail here.
In your app, create an array that will contain the questions. Also store a value that tracks which question the user is currently on; call it index, for example. You then have the following pseudocode:
index = 0
questions = []
Now that you have this, as soon as the user starts up the app, load 10 questions (see Dave's answer, use MongoDB's skip and limit for this), then add them to the array. Serve questions[index] to your user. As soon as the index reaches 8 (= 9th question), load 10 more questions via your API and add them to the array. This way, you will always have questions available for the user.
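A minimal JavaScript sketch of that logic (the actual app is iOS, so read this as pseudocode for the same idea); it assumes a paginated /api/questions?pageNumber=&pageSize= endpoint like the one described in the next answer:
// Hypothetical client-side buffer that always stays a few questions ahead.
let index = 0;
let pageNumber = 0;
const pageSize = 10;
let questions = [];

async function loadNextPage() {
  const res = await fetch(`/api/questions?pageNumber=${pageNumber}&pageSize=${pageSize}`);
  questions = questions.concat(await res.json());
  pageNumber += 1;
}

async function nextQuestion() {
  // when the user is 2 questions from the end of the buffer, fetch 10 more
  if (index >= questions.length - 2) {
    await loadNextPage();
  }
  return questions[index++];
}

// Usage: load the first page at startup, then call nextQuestion() after each answer.
loadNextPage().then(() => nextQuestion().then(q => console.log(q.question)));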
Very good question. I guess the answer depends on your future plans for this app.
If you are planning to have 500 questions, then getting them one by one will require 500 API calls, which is not always the best option. On the other hand, fetching all of them at once will delay the response, depending on the size of each object.
So my suggestion is to use pagination: bring 10 results, and when the user reaches the 8th question, update the list with the next 10.
This is a common practice among mobile developers, and it also gives you the flexibility to pick the next questions based on the user's previous responses, like adaptive tests do.
EDIT
You can add pageNumber & pageSize query parameters to your request for fetching questions from the server, something like this:
myquestionbank.com/api/questions?pageNumber=10&pageSize=2
Receive these parameters on the server:
var pageOptions = {
  pageNumber: req.query.pageNumber || 0,
  pageSize: req.query.pageSize || 10
};
And while querying your database, provide these additional parameters:
Question.find()
  .skip(pageOptions.pageNumber * pageOptions.pageSize)
  .limit(pageOptions.pageSize)
  .exec(function (err, questions) {
    if (err) {
      res.status(500).json(err);
      return;
    }
    res.status(200).json(questions);
  });
Note: start your pageNumber at zero (0). It's not mandatory, but that's the convention.
The skip() method allows you to skip the first n results. Consider the first request: pageNumber will be zero, so the product (pageOptions.pageNumber * pageOptions.pageSize) will be zero and no records will be skipped.
On the next request (pageNumber=1) the product will be 10, so it will skip the first 10 results, which were already processed.
The limit() method limits the number of records returned in the result.
Remember that you'll need to update the pageNumber variable with each request (you can vary the limit too, but it is advisable to keep it the same across requests).
So all you have to do is: as soon as the user reaches the second-to-last question, request 10 (pageSize) more questions from the server and append them to your array.
You're right, in my opinion the first option should never be used either. Fetching that much data is wasteful if it isn't used, or might not be used, in that context.
What you can do is expose a new API call:
app.get('/api/questions/getOneRandom', (req, res, next) => {
  Question.count({}, function (err, count) {
    console.log("Number of questions:", count);
    // pick a random offset in [0, count - 1]
    var random = Math.floor(Math.random() * count);
    // now fetch a single question at that offset
    Question.find({}, {}, { limit: 1, skip: random }, (err, questions) => {
      res.json(questions);
    });
  });
});
The skip: random makes sure that a random question is fetched each time. This is just the basic idea of how to fetch a random question from the full set. You can add further logic to make sure the user doesn't get a question they have already answered in previous steps, as sketched below.
Hope this helps :)
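A small sketch of that extra logic, assuming the client keeps track of already answered question ids and sends them along; the query parameter and route name are illustrative assumptions:
// Hypothetical: exclude already answered questions before picking one at random.
app.get('/api/questions/getOneUnanswered', (req, res, next) => {
  // e.g. called as /api/questions/getOneUnanswered?answered=id1,id2
  const answeredIds = (req.query.answered || '').split(',').filter(Boolean);
  const filter = { _id: { $nin: answeredIds } };

  Question.count(filter, (err, count) => {
    if (err || count === 0) {
      return res.status(404).json({ message: 'No unanswered questions left' });
    }
    const random = Math.floor(Math.random() * count);
    Question.find(filter, {}, { limit: 1, skip: random }, (err, questions) => {
      res.json(questions);
    });
  });
});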
You can use the concept of limit and skip in MongoDB.
When you hit the API for the first time you can have limit=20 and skip=0, and then increase the skip count every time you call that API again.
1st time => limit=20, skip=0
When you click next => limit=20, skip=20, and so on.
app.get('/api/questions', (req, res, next) => {
  Question.find({}, {}, { limit: 20, skip: 0 }, (err, questions) => {
    res.json(questions);
  });
});

Reject two-in-a-row requests (Node.js)

Sometimes we need to prevent a user from executing a repeat request while the first request has not yet finished. For example: we want to register a user with some external service, and only after that store them in our database with the external id. I would like to have a service where I can mark certain routes as protected.
I have solved this by setting a flag when the request starts and removing it after the request has completed. Anyway, I'm looking for your suggestions, guys.
I think you need to build this into the route handlers yourself:
var _ = require('lodash');

var inProgress = [];

var handleRegisterRoute = function create(req, res, next) {
  var id = req.user.id;
  var found = _.find(inProgress, function (item) {
    return item === id; // comparison, not assignment
  });
  if (found) {
    res.send('Dont think twice, its alright');
    return;
  } else {
    inProgress.push(id);
    // assuming completeRegister() returns a promise; release the flag once it finishes
    completeRegister().then(function () {
      inProgress = _.without(inProgress, id);
      res.send();
    });
    return;
  }
};
Again, this is pseudocode - just the gist of what I would write. You may need to store the "in progress" flags in a better data store that is reachable by your entire server farm - some sort of DB.
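As a hedged sketch of that shared-store idea, here is what the same guard could look like backed by Redis, so every server in the farm sees the same flags; the key name, TTL and client setup are assumptions, not part of the original answer:
// Hypothetical sketch using node-redis v4: SET with NX acts as an atomic
// "only if not already in progress" check shared across all servers.
const { createClient } = require('redis');
const redis = createClient();
redis.connect(); // in real code, await this once at startup

async function handleRegisterRoute(req, res, next) {
  const key = `register-in-progress:${req.user.id}`;
  // NX: set only if the key does not exist; EX: auto-expire after 30s as a safety net
  const acquired = await redis.set(key, '1', { NX: true, EX: 30 });
  if (!acquired) {
    res.status(409).send('Registration already in progress');
    return;
  }
  try {
    await completeRegister();
    res.send();
  } finally {
    await redis.del(key);
  }
}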
