I want to make a progress bar that tells the user where my backend is in the process of fetching data from the API. But it seems like every time I send a response, the request ends. How can I avoid this, and what should I Google to learn more? I didn't find anything online.
React:
const { data, error, isError, isLoading } = useQuery('posts', fetchPosts);
if (isLoading) { return <p>Loading...</p>; }
return data && <p>{data}</p>;
Express:
app.get("api/v1/testData", async (req, res) => {
try {
const info = req.query.info
const sortByThis = req.query.sortBy;
if (info) {
let yourMessage = "Getting Data";
res.status(200).send(yourMessage);
const valueArray = await fetchData(info);
yourMessage = "Data retrived, now sorting";
res.status(200).send(yourMessage);
const sortedArray = valueArray.filter((item) => item.value === sortByThis);
yourMessage = "Sorting Done now creating geojson";
res.status(200).send(yourMessage);
createGeoJson(sortedArray)
res.status(200).send(geojson);
}
else { res.status(400) }
} catch (err) { console.log(err) res.status(500).send }
}
You can only send one response per request in HTTP.
If you want status updates over HTTP, the client needs to poll the server, i.e. repeatedly request status updates from the server. Keep in mind, though, that every request needs to be processed on the server side and takes resources away that are then not available for other (more important) requests from other clients. So don't poll too frequently.
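For illustration, a minimal client-side polling sketch, assuming a hypothetical status endpoint that reports the current step (the endpoint, the job id, and updateProgressBar are not from the original post):

// Hypothetical: GET /api/v1/testData/status returns JSON like { step, done }.
async function pollStatus(jobId) {
  const res = await fetch(`/api/v1/testData/status?job=${jobId}`);
  const { step, done } = await res.json();
  updateProgressBar(step); // your own UI helper
  if (!done) setTimeout(() => pollStatus(jobId), 2000); // poll sparingly
}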
If you want to support long-running operations over HTTP, have a look at the following API design pattern.
Alternatively, you could use a WebSocket connection to push updates from the server to the client. I assume your computation on the backend will not take minutes and you want to update the client in real time, so WebSockets will probably be the best option for you. Once established, a WebSocket connection has considerably less overhead than sending full HTTP requests/responses between client and server.
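As a rough sketch of that approach with Socket.IO v4 (the progress event, the socketId query parameter, and the wiring are assumptions, not part of the original code; fetchData and createGeoJson are from the question):

const { Server } = require('socket.io');
const io = new Server(httpServer); // attach to your existing HTTP server

app.get('/api/v1/testData', async (req, res) => {
  // The client sends its socket id so we know where to push progress events.
  const socket = io.sockets.sockets.get(req.query.socketId);
  const progress = (msg) => socket && socket.emit('progress', msg);

  progress('Getting data');
  const valueArray = await fetchData(req.query.info);
  progress('Data retrieved, now sorting');
  const sortedArray = valueArray.filter((item) => item.value === req.query.sortBy);
  progress('Sorting done, now creating geojson');
  const geojson = createGeoJson(sortedArray);

  // The single HTTP response carries the final payload.
  res.status(200).json(geojson);
});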
Have a look at this thread, which discusses the above-mentioned and other possibilities.
Related
I'm using socket.io, and I used rate-limiter-flexible and limiter to limit the request rate, but I noticed the route /socket.io isn't receiving 429 from either; in fact, I can't log any requests coming through this route using app.use('/socket.io').
I think socket.io is doing some handling on this route under the hood, is that correct? If so, how can I make sure requests to /socket.io also receive 429 after the limit is reached?
Rate limiter on Connect:
this.io.on(this.SocketConstants.CONNECTION, async (client) => {
  this.client = client;
  try {
    await this.rateLimiter.consume(client.handshake.address);
  } catch (rejRes) {
    // On flood: reject and disconnect the offending client
    client.error('Too many requests');
    client.disconnect(true);
  }
});
Rate limiter on http server:
class RateLimiting {
  constructor() {
    this.limiter = new RateLimiter(2, 'minute', true);
    this.index = this.index.bind(this);
  }

  index(req, res, next) {
    try {
      this.limiter.removeTokens(1, function (err, remainingRequests) {
        if (remainingRequests < 1) {
          res.writeHead(429, { 'Content-Type': 'text/plain;charset=UTF-8' });
          res.end('429 Too Many Requests - your IP is being rate limited');
          // res.status(429).json({ message: 'Too many requests' });
        } else {
          next();
        }
      });
    } catch (err) {
      console.log('Error', err);
    }
  }
}
Edit
To anyone that has a similar problem: I ended up trying several different ways to accomplish this, and the easiest were these:
Write your own allowRequest
allowRequest is a pass/fail function that you can use to override the default checkRequest function (reference here):
checkBucket(err, remainingRequests) {
  // Return whether there is still a token left in the bucket
  return remainingRequests >= 1;
}
allowRequest(req, callback) {
  const limit = this.limiter.removeTokens(1, this.checkBucket);
  callback(limit ? null : 'Too many requests', limit);
}
And the server is started as:
this.io = this.socketServer(server, {
  allowRequest: this.allowRequest
});
Pick a different route, use Express to add any middleware needed, and disable the default route
You can accomplish this by setting serveClient to false and setting the path to whatever you need.
this.io = this.socketServer(server, {
  path: '/newSocketRoute',
  serveClient: false
});
This didn't quite work for me, so there's probably something wrong in the way I'm serving the socket files, but this is what it looked like:
app.get("/socket", this.limiter.initialize(), function(req, res) {
if (0 === req.url.indexOf('/newSocketRoute/socket.io.js.map')) {
res.sendFile(join(__dirname, "../../node_modules/socket.io-client/dist/socket.io.js.map"));
} else if (0 === req.url.indexOf('/newSocketRoute/socket.io.js')) {
res.sendFile(join(__dirname, "../../node_modules/socket.io-client/dist/socket.io.js"));
}
});
Socket.io attaches itself to your http server by inserting itself as the first listener to the request event (socket.io code reference here), which means it completely bypasses any Express middleware.
If you're trying to rate limit requests to /socket.io/socket.io.js (the client-side socket.io code), then you could create your own route for that file in Express with a different path, have your client use the Express version of the path, and then disable serving that file through socket.io (there's an option for that).
If you're trying to rate limit incoming socket.io connections, then you may have to modify your rate limiter so it can participate in the socket.io connect event.
Now that I think of it, you could hack the request event listener list just like socket.io does (see the referenced code link above for how it does it) and insert your own rate limiting before socket.io gets to see the request. What socket.io is doing is implementing a poor man's middleware and cutting to the front of the line so that it gets first crack at any incoming http request and can hide the request from others once it has handled it. You could do the same with your rate limiter; then you'd be in an arms race to see who gets first crack. Personally, I'd probably just hook the connect event and kill the connection there if rate-limiting rules are being violated. But you could hack away and get in front of the socket.io code.
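A rough sketch of that listener-reordering hack, assuming server is the http.Server socket.io is attached to and isRateLimited is your own check (both names are illustrative):

// Grab the current 'request' listeners (socket.io's included), then re-register
// them behind our own handler so the rate limiter runs first.
const existing = server.listeners('request').slice();
server.removeAllListeners('request');
server.on('request', (req, res) => {
  if (req.url.startsWith('/socket.io/') && isRateLimited(req)) {
    res.writeHead(429, { 'Content-Type': 'text/plain' });
    res.end('429 Too Many Requests');
    return;
  }
  for (const listener of existing) listener.call(server, req, res);
});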
My node app is simple: for a request querying the total number of customers in a MySQL database, I make a blocking call using await to wait for the query to finish.
The problem is it can only handle ~75 requests per second, which is too low.
So I tried to return 200 as soon as I get the request, telling the caller I received it, and then return the query result when it's ready (the MySQL query can take a while).
But it isn't working for me yet. This is the code:
router:
router.get('', controller.getCustomers);
controller:
const getCustomers = (req, res) => {
  try {
    service.getCustomers(res);
    res.write('OK');
    // res.send(200); // this would end the response before the query ends
  } catch (err) {
    res.end(err);
  }
};
service:
const getCustomers = async (res) => {
  const customers = await mysqlPool.query('select * from cusomerTable');
  res.send(customers);
};
This throws: Error: Can't set headers after they are sent to the client.
How can I fix this?
You can use Server-Sent Events. You can send an event as soon as you get the request.
When the result is ready, you can send another event. You may find my GitHub repo useful for an SSE sample.
OR
You can also just use res.write() to stream the response. You may find this StackOverflow answer useful.
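A minimal SSE sketch of the first suggestion, assuming Express and that service.getCustomers() returns a promise (route and event names are illustrative):

router.get('/customers', async (req, res) => {
  // SSE headers keep the connection open so we can push multiple events.
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
  });
  res.flushHeaders();
  res.write('event: ack\ndata: "request received"\n\n');
  try {
    const customers = await service.getCustomers();
    res.write(`event: result\ndata: ${JSON.stringify(customers)}\n\n`);
  } catch (err) {
    res.write(`event: error\ndata: ${JSON.stringify(err.message)}\n\n`);
  }
  res.end(); // close the stream once the final event is sent
});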
I'm running into an issue with my http-proxy-middleware setup. I'm using it to proxy requests to another service which, for example, might resize images.
The problem is that multiple clients might call the method multiple times and thus create a stampede on the origin service. I'm now looking into a solution (some services, e.g. Varnish, call this request coalescing) that would call the service once, 'queue' the incoming requests with the same signature until the first is done, and then answer them all in a single go. This is different from caching results, because I want to prevent calling the backend multiple times simultaneously, not necessarily cache the results.
I'm trying to find out whether something like this goes by a different name, or whether I'm missing something that others have already solved somehow... but I can't find anything.
As the use case seems pretty basic for a reverse-proxy setup, I would have expected a lot of hits in my searches, but since the problem space is pretty generic, I'm not getting anything.
Thanks!
A colleague of mine helped me hack together my own answer. It's currently used as an (Express) middleware for specific GET endpoints. It basically hashes the request into a map and starts a new, separate request; concurrent incoming requests with the same hash are queued and answered in that separate request's callback, so the single upstream response is reused. This also means that if the first response is particularly slow, all coalesced requests are too.
This seemed easier than hacking it into http-proxy-middleware, but oh well, it got the job done :)
const axios = require('axios');

// Map of in-flight requests: queryHash -> array of pending Express responses.
const responses = {};

module.exports = (req, res) => {
  const queryHash = `${req.path}/${JSON.stringify(req.query)}`;
  if (responses[queryHash]) {
    // An identical request is already in flight; just queue this response.
    console.log('re-using request', queryHash);
    responses[queryHash].push(res);
    return;
  }
  console.log('new request', queryHash);
  const axiosConfig = {
    method: req.method,
    url: `[the original backend url]${req.path}`,
    params: req.query,
    headers: {}
  };
  if (req.headers.cookie) {
    axiosConfig.headers.Cookie = req.headers.cookie;
  }
  responses[queryHash] = [res];
  axios.request(axiosConfig).then((axiosRes) => {
    // Answer every coalesced response with the single upstream result.
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.json(axiosRes.data);
    });
    responses[queryHash] = undefined;
  }).catch((err) => {
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.status(500).json(false);
    });
    responses[queryHash] = undefined;
  });
};
I built a simple API endpoint with NodeJS using Sails.js.
When someone accesses my API endpoint, the server starts waiting for data, and whenever new data appears, it broadcasts it using sockets. Each client should receive their own stream of data based on their user input.
var Cap = require('cap').Cap;

collect: function (req, res) {
  var iface = req.param("ip");
  var c = new Cap(),
      device = Cap.findDevice(iface);
  c.on('data', function (myData) {
    sails.sockets.blast('message', { "host": myData });
  });
}
The response never completes (I never send a res.json()). What actually happens is that the browser keeps loading, but the functionality above works.
2 Problems:
I'm trying to subscribe and unsubscribe to this API endpoint from my client (using RxJS). When I subscribe, I start to receive data via sockets, but I can't unsubscribe from the API endpoint (the browser expects the request to be completed).
Each client should subscribe to his own socket room based on the request IP parameter (see updated code). Currently it blasts the message to everyone.
How can I create a stream/service-like API endpoint with Sails.js that will emit new data to each user based on his input?
My goal is to be able to subscribe / unsubscribe to this API endpoint from each client.
Revised Answer
Let's assume your API endpoint is defined in config/routes.js like this:
...
'get /collect': 'SomeController.collectSubscribe',
'delete /collect': 'SomeController.collectUnsubscribe',
Since each Cap instance is tied to one device, we need one instance for each subscription. Instead of using the sails join/leave methods, we keep track of Cap instances in memory and just broadcast to the request socket's id. This works because Sails sockets are subscribed to their own ids by default.
In api/controllers/SomeController.js:
// In order for the `Cap` instances to persist after `collectSubscribe`
// finishes, we store them all in an object, keyed by the socket they were
// created for.
var caps = {/* req.socket.id: <instance of Cap>, */};

module.exports = {
  ...

  collectSubscribe: function (req, res) {
    if (!req.isSocket) return res.badRequest("I need a websocket! Help!");
    if (!!caps[req.socket.id]) return res.badRequest("Dude, you are already subscribed.");
    caps[req.socket.id] = new Cap();
    var c = caps[req.socket.id]; // remember that `c` is a reference to our new `Cap`, not a copy.
    var device = Cap.findDevice(req.param('ip'));
    c.open(device, ...);
    c.on('data', function (myData) {
      sails.sockets.broadcast(req.socket.id, 'message', { host: myData });
    });
    return res.ok();
  },

  collectUnsubscribe: function (req, res) {
    if (!req.isSocket) return res.badRequest("I need a websocket! Help!");
    if (!caps[req.socket.id]) return res.badRequest("I can't unsubscribe you unless you actually subscribe first.");
    caps[req.socket.id].removeAllListeners('data');
    delete caps[req.socket.id];
    return res.ok();
  }
};
Basically, it goes like this: when a browser request triggers collectSubscribe, a new Cap instance listens to the provided IP. When the browser triggers collectUnsubscribe, the server retrieves that Cap instance, tells it to stop listening, and then deletes it.
Production considerations: please be aware that the list of Caps is NOT PERSISTENT (it is stored in memory, not in a DB)! So if your server is turned off and rebooted (due to a lightning storm, etc.), the list will be cleared; but considering that all websocket connections will be dropped anyway, I don't see any need to worry about this.
Old Answer, Kept for Reference
You can use sails.sockets.join(req, room) and sails.sockets.leave(req, room) to manage socket rooms. Essentially you have a room called "collect", and only sockets joined to that room will receive a sails.sockets.broadcast(room, eventName, data).
More info on how to use sails.sockets here.
In api/controllers/SomeController.js:
collectSubscribe: function (req, res) {
  if (!req.isSocket) return res.badRequest();
  sails.sockets.join(req, 'collect');
  return res.ok();
},

collectUnsubscribe: function (req, res) {
  if (!req.isSocket) return res.badRequest();
  sails.sockets.leave(req, 'collect');
  return res.ok();
}
Finally, we need to tell the server to broadcast messages to our 'collect' room.
Note that this only needs to happen once, so you can do it in a file under the config/ directory.
For this example, I'll put it in config/sockets.js:
module.exports = {
// ...
};
c.on('data', function (myData) {
  var eventName = 'message';
  var data = { host: myData };
  sails.sockets.broadcast('collect', eventName, data);
});
I am assuming that c is accessible here; if not, you could define it as sails.c = ... to make it globally accessible.
I have a node.js server A with MongoDB as its database.
There is another remote server B (it doesn't need to be node-based) which exposes an HTTP GET API '/status' and returns either 'FREE' or 'BUSY' as the response.
When a user hits a particular API endpoint in server A (say POST /test), I wish to start polling server B's status API every minute until server B returns 'FREE'. The user doesn't need to wait for server B to return a 'FREE' response (polling B is a background job in server A). Once server A gets a 'FREE' response from B, it shall send an email to the user.
How can this be achieved in server A, keeping in mind that the number of concurrent users can grow large?
I suggest you use Agenda: https://www.npmjs.com/package/agenda
With Agenda you can create recurring schedules under which you can schedule pretty much anything, quite flexibly.
I also suggest the request module for making HTTP GET/POST requests: https://www.npmjs.com/package/request
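A rough sketch of what that could look like with Agenda (the job name, the sendEmail helper, and the MongoDB address are assumptions for illustration):

const Agenda = require('agenda');
const request = require('request');

const agenda = new Agenda({ db: { address: 'mongodb://127.0.0.1/agenda-jobs' } });

agenda.define('poll-server-b', (job, done) => {
  const { userEmail } = job.attrs.data;
  request.get('http://serverB/status', (err, res, body) => {
    if (err) return done(err);
    if (body === 'FREE') {
      sendEmail(userEmail); // your own email helper (an assumption here)
      job.remove(done);     // stop this user's recurring job
    } else {
      done(); // still BUSY; the schedule fires again in a minute
    }
  });
});

// Inside the POST /test handler on server A (sketch):
// agenda.start().then(() =>
//   agenda.every('1 minute', 'poll-server-b', { userEmail: req.body.email }));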
Going from the example in the node.js docs, I'd go with something like the code below. I tested it and it works. BTW, I'm assuming here that the API response is something like {"status":"BUSY"} or {"status":"FREE"}.
const http = require('http');

const poll = {
  pollB: function () {
    http.get('http://serverB/status', (res) => {
      const { statusCode } = res;

      let error;
      if (statusCode !== 200) {
        error = new Error(`Request Failed.\n` +
                          `Status Code: ${statusCode}`);
      }
      if (error) {
        console.error(error.message);
        res.resume();
      } else {
        res.setEncoding('utf8');
        let rawData = '';
        res.on('data', (chunk) => { rawData += chunk; });
        res.on('end', () => {
          try {
            const parsedData = JSON.parse(rawData);
            // The important logic comes here
            if (parsedData.status === 'BUSY') {
              setTimeout(poll.pollB, 10000); // request again in 10 secs
            } else {
              // Call the background process you need to
            }
          } catch (e) {
            console.error(e.message);
          }
        });
      }
    }).on('error', (e) => {
      console.error(`Got error: ${e.message}`);
    });
  }
};

poll.pollB();
You'll probably want to play with this script and get rid of the code you don't need, but that's homework ;)
Update:
For coping with a lot of concurrency in node.js, I'd recommend implementing a cluster or using a framework. Here are some links to start researching the subject (a minimal cluster sketch follows the list):
How to fully utilise server capacity for Node.js Web Apps
How to Create a Node.js Cluster for Speeding Up Your Apps
Node.js v7.10.0 Documentation :: cluster
ActionHero.js :: Fantastic node.js framework for implementing an API, background tasks, cluster using http, sockets, websockets
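As a starting point, a minimal cluster sketch along the lines of the pattern in the Node docs (the port and worker logic are illustrative):

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) { // `cluster.isPrimary` in newer Node versions
  // Fork one worker per CPU core and restart any that die.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a new one`);
    cluster.fork();
  });
} else {
  // Each worker runs its own HTTP server; connections are load-balanced.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}`);
  }).listen(8000);
}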
Use a library like request, superagent, or restify-clients to call server B. I would recommend you avoid polling and instead use a webhook when calling B (assuming you are also authoring B). If you can't change B, then setTimeout can be used to schedule subsequent calls on a 1 second interval.
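A sketch of the webhook alternative, assuming you control server B and that it can POST a notification back when it becomes free (all route names and the sendEmail helper are illustrative, not an established API):

const express = require('express');
const request = require('request');
const app = express();
app.use(express.json());

// Server A: register interest with B once instead of polling.
app.post('/test', (req, res) => {
  request.post({
    url: 'http://serverB/subscribe',
    json: { callbackUrl: 'http://serverA/webhooks/b-status', email: req.body.email },
  }, (err) => {
    if (err) return res.status(502).json({ message: 'Could not reach server B' });
    res.status(202).json({ message: 'You will be emailed when B is FREE' });
  });
});

// Server B calls this exactly once when its status flips to FREE.
app.post('/webhooks/b-status', (req, res) => {
  sendEmail(req.body.email); // your own email helper (an assumption here)
  res.sendStatus(200);
});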