I am quite new to web development, and I have a problem that I have no idea how to solve, or even what to google. The web app is built with Node.js and MySQL.
So let me explain the situation.
On my website you can create and delete posts.
On "/create", the user enters contents for the post.
Then the user clicks the "submit" button, which is routered to "/create_process"(where the data is actually saved in database)
Occasionally there is some delay in loading "/create_process". So the user keeps refreshing while loading. -> Here is the problem. Everytime the user refreshes at this stage same inputs are sent again and again. The result is multiple posts with exactly same contents.
I am sure that there must be a way to block such duplicate submissions.
You can apply a throttle function to whatever the user is clicking. Example:
const throttle = (func, limit) => {
  let inThrottle;
  return function () {
    const args = arguments;
    const context = this;
    if (!inThrottle) {
      func.apply(context, args);
      inThrottle = true;
      setTimeout(() => (inThrottle = false), limit);
    }
  };
};
(from a Medium article)
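For example, a minimal usage sketch for the create form (the form id "create-form" and the 3-second limit are assumptions, not from the question):

// grab the form and build a throttled submit (the id is hypothetical)
const form = document.getElementById('create-form');
const submitOnce = throttle(() => form.submit(), 3000);

form.addEventListener('submit', (event) => {
  // always cancel the default submission, then let the throttled
  // version through at most once every 3 seconds
  event.preventDefault();
  submitOnce();
});

Note that form.submit() called from code does not fire the submit event again, so this does not loop.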
So here's my scenario: I have a client page where anyone can fill out and submit a form. The form data is stored in a database. There is a separate admin PC open on an admin page, where every new form entry is displayed. How do I update the admin front end every time a new form is submitted, without the admin refreshing the page or manually re-hitting the API?
I am using React with Express and MongoDB.
You can use TanStack Query (also called React Query). Its useQuery hook (a data-fetching method) has a built-in option called refetchInterval, which makes it automatically refetch at whatever interval you want, like:
const { data, isLoading, isFetching } = useQuery(
  ['random-data'],
  async () => {
    const { data } = await axios.get(
      'https://random-data-api.com/api/v1/random_data'
    );
    return data;
  },
  {
    // refetch data every second
    refetchInterval: 1000,
  }
);
You can check the TanStack Query docs for more.
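Since isFetching is already destructured above, you could also use it to show a small "updating" indicator during each background refetch. A minimal sketch (the component name and markup are illustrative only, not from the library):

import { useQuery } from '@tanstack/react-query';
import axios from 'axios';

function RandomData() {
  const { data, isLoading, isFetching } = useQuery(
    ['random-data'],
    async () => {
      const { data } = await axios.get(
        'https://random-data-api.com/api/v1/random_data'
      );
      return data;
    },
    { refetchInterval: 1000 }
  );

  // isLoading covers the initial load only
  if (isLoading) return <p>Loading...</p>;

  return (
    <div>
      {/* isFetching is true while a background refetch is in flight */}
      {isFetching && <span>updating...</span>}
      <pre>{JSON.stringify(data, null, 2)}</pre>
    </div>
  );
}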
I'm asking this question because I don't know what to look for right now, and my googling hasn't been great so far.
I am making a Node.js/Express/SQL app that scrapes a website. It takes 30 to 120 seconds to scrape a whole category. How do I make that function run in the background without blocking the website? The frontend template engine is EJS. If it's not possible with EJS, which framework or library should I use instead? I imagine it working like this:
- User goes to /scrape
- Chooses a category and sends it to the server by clicking a button
- Some container on /scrape gets greyed out with a rotating spinner, a percentage, or something similar
- The user can freely leave /scrape and click around the website, or just stay on /scrape waiting for the result
- When the user comes back to /scrape, the results are there; or if he stayed, the results show up with or without reloading the page
A full response to these questions would be very helpful, but even just keywords for me to look up would also help a lot. Sorry for my bad English.
For your case you could use Redis, or just store the scraped data directly in Node.js in a data structure you like (in my opinion, because of the category lookup, hashmaps (JS objects) are the best fit here). The process would then look like this:
- User goes to /scrape and selects a category
- The backend checks whether that category was already scraped (e.g. looks for the data in the hashmap with the category name as key)
- If the data exists (the key is defined), send it to the user. Otherwise (key == undefined), send the user a message that the data is being scraped and run the scrape function in the background. The scrape function scrapes the data and, when done, pushes it into the hashmap under the category key. To avoid the same category being scraped twice at the same time, you could add a "pending" property to the hashmap entry. So when the user accesses the /scrape route, you check the hashmap for the category key: if it exists and pending is false, send the data; if it exists and pending is true, send a wait alert; if the key doesn't exist, start the scrape function and send a wait alert.
Additionally, to make the whole thing "live", you could use socket.io (https://socket.io/) to implement WebSockets. You could then push the scraped data to the user without the user having to reload the page to check whether the scrape process is done.
I made a little example that doesn't implement actual scraping, but should make the logic here a bit easier to understand. I also added some explanation to the code in the form of comments.
const express = require("express");
const app = express();

// the data hashmap
const data = {};

// scrape function
const scrape = async (id) => {
  // set pending to true to prevent multiple scrapes of the same category
  data[id] = { pending: true, data: {} };
  // this would be your scrape function; I used a promise here that
  // resolves after 5 seconds with a random number, just for simplicity
  const a = await new Promise((res, rej) => {
    setTimeout(() => { res(Math.floor(Math.random() * 1000)); }, 5000);
  });
  // once the data is scraped, set pending to false and add the data
  data[id].pending = false;
  data[id].data = { id: a };
};

// "scrape" route
app.get("/:id", async (req, res) => {
  const { id } = req.params; // id would represent the category
  // check if the id (category) is not in the hashmap yet; if not,
  // start the scrape process and send a wait alert
  if (data[id] == undefined) {
    scrape(id);
    res.send("scraping...");
  // if the data is already being scraped, send a wait alert;
  // the pending property prevents multiple people triggering
  // the scrape of the same category
  } else if (data[id].pending == true) {
    res.send("still scraping...");
  // lastly, if the data is defined and not pending, just send it
  } else {
    res.send(data[id].data);
  }
});

// to test this, go to the root with any id (string, number, whatever,
// e.g. /1337 or /helloworld), wait for 5 seconds (or leave and come
// back after 5 seconds), refresh the page and you can see the random
// number. If you now go to another route (e.g. /test) and go back to
// the last one, you can still see the data; and if you again wait for
// 5 seconds and then go back to /test, you can see the data there too.
// You can also open multiple tabs at the same time, which shows the
// scraping is asynchronous, so you don't have to wait for one category
// to be scraped before scraping the next
app.listen(5000);
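To add the "live" socket.io part mentioned above, a minimal sketch could look like this (the "scrape-done" event name and the client snippet are my assumptions, not part of the example):

const httpServer = require("http").createServer(app);
const io = require("socket.io")(httpServer);

// inside scrape(), after data[id] is filled, you could notify clients:
//   io.emit("scrape-done", { id, result: data[id].data });

// client side (in the served page):
//   const socket = io();
//   socket.on("scrape-done", ({ id, result }) => {
//     // render the result without reloading the page
//   });

// with socket.io you listen on the http server instead of app.listen(5000)
httpServer.listen(5000);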
On one page I have to get information from 8 different endpoints. Two of them are outside of my application, and sometimes they cause a delay in displaying the data: the web browser waits until the data is processed. Since they're outside of my app, I can't refactor them to make them faster, but I still need to show the information they provide. In addition, sometimes one of them returns nothing; if so, I show default data to the user. The waiting time hurts the user experience.
I'm using promises to call these endpoints. Below is part of the code snippet that I am using.
The code is working fine; the issue is the delay.
First, here is the array that contains all the services I need to call:
var requests = [{
  // 0
  url: urlLocalApi + '/endpointURL_1/',
  headers: {
    'headers': 'apitoken'
  },
}, {
  // 1
  url: urlLocalApi + '/endpointURL_2/',
  headers: {
    'headers': 'apitoken'
  },
}];
The creation of this array is encapsulated in this method:
const requests = homePageFunctions.createRequest();
Now, here is how the data is processed. I am using both 'request-promise' and 'bluebird', plus a custom logger to check that everything goes fine.
const Promise = require("bluebird");
const request = require('request-promise');

var viewsHelper = {
  getPageData: function (requests) {
    return Promise.map(requests, function (obj) {
      return request(obj).then(function (body) {
        AppLogger.log(`Endpoint parsed`, statusLogger.infodate);
        return JSON.parse(body);
      });
    });
  }
};
module.exports = viewsHelper;
And here is how I call it:
viewsHelper.getPageData(requests)
  .then(results => {
    var output = [];
    for (var i = 0; i < results.length; i++) {
      output.push(results[i]);
    }
    // render data
    res.render('homepage/index', output);
    AppLogger.log(`PageData is rendered`, statusLogger.infodate);
  })
  .catch(err => {
    console.log(err);
  });
Note that each index of the "output" array contains the data returned by the corresponding endpoint.
The problem here is: if any of the endpoints takes long, the entire chain waits, even for the requests that have already completed. The web page stays blank while waiting.
How do I prevent this behavior?
That is an interesting question, but I have some questions of my own in order to answer it effectively.
You have a Node server and a client (HTML/JS).
You have 8 endpoints, 2 of which are slow because you don't have control over them.
1. Is the client (page) aware of the 8 endpoints, i.e. do you make 8 calls every time you reload the page?
OR
2. Does the page make one request to your Node.js server, which then synchronously calls the 8 endpoints?
If it is 1, then lazy loading will work easily for you, since the page is making the requests.
If it is 2, lazy loading will only work on the server side; the client will still be blocked, because it doesn't know (or care) how you load your data. The page made one request, and it is blocked waiting for that request.
Obviously each method has its pros and cons.
One way you can solve this is to asynchronously call those endpoints on the Node side and cache the results, so that when the page makes its one request, you already have the data ready.
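A minimal sketch of that caching idea, reusing the requests array and request-promise from the question (the 30-second refresh interval, the DEFAULT_DATA fallback, and the '/' route are my assumptions; Promise.allSettled needs Node 12.9+):

const request = require('request-promise');

let cachedResults = [];   // last known results for all endpoints
const DEFAULT_DATA = {};  // fallback when an endpoint fails or returns nothing

// refresh the cache in the background, independent of page requests
function refreshCache(requests) {
  return Promise.allSettled(
    requests.map(obj => request(obj).then(body => JSON.parse(body)))
  ).then(settled => {
    cachedResults = settled.map(r =>
      r.status === 'fulfilled' && r.value ? r.value : DEFAULT_DATA
    );
  });
}

// warm the cache once at startup, then refresh every 30 seconds
refreshCache(requests);
setInterval(() => refreshCache(requests), 30000);

// the page handler can now render immediately from the cache
app.get('/', (req, res) => {
  res.render('homepage/index', cachedResults);
});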
Again, we know very little about the situation; there are many ways to solve this.
Hope this helps.
I am trying to control who can see events for a particular page, so I am determining and storing the ids of which users have access to a particular page. In channels.js, I can add users to the new page-level channel in app.on('connection') by iterating through the pages like this: app.channel('page-' + pageId).join(connection)
The problem is that a user doesn't start getting broadcasts from that page until after they refresh the browser and re-connect.
What I want is for all connections of an allowed user to start getting broadcasts when the page is created. Is there a way to do that in channels.js, or can I tell it who to start broadcasting to in a hook for the page creation?
Edit: adding the last thing I tried. "Users-pages" is an associative entity that links Users and Pages.
app.service('users-pages').on('created', userPage => {
  const { connections } = app.channel(app.channels).filter(connection =>
    connection.user.id === userPage.userId
  );
  app.channel('page-' + userPage.pageId).join(connections);
});
The "keeping channels updated" documentation should have the answer you are looking for. It uses the users service, but can be adapted for pages accordingly, which could look like this:
app.service('pages').on('created', page => {
  // Assuming there is a reference of `users` on the page object
  page.users.forEach(user => {
    // Find all connections for this user
    const { connections } = app.channel(app.channels).filter(connection =>
      connection.user._id === user._id
    );
    app.channel('page-' + page._id).join(connections);
  });
});
Using app.channel('page-' + pageId).join(connections) wasn't working for me (join appears to expect individual connection objects rather than an array). But looping through each connection works fine:
connections.forEach(connection => {
  app.channel('page-' + userPage.pageId).join(connection);
});
I use cache-all for data caching. Suppose I need to add new information: it gets added, but when a request to display all the data occurs, the new information is not displayed among the results. For the newly added data to show up in that request, I have to wait until the cache entry expires. How can I make the cache update when I add, update, or delete data?
index.js:
const cache = require('cache-all')

cache.init({
  expireIn: 90,
  isEnable: true
})

app.listen(port, () => {
  console.log(`Server has been started on ${port}`)
})
route:
const cache = require('cache-all')
router.get('/get_all', cache.middleware(90), controller.getAll)
Or recommend a decent module for data caching that is easy to use.
You've stumbled upon one of the greatest problems in computer science: cache invalidation.
The approach will greatly depend on your application, but usually a good way to get going is to manually invalidate the cache when you know you've changed the underlying data. For example, if you cache user profiles, you might read from the cache on a /user/:username call to get that user's profile. To invalidate the cache (or update it), you would remove or overwrite the cache entry whenever a call is made that changes the user's profile. Here's some pseudo code to illustrate:
const cache = require('cache-all');

router.get('/user/:username', (req, res) => {
  const username = req.params.username;
  return cache.get('user:' + username).then(userProfile => {
    if (!userProfile) {
      // There were no entries in the cache, so we had a "cache miss".
      // We will need to look this up in the database, then potentially
      // add it to the cache after.
    }
    return res.json(userProfile);
  });
});

router.patch('/user/:username', (req, res) => {
  const username = req.params.username;
  const profileChanges = req.body.profile;
  let profileToReturn = {};
  return database.user.update(username, profileChanges).then(newProfile => {
    profileToReturn = newProfile;
    // We have updated something we know will be in the cache, so we need
    // to either invalidate it (removing the entry) or update it. In this
    // case we've decided to update the cache since we think it'll be used
    // again very quickly.
    return cache.set('user:' + username, profileToReturn);
  }).then(cacheResult => {
    return res.json(profileToReturn);
  });
});
You can see from this example that we have two endpoints: one which reads from the cache if it can (otherwise it goes to the database), and one which updates a value and also updates the cache. Much of this will depend on your application, your reasons for caching, your load, etc., but this should help you along.
Can't you use cache.set('foo', 'bar') to set the new value? Would that not update the cache?