Firebase fetching other users' images efficiently - node.js

I have a Firebase storage bucket set up for the primary purpose of storing users' profile pictures. Fetching the profile picture of the currentUser is simple, as I know the .uid. However, fetching the profile pictures of other users is not so straightforward, as that first requires a query to my actual database (in this case a graph database) before I can even begin fetching their images. This process is aggravated by my backend having a three-tier architecture.
So my current process is this:
1. GET request to the Node.js backend
2. Node.js queries the graph database
3. Node.js sends the data to the frontend
4. The frontend iteratively fetches profile pictures using the other users' uids
What seems slow is the fact that my frontend has to wait for the other uids before it can even begin fetching the images. Is this unavoidable? Ideally, the images would be fetched concurrently with the info about the users.

The title here is "Firebase fetching other users' images efficiently" but you're using a non-Firebase database, which makes it a little difficult.
The way I believe you could handle this in Firebase/Firestore would be to have duplicate data (pretty common with NoSQL databases).
Example:
Say you have a timeline feed. You probably wouldn't query the list of posts and then query user info for each of the posts. Instead, I would have a list of timeline posts for a given UID (the customer accessing the system right now), and that list would include all the details needed to display the feed without another query: the users' names, the post description, and a link to their pictures based on a known bucket and directory structure plus the UIDs. Something like gs://<my-bucket>/user-images/<a-uid>.jpg. Again, I don't have much exposure to graph databases, so I'm not sure how applicable the technique is there, but I believe it could work the same.
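For illustration, a minimal sketch of that denormalized shape in Node.js (the field names and bucket name here are hypothetical, not from the question):

// Hypothetical sketch of a denormalized timeline entry: everything the
// frontend needs is duplicated onto the post, so no second query is required.
const BUCKET = 'my-bucket'; // assumed bucket name

// Derive the image path directly from the uid stored on the post,
// following a known bucket/directory convention.
function userImagePath(uid) {
  return `gs://${BUCKET}/user-images/${uid}.jpg`;
}

// What one feed item might look like when written:
const post = {
  authorUid: 'abc123',
  authorName: 'Jane Doe', // duplicated from the user record at write time
  description: 'New green scarf!',
  imageUrl: userImagePath('abc123'), // derivable without another query
};

The trade-off is the usual one with duplication: writes get heavier (the name has to be updated everywhere it appears) so that reads need only one query.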

Related

Node.js: Is there an advantage to populating page data using Socket.io vs res.render(), or vice-versa?

Let's say, hypothetically, I am working on a website which provides live score updates for sporting fixtures.
A script checks an external API for updates every few seconds. If there is a new update, the information is saved to a database, and then pushed out to the user.
When a new user accesses the website, a script queries the database and populates the page with all the information ingested so far.
I am using socket.io to push live updates. However, when someone is accessing the page for the first time, I have a couple of options:
I could use the existing socket.io infrastructure to populate the page
I could request the information when routing the user, pass it into res.render() as an argument and render the data using, for example, Pug.
In this circumstance, my instinct would be to utilise the existing socket.io infrastructure, purely because it would save me writing additional code. However, I am curious to know whether there are any other reasons for, or against, either approach. For example, would it be more performant to render the initial data using one approach or the other?
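For comparison, a minimal sketch of both options, assuming an Express app with Pug and a hypothetical getScores() helper that reads the accumulated updates from the database:

const express = require('express');
const app = express();
app.set('view engine', 'pug');

// Option 1: render the initial state on the server with res.render().
app.get('/', async (req, res) => {
  const scores = await getScores(); // hypothetical DB query
  res.render('scores', { scores }); // page arrives already populated
});

// Option 2: serve an empty shell and push the initial state over socket.io.
const server = require('http').createServer(app);
const io = require('socket.io')(server);
io.on('connection', async (socket) => {
  socket.emit('initialState', await getScores()); // client fills the page in
});

server.listen(3000);

In practice you would pick one of the two for the initial load: option 1 avoids an extra round trip before the first paint, while option 2 reuses the socket plumbing you already have.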

How to improve performance on backend when data is fetched from multiple APIs in sequential manner?

I am creating a Node.js app that consumes APIs from multiple servers in a sequential manner, as the next request depends on the results of previous requests.
For instance, user registration is done on our platform in a PostgreSQL database. User feeds, chats, and posts are stored on getStream servers. User roles and permissions are managed through a CMS. If on a page we want to display a list of a user's followers, with some buttons shown according to the user's permissions, then first I need to find the list of my current user's followers from getStream, then enrich them from my PostgreSQL DB, then fetch their permissions from the CMS. Since one request has to wait for another, it takes a long time to return a response.
I need to serve all that data in a certain format. I have used Promise.all() where requests did not depend on each other.
I thought of storing pre-processed data that is ready to be served, but I am not sure how to do that. What is the best way to solve this problem?
sequential manner as the next request depends on results from previous requests
You could try using async/await so that each request runs in a sequential manner only where it has to, while the independent calls still go through Promise.all().
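As a rough sketch of that flow (getFollowers, enrichFromDb, and getPermissions are hypothetical wrappers around getStream, PostgreSQL, and the CMS):

// Step 1 must come first: the other lookups depend on the follower ids.
async function buildFollowerList(userId) {
  const followers = await getFollowers(userId); // getStream

  // The per-follower lookups depend on step 1 but not on each other,
  // so they can still run concurrently with Promise.all().
  return Promise.all(
    followers.map(async (f) => {
      const [profile, permissions] = await Promise.all([
        enrichFromDb(f.id),   // PostgreSQL
        getPermissions(f.id), // CMS
      ]);
      return { ...f, ...profile, permissions };
    })
  );
}

Only the truly dependent step stays sequential; everything else still overlaps.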

Storing data temporarily in nodejs application

I'm developing a Node.js back-end application which will fetch data from a third-party hotel API provider based on user input from the Angular application. Users should be able to filter and sort the received data, e.g. filter by price or hotel rating and sort by price, hotel name, etc., but unfortunately the API doesn't support this. So I thought of storing that data temporarily in Node.js, but I'm not sure what the right approach is. Will Redis support this? Any suggestions would be really appreciated.
Redis should be able to support something like this. Alternatively, you could do all of the sorting client-side and save all the hotel information in local or session storage. Whichever route you go with, you'll need to make sure to save the entire response with a unique key so that it is easy to fetch, or, if you save individual values to Redis, make sure each has a key to query against. Also keep in mind that Redis is best suited to caching information for short periods, not as a long-term store like PostgreSQL or MySQL. But for temporary responses it should be a fine approach.
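A minimal sketch of the Redis route using node-redis v4, where searchHotels() is a hypothetical wrapper around the third-party hotel API:

const { createClient } = require('redis');

const client = createClient();
const ready = client.connect(); // connect once at startup

async function getHotels(searchId) {
  await ready;
  const key = `hotels:${searchId}`; // unique key per search

  // Serve from the cache when the full response is already stored.
  const cached = await client.get(key);
  if (cached) return JSON.parse(cached);

  // Otherwise fetch from the API and cache the entire response for an
  // hour; filtering and sorting then run against the cached copy.
  const hotels = await searchHotels(searchId);
  await client.set(key, JSON.stringify(hotels), { EX: 3600 });
  return hotels;
}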

generating result sets on the server by userId - is this something I should offload to AWS Lambda?

My backend stack is basically node (express) and mongo. Nothing too fancy.
However, I'm generating search and browse page results for client-side requests by userId. For example, if a user favorites an item, that item is added to a list of favorite itemIds on the back end for that particular user. So, if the user happens to search for "green scarf" and there's a green scarf he'd already favorited, the resulting JSON will show this via an isFavorite: bool.
Thus, each user will have a different set of data. The favorites is just one aspect - there are a few other tags as well such as whether a friend has favorited an item, etc.
Is this a use case that warrants offloading to AWS lambda? The only things I need to do are to connect to my database, execute a query, and return the results.
Thanks
You can do this from AWS Lambda, but you don't have to. What I would consider here is using Redis to get the relevant results and tags. You can use Redis in addition to Mongo, or you can use Redis alone with persistence.
You didn't explain your code or your load in any detail, but if you're getting a lot of queries that need to hit the DB to annotate results for every user, then keeping those tags in an in-memory data store can help with performance whether you use AWS Lambda or a traditional Node process.
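A rough sketch of that idea, assuming each user's favorites live in a Redis set (the favorites:<userId> key scheme is illustrative):

const { createClient } = require('redis');

const client = createClient();
const ready = client.connect();

// Annotate query results with isFavorite without touching Mongo again.
async function annotateResults(userId, items) {
  await ready;
  const flags = await Promise.all(
    items.map((item) => client.sIsMember(`favorites:${userId}`, item.id))
  );
  return items.map((item, i) => ({ ...item, isFavorite: flags[i] }));
}

The same pattern extends to the other tags, e.g. a friend-favorites set per user.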

Real-Time Database Messaging

We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So, for example, we show the contents of a customer table in our UI; as records are added/removed/updated in the backend customer table, we echo those updates to our UI in real time via some redis/socket.io/node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current functions, but as tables continue to grow into GBs of data it is starting to slow down on the larger tables, as our engine digs through the currently 'subscribed' UIs and works out which updates need to go to which clients.
Curious what other options might exist here. I believe MongoDB and other NoSQL engines support some constructs like this out of the box, but I'm not finding an exact hit when Googling for better solutions.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models.
Instead of working at the app level, you might want to work at the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
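As a rough sketch, the trigger could be installed from Node with the pg module (the customer table, channel name, and connection string are illustrative):

const { Client } = require('pg');

async function installTrigger() {
  const db = new Client({ connectionString: 'postgres://username@localhost/database' });
  await db.connect();

  // A plpgsql function that publishes the new row as JSON, plus a trigger
  // that fires it after each insert (UPDATE/DELETE can be added the same way).
  await db.query(`
    CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
    BEGIN
      PERFORM pg_notify('channelName', row_to_json(NEW)::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customer_change
    AFTER INSERT ON customer
    FOR EACH ROW EXECUTE PROCEDURE notify_customer_change();
  `);

  await db.end();
}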
Then listen on that channel with pg-pubsub:
var PGPubsub = require('pg-pubsub');

// Point the connection string at the database, not at a table
var pubsubInstance = new PGPubsub('postgres://username@localhost/database');

pubsubInstance.addChannel('channelName', function (channelPayload) {
  // Handle the notification and its payload
  // If the payload was JSON it has already been parsed for you
});
And you will be able to do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)
