Where to put loading of HashTable - node.js

I have a Node.js application with some REST APIs, and it is working well.
One of the APIs, say /billinginfo POST, receives a client code and a piece of data in JSON format. It makes an API call to another service, say /client GET, which returns the client's country, and then it inserts the data into the database.
I later found out that, for performance, I can load all the clients' countries into a HashTable first (keyed by client code) with a single API call that returns all the clients, and then look the country up in the HashTable instead of calling that /client GET API many times.
My question is: where do I normally put the code that loads this HashTable so it can be used by the /billinginfo POST API?
Update:
The client and country info is stored in AWS Aurora, in another system's cloud, while /billinginfo POST writes to an on-premises MS SQL database, hence I must make calls to the /client GET API.

I think what you can do is put the country data in your own database with proper indexing, and then query from your database. It'll be quicker than calling the third-party API.
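For illustration, here is a minimal sketch of the in-memory cache approach described in the question, assuming an Express app; the bulk /clients endpoint, the loadClientCountries helper and the MS SQL insert are hypothetical placeholders. The map is loaded once at startup, before the server accepts requests, and refreshed on an interval so new clients eventually appear without a restart:

const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

// In-memory cache: client code -> country.
let clientCountries = new Map();

// Hypothetical bulk endpoint on the other service that returns all clients.
async function loadClientCountries() {
  const { data } = await axios.get('https://other-service.example.com/clients');
  clientCountries = new Map(data.map(c => [c.clientCode, c.country]));
}

app.post('/billinginfo', (req, res) => {
  const country = clientCountries.get(req.body.clientCode);
  if (!country) {
    return res.status(400).json({ error: 'unknown client code' });
  }
  // ...insert req.body plus country into the on-premises MS SQL database here...
  return res.status(201).end();
});

// Load the cache before accepting traffic, then refresh it periodically.
loadClientCountries()
  .then(() => {
    setInterval(() => loadClientCountries().catch(console.error), 5 * 60 * 1000);
    app.listen(3000, () => console.log('listening on 3000'));
  })
  .catch(err => {
    console.error('initial cache load failed', err);
    process.exit(1);
  });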

Related

CosmosDB return data from external API on read

I am attempting to write an Azure CosmosDB integration (Core SQL API) that integrates with an external service to provide some of the query data. As an example, I need a query made on Cosmos DB to convert some of the data returned by the query (e.g. IDs) into real data by calling an external service via a REST API. This should only happen when querying certain columns.
I initially investigated using a JS stored procedure and/or a UDF to make this external call, but the JS environment seems to be extremely limited and doesn't provide any way to make external calls. I then tried using this https://github.com/Oblarg/cosmosdb-storedprocs-ts repository, which uses webpack to bundle all of node.js into the stored procedure, allowing node modules to be used in stored procedures. Whilst this does allow some node modules to be used, whenever I try to use the "https", "fetch", or "axios" modules to make an HTTP GET request I get errors (the same code works fine in a normal node environment, but I'm not a JS expert and can't seem to work past these errors). After a day of attempts it seems the stored procedure approach is not possible.
Is this the case or is there some way of making HTTP GET requests from a JS stored procedure? If not possible with stored procedures, are there any other techniques to achieve the requirement of reading data from a remote API when querying cosmos DB?
Thanks
There is no way to achieve this from Cosmos DB directly. For queries you also cannot use the change feed, since the documents don't change, so really your only option is to use a function or some preprocessor app to handle it; as you say it's not ideal, but there is no other solution here. If it were an insert or an update then the change feed would allow you to do this, but for plain queries it's not possible.
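For what it's worth, a minimal sketch of the preprocessor-app approach, assuming the @azure/cosmos Node SDK; the database/container names, query and external lookup URL are made up. The app queries Cosmos DB normally and then resolves the stored IDs against the external REST API before returning the results:

const { CosmosClient } = require('@azure/cosmos');
const axios = require('axios');

const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT,
  key: process.env.COSMOS_KEY,
});

// Query Cosmos DB, then replace the stored IDs with real data
// fetched from the external service.
async function queryWithEnrichment() {
  const container = client.database('mydb').container('items');
  const { resources } = await container.items
    .query('SELECT * FROM c WHERE c.type = "order"')
    .fetchAll();

  return Promise.all(
    resources.map(async doc => {
      // Hypothetical external lookup endpoint keyed by the stored ID.
      const { data } = await axios.get(
        `https://external.example.com/records/${doc.externalId}`
      );
      return { ...doc, externalData: data };
    })
  );
}

queryWithEnrichment().then(results => console.log(results));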

Node.js: Is there an advantage to populating page data using Socket.io vs res.render(), or vice-versa?

Let's say, hypothetically, I am working on a website which provides live score updates for sporting fixtures.
A script checks an external API for updates every few seconds. If there is a new update, the information is saved to a database, and then pushed out to the user.
When a new user accesses the website, a script queries the database and populates the page with all the information ingested so far.
I am using socket.io to push live updates. However, when someone is accessing the page for the first time, I have a couple of options:
I could use the existing socket.io infrastructure to populate the page
I could request the information when routing the user, pass it into res.render() as an argument and render the data using, for example, Pug.
In this circumstance, my instinct would be to utilise the existing socket.io infrastructure, purely because it would save me writing additional code. However, I am curious to know whether there are any other reasons for, or against, either approach. For example, would it be more performant to render the initial data using one approach or the other?
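For comparison, a rough sketch of both options in one Express app; the Pug view name, the getScores query and the event names are placeholders:

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.set('view engine', 'pug');

// Placeholder for the database query that returns all updates ingested so far.
async function getScores() {
  return []; // e.g. SELECT * FROM updates ORDER BY created_at
}

// Option 1: render the initial data on the server with res.render().
app.get('/', async (req, res) => {
  res.render('scoreboard', { scores: await getScores() });
});

// Option 2: send the initial data over the socket once the client connects.
io.on('connection', async socket => {
  socket.emit('initialScores', await getScores());
});

// Live updates are pushed the same way in either case.
function broadcastUpdate(update) {
  io.emit('scoreUpdate', update);
}

server.listen(3000);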

How to use Node.js in Postgraphile mutations?

I am making an application that shows information about different users, which is taken from third party API. I save this information in my own format with multiple tables in PostgreSQL to keep track of any changes to the data and provide history of changes (third party API only provides current data).
I want to use GraphQL, specifically Postgraphile to simplify backend development. But there is one use case which I can't find a way to implement with Postgraphile. Here is what I want to implement:
The user wants to see updated information
A GraphQL mutation is sent to the server, something like this:
mutation UpdateUserData($userid: Int) {
  updateUser(id: $userid) {
    field1
    field2
    relatedObjects {
      field3
      field4
    }
  }
}
Server makes an API request to third party server, processes data, makes some calculations and updates the database
Requested fields are returned to client
I know that this type of custom mutation can be implemented with database functions in PL/pgSQL or PLV8, but those can't make HTTP requests, and I already have most of the data-processing logic in TypeScript, so I would like to use it.
Is there a way to create a custom mutation that will call JavaScript function which has access to Node.js modules, interfaces and classes that I already created?
One solution that I think will work:
Call REST endpoint on my server, like /update_user?id=$userid
User data is loaded from third party API and database is updated
Client receives response, like Update successful
Normal GraphQL query is called to get the data
Is there a better way of satisfying this use case?
This part is a bit hidden in the documentation, but the easiest way to add mutations written in JavaScript is makeExtendSchemaPlugin.
It lets you define type definitions in SDL and implement the resolvers in JS.
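A rough sketch of what that can look like; the table and column names and the third-party URL are placeholders:

const { makeExtendSchemaPlugin, gql } = require('graphile-utils');
const axios = require('axios');

// Adds an updateUser mutation whose resolver is plain Node.js/TypeScript code.
const UpdateUserPlugin = makeExtendSchemaPlugin(build => ({
  typeDefs: gql`
    extend type Mutation {
      updateUser(id: Int!): Boolean
    }
  `,
  resolvers: {
    Mutation: {
      updateUser: async (_parent, { id }, context) => {
        // Fetch the latest data from the third-party API...
        const { data } = await axios.get(`https://thirdparty.example.com/users/${id}`);
        // ...do any processing/calculations, then update the database.
        await context.pgClient.query(
          'UPDATE app_public.users SET name = $1, updated_at = now() WHERE id = $2',
          [data.name, id]
        );
        return true;
      },
    },
  },
}));

module.exports = UpdateUserPlugin;

The plugin is then passed to PostGraphile via appendPlugins: [UpdateUserPlugin]. This sketch only returns a success flag; the graphile-utils documentation also shows how to return the updated row's requested fields from such a resolver.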

Sails.js - how to structure JSON based live data output with existing static data in the model

In my Angular app, I want to display a table which contains the following
a) URL
b) Social share counts divided by different social networks
Using Sails.js, I already have the API created for the URLs, so when the results show up I can display the URL. Now I'm confused about how to get the appropriate social counts showing right beside it.
Here's the API I'm using: https://docs.sharedcount.com/
by itself, I can see the JSON it produces
But here are my questions:
Should I create a new API (model/controller) for the social count data, or include it in the model where I have the 'url' action defined?
If I create a new API, or include social_counts as an action in the current one, what would my JSON query look like? To retrieve the URLs, I'm using the default API blueprint that Sails provides, so:
http://www.example.com/url/find?where={"title":{"contains":"mark"}}
I'm struggling a bit with the thought process; it would be great to get input on this.
It depends on your app: will it store that data or just consume it? If it needs to store the data, then of course you need the API, for example to modify or aggregate the data.
No, you can't do that. That shortcut method only works if you have the data in your own database and let the Sails Waterline ORM and Blueprint API serve it.
Perhaps, if you only need to consume the data from the SharedCount API, you don't need Sails as a backend in this context; just use Angular as a client of that API. Except if you need to modify the data first and store it in your own database, then Sails will help with its Waterline ORM and Blueprint API.
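If you do go through Sails, one way to sketch it is a custom controller action that reuses the existing url model and enriches each record before responding. Controller, model and config names below are made up, and check the SharedCount docs for the exact endpoint and parameters:

// api/controllers/UrlController.js
const axios = require('axios');

module.exports = {
  // GET /url/withCounts?title=mark
  withCounts: async function (req, res) {
    try {
      // Reuse the existing Url model that the blueprint routes already serve.
      const urls = await Url.find({ title: { contains: req.query.title || '' } });

      // Enrich each record with live counts from the SharedCount API.
      const enriched = await Promise.all(
        urls.map(async record => {
          const { data } = await axios.get('https://api.sharedcount.com/v1.0/', {
            params: { url: record.url, apikey: sails.config.custom.sharedCountKey },
          });
          return Object.assign({}, record, { socialCounts: data });
        })
      );

      return res.json(enriched);
    } catch (err) {
      return res.serverError(err);
    }
  },
};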

Posting Firebase's thirdpartyuserdata object to the server

I'm using Firebase and the SimpleLogin to allow users to login via Google, Twitter etc.
I'd like to use some of the thirdpartyuserdata object to create a user profile for my application which runs on Node.
Currently I'm posting this data to the server so that I can add to it and create the profile object, but I wondered if there's a better way of doing this - is there something I can call server side to get this thirdpartyuserdata without having to post it from the client?
Start by considering that your "server" is actually just another consumer of Firebase data. Since FirebaseSimpleLogin is simply a token generator with some fancy tools for doing OAuth, and because this happens completely client-side, there is nothing server-side to consume here.
If you want to consume the data at the server, you will either need to POST it, as you have done, or use Firebase to transfer the information. You'll find that a queue approach can save you a large amount of code, as this allows you to use Firebase as the API, and avoid creating RESTful services in Node, and all the baggage that comes with that.
The idea of a queue is simply that you push data into Firebase at one client and read it out (and probably delete it) at the intended recipient (in this case your node worker).
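A minimal sketch of that queue pattern with the legacy Firebase SDK from the SimpleLogin era; the paths and field names are assumptions:

// Node worker that consumes queued signups and builds profiles.
var Firebase = require('firebase');
var queueRef = new Firebase('https://your-app.firebaseio.com/queue/newUsers');

// Client side (for reference): after login, push the third-party user data
// into the queue, e.g.
//   queueRef.push({ uid: user.uid, provider: user.provider, data: user.thirdPartyUserData });

// Server side: process each queued item, write the profile, then delete it.
queueRef.on('child_added', function (snapshot) {
  var item = snapshot.val();
  createProfile(item, function (err) {
    if (!err) snapshot.ref().remove();
  });
});

function createProfile(item, done) {
  // Write the assembled profile back under /profiles/<uid>.
  var profileRef = new Firebase('https://your-app.firebaseio.com/profiles/' + item.uid);
  profileRef.set({ provider: item.provider, displayName: item.data.displayName }, done);
}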
