Is it correct to call different API endpoints at the same moment? - node.js

I'm building my first big application using React. I built a RESTful API with Express
to get and update data in a MySQL DB.
At one point I need to get data from several different DB tables at the same moment and populate several tables on the same page.
Is it correct to make multiple API calls at the same moment? Can this cause problems?
Is there a better way to do this, like creating a dedicated API endpoint that returns all the data at once?
I apologize in advance if something is wrong or I was unclear.

Yes, you can do that, but only if each call reads or changes data in a different table, for example: Table1 via apicall1 and Table2 via apicall2.
If the calls hit the same table at the same time, a big change will probably break something!
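In practice, independent requests are usually fired together with Promise.all, so the page waits for all of them at once instead of one after another. A minimal sketch; the two fetchers are hypothetical stand-ins for real fetch() calls to your Express endpoints:

```javascript
// Hypothetical stand-ins for real fetch('/api/...') calls to your Express API.
const fetchUsers = () => Promise.resolve([{ id: 1, name: 'Ada' }]);
const fetchOrders = () => Promise.resolve([{ id: 42, userId: 1 }]);

// Fire both requests at the same moment and wait until both have finished.
async function loadPageData() {
  const [users, orders] = await Promise.all([fetchUsers(), fetchOrders()]);
  return { users, orders };
}

loadPageData().then(({ users, orders }) => {
  console.log(users.length, orders.length); // each page table gets its own data
});
```

If one combined payload is preferable, the same idea moves server-side: a single Express route runs the queries concurrently and returns one JSON object.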

Related

Mongodb change streams getting previous values?

Recently I learned about change streams in MongoDB and how powerful they are, but I need the previous values from before an update. From some research I seem to have learned that getting the previous values is impossible. So this got me thinking about what alternatives exist for retrieving those previous values.
What I want to achieve is a logging system such as: "Record A field has changed from {old_value} to {new_value}." I'm using socket.io to push these updates to a React front-end client. The updates to records happen from a completely different system, not on the same backend server where the change streams are listening, so I can't query the document before it is updated.
So I started to think of a different solution... maybe I could have two databases? One containing the old records and the other the updated records, but that sounds like a duplication of data, and I can't imagine doing that with thousands of records.
I need some guidance, as I really don't know what the best option is. Is there really no way to use change streams and get the previous values? Is it possible to somehow query the document before a change stream event? Thank you.
Not sure how I missed this, but the solution is versioning the data.
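One way to sketch the versioning idea (names here are illustrative, not a fixed API): keep the last seen copy of each record, and diff it against the document delivered by each change stream event before overwriting it. As an aside, newer MongoDB versions (6.0+) can also deliver pre-images on change streams, but the versioning approach works on any version.

```javascript
// Illustrative versioning sketch: remember the previous copy of each record
// and diff it against the new document from a change stream event.
const previousVersions = new Map(); // recordId -> last seen document

function logChanges(recordId, newDoc) {
  const oldDoc = previousVersions.get(recordId) || {};
  const messages = [];
  for (const field of Object.keys(newDoc)) {
    if (oldDoc[field] !== newDoc[field]) {
      messages.push(
        `Record ${recordId} field ${field} has changed from ${oldDoc[field]} to ${newDoc[field]}.`
      );
    }
  }
  previousVersions.set(recordId, { ...newDoc }); // store the new version
  return messages; // e.g. emit these over socket.io
}
```

In a real system the Map would be a versions collection (so history survives restarts), but the diffing logic stays the same.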

How can I store data for compiling leaderboards?

Firstly, I'd like to note that I'm currently only using 1 node because it handles my current needs just fine. Also, I'm using Node.js for all of this.
Now, here's my issue. There's a Cassandra table, "playerdata", which stores millions of players' data for a video game.
I want to compile leaderboards, and it's clear to me that I won't be able to do so via this table alone.
I need to retrieve everybody's data, then loop through it in varying ways and compile various leaderboards.
Is there another method which performs well?
My first thought was maps, but then I realized there is a limit to their size.
Would the best option be to switch to a SQL database for leaderboards?
You can use Cassandra's support for the ORDER BY clause, but ORDER BY works only on clustering columns, and rows are stored in clustering order. So if you want a leaderboard based on, for example, game points, the points must be part of the clustering key, e.g. a schema with `PRIMARY KEY ((gameId), points, userId)` and `CLUSTERING ORDER BY (points DESC)`.
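The loop-and-compile approach described in the question can also be sketched in plain Node: fetch the rows once, then sort each leaderboard view in memory. The row shape ({ userId, points }) is invented for illustration, not the real "playerdata" schema:

```javascript
// Compile a top-N leaderboard from rows fetched out of the player table.
// The row shape ({ userId, points }) is illustrative, not a real schema.
function topN(rows, n, key) {
  return [...rows]
    .sort((a, b) => b[key] - a[key]) // descending by the chosen stat
    .slice(0, n)
    .map((row, i) => ({ rank: i + 1, ...row }));
}

const rows = [
  { userId: 'u1', points: 120 },
  { userId: 'u2', points: 300 },
  { userId: 'u3', points: 90 },
];
console.log(topN(rows, 2, 'points'));
// [ { rank: 1, userId: 'u2', points: 300 }, { rank: 2, userId: 'u1', points: 120 } ]
```

With millions of players, the in-memory sort is the expensive part; the usual compromise is to recompute leaderboards periodically and cache the results rather than sorting on every request.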

Possible to continue using AutoNumber Type? (with Access frontend & SQL Server backend)

I feel like this has to be somewhat of a common question, however I cannot seem to find the answer via search, so here goes:
We just moved all of our tables to a SQL Server backend and are looking to continue using MS Access as a frontend because of the friendly UI. We do, however, have a couple of tables that use the AutoNumber type, which I have learned SQL Server simply converted to the "bigint" data type.
Now, I've already figured out how to make views that use the "CAST" function to convert away from "bigint" when I link my tables from the backend (this is so that MS Access can read the tables rather than giving me #deleted values), however this still leaves my tables with no AutoNumber option. Is there a solution to this? I'd really like to keep an auto-incrementing column as the PK for these tables.
Thanks for any/all help. It's greatly appreciated.
In SQL Server, setting the Identity property on an int (or bigint) column gives you the same behaviour as AutoNumber, e.g. `ID int IDENTITY(1,1) PRIMARY KEY`.

Access MDB database. Linux: how to get a very odd pattern from the DB?

I'm stuck on a VERY difficult problem.
I have a Microsoft Access database, but it was made in the most chaotic way possible. The DB has 150+ tables, and only about 50% of them are actually used. The relationships are almost random. But, somehow, it delivers some information.
I need to get at a particular component of the DB, but it is so tangled that I cannot manage to find the table that creates that value. I reviewed every table, one by one, and found nothing.
I used mdbtools on Linux to try to inspect the DB in more detail, but unfortunately it has not been developed in years, and it crashes every time. Maybe because the DB is "big"? (~700 MB)
I'm wondering: is there a way to see all the relationships that lead to the particular value I'm looking for? Or to decompile the DB? I have no idea in which language it was made. I suspect it was made in Visual, just because it is rather crappy.
Well, waiting for some help.
I would suggest (still) using MS Access for this. If the relationships look messy on the Relationships diagram, you can query one of the hidden system tables, MSysRelationships, directly to get ALL the relationships you need, e.g. for a particular table: `SELECT szObject, szColumn, szReferencedObject, szReferencedColumn FROM MSysRelationships WHERE szObject = 'YourTable';`
To unhide system tables in early versions of Access (97-2003), follow the instructions here:
For Access 2007, right-click the Navigation Pane title bar, choose Navigation Options, and tick "Show System Objects".

Importing CSV to SQLITE

I am just starting a new spreadsheet of recipes (in Google Docs) that I will eventually be importing into a SQLite database.
My question is, how can I best input the data into this spreadsheet so that it can be readily imported into SQLite when I'm finished?
My main concern is that many of the recipe fields (list of ingredients, list of directions) are obviously going to have many separate lines per field (i.e. one cell of the spreadsheet will have multiple lines of information in it).
Can anyone suggest the best way to enter these newlines into the spreadsheet so that it will correctly import when I'm done?
I will be manually loading the csv into the database and then using it as a static database in an iOS app. This will be a one-time loading of data, so I'm open to importing it using whichever method would be easiest.
Any thoughts or assistance would be greatly appreciated. Thank you in advance!
It all depends on the end result you're trying to achieve.
The basic idea is to break the recipe down into simpler pieces, for example "ingredients" and "steps". You can go the full normalization route to gain maximum query-ability/extensibility if you want. The idea is that you store the data in broken down form and "reconstitute" it later with SELECT queries.
The approach you seem to be thinking of - putting a "list" of values in one field - isn't how relational databases like SQLite are designed to work.
I don't know what you're trying to do in the end, but perhaps a document-oriented database like CouchDB or MongoDB would be more suitable?
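As a concrete sketch of the "break it down" idea (the table and column names are invented for illustration): instead of one multi-line cell, each ingredient becomes its own row keyed to the recipe, which a small splitter can produce from the spreadsheet cell before the INSERTs into SQLite.

```javascript
// Turn one multi-line spreadsheet cell into normalized (recipe_id, position, text)
// rows, ready to INSERT into e.g. an "ingredients" table (names invented).
function explodeCell(recipeId, cell) {
  return cell
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0) // drop blank lines from the cell
    .map((text, i) => ({ recipe_id: recipeId, position: i + 1, text }));
}

const rows = explodeCell(1, '2 eggs\n1 cup flour\n\npinch of salt');
console.log(rows.length); // 3 ingredient rows instead of one multi-line blob
```

The `position` column preserves the original ordering, so a later `SELECT ... ORDER BY position` reconstitutes the list exactly as it was typed.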
