I need a local DB on a Pi Zero, with multiple processes running that need to write and read data. That kind of rules SQLite out (I think): from my experience, SQLite only allows one connection at a time and is tricky with multiple processes trying to do database work. All of my data transmission will be JSON driven, so NoSQL makes sense, but I need something lightweight to store a few configs and to store data that will be synced up to the server. What NoSQL options would be best to run on a Pi with great Node support?
SQLite is generally fine when used with multiple concurrent processes. From the SQLite FAQ:
We are aware of no other embedded SQL database engine that supports as much concurrency as SQLite. SQLite allows multiple processes to have the database file open at once, and for multiple processes to read the database at once. When any process wants to write, it must lock the entire database file for the duration of its update. But that normally only takes a few milliseconds. Other processes just wait on the writer to finish then continue about their business. Other embedded SQL database engines typically only allow a single process to connect to the database at once.
For the majority of applications, that should be fine. If only one of your processes is doing writes, and the other only reads, it should have no impact at all.
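If you do go with SQLite from Node, enabling WAL mode and a busy timeout makes multi-process access smoother still. A minimal sketch, assuming the better-sqlite3 package (the sqlite3 package works as well); the file path, table and key are made up:

```js
const Database = require('better-sqlite3');

// timeout: how long a writer waits for the file lock instead of failing immediately
const db = new Database('/home/pi/app.db', { timeout: 5000 }); // hypothetical path

// WAL mode lets readers keep reading while another process writes
db.pragma('journal_mode = WAL');

db.prepare('CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT)').run();
db.prepare('INSERT OR REPLACE INTO config (key, value) VALUES (?, ?)')
  .run('sync_interval', JSON.stringify({ seconds: 60 }));

const row = db.prepare('SELECT value FROM config WHERE key = ?').get('sync_interval');
console.log(JSON.parse(row.value)); // { seconds: 60 }
```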
If you're looking for something that's NoSQL-specific, you can also consider LevelDB, which is used in Google Chrome. With Node, the best way to access it is through the levelup library.
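A rough sketch of levelup usage via the level package (which bundles levelup with a LevelDB store); the keys, values and path are made up, and the exact API differs a little between level versions:

```js
const level = require('level');

// valueEncoding: 'json' stores and returns plain JS objects
const db = level('/home/pi/app-leveldb', { valueEncoding: 'json' }); // hypothetical path

async function main() {
  await db.put('config:sync_interval', { seconds: 60 });
  await db.put('reading:2021-01-01T00:00:00Z', { temp: 21.4 });

  const config = await db.get('config:sync_interval');
  console.log(config.seconds); // 60
}

main().catch(console.error);
```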
I'm practicing NodeJS and working on a small application that has an endpoint that stores JSON objects to text files and also has the ability to search for objects in those files. I'm using a single file for that purpose. I know I should be using multiple files for many reasons, but let's consider the case of having a single storage text file.
As far as I understand (not 100% sure), NodeJS is single-threaded, so it should not be possible for two simultaneous API calls to update the file at the same time, since each of them runs on that one thread.
Is my understanding correct? Or is there a possibility that the file gets updated simultaneously, which would cause a data-integrity violation? And if so, how do I handle this? Is there a way to lock a file until the process completes?
Yes, it can happen. While NodeJS is single-threaded, I/O is not. That is one of its core tenets.
You could mitigate such problems by locking files before writing to them (the link is just one example how to do this).
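One possible shape for this, assuming the proper-lockfile package (other lock libraries follow a similar acquire/release pattern); the file path and retry count are made up:

```js
const lockfile = require('proper-lockfile');
const fs = require('fs').promises;

async function appendObject(path, obj) {
  // Waits (with retries) if another request/process currently holds the lock
  const release = await lockfile.lock(path, { retries: 5 });
  try {
    const objects = JSON.parse(await fs.readFile(path, 'utf8'));
    objects.push(obj);
    await fs.writeFile(path, JSON.stringify(objects));
  } finally {
    await release(); // always release so the next writer can proceed
  }
}
```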
Another approach would be to use SQLite. There's no database server to set up or administer (unlike MySQL, for example). The entire database is contained in a single file, but it handles things such as locking in case of multiple writes, crashes while writing, and so on.
"Think of SQLite not as a replacement for Oracle [a database server] but as a replacement for fopen() [working with plain files]"
With node-postgres npm package, I'm given two connection options: with using Client or with using Pool.
What would be the benefit of using a Pool instead of a Client, what problem will it solve for me in the context of using node.js, which is a) async, and b) won't die and disconnect from Postgres after every HTTP request (as PHP would do, for example).
What would be the technicalities of using a single instance of Client vs using a Pool from within a single container running a node.js server? (e.g. Next.js, or Express, or whatever).
My understanding is that with server-side languages like PHP (classic sync php), Pool would benefit me by saving time on multiple re-connections. But a Node.js server connects once and maintains an open connection to Postgres, so why would I want to use a Pool?
PostgreSQL's architecture is specifically built for pooling. Its developers decided that forking a process for each connection to the database was the safest choice and this hasn't been changed since the start.
Modern middleware that sits between the client and the database (in your case node-postgres) opens and closes virtual connections while managing the "physical" connections to the Postgres database so they can be used as efficiently as possible.
This means connection time can be reduced a lot: "closed" connections are not really closed but returned to a pool, and "opening" a new connection hands out an existing physical connection from that pool, which goes back to the pool after use, reducing the actual forking going on on the database side.
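To make that concrete, here is a rough sketch with the pg (node-postgres) package showing an explicit checkout/release; the connection string and queries are made up:

```js
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: 'postgres://app@localhost/appdb', // hypothetical
  max: 10, // upper bound on physical connections (i.e. Postgres backend processes)
});

async function transfer(from, to, amount) {
  // "Opening" a connection checks an existing physical connection out of the pool
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to]);
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // "closing" just hands the connection back to the pool
  }
}
```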
Node-postgres themselves write about the pros on their website, and they recommend you always use pooling:
Connecting a new client to the PostgreSQL server requires a handshake which can take 20-30 milliseconds. During this time passwords are negotiated, SSL may be established, and configuration information is shared with the client & server. Incurring this cost every time we want to execute a query would substantially slow down our application.

The PostgreSQL server can only handle a limited number of clients at a time. Depending on the available memory of your PostgreSQL server you may even crash the server if you connect an unbounded number of clients. note: I have crashed a large production PostgreSQL server instance in RDS by opening new clients and never disconnecting them in a python application long ago. It was not fun.

PostgreSQL can only process one query at a time on a single connected client in a first-in first-out manner. If your multi-tenant web application is using only a single connected client all queries among all simultaneous requests will be pipelined and executed serially, one after the other. No good!
https://node-postgres.com/features/pooling
I think it was clearly expressed in this snippet.
"But a Node.js server connects once and maintains an open connection to Postgres, so why would I want to use a Pool?"
Yes, but the number of simultaneous connections to the database itself is limited, and when too many browsers try to connect at the same time, the database's handling of it is not elegant. A pool can better mitigate this by taking the queuing and error handling that no database is specialized in out of the database itself and handling it in the middleware.
"What exactly is not elegant and how is it more elegant with pooling?"
A database stops responding, a connection times out, without any feedback to the end user (and often with few clues even for the server admin). The database is dependent on hardware to a higher extent than a JavaScript program is. The risk of failure is higher. Those are my main "not elegant" arguments.
Pooling is better because:
a) As node-postgres wrote in my link above: "Incurring the cost of a db handshake every time we want to execute a query would substantially slow down our application."
b) Postgres can only process one query at a time on a single connected client (which is what Node would do without the pool) in a first-in first-out manner. All queries among all simultaneous requests will be pipelined and executed serially, one after the other. Recipe for disaster.
c) A node-based pooling component is in my opinion a better interface for enhancements, like request queuing, logging and error handling compared to a single-threaded connection.
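For illustration, a minimal sketch of using a pool per request in an Express handler (the route, table and environment variable are assumptions, not part of the question):

```js
const express = require('express');
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 20 });
const app = express();

app.get('/users/:id', async (req, res) => {
  try {
    // Each request borrows a free connection from the pool, so queries from
    // simultaneous requests run in parallel instead of queuing on one client.
    const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
    res.json(rows[0] || null);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);
```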
Background:
According to Postgres themselves pooling IS needed, but deliberately not built into Postgres itself. They write:
"If you look at any graph of PostgreSQL performance with number of connections on the x axis and tps on the y access (with nothing else changing), you will see performance climb as connections rise until you hit saturation, and then you have a "knee" after which performance falls off. A lot of work has been done for version 9.2 to push that knee to the right and make the fall-off more gradual, but the issue is intrinsic -- without a built-in connection pool or at least an admission control policy, the knee and subsequent performance degradation will always be there.
The decision not to include a connection pooler inside the PostgreSQL server itself has been taken deliberately and with good reason:
In many cases you will get better performance if the connection pooler is running on a separate machine;
There is no single "right" pooling design for all needs, and having pooling outside the core server maintains flexibility;
You can get improved functionality by incorporating a connection pool into client-side software; and finally
Some client side software (like Java EE / JPA / Hibernate) always pools connections, so built-in pooling in PostgreSQL would then be wasteful duplication.
Many frameworks do the pooling in a process running on the database server machine (to minimize latency effects from the database protocol) and accept high-level requests to run a certain function with a given set of parameters, with the entire function running as a single database transaction. This ensures that network latency or connection failures can't cause a transaction to hang while waiting for something from the network, and provides a simple way to retry any database transaction which rolls back with a serialization failure (SQLSTATE 40001 or 40P01).
Since a pooler built in to the database engine would be inferior (for the above reasons), the community has decided not to go that route."
And they continue with their top reasons for performance failure with many connections to Postgres:
Disk contention. If you need to go to disk for random access (ie your data isn't cached in RAM), a large number of connections can tend to force more tables and indexes to be accessed at the same time, causing heavier seeking all over the disk. Seeking on rotating disks is massively slower than sequential access so the resulting "thrashing" can slow systems that use traditional hard drives down a lot.
RAM usage. The work_mem setting can have a big impact on performance. If it is too small, hash tables and sorts spill to disk, bitmap heap scans become "lossy", requiring more work on each page access, etc. So you want it to be big. But work_mem RAM can be allocated for each node of a query on each connection, all at the same time. So a big work_mem with a large number of connections can cause a lot of the OS cache to be periodically discarded, forcing more accesses to disk; or it could even put the system into swapping. So the more connections you have, the more you need to make a choice between slow plans and trashing cache/swapping.
Lock contention. This happens at various levels: spinlocks, LW locks, and all the locks that show up in pg_locks. As more processes compete for the spinlocks (which protect LW locks acquisition and release, which in turn protect the heavyweight and predicate lock acquisition and release) they account for a high percentage of CPU time used.
Context switches. The processor is interrupted from working on one query and has to switch to another, which involves saving state and restoring state. While the core is busy swapping states it is not doing any useful work on any query. Context switches are much cheaper than they used to be with modern CPUs and system call interfaces but are still far from free.
Cache line contention. One query is likely to be working on a particular area of RAM, and the query taking its place is likely to be working on a different area; causing data cached on the CPU chip to be discarded, only to need to be reloaded to continue the other query. Besides that the various processes will be grabbing control of cache lines from each other, causing stalls. (Humorous note, in one oprofile run of a heavily contended load, 10% of CPU time was attributed to a 1-byte noop; analysis showed that it was because it needed to wait on a cache line for the following machine code operation.)
General scaling. Some internal structures allocated based on max_connections scale at O(N^2) or O(N*log(N)). Some types of overhead which are negligible at a lower number of connections can become significant with a large number of connections.
Source
I can't seem to get this concept right in my head. If I have a website that gets 1 million concurrent users, without any databases at all, will I need to scale? I'm using Node.js and Socket.IO. Also, is there a way I could simulate something like this on localhost?
Having one million users, or connections, on Socket.IO doesn't by itself mean you have to scale, but depending on what they are doing, you probably will. Having a database adds storage but has nothing more to do with the need for scaling the Node.js server.
You can create a test that connects as many clients as you want in a loop and then emits an event for each of them.
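For example, a rough load-test sketch using the socket.io-client package (the URL, event name and client count are made up):

```js
const { io } = require('socket.io-client'); // on socket.io-client v2 it's the default export

const CLIENTS = 1000; // raise gradually and watch CPU/memory on the server

for (let i = 0; i < CLIENTS; i++) {
  const socket = io('http://localhost:3000', { transports: ['websocket'] });
  socket.on('connect', () => {
    socket.emit('ping', { client: i, at: Date.now() });
  });
  socket.on('connect_error', (err) => console.error(`client ${i}:`, err.message));
}
```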
For scaling node you can use a cluster. A single instance of Node.js runs in a single thread. To take advantage of multi-core systems, the user will sometimes want to launch a cluster of Node.js processes to handle the load. https://nodejs.org/api/cluster.html#cluster_cluster
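A minimal cluster sketch (the port and handler are made up):

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on older Node versions
  // One worker per CPU core; the primary process only manages workers
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork()); // replace a worker if it dies
} else {
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```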
To simulate high load, there are open source tools you can use for free: http://www.opensourcetesting.org/category/performance/
I am currently developing a real-time app with RethinkDB and Node, and there are many different RethinkDB queries to run in different classes. So, my question is: does it make more sense to have every query open and close its own connection, or to have a single, statically available connection on which every query is run?
From this issue I deduce that parallelization is already an option, so this is a matter of what is more efficient.
It's best to have a pool of open connections to your RethinkDB server. For example rethinkdbdash (which I recommend you use) opens a pool of 50 connections that are available for your queries.
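For example (a sketch; the connection settings and table are assumptions, and option names may differ slightly between rethinkdbdash versions):

```js
// rethinkdbdash manages its own connection pool, so queries don't take a connection argument
const r = require('rethinkdbdash')({
  servers: [{ host: 'localhost', port: 28015 }], // hypothetical
  db: 'myapp',
  max: 50, // upper bound on pooled connections
});

async function activeUsers() {
  // .run() with no connection: the driver picks a pooled connection for you
  return r.table('users').filter({ active: true }).run();
}
```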
I am a novice programmer in Node JS. I have a few queries regarding process related issues like locking and race conditions in Node JS and Mongo DB.
My code works perfectly in the local environment, but when I move to production and come across a large number of requests, I might encounter certain issues.
How do we avoid write-level race conditions for Mongo slaves located in different regions? I.e., say one piece of data is being written locally, but the true value for it is being written remotely and that remote write is delayed.
Considering we have Node processes located regionally, would they need to hit the Mongo master located in another region, which then routes the request to a regional slave? This considerably increases the latency of each write; how do we avoid this? Can we have direct writes to regional slaves from local processes, with some kind of replication to maintain data consistency?
I use a Node REST API and use Mongoose as the MongoDB driver. Any help would be deeply appreciated. Thank you.
MongoDB's automatic failover and high availability features are provided by what's called replication. The standard MongoDB terms are "primary" for master and "secondary" for slave, so I'll use those terms to be consistent with the documentation and the user base at large. I think both of your questions are answered by one fact: in a replica set, the primary is the only member that accepts writes from clients, ever. The secondaries get the data replicated to them asynchronously a short time later. To answer the questions directly:
No writes to slaves except internal replication of writes from the primary, so no "race condition" with writes can arise.
All writes must go to the primary. The replication system will distribute the data to the secondaries asynchronously. You can read from secondaries, but it isn't a best practice despite its occasional utility. I'd suggest reading about replica set read preference and reading Asya Kamsky's blog post about scaling with replica sets before deciding to read from secondaries.
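If you do decide some reads can go to secondaries, Mongoose lets you opt in per query via a read preference; writes still always target the primary. A small sketch (the model, schema and connection URI are hypothetical):

```js
const mongoose = require('mongoose');

const Reading = mongoose.model('Reading', new mongoose.Schema({ value: Number }));

async function main() {
  await mongoose.connect('mongodb://host1,host2,host3/appdb?replicaSet=rs0'); // hypothetical URI

  // Writes go to the primary; nothing special to configure
  await Reading.create({ value: 42 });

  // Explicitly allow this (possibly slightly stale) read to hit a secondary
  const recent = await Reading.find().sort({ _id: -1 }).limit(10).read('secondaryPreferred');
  console.log(recent.length);
}

main().catch(console.error);
```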