Fastest way to process all of the data in a Postgres table? - node.js

I have a Node.js application that needs to stream data from an RDS Postgres instance, perform some relatively expensive CPU operations on the data, and insert it into another database. The CPU-intensive portion I've offloaded into an AWS Lambda, such that the Node application gets a batch of rows and immediately passes them to the Lambda for processing. The bottleneck appears to be the speed at which the data can be received from Postgres.
In order to utilize multiple connections to the DB, I have an algorithm that effectively leapfrogs over sorted IDs so that many concurrent connections can be maintained. For example, one connection fetches IDs 1-100, a second fetches IDs 101-200, and so on; when the first returns, it might then fetch IDs 1001-1100. Is this relatively standard practice? Is there a faster method for pulling the data out for processing?
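For illustration, a rough sketch of this ID-leapfrog approach in Node.js, assuming the pg package and an indexed integer id column (the table and column names are placeholders, not from the original post):

import { Pool } from 'pg';

const pool = new Pool({ max: 8 }); // keep this well below max_connections
const RANGE = 100;
let nextStart = 1;

// Each worker repeatedly claims the next unclaimed ID range and fetches it on
// its own pooled connection, so several ranges are in flight at once.
async function worker(maxId: number): Promise<void> {
  while (true) {
    const start = nextStart;   // claiming the range is safe here because the
    nextStart += RANGE;        // event loop runs these two lines without interleaving
    if (start > maxId) return;
    const { rows } = await pool.query(
      'SELECT * FROM items WHERE id >= $1 AND id < $2 ORDER BY id',
      [start, start + RANGE]
    );
    // hand `rows` off to the Lambda for the CPU-heavy processing
  }
}

// e.g. run eight range-fetchers concurrently:
// await Promise.all(Array.from({ length: 8 }, () => worker(maxKnownId)));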
As long as I stay below the database's max_connections, would it be beneficial to add more, possibly as additional concurrent applications streaming data out of it? Both the application and the RDS instance are currently in the same VPC, and CPU utilization on the RDS instance sits at about 30%, with memory at 60%.

It would likely be MUCH faster to dump your Postgres database to CSV or export it directly to flat files, split the flat files up, dump them into S3, and then have workers process each batch of files on their own.
Streaming data out of Postgres (particularly if you're doing it for millions of items) will take a LOT of IO and a very long time.
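As a rough illustration of the export approach, here is a minimal sketch that streams a table out as CSV over a single connection, assuming the pg and pg-copy-streams packages (the table name and file path are placeholders); the resulting files can then be split and uploaded to S3:

import { Client } from 'pg';
import { to as copyTo } from 'pg-copy-streams';
import { createWriteStream } from 'fs';
import { pipeline } from 'stream/promises';

// COPY ... TO STDOUT lets Postgres do the serialisation work and avoids
// paging millions of rows through the client as individual result sets.
async function exportToCsv(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  const source = client.query(
    copyTo('COPY my_table TO STDOUT WITH (FORMAT csv, HEADER true)')
  );
  await pipeline(source, createWriteStream('/tmp/my_table.csv'));
  await client.end();
}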

Related

Parallel read and write to postgres database slows down application (backend)

I have a backend in NestJS using TypeORM and Postgres. This backend saves and reads data from the database frequently. At times we are dealing with row counts of 10k+ that need to be updated, saved, or created.
In this particular case where I need some brain juice, I have a table (let's call it table A):
the backend fetches data from table A every few seconds
the content in table A needs to be updated frequently (properties and values overwritten). I am doing this updating task from a separate application backend solely for this use case.
Example case
Table A holds 100K records
The update-service splits these 100K records into chunks and updates them in parallel, roughly 25K records per chunk. While it does so, the main application that retrieves data from the backend slows down.
What is the best way to get performant reads and writes in parallel? I am assuming the slowdown comes from locks (the main backend retrieves data while the update service tries to update), but I am not sure, as I don't have much experience working with databases.
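For reference, a minimal sketch of the chunked parallel update described above, assuming a TypeORM repository (the entity and variable names are hypothetical):

import { DeepPartial, ObjectLiteral, Repository } from 'typeorm';

// Split the records into fixed-size chunks and save each chunk on its own
// pooled connection. Fewer, larger chunks mean less concurrency and usually
// less contention with the readers.
async function updateInParallel<T extends ObjectLiteral>(
  repo: Repository<T>,
  records: DeepPartial<T>[],
  chunkSize = 25000
): Promise<void> {
  const chunks: DeepPartial<T>[][] = [];
  for (let i = 0; i < records.length; i += chunkSize) {
    chunks.push(records.slice(i, i + chunkSize));
  }
  await Promise.all(chunks.map((chunk) => repo.save(chunk)));
}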
Don't assume, assert.
While you are experiencing bad performance, check how the operating system's resources are doing; in this case, mostly CPU and disk. If one of them is maxed out, you know what is going on, and you either have to reduce the degree of parallelism or make the system stronger.
It is also interesting to look at wait events in PostgreSQL:
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY wait_event_type, wait_event;
That will show I/O related events if you are running out of disk bandwidth, but it will also show database-internal contention that you can potentially hit with very high degrees of parallelism.

Slow bulk insert to Azure database

We are running an elastic pool in Azure with multiple databases. When running one of our larger imports, it seems to take longer than we are used to. During these imports we ran at 6 cores as a test; all databases are allowed to use all cores.
In our local environment it inserts about 100k records per second; the same dataset on Azure does about 1k per second (from our VM) to 4k per second (from a dev laptop).
During this insert, the database only uses 14% Log IO, 5% CPU, and 0% Data IO.
When setting up a new database using the DTU model at the P2 tier, we notice the same behaviour, so we are not even hitting the limits of the database.
The table contains about 36 columns which are all required.
We have tried this using BulkInsert in the following way, with different batch sizes:
BulkConfig b = new BulkConfig();
b.BatchSize = 100000;
await dbcontext.BulkInsertAsync(entities, b);
We have also tried standard Entity Framework AddRange with smaller batches, and even went as far as manually written SqlBulkCopy calls, all with no luck.
Now the question is mainly: is this a software issue? Are we running into issues with our Azure DB? Do we need to change the way we do bulk imports?
Edit:
We attempted to run the import using the TempDB setting in BulkInsert, but this also did not increase performance. Log IO is still at 14%.
Iterate through the dataset on the application layer, invoking a stored procedure for each row that will perform an INSERT/UPDATE action based on the existence of a record with a certain key. If the number of records to upsert is limited, this strategy may work well; otherwise, roundtrips and log writes will have a major influence on speed.
To minimise roundtrips and log writes and increase throughput, use bulk insert approaches like the SqlBulkCopy class in ADO.NET to upload the full dataset to Azure SQL Database and then execute all the INSERT/UPDATE (or MERGE) operations in a single batch. Overall execution times may be reduced from hours to minutes/seconds using this method.
Here is a discussion related to the same scenario: Optimize Azure SQL Database Bulk Upsert scenarios - link.

Mongodb response slows down incredibly after 200,000 records

Currently our task is to fetch 1 million records from an external server, process it and save it in the db. We are using node.js for fetching the records and mongodb as the database.
We decided to split the process into two tasks: fetching the records and processing them. We are now able to fetch all the records and dump them into Mongo, but when we try to process them (by processing I mean changing a few attribute values, doing some simple calculations, and updating the attributes), we see MongoDB updates slow down drastically at around 200,000 records.
For processing the data, we take batches of 1,000 records, process them, update the records (individually), and then move on to the next batch. How could the performance be improved?
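The loop described above might look roughly like the following sketch, with the per-record updates grouped into a single bulkWrite per batch rather than one round trip per document (the database, collection, and field names are hypothetical):

import { AnyBulkWriteOperation, Document, MongoClient } from 'mongodb';

async function processInBatches(uri: string, batchSize = 1000): Promise<void> {
  const client = await MongoClient.connect(uri);
  const coll = client.db('mydb').collection('records');
  const cursor = coll.find({ processed: { $ne: true } }).batchSize(batchSize);

  let ops: AnyBulkWriteOperation<Document>[] = [];
  for await (const doc of cursor) {
    // "processing": tweak a few attributes and do a simple calculation
    const update = { $set: { total: doc.price * doc.qty, processed: true } };
    ops.push({ updateOne: { filter: { _id: doc._id }, update } });
    if (ops.length >= batchSize) {
      await coll.bulkWrite(ops, { ordered: false }); // one round trip per batch
      ops = [];
    }
  }
  if (ops.length > 0) await coll.bulkWrite(ops, { ordered: false });
  await client.close();
}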
If you want to maintain response speed in MongoDB as your data grows, use sharding and replication on your database and collections.
Replication:
A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments. This section introduces replication in MongoDB as well as the components and architecture of replica sets. The section also provides tutorials for common tasks related to replica sets.
Replication Link
Sharding:
Sharding is the process of storing data records across multiple machines and is MongoDB’s approach to meeting the demands of data growth. As the size of the data increases, a single machine may not be sufficient to store the data nor provide an acceptable read and write throughput. Sharding solves the problem with horizontal scaling. With sharding, you add more machines to support data growth and the demands of read and write operations.
Sharding Link

HIVE/HDFS for realtime storage of sensor data on a massive scale?

I am evaluating sensor data collection systems with the following requirements,
1 million endpoints sending in 100 bytes of data every minute (as a time series).
Basically millions of small writes to the storage.
This data is write-once, so basically it never gets updated.
Access requirements
a. Full data for a user needs to be accessed periodically (less frequent)
b. Partial data for a user needs to be accessed periodically (more frequent). E.g., I need sensor data collected over the last hour/day/week/month for analysis/reporting.
I have started looking at Hive/HDFS as an option. Can someone comment on the applicability of Hive in such a use case? I am concerned that while the distributed storage needs would be met, it seems more suited to data warehousing applications than real-time data collection/storage.
Do HBase/Cassandra make more sense in this scenario?
I think HBase can be a good option for you. In fact, there's already an open-source implementation on HBase that solves a similar problem, which you might want to use. Take a look at OpenTSDB, an open-source project for solving similar problems. Here's a short excerpt from their blurb:
OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on top of HBase. OpenTSDB was written to address a common need: store, index and serve metrics collected from computer systems (network gear, operating systems, applications) at a large scale, and make this data easily accessible and graphable. Thanks to HBase's scalability, OpenTSDB allows you to collect many thousands of metrics from thousands of hosts and applications, at a high rate (every few seconds). OpenTSDB will never delete or downsample data and can easily store billions of data points. As a matter of fact, StumbleUpon uses it to keep track of hundreds of thousands of time series and collects over 600 million data points per day in their main production datacenter.
There are actually quite a few people collecting sensor data in a time-series fashion with Cassandra. It's a very good fit. I recommend you read this article on basic time series in Cassandra for an idea of what your data model would be like.
Writes in Cassandra are extremely cheap, so even a moderately sized cluster could easily handle one million writes per minute.
Both of your read queries could be answered very efficiently. For the second type of query, where you're reading data for a slice of time for a single sensor, you would end up reading a contiguous slice from a single row; this should take about 10ms for a completely cold read. For the first type of query, you would simply be running several of the per-sensor queries in parallel. Assuming you store a basic map of users to sensor IDs, you would look up all of the sensor IDs for a user with one query, and then your second query would fetch the data for all of those sensors (although you might break up this query if the number of sensors is high).
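As an example of what that per-sensor slice query might look like from Node.js with the DataStax driver (the keyspace, table, and column names are hypothetical, assuming a table partitioned by sensor_id and clustered by event_time):

import { Client } from 'cassandra-driver';

const client = new Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'sensors',
});

// Reads one contiguous slice of a single partition, which is the cheap,
// single-node read described above.
async function readSlice(sensorId: string, from: Date, to: Date) {
  const query =
    'SELECT event_time, value FROM readings ' +
    'WHERE sensor_id = ? AND event_time >= ? AND event_time < ?';
  const result = await client.execute(query, [sensorId, from, to], { prepare: true });
  return result.rows;
}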
Hive and HDFS don't really make sense when you're talking about real-time queries, as they're more suited for long-running batch jobs.

Multi-threaded database read access

I have a process (C++ code) that reads from and writes to a database (Oracle), but it takes a long time for the process to finish.
I was thinking of creating partitions in the tables that this process queries, and then making the process multi-threaded so that each thread (one per partition) can read/write the data in parallel.
I will be creating a DB connection per thread.
Will the writes slow it down?
Will this work?
Is there any other way of improving performance (all queries are tuned and optimized already)?
Thanks,
Nikhil
If the current bottleneck is writing the data to the database then creating more threads to write more data may or may not help, depending on how the data is partitioned, and whether or not the writes can occur concurrently, or whether they interfere with each other (either at the database lock level, or at the database disk IO level).
Creating more threads will instead allow the application to process more data, and queue it up for writing to the database, assuming that there is sufficient hardware concurrency (e.g. on a multicore machine) to handle the additional threads.
Partitioning may improve the database performance, as may changing the indexes on the relevant tables. If you can put separate partitions on separate physical disks then that can improve IO when only one partition needs to be accessed by a given SQL statement.
Dropping indexes that aren't needed, changing the order of index columns to match the queries, and even changing the index type can also improve performance.
As with everything: profile it before and after every proposed change.
