Node Streams to MySQL - node.js

I have to parse large CSVs (approx. 1 GB), map the headers to the database columns, and format every row, e.g. the CSV has "Gender": "Male" but my database only accepts enum('M', 'F', 'U').
Since the files are so large, I have to use Node streams to transform the file and then use LOAD DATA INFILE to upload it all at once.
I would like granular control over the inserts, which LOAD DATA INFILE doesn't provide: if a single line has incorrect data, the whole upload fails. I am currently using mysqljs, which doesn't provide an API to check whether the pool has reached queueLimit, so I can't pause the stream reliably.
I am wondering whether I can use Apache Kafka or Spark to stream the insert instructions so they are applied to the database sequentially. I have skimmed the docs and read some tutorials, but none of them show how to connect these tools to a database; they are mostly consumer/producer examples.
I know there are multiple ways of solving this problem, but I am very interested in a way to seamlessly integrate streams with databases. If streams can work with I/O, why not databases? I am pretty sure big companies don't use LOAD DATA INFILE or repeatedly append chunks of data to an array and insert them into the database.

Related

AWS Glue for handling incremental data loading of different sources

I am planning to leverage AWS Glue for incremental data processing. On an hourly schedule, a trigger will invoke a Glue Crawler and a Glue ETL job, which load the incremental data into the catalog and process the incremental files through ETL. It looks pretty straightforward, but I ran into a couple of issues.
Let's say we have data being streamed for various tables and various databases to S3 locations, and we want to create databases and tables based on the landing data, e.g.:
s3://landingbucket/database1/table1/YYYYMMDDHH/some_incremental_files.json
s3://landingbucket/database1/table2/YYYYMMDDHH/some_incremental_files.json
s3://landingbucket/database1/somedata/tablex/YYYYMMDDHH/some_incremental_files.json
s3://landingbucket/database2/table1/YYYYMMDDHH/some_incremental_files.json
s3://landingbucket/datasource_external/data/table1/YYYYMMDDHH/some_incremental_files.json
With the data landing in the above S3 structure, we want to create a Glue catalog for these databases and tables with a limited number of crawlers. Right now we have as many crawlers as databases.
Note: We have a crawler for database1 and it creates tables under database1, which is good and as expected. But we have an exception, "somedata", in database1, whose structure doesn't follow the standard of the other tables; for it the crawler created a table somedata with partitions "partition_0=tablex" and "partition_1=YYYYMMDDHH". Is there a better way to handle these with fewer crawlers than one per database?
With Glue ETL we have a similar challenge: we want to convert the incoming data to standard Parquet format and have one bucket per database, with its tables sitting under it. Since the data is huge, we don't want a single table partitioned by database and table, so that the incoming load doesn't run into S3 slowdown issues. Many teams will be querying this data, and we don't want S3 slowdown issues hitting their analytics jobs either.
Instead of having one ETL job per table per database, is there a way to handle this with a limited number of jobs? As new tables arrive, the ETL job should transform the JSON data into the formatted zone, so both the input data and the output path can be handled dynamically instead of being hardcoded.
Open for any better idea!
Thanks,
Krish!
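
One common pattern is a single parameterized job that derives the database and table from the landing prefix and writes Parquet into a per-database bucket. A rough PySpark sketch, not a complete Glue job; the bucket names, arguments, and partition column are assumptions:

import sys
from pyspark.sql import SparkSession, functions as F

# Hypothetical invocation: spark-submit json_to_parquet.py database1 table1 YYYYMMDDHH
# In a real Glue job these would typically arrive as job arguments
# (e.g. via awsglue.utils.getResolvedOptions) rather than sys.argv.
database, table, batch = sys.argv[1:4]

input_path = f"s3://landingbucket/{database}/{table}/{batch}/"   # landing layout from the question
output_path = f"s3://formatted-{database}/{table}/"              # assumed per-database bucket naming

spark = SparkSession.builder.appName(f"{database}.{table}").getOrCreate()

df = spark.read.json(input_path)

# Append each hourly batch as a new partition so the job never rewrites old data.
(df.withColumn("batch", F.lit(batch))
   .write
   .mode("append")
   .partitionBy("batch")
   .parquet(output_path))

The same script can then be driven by whatever discovers new prefixes each hour (an S3 event, a small scheduler, or the crawler output), so new tables don't require new job code.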

Question about using parquet for time-series data

I'm exploring ways to store a high volume of data from sensors (time series data), in a way that's scalable and cost-effective.
Currently, I'm writing a CSV file for each sensor, partitioned by date, so my filesystem hierarchy looks like this:
client_id/sensor_id/year/month/day.csv
My goal is to be able to perform SQL queries on this data (typically fetching time ranges for a specific client/sensor, performing aggregations, etc.). I've tried loading it into Postgres and TimescaleDB, but the volume is just too large and the queries are unreasonably slow.
I am now experimenting with using Spark and Parquet files to perform these queries, but I have some questions I haven't been able to answer from my research on this topic, namely:
I am converting this data to parquet files, so I now have something like this:
client_id/sensor_id/year/month/day.parquet
But my concern is that when Spark loads the top folder containing the many Parquet files, the row-group metadata is not as optimized as it would be with a single Parquet dataset containing all the data, partitioned by client/sensor/year/month/day. Is this true? Or is having many Parquet files equivalent to having a single partitioned Parquet dataset? I know that internally a partitioned Parquet dataset is stored in a folder hierarchy like the one I am using, but I'm not clear on how that affects the metadata.
The reason I am not able to do this is that I am continuously receiving new data, and from my understanding I cannot append to a Parquet file because of how the footer metadata works. Is this correct? Right now, I simply convert the previous day's data to Parquet and create a new file for each sensor of each client.
Thank you.
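
A minimal PySpark sketch of the single partitioned dataset being asked about; the column names (client_id, sensor_id, ts) and paths are assumptions. The key point is that appends happen at the dataset (folder) level by adding new files under the partition directories, so the immutability of any individual Parquet file's footer is not a problem:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

# Previous day's CSVs for all clients/sensors (assumed path and header layout).
df = (spark.read
      .option("header", True)
      .option("inferSchema", True)
      .csv("s3://my-bucket/landing/2020-01-01/*.csv"))

# Derive partition columns from the timestamp column (assumed to be named "ts").
df = (df.withColumn("year", F.year("ts"))
        .withColumn("month", F.month("ts"))
        .withColumn("day", F.dayofmonth("ts")))

# "append" only adds new files under the existing partition directories;
# it never rewrites files that are already there.
(df.write
   .mode("append")
   .partitionBy("client_id", "sensor_id", "year", "month", "day")
   .parquet("s3://my-bucket/timeseries.parquet"))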
You can use Structured Streaming with Kafka (as you are already using it) for real-time processing of your data and store the data in Parquet format. And yes, you can append data to Parquet files. Use SaveMode.Append for that, such as:
df.write.mode('append').parquet(path)
You can even partition your data on an hourly basis, e.g. client/sensor/year/month/day/hour, which will give you a further performance improvement when querying.
You can create the hour partition based on system time or on a timestamp column, depending on the type of query you want to run on your data.
You can use watermarking to handle late records if you choose to partition based on the timestamp column.
Hope this helps!
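
A minimal Structured Streaming sketch of the approach described in this answer, assuming a hypothetical Kafka topic sensor-events whose JSON payload carries client_id, sensor_id, value, and an event timestamp ts (all names, servers, and paths are assumptions):

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-stream").getOrCreate()

schema = StructType([
    StructField("client_id", StringType()),
    StructField("sensor_id", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "sensor-events")
       .load())

events = (raw
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*")
          .withWatermark("ts", "1 hour")   # watermark as mentioned above; it only takes effect
                                           # if you add stateful operations such as aggregations
          .withColumn("year", F.year("ts"))
          .withColumn("month", F.month("ts"))
          .withColumn("day", F.dayofmonth("ts"))
          .withColumn("hour", F.hour("ts")))

# Each micro-batch appends new Parquet files under the partition directories.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3://my-bucket/timeseries.parquet")
         .option("checkpointLocation", "s3://my-bucket/checkpoints")
         .partitionBy("client_id", "sensor_id", "year", "month", "day", "hour")
         .outputMode("append")
         .start())

query.awaitTermination()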
I can share my experience and the technology stack being used at AppsFlyer.
We have a lot of data, about 70 billion events per day.
Our time-series data for near-real-time analytics is stored in Druid and ClickHouse. ClickHouse is used to hold real-time data for the last two days; Druid (0.9) wasn't able to manage it. Druid holds the rest of our data, which is populated daily via Hadoop.
Druid is the right candidate if you don't need row-level data but pre-aggregated data, on a daily or hourly basis.
I would suggest giving ClickHouse a chance; it lacks documentation and examples, but it is robust and fast.
Also, you might take a look at Apache Hudi.

What would be the proper way to tune Apache Spark for responsive web applications?

I have previously used Apache Spark for streaming applications where it does a wonderful job for ETL pipelines and predictions using Machine Learning.
However, Spark for EDA may not be as fast as one may want. For example, if you would like to do basic mathematical operations on data coming from Postgres or Elasticsearch using data frames in Spark, the time it takes to fetch the data from the host system and do the analysis is much higher than the time the SQL query takes to run on Postgres itself.
Even simple aggregations such as sum, average, and count can be done much faster using SQL than doing them on top of Spark-SQL.
From what I understand, this is not because of latency in fetching the data from the host system. If you call the show method on a data frame, you quickly get the top rows of the data set. However, if you limit the response in SQL and then call collect, the time taken is huge.
This means that the data is there, but the processing done when calling collect is what takes the time.
Regardless of the data source (CSV file, JSON file, ElasticSearch, Parquet, etc.), the behavior remains the same.
What is the reason for this latency on collect and is there any way to reduce it to the point where it can work with responsive applications to make real-time or near real-time queries?
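
For reference, the comparison the question describes can be reproduced with a few lines of PySpark; the JDBC URL, table, and credentials below are placeholders, and the Postgres JDBC driver must be on the classpath:

import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("latency-check").getOrCreate()

# Hypothetical Postgres source.
df = (spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://localhost:5432/mydb")
      .option("dbtable", "public.events")
      .option("user", "app")
      .option("password", "secret")
      .load())

start = time.time()
df.show(5)                     # typically returns quickly with just the first rows
print("show:", time.time() - start)

start = time.time()
rows = df.limit(5).collect()   # the pattern reported as slow: a job is run and the results
                               # are serialized and shipped back to the driver
print("collect:", time.time() - start)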

options for slow cassandra write

I'm ingesting very large files into Cassandra 2.0, and I'm noticing that my ingest rate into Cassandra is about 3x slower than the rate at which I'm getting new files to ingest. Given that, and trying to avoid memory problems, what are my options for keeping up with ingest?
I was initially thinking that I could have multiple clients writing, possibly each to a different "seed" node in the cluster. If I am careful not to access the same file twice, will that cause problems with node I/O? What is the best way to go about doing this? Based on Google searches I have seen that things like batched driver statements can help, but I'm reading in CSV files which need to be cleaned first...
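
One common pattern is to clean each row in the client and issue many concurrent asynchronous writes with a prepared statement, rather than large batches. A minimal sketch with the Python cassandra-driver, where the contact points, keyspace, table, and CSV layout are all assumptions:

import csv
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

# Hypothetical contact points and keyspace; the driver's load balancing spreads
# requests across the cluster, so you don't need one client per "seed" node.
cluster = Cluster(["10.0.0.1", "10.0.0.2"])
session = cluster.connect("my_keyspace")

insert = session.prepare("INSERT INTO readings (id, name, value) VALUES (?, ?, ?)")

def clean(row):
    # Placeholder for whatever per-row cleaning the CSV needs.
    return (row["id"], row["name"].strip(), float(row["value"]))

with open("data.csv", newline="") as f:
    params = (clean(r) for r in csv.DictReader(f))
    # Keeps up to 100 requests in flight at a time, which bounds client memory use.
    execute_concurrent_with_args(session, insert, params,
                                 concurrency=100, raise_on_first_error=False)

cluster.shutdown()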

Cannot find efficient method for dumping leveldb to flat file(s)

I'm using LevelDB as part of a local process that, when all is done, has ~10-100 million JSON entries.
I need to get these into a portable format, ideally as one or more csv (or even line delimited json) files to import into a separate mongodb system.
I did a quick test in node.js to stream the db contents to a file (using node-levelup), and on my machine it took about 18.5 minutes for 10 million pairs. That seems pretty slow.
Looking for suggestions on quicker dump/export from leveldb.
I've considered using mongodb as the store for the local processing, because mongoexport is much quicker, but there is a lot more overhead in setup as I'd need several shards to get more speed on my writes.
The fastest way to retrieve all entries in LevelDB is to use its iterator, which node-levelup probably already does for you.
Since you will still need a tool to parse the exported file, I suggest you just copy LevelDB's data dir and treat that as the export. You can open it and iterate over it in Python/Ruby/..., with any script that has a LevelDB wrapper.
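
A minimal sketch of that suggestion in Python using the plyvel wrapper, assuming the values are JSON-encoded bytes and the copied data dir sits at ./leveldb-copy (both assumptions):

import json
import plyvel

# Open the copied data dir, not the live one, since LevelDB locks the db it opens.
db = plyvel.DB("./leveldb-copy")

# Stream every key/value pair out as line-delimited JSON for the mongodb import.
with open("dump.ndjson", "w") as out:
    for key, value in db:                    # uses LevelDB's iterator, in key order
        record = json.loads(value)           # values assumed to be JSON bytes
        record["_key"] = key.decode("utf-8")
        out.write(json.dumps(record) + "\n")

db.close()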
