How to upload to a few tables in BigQuery using gcloud - Node.js

Is there a way to upload different data to a few tables in a single load job to BigQuery, using the Node.js gcloud library or the bq command line?

No. It's one load job per table.
https://cloud.google.com/bigquery/loading-data
If you're feeling adventurous, you could write a Dataflow pipeline which reads from multiple sources and writes to multiple sinks in BigQuery.
https://cloud.google.com/dataflow/model/bigquery-io
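For illustration, here is a minimal sketch of the "one load job per table" pattern using the Python BigQuery client (the Node.js @google-cloud/bigquery client exposes an analogous table.load() call); the project, dataset, table and GCS paths below are placeholders:

    # One load job per destination table; two tables means two jobs.
    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    )

    jobs = [
        client.load_table_from_uri(
            "gs://my-bucket/users.csv",        # placeholder source
            "my_project.my_dataset.users",     # placeholder destination
            job_config=job_config,
        ),
        client.load_table_from_uri(
            "gs://my-bucket/orders.csv",
            "my_project.my_dataset.orders",
            job_config=job_config,
        ),
    ]
    for job in jobs:
        job.result()  # block until each load job finishes

You can submit the jobs back to back, as above, but BigQuery still treats them as independent load jobs.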

Related

PySpark jobs on Dataproc using documents from Firestore

I need to run some simple PySpark jobs on Big Data stored in Google's Firestore.
The dataset contains 42 million documents describing Instagram posts. I want to do some simple aggregations, like summing the number of likes per country (location).
However, I am new to Big Data processing and I have no idea how to import the data into the Dataproc cluster to do the processing.
Should I export all the data into a GCS bucket and then load it onto the VMs in the cluster?
Or should I connect the VMs to Firebase when I need to do the processing?
Also, since Spark distributes the data into RDDs, is it possible to split (parallelize) the data directly from Firebase so that each worker gets a chunk, without having to load all 42M documents and then do the split?
The data should be around 15 GB, so I must consider which is the cheaper option as well.
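A minimal sketch of the export-to-GCS route mentioned above, assuming the Firestore documents have already been exported or converted to newline-delimited JSON files in a bucket (the bucket path and the likes/country field names are assumptions). Dataproc reads gs:// paths through its built-in GCS connector, so each Spark worker reads its own split of the files in parallel rather than copying everything onto the VMs first:

    # PySpark job for Dataproc: read exported JSON from GCS, sum likes per country.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("likes-per-country").getOrCreate()

    # Assumed layout: newline-delimited JSON files with `likes` and `country` fields.
    posts = spark.read.json("gs://my-export-bucket/instagram_posts/*.json")

    (posts
        .groupBy("country")
        .agg(F.sum("likes").alias("total_likes"))
        .write.csv("gs://my-export-bucket/results/likes_per_country", header=True))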

Is there a better approach to sync data from BigQuery to SingleStore through pipelines?

I have data in a BigQuery table and want to sync it to a SingleStore table. I can see the SingleStore pipeline documentation here: https://docs.singlestore.com/db/v7.8/en/reference/sql-reference/pipelines-commands/create-pipeline.html. It has options to load data from GCS, so it seems to expect files in Google Cloud Storage. I am new to SingleStore; can somebody suggest a better approach? Should I use pipelines or not? I have created a query stream from BigQuery and now want to insert the data into the SingleStore DB in Node.js. Can we use a write stream to SingleStore? Can we use a pipeline to insert records via the above stream from BQ?
The most efficient way to perform batch data movement from BigQuery to SingleStoreDB would be to perform exports of the data to GCS and use Pipelines to pull the data into SingleStoreDB. Pipelines are optimized for loading data into SingleStoreDB in parallel. If you export the data in Avro format, it will be even more efficient on both sides. It will likely be less complex and more efficient than trying to build the same workflow in Node.js.
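A hedged sketch of the export half of that workflow, using the Python BigQuery client to extract the table to GCS as Avro (project, dataset, table and bucket names are placeholders); the SingleStoreDB side would then ingest those files with a GCS pipeline as described in the CREATE PIPELINE docs linked above:

    # Export a BigQuery table to GCS in Avro format with a single extract job.
    from google.cloud import bigquery

    client = bigquery.Client()
    extract_job = client.extract_table(
        "my_project.my_dataset.my_table",                # placeholder source table
        "gs://my-transfer-bucket/my_table/part-*.avro",  # placeholder destination
        job_config=bigquery.ExtractJobConfig(
            destination_format=bigquery.DestinationFormat.AVRO,
        ),
    )
    extract_job.result()  # wait for the export to complete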

Custom processing of multiple JSON event streams with Spark/Databricks

I have multiple (hundreds of) event streams, each persisted as multiple blobs in Azure Blob Storage and each encoded as multi-line JSON, and I need to perform an analysis on these streams.
For the analysis I need to "replay" them, which is basically a giant reduce operation per stream using a big custom function that is not commutative. Since other departments are using Databricks, I thought I could parallelize the tasks with it.
My main question: is Spark/Databricks a suitable tool for the job, and if so, how would you approach it?
I am completely new to Spark. I am currently reading up on it using the "Complete Guide" and "Learning Spark, 2nd ed.", but I have trouble answering that question myself.
As far as I can see, most of the Dataset / Spark SQL API is not suitable for this task. Can I just inject custom code into a Spark application that does not use these APIs, and how do I control how the tasks get distributed afterwards?
Can I read in all blob names, partition them by stream, and then generate tasks that read all blobs in a partition and just feed them into my function, without Spark trying to be clever in the background?
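A rough sketch of the "list blob names, group them by stream, then replay each stream with the custom function" idea described in the question, using plain RDDs so Spark does not shuffle or reorder events within a stream. The listing/reading/replay functions are toy placeholders standing in for real Azure Blob Storage access and the real reduce:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("replay-streams").getOrCreate()
    sc = spark.sparkContext

    # Placeholder stand-ins: in reality these would list blobs per stream,
    # download each blob, and parse its multi-line JSON events.
    def list_blob_names():
        return [("stream-a", "stream-a/part-0001.json"),
                ("stream-a", "stream-a/part-0002.json"),
                ("stream-b", "stream-b/part-0001.json")]

    def read_blob(blob_name):
        return [{"blob": blob_name, "value": 1}]   # toy events

    def replay(events):
        state = 0
        for e in events:                           # order matters: this reduce
            state = state * 2 + e["value"]         # is not commutative
        return state

    pairs = sc.parallelize(list_blob_names())      # (stream_id, blob_name)
    per_stream = pairs.groupByKey()                # one record per stream
    results = per_stream.map(lambda kv: (
        kv[0],
        replay(e for name in sorted(kv[1]) for e in read_blob(name)),
    ))
    print(results.collect())

Each stream is replayed inside a single task, so Spark only parallelizes across streams, which matches the requirement that the per-stream reduce sees its events in order.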

Read a Databricks table via the Databricks API in Python?

Using Python 3, I am trying to compare an Excel (xlsx) sheet to an identical Spark table in Databricks. I want to avoid doing the comparison in Databricks, so I am looking for a way to read the Spark table via the Databricks API. Is this possible? How can I go about reading a table such as DB.TableName?
There is no way to read the table from the Databricks API as far as I am aware, unless you run it as a job, as LaTreb already mentioned. However, if you really wanted to, you could use either the ODBC or JDBC drivers to get the data through your Databricks cluster.
Information on how to set this up can be found here.
Once you have the DSN set up, you can use pyodbc to connect to Databricks and run a query. At this time the ODBC driver will only allow you to run Spark SQL commands.
All that being said, it will probably still be easier to just load the data into Databricks, unless you have some sort of security concern.
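A minimal sketch of that DSN + pyodbc route ("Databricks" is a hypothetical DSN name, DB.TableName is the table from the question, and local_copy.xlsx is a placeholder for the Excel file):

    import pyodbc
    import pandas as pd

    # Query the Databricks table over ODBC into a local pandas DataFrame.
    conn = pyodbc.connect("DSN=Databricks", autocommit=True)
    table_df = pd.read_sql("SELECT * FROM DB.TableName", conn)

    # Compare against the local Excel sheet.
    xlsx_df = pd.read_excel("local_copy.xlsx")
    print(table_df.equals(xlsx_df))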
I can recommend writing PySpark code in a notebook, calling the notebook from a previously defined job, and establishing a connection between your local machine and the Databricks workspace.
You could perform the comparison directly in Spark, or convert the data frames to pandas if you wish. When the notebook finishes the comparison, it can return the result from that particular job. Returning whole Databricks tables this way is probably not feasible because of API limits: you have the Spark cluster to perform the complex operations, and the API should only be used to send small messages.
Official documentation:
https://learn.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/jobs#--runs-get-output
Retrieve the output and metadata of a run. When a notebook task returns a value through the dbutils.notebook.exit() call, you can use this endpoint to retrieve that value. Azure Databricks restricts this API to return the first 5 MB of the output. For returning a larger result, you can store job results in a cloud storage service.
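For example, a hedged sketch of calling that endpoint with requests; the workspace URL, token and run_id are placeholders:

    import requests

    host = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace URL
    token = "dapiXXXXXXXXXXXXXXXX"                                # placeholder access token
    run_id = 42                                                   # placeholder run id

    resp = requests.get(
        f"{host}/api/2.0/jobs/runs/get-output",
        headers={"Authorization": f"Bearer {token}"},
        params={"run_id": run_id},
    )
    resp.raise_for_status()

    # The value passed to dbutils.notebook.exit(...) in the notebook appears here,
    # truncated to the first 5 MB as noted above.
    print(resp.json().get("notebook_output", {}).get("result"))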

AWS Data Lake Ingest

Do you need to ingest Excel and other proprietary formats using Glue, or can you let Glue crawl your S3 bucket in order to use these data formats within your data lake?
I have gone through the "Data Lake Foundation on the AWS Cloud" document and am left scratching my head about getting data into the lake. I have a data provider with a large set of data stored on their system as Excel and Access files.
Based on the process flow, they would upload the data into the submission S3 bucket, which would set off a series of actions, but there is no ETL of the data into a format that would work with the other tools.
Would using these files require running Glue on the data that is submitted to the bucket, or is there another way to make this data available to other tools such as Athena and Redshift Spectrum?
Thank you for any light you can shed on this topic.
-Guido
I don't see anything that can take Excel data directly into the data lake. You might need to convert it into CSV/TSV/JSON or another supported format before loading.
Formats Supported by Redshift Spectrum:
http://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-data-files.html -- again, I don't see Excel as of now.
Athena Supported File Formats:
http://docs.aws.amazon.com/athena/latest/ug/supported-formats.html -- Excel is not supported here either.
You need to upload the files to S3 in order to use Athena or Redshift Spectrum, or even Redshift storage itself.
Uploading Files to S3:
If you have bigger files, you should use S3 multipart upload so they upload quicker. If you want even more speed, you can use S3 Transfer Acceleration to upload your files.
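A hedged sketch of the convert-then-upload step with pandas and boto3 (file, bucket and key names are placeholders; boto3's upload_file switches to multipart uploads automatically for large files):

    import boto3
    import pandas as pd

    # Convert the provider's Excel sheet to CSV.
    df = pd.read_excel("provider_data.xlsx", sheet_name=0)   # placeholder file
    df.to_csv("provider_data.csv", index=False)

    # Upload the converted file to the submission bucket.
    s3 = boto3.client("s3")
    s3.upload_file("provider_data.csv",
                   "my-submission-bucket",                   # placeholder bucket
                   "converted/provider_data.csv")            # placeholder key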
Querying Big Data with Athena:
You can create external tables in Athena over S3 locations. Once you create the external tables, use the Athena SQL reference to query your data.
http://docs.aws.amazon.com/athena/latest/ug/language-reference.html
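For instance, a hedged sketch of running a query against an existing external table with boto3 (database, table, query and result bucket are placeholders):

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    qid = athena.start_query_execution(
        QueryString="SELECT category, count(*) AS n FROM datalake_db.submissions GROUP BY category",
        QueryExecutionContext={"Database": "datalake_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
    )["QueryExecutionId"]

    # Poll until the query finishes, then print the first page of results.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])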
Querying Big Data with Redshift Spectrum:
Similar to Athena, you can create external tables with Redshift Spectrum. Start querying those tables and get the results in Redshift.
Redshift works with a lot of commercial tools; I use SQL Workbench. It is free, open source, and rock solid, and connecting it to Redshift is documented by AWS.
SQL WorkBench: http://www.sql-workbench.net/
Connecting your WorkBench to Redshift: http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-using-workbench.html
Copying data to Redshift:
Also, if you want to move the data into Redshift storage, you can use the COPY command to pull the data from S3, and it gets loaded into Redshift.
Copy Command Examples:
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html
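A hedged sketch of issuing that COPY from Python with psycopg2 (cluster endpoint, credentials, table, bucket and IAM role are all placeholders):

    import psycopg2

    conn = psycopg2.connect(
        host="my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
        port=5439,
        dbname="dev",
        user="awsuser",
        password="********",
    )
    conn.autocommit = True

    copy_sql = """
        COPY public.submissions
        FROM 's3://my-datalake-bucket/converted/provider_data.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
        FORMAT AS CSV
        IGNOREHEADER 1;
    """

    with conn.cursor() as cur:
        cur.execute(copy_sql)

    conn.close()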
Redshift Cluster Size and Number of Nodes:
Before creating a Redshift cluster, check the required size and number of nodes. More nodes let queries run in parallel. Another important factor is how well your data is distributed (distribution key and sort keys).
I have very good experience with Redshift; getting up to speed might take some time.
Hope it helps.
