Load an SQLite database stored in S3 - node.js

I am wondering if I can load a sqlite3 database stored in an S3 bucket. I would rather not download the file, but the most straightforward solution I can think of is to download it to a /temp folder and then load the database from there. Is this the best approach?
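
For what it's worth, here is a minimal sketch of that download-then-open approach, written in Python with boto3 and the standard sqlite3 module purely for illustration (bucket name, key, and query are made up); the same flow applies in node.js with the AWS SDK and a sqlite3 driver, since SQLite needs a real file on local disk either way:

import sqlite3
import tempfile
import boto3

s3 = boto3.client("s3")

# SQLite wants a file on disk, so stream the object into a temporary file first.
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp:
    s3.download_fileobj("my-bucket", "path/to/database.db", tmp)
    local_path = tmp.name

# Open the downloaded copy like any local SQLite database.
conn = sqlite3.connect(local_path)
for row in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(row)
conn.close()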

Related

Loading SharePoint files into BigQuery

I want to create a BigQuery table out of some files that are present in our organization's secured SharePoint space.
The files will be added on a weekly basis, and I need to set up a pipeline that I can use to ingest the data into BigQuery. I have been following a manual process of downloading the files and uploading them to a GCS bucket, but that no longer seems feasible.
Any help will be appreciated.
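
Not a full pipeline, but for the GCS-to-BigQuery step of that manual process, a minimal Python sketch might look like this (project, dataset, table, and bucket names are placeholders, and the files are assumed to be CSV):

from google.cloud import bigquery

client = bigquery.Client()

# Describe the load: CSV with a header row, schema inferred by BigQuery.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/weekly/2024-01-01.csv",   # file already uploaded to GCS
    "my-project.my_dataset.my_table",         # destination table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish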

Append files to existing S3 bucket folder via Spark

I am working in Spark, where we need to write data to an S3 bucket after performing some transformations. I know that writing data to HDFS/S3 via Spark throws an exception if the folder path already exists. So in our case, if s3://bucket_name/folder already exists while writing data to the same S3 bucket path, an exception will be thrown.
Now the possible solution is to use mode OVERWRITE while writing through Spark, but that would delete all the files already present in the folder. I want a kind of APPEND functionality for the same folder: if the folder already has some files, the write should just add more files to it.
I am not sure if the API gives any such functionality out of the box. Of course, there is an option where I can create a temporary folder inside the folder, save the file there, and then move it to the parent folder and delete the temporary folder. But this kind of approach is not ideal.
So please suggest how to proceed with this.
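
For reference, a minimal PySpark sketch of the save modes in question (the path is the placeholder from the question, and the tiny DataFrame is only for illustration); whether mode("append") fits the requirement depends on the setup, but it keeps the existing files and only adds new ones next to them:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

path = "s3://bucket_name/folder"  # placeholder path from the question

# Default mode ("errorifexists"): throws an exception if the path already exists.
# df.write.parquet(path)

# "overwrite": deletes everything already under the path before writing.
# df.write.mode("overwrite").parquet(path)

# "append": leaves existing files in place and adds the new files alongside them.
df.write.mode("append").parquet(path)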

Treating MongoDB Database as a SQL Database File (.mdf & .db Files)

I'm new to MongoDB. I'm trying to create a Node.js API that uses MongoDB and deploy it using Firebase Functions. In SQL there was a .mdf or .db file holding the database. My question is: is there a file like that in MongoDB that I can grab and deploy with my API? Or how can I deploy the database I've been working on locally on my PC, along with my Node.js API, to the server I'm hosting on, which in this context is Firebase Functions?
Thank you in advance, and sorry if my question isn't very clear.
There isn't a single file; there is a directory of them. You'll want to copy all of the files in the dbpath, except diagnostic.data and any log files.
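
If it helps, a minimal Python sketch of that copy (both paths are hypothetical); mongod should be stopped, or the files taken from a consistent snapshot, before copying the data files this way:

import shutil

# Copy the dbpath, skipping the diagnostic.data directory and any log files.
shutil.copytree(
    "/var/lib/mongodb",        # hypothetical dbpath
    "/backup/mongodb-data",    # destination directory
    ignore=shutil.ignore_patterns("diagnostic.data", "*.log"),
)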

How to download all the files from an S3 bucket irrespective of file key using Python

I am working on an automation piece where I need to download all files from a folder inside an S3 bucket, irrespective of the file name. I understand that using boto3 in Python I can download a single file like this:
s3BucketObj = boto3.client('s3', region_name=awsRegion, aws_access_key_id=s3AccessKey, aws_secret_access_key=s3SecretKey)
s3BucketObj.download_file(bucketName, "abc.json", "/tmp/abc.json")
but I was then trying to download all files, irrespective of the file names, in this way:
s3BucketObj.download_file(bucketName, "test/*.json", "/test/")
I know the syntax above could be totally wrong, but is there a simple way to do this?
I did find a thread which helps here, but it seems a bit complex: Boto3 to download all files from a S3 Bucket
There is no API call to Amazon S3 that can download multiple files.
The easiest way is to use the AWS Command-Line Interface (CLI), which has aws s3 cp --recursive and aws s3 sync commands. It will do everything for you.
If you choose to program it yourself, then Boto3 to download all files from a S3 Bucket is a good way to do it. This is because you need to do several things:
Loop through every object (there is no S3 API to copy multiple files)
Create a local directory if it doesn't exist
Download the object to the appropriate local directory
The task can be made simpler if you do not wish to reproduce the directory structure (e.g. if all objects are in the same path). In that case, you can simply loop through the objects and download each of them to the same directory.
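
For the do-it-yourself route, a minimal boto3 sketch of those steps (bucket name, prefix, and local directory are placeholders); it uses a paginator so prefixes with more than 1,000 objects are handled as well:

import os
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"
prefix = "test/"
local_root = "/tmp/s3-download"

# Loop through every object under the prefix (the listing is paginated).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):   # skip zero-byte "folder" placeholder objects
            continue
        local_path = os.path.join(local_root, key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)   # create the local directory
        s3.download_file(bucket, key, local_path)                  # download the object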

How to load a local file into Azure SQL DB

I have not been able to find a solution to this, so I will ask the experts.
A co-worker has a .txt file on his laptop that we want to load into Azure SQL DB using SSMS and BULK INSERT. We can open the local file easily enough, but we don't know how to reference it in the FROM clause.
Assuming a file named myData.txt is saved to
c:\Users\Someone
how do we tell Azure SQL DB where that file is?
You don't. :) You have to upload the file to Azure Blob Storage and then, from there, you can use BULK INSERT or OPENROWSET to open it.
https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-2017
I've written an article that describes the steps to open a JSON file here:
https://medium.com/#mauridb/work-with-json-files-with-azure-sql-8946f066ddd4
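
As a rough sketch of what that looks like once the file is in Blob Storage (server, credentials, table, and data source names are all placeholders, and an external data source of TYPE = BLOB_STORAGE pointing at the container is assumed to already exist); it is wrapped in pyodbc here only to keep the example self-contained, and the same T-SQL can be run directly from SSMS:

import pyodbc

# Placeholder connection details for the Azure SQL database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;UID=myuser;PWD=mypassword"
)

# BULK INSERT reads 'myData.txt' from the blob container behind the
# 'MyBlobStorage' external data source, not from a local disk.
conn.execute("""
    BULK INSERT dbo.MyTable
    FROM 'myData.txt'
    WITH (DATA_SOURCE = 'MyBlobStorage',
          FIELDTERMINATOR = ',',
          ROWTERMINATOR = '0x0a');
""")
conn.commit()
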
I fixed this problem by uploading the file to a local database and then using a linked server to my Azure DB to insert or update the records. Much easier than setting up Blob Storage. However, if the file is very big or you have a lot of files to upload, you might not want to use my method, as a linked server is not the quickest connection.
