How can I check if a file has finished uploading before moving it with the Google Drive API v3? - node.js

I'm writing a small archiving script (in node.js) to move files on my Google Drive to a predetermined folder if they contain .archive.7z in the filename. The script runs periodically as a cron job, and moving completed files works fine, but files still in the process of being uploaded by my desktop client get moved before they're finished. This terminates the upload and results in corrupted files in the destination folder.
Files still being uploaded from my desktop to Google Drive are returned by the following function anyway:
async function getArchivedFiles (drive) {
  const res = await drive.files.list({
    q: "name contains '.archive.7z'",
    fields: 'files(id, name, parents)',
  })
  return res.data.files
}
Once the files are moved and renamed with the following code, the upload terminates from my client (Insync) and the destination files are ruined.
drive.files.update({
  fileId: file.id,
  addParents: folderId,
  removeParents: previousParents,
  fields: 'id, parents',
  requestBody: {
    name: renameFile(file.name)
  }
})
Is there any way to check if a file is still being uploaded before moving it?

It turns out that a tiny placeholder-type file is being created on uploads. I'm not sure if this is a Google Drive API behaviour or something unique to the Insync desktop client. This file seems to upload separately and thus can be freely renamed once it's complete.
I worked around this problem by including the file's md5 hash in the filename, and updating my script to only move files when the hash in their filename matches the md5Checksum retrieved from the Google Drive API.
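For illustration, a minimal sketch of that check, assuming the hash is embedded in the filename as something like backup.<md5>.archive.7z (the exact naming convention here is my own placeholder):

async function getCompletedArchives (drive) {
  const res = await drive.files.list({
    q: "name contains '.archive.7z'",
    fields: 'files(id, name, parents, md5Checksum)',
  })
  // Keep only files whose embedded hash matches the checksum Drive reports,
  // i.e. files whose content has finished uploading
  return res.data.files.filter(file => {
    const match = file.name.match(/\.([0-9a-f]{32})\.archive\.7z$/i)
    return match && file.md5Checksum &&
      match[1].toLowerCase() === file.md5Checksum.toLowerCase()
  })
}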

Related

Download xlsx file (or any formats that cannot be read by notepad) from Google Storage and store it locally

I currently have a node.js server running where I can grab a csv file stored in a storage bucket and save it to a local file.
However, when I try to do the same thing with an xlsx file, the file seems to get mangled and cannot be read once I save it to a local directory.
Here is my code for getting the file to a stream:
async function getFileFromBucket(fileName) {
  var fileTemp = await storage.bucket(bucketName).file(fileName);
  return await fileTemp.download()
}
and with the data returned from above code, I store it into local directory by doing the following:
fs.promises.writeFile('local_directory', DataFromAboveCode)
It works fine with the .csv file, which I can open, but the .xlsx file ends up corrupted and cannot be opened.
I tried downloading the xlsx file directly from the storage bucket in the Google Cloud Console and it works fine, meaning that something's gone wrong in my downloading / saving process.
Could someone guide me to what I am doing wrong here?
Thank you
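One thing worth checking (my assumption, since the rest of the code looks fine): file.download() in @google-cloud/storage resolves to an array whose first element is a Buffer, so writing that array directly ends up stringifying it, which survives for text like CSV but corrupts binary formats such as .xlsx. A minimal sketch that writes the Buffer itself:

const {Storage} = require('@google-cloud/storage');
const fs = require('fs');

const storage = new Storage();

async function saveFileFromBucket(bucketName, fileName, destPath) {
  // download() resolves to [contents]; destructure to get the Buffer itself
  const [contents] = await storage.bucket(bucketName).file(fileName).download();
  // Writing the raw Buffer keeps binary formats such as .xlsx intact
  await fs.promises.writeFile(destPath, contents);
}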

How to get the file path in AWS Lambda?

I would like to send a file to Google Cloud Platform using their client library, as in this example (Node.js code sample): https://cloud.google.com/storage/docs/uploading-objects
My current code looks like this:
const s3Bucket = 'bucket_name';
const s3Key = 'folder/filename.extension';
const filePath = s3Bucket + "/" + s3Key;
await storage.bucket(s3Bucket).upload(filePath, {
  gzip: true,
  metadata: {
    cacheControl: 'public, max-age=31536000',
  },
});
But when I do this there is an error:
"ENOENT: no such file or directory, stat
'ch.ebu.mcma.google.eu-west-1.ibc.websiteExtract/AudioJobResults/audioGoogle.flac'"
I also tried to send the path I got in the AWS Console (Copy path button), "s3://s3-eu-west-1.amazonaws.com/ch.ebu.mcma.google.eu-west-1.ibc.website/ExtractAudioJobResults/audioGoogle.flac", but that did not work either.
You seem to be trying to copy data from S3 to Google Cloud Storage directly. This is not what your example/tutorial shows. The sample code assumes that you upload a local copy of the data to Google Cloud Storage. S3 is not local storage.
How you could do it (a sketch follows below):
1. Download the data to /tmp in your Lambda function
2. Use the sample code above to upload the data from /tmp
3. (Optionally) Remove the uploaded data from /tmp
A word of caution: The available storage under /tmp is currently limited to 500MB. If you want to upload/copy files larger than that, this won't work. Also beware that the Lambda execution environment might be re-used, so cleaning up after yourself (i.e. step 3) is probably a good idea if you plan to copy lots of files.
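A minimal sketch of those three steps, assuming the AWS SDK v2 (aws-sdk) and the @google-cloud/storage client are available in the Lambda runtime, with bucket and key names as placeholders:

const AWS = require('aws-sdk');
const {Storage} = require('@google-cloud/storage');
const fs = require('fs');
const path = require('path');

const s3 = new AWS.S3();
const storage = new Storage();

async function copyS3ObjectToGcs(s3Bucket, s3Key, gcsBucket) {
  const tmpPath = path.join('/tmp', path.basename(s3Key));

  // 1. Download the object from S3 into /tmp
  const obj = await s3.getObject({ Bucket: s3Bucket, Key: s3Key }).promise();
  await fs.promises.writeFile(tmpPath, obj.Body);

  // 2. Upload the local copy to Google Cloud Storage
  await storage.bucket(gcsBucket).upload(tmpPath, {
    gzip: true,
    metadata: { cacheControl: 'public, max-age=31536000' },
  });

  // 3. Clean up /tmp so re-used execution environments don't fill up
  await fs.promises.unlink(tmpPath);
}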

Sharepoint nodejs upload file

I'm trying to upload a file to the SharePoint shared folder.
I am able to upload a file using spsave, but it's uploading to the Files location. I would like to upload to the Shared location instead of the Files location. I'm doing it this way:
const fileOptions = {
  folder: '/Documents/testing',
  fileName: 'test.txt',
  fileContent: 'hello world'
};
It uploads to the Files location under the testing folder, where I can see the file test.txt. I can't figure out how to upload to the Shared folder instead. Is this possible? I can't find the related path for the shared destination: for Files it's Documents, but what would it be for Shared? Does anyone have knowledge of SharePoint/OneDrive and could help me with this?
I don't think you can upload to OneDrive with spsave. It is for native SharePoint only, as far as I know. If you want to share a file in OneDrive using a REST API, I suggest looking at the Microsoft Graph API.
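As a rough, hypothetical sketch of that Graph route (the token acquisition and the target drive path are assumptions you would need to adapt), a small file can be uploaded with a single PUT to the drive item's content endpoint:

const axios = require('axios');

async function uploadToOneDrive(accessToken, remotePath, fileContent) {
  // Encode each path segment; PUT /me/drive/root:/{path}:/content handles
  // small files in a single request
  const encodedPath = remotePath.split('/').map(encodeURIComponent).join('/');
  const url = 'https://graph.microsoft.com/v1.0/me/drive/root:/' + encodedPath + ':/content';
  const res = await axios.put(url, fileContent, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'text/plain',
    },
  });
  return res.data; // driveItem metadata for the uploaded file
}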

Downloading folders from Google Cloud Storage Bucket with NodeJS

I need to download folders with NodeJS from my bucket in Google Cloud Storage. I read all the documentation and I only found a way to download files, not folders. I need to get/download whole folders so I can provide users with their files for download.
Could someone help me?
As Doug said, Google Cloud Storage would show you the structure of different directories, but there are actually no folders within the buckets.
However, you can perform some workarounds within your code to create that very same folder structure yourself. For the workaround I came up with, you need to use a library such as shelljs, which allows you to create folders on your system.
Following this GCP tutorial on Cloud Storage, you will find examples of, for instance, how to list or download files from your bucket.
Now, putting all this together, you can get the full path of the file you are going to download, parse it to separate the folders from the actual file, then create the folder structure using the method mkdir from shelljs.
For me, modifying the tutorial's method for downloading files looked something like this:
var shell = require('shelljs');
[...]
async function downloadFile(bucketName, srcFilename, destFilename) {
  // [START storage_download_file]
  // Imports the Google Cloud client library
  const {Storage} = require('@google-cloud/storage');

  // Creates a client
  const storage = new Storage();

  // Find the last separator index
  var index = srcFilename.lastIndexOf('/');
  // Get the folder route as a string using the previous separator
  var str = srcFilename.slice(0, index);
  // Create the folder structure recursively in the current directory
  shell.mkdir('-p', './' + str);
  // Path of the downloaded file
  var destPath = str + '/' + destFilename;

  const options = {
    destination: destPath,
  };

  // Downloads the file
  await storage
    .bucket(bucketName)
    .file(srcFilename)
    .download(options);

  console.log(
    `gs://${bucketName}/${srcFilename} downloaded to ${destPath}.`
  );
  // [END storage_download_file]
}
You will want to use the getFiles method of Bucket to query for the files you want to download, then download each one of them individually. Read more about how to use the underlying list API. There are no folder operations in Cloud Storage (as there are not actually any folders; there are just file paths that look like they're organized as folders).
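A minimal sketch of that approach (using fs.promises.mkdir rather than shelljs, and treating the "folder" as an object-name prefix):

const {Storage} = require('@google-cloud/storage');
const path = require('path');
const fs = require('fs');

async function downloadFolder(bucketName, prefix, localDir) {
  const storage = new Storage();
  // List every object whose name starts with the prefix (the "folder")
  const [files] = await storage.bucket(bucketName).getFiles({ prefix });
  for (const file of files) {
    const destination = path.join(localDir, file.name);
    // Recreate the directory structure locally before downloading
    await fs.promises.mkdir(path.dirname(destination), { recursive: true });
    await file.download({ destination });
  }
}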

fineuploader server side renaming the file before the put method

Just starting to test FineUploader, and I wonder:
When FineUploader uploads files directly to a blob container on Azure,
I see the files with a GUID name instead of the original.
Is there any option to set the file name and the full save path on the server side?
Yes, you can retrieve the name for any file before it is uploaded from your server via an ajax call and supply it to Fine Uploader Azure by making use of the fact that the blobProperties.name option allows for a promissory return value. For example:
new qq.azure.FineUploader({
  blobProperties: {
    name: function(fileId) {
      return new Promise(function(resolve) {
        // retrieve file name for this file from your server...
        resolve(filenameFromServer)
      })
    }
  },
  // all other options here...
})
The above option will be called by Fine Uploader Azure once per file, just before the first request is sent. This is true of chunked and non-chunked uploads. The value passed into resolve will be used as the new file name for the associated file.
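For instance, the ajax call could be wired up like this ('/blob-name' is a hypothetical route on your own server that returns the desired name as plain text):

new qq.azure.FineUploader({
  blobProperties: {
    name: function (fileId) {
      // Ask the (hypothetical) server endpoint which name to use for this file
      return fetch('/blob-name?fileId=' + fileId)
        .then(function (response) { return response.text() })
    }
  },
  // all other options here...
})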
