How to make all files accessible in a single Google Drive folder? - node.js

I am trying to access media files in a folder that is created by my webapp using the official Google API library for Node.js (https://github.com/googleapis/google-api-nodejs-client/).
On successful OAuth, I create a folder MyAppFolder, and a media folder inside it.
The idea is that users will fill this media folder with whatever photos and videos they want, then my webapp will take them and display them on a page so the user gets an aggregate view of their media. I am able to get all the media files within the media folder. Here is my snippet of code:
const { google } = require('googleapis');
const _ = require('lodash');

async function getGoogleDriveMedia({ user, credentials }) {
  // await prepareDrive({ user, credentials });
  const rootFolder = await getRootFolder({ user, credentials });
  if (!rootFolder) {
    return;
  }
  const client = await createOauthClient({ user, credentials });
  const drive = google.drive({
    version: 'v3',
    auth: client,
  });
  // Find the 'media' folder inside the app's root folder.
  const mediaFolderRes = await drive.files.list({
    q: `mimeType='application/vnd.google-apps.folder' and name='media' and '${rootFolder.id}' in parents`,
    fields: 'nextPageToken, files(id, name, parents)',
  });
  const mediaFolder = _.get(mediaFolderRes, 'data.files[0]');
  if (!mediaFolder) {
    return;
  }
  // List all images and videos inside the media folder.
  const mediaRes = await drive.files.list({
    q: `'${mediaFolder.id}' in parents and (mimeType contains 'image/' or mimeType contains 'video/')`,
    fields: 'nextPageToken, files(id, name, webContentLink, webViewLink)',
  });
  return _.get(mediaRes, 'data.files');
}
The problem now is that I'm not able to display these media because they are not publicly accessible. Is it possible to make the MyAppFolder and everything within it accessible to the public with a single permissions update? Or do I need to do it per file?
I also checked the files fields of the fields parameter in their API Explorer: https://developers.google.com/apis-explorer/#search/drive.files.list/m/drive/v3/drive.files.list
There isn't anything like a .previewLink or some other image URL field.
How can I show these images on my webapp?

Have a look at file.thumbnailLink. It may have enough resolution for your purposes, and it's public. NB: imho, the fact that it's public is a security bug which might get fixed at some point in the future.
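For example, a minimal sketch reusing the drive client and mediaFolder from your code, with thumbnailLink added to the fields projection:
const thumbsRes = await drive.files.list({
  q: `'${mediaFolder.id}' in parents and (mimeType contains 'image/' or mimeType contains 'video/')`,
  // thumbnailLink is a documented files resource field; it points to a
  // short-lived thumbnail image you can embed directly in an <img> tag.
  fields: 'nextPageToken, files(id, name, thumbnailLink, webContentLink, webViewLink)',
});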
NB: there is a bug in your code. You are requesting nextPageToken correctly, but you aren't using it to check whether there are more pages in the files list that you need to fetch.
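A minimal pagination sketch, assuming the same drive client (pageToken is the documented way to request the next page):
async function listAllMedia(drive, folderId) {
  const files = [];
  let pageToken;
  do {
    const res = await drive.files.list({
      q: `'${folderId}' in parents and (mimeType contains 'image/' or mimeType contains 'video/')`,
      fields: 'nextPageToken, files(id, name, thumbnailLink, webContentLink, webViewLink)',
      pageToken, // undefined on the first request
    });
    files.push(...res.data.files);
    pageToken = res.data.nextPageToken;
  } while (pageToken); // nextPageToken is absent on the last page
  return files;
}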

Is it possible to make the MyAppFolder and everything within it accessible to the public with a single permissions update? Or do I need to do it per file?
No, there is no way to update everything in a folder at once. You are going to have to update each file. Even updating the folder will not change the settings for the files within the folder.
Just have your application change the permissions on each file after it's uploaded. In the meantime, run through all the existing files and change their permissions.
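A minimal sketch of the per-file update, assuming the drive client from your question (permissions.create with type 'anyone' is the documented way to make a file world-readable):
async function makeFilePublic(drive, fileId) {
  await drive.permissions.create({
    fileId,
    requestBody: {
      role: 'reader',
      type: 'anyone', // anyone on the internet can read the file
    },
  });
}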
Note: I hope you have informed your users that their files will be made public.

Related

Download file from HTTP Drive API v3 with real file name

I have a custom REST Service in which a user from our platform downloads packages and binaries, but the problem is that the GDrive API downloads the file with the FileID as the file name:
async Download(fileID, res, type = 'stream') {
  var Google = await CloudStorage.Initialize();
  https.get(Google.API.STORAGE.Files.Download(fileID), {
    headers: {
      "Authorization": "Bearer " + Google.AccessToken.token,
      "Content-Type": "text/plain"
    },
    responseType: type
  }, (resApi) => {
    res.writeHead(resApi.statusCode);
    resApi.pipe(res);
  }).end();
}
Example: https://MyUrl.com/beta/plugins/download/ABCDE12345
Where ABCDE12345 is the FileID of the file required by Google Drive API in order to GET the file.
Piping the response from the API straight through indeed makes the downloaded file be named ABCDE12345.
Is there a way to make the download behave the same as doing it directly from the Google Drive link?
When you download the file via the "Download" button on the Google Drive link, it does download the file with the real name... How could I achieve this with my endpoint?
So, I had to use my girlfriend as the "rubber duck" and explain my code to her, and the solution came to my head while explaining: use a fake endpoint for the named files.
My solution was to trick the endpoint by assigning a filename in a path parameter:
router.get('/dl/:id/:filename', async (req, res) => {
  CloudStorage.Download(req.params.id, req.params.filename, res);
});
Now, having the :filename as a parameter allows me to verify it against the GDrive API, since the real name is available for the given ID.
In the end, it is now used as: https://MyUrl.com/beta/plugins/dl/ABCDE12345/Real File Name.ext
I can verify whether Real File Name.ext is the real file name; if it is, I download it, and the endpoint lets me serve the file under this name :)
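For reference, a sketch of that verification using the official googleapis client (the drive client here is a hypothetical stand-in for the question's CloudStorage wrapper):
router.get('/dl/:id/:filename', async (req, res, next) => {
  try {
    const { id, filename } = req.params;
    // Verify the filename in the URL against the real name stored in Drive.
    const meta = await drive.files.get({ fileId: id, fields: 'name' });
    if (meta.data.name !== filename) {
      return res.status(404).send('Unknown file');
    }
    // Content-Disposition tells the browser what to name the download.
    res.setHeader('Content-Disposition', `attachment; filename="${filename}"`);
    const file = await drive.files.get(
      { fileId: id, alt: 'media' },
      { responseType: 'stream' }
    );
    file.data.pipe(res);
  } catch (e) {
    next(e);
  }
});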

nodejs - Getting 401 error trying to download docs revisions with Google Drive API export links

I need to download all the revisions of a Google Doc with the Drive API using Node.js, but I don't understand how to authorize the request for the export links. Once I get the export link for each revision I call:
var options = {
  url: 'https://docs.google.com/feeds/download/documents/export/Export?id=1DRl6rbcVuuLVyb_WlhBLiYiCByWcS2bKGlLIsn7E8_8&revision=1&exportFormat=txt', // example link
  method: 'GET',
  headers: {
    Authorization: `Bearer ${jwToken}`,
  },
};
request(options).pipe(fs.createWriteStream(mydownloadfilename));
where the "jwToken" is the token I use to get the revisions list so I guess it should be still valid. However, with this I get the 401-Unauthorized page. What am I doing wrong?
Thanks
According to the Drive API v3 documentation:
Revisions for Google Docs, Sheets, and Slides can't be downloaded.
So essentially, if the actual revision you want to retrieve is the file itself, then the method above is the correct one.
As for the authorization part, you will need to perform the Node.js Quickstart from here and follow the steps explained there.
Since you want to export the file, you will just need to modify the code and add this part:
function downloadDoc() {
  var fileId = 'ID_OF_THE_DOC';
  var dest = fs.createWriteStream('DESTINATION_OF_THE_OUTPUT_STREAM');
  drive.files.export({
    fileId: fileId,
    // Note: the mimeType here is the format you are exporting *to*
    // ('text/plain' for txt), not the Google Doc's own MIME type.
    mimeType: 'text/plain'
  })
    .on('end', function () {
      console.log('Done');
    })
    .on('error', function (err) {
      console.log('Error during download', err);
    })
    .pipe(dest);
}
References
Drive API v3 - Manage Revisions
Drive API v3 - Files: export
Drive API v3 - Quickstart
Drive API v3 - Download a document
I am facing this same problem. The solution is to:
1. Use OAuth2 authorization for a user that has Edit or Owner permissions for the file.
2. Get an access token (note that it expires quickly).
3. Call the v2 URI (v3 does not work) for the file/revision to get the "export links".
4. Call the correct export link for your format type.
You will then get a randomized temporary redirect link from Google that you can call to get the binary stream.
This is a great starting point for C# .NET (a Windows OAuth console app) if you want working code for steps 1 and 2. I posted a working v2 code function here that you can put into the console app example.
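In Node.js, the v2 call for step 3 might look like this (a sketch with the googleapis client; the Drive v2 revisions resource exposes an exportLinks map):
const { google } = require('googleapis');

// Sketch: fetch a revision's export links via the Drive v2 API.
// Assumes `auth` is an authorized OAuth2 client for a user with
// Edit or Owner permissions on the file.
async function getRevisionExportLinks(auth, fileId, revisionId) {
  const drive = google.drive({ version: 'v2', auth });
  const res = await drive.revisions.get({ fileId, revisionId });
  // exportLinks maps MIME types to download URLs, e.g. 'text/plain' -> URL.
  return res.data.exportLinks;
}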

unable to make folders in firebase storage bucket - firebase admin

Well, in my case I have a list of URLs, and I want to download each file from those URLs and organise it in a Firebase storage bucket. My problem is that I am unable to make folders in the Firebase storage bucket through Node.js JavaScript/TypeScript.
Firebase storage offers the ref() and child() methods to upload files inside a child folder (see this), but Firebase only offers those methods in the client libraries. It is not that we can't use a client library in Node.js, but they have made some namespaces hidden when you connect the Firebase client library in Node.js, and storage is one of them (see this).
I am happy they have considered frontend and backend separately, because frontend and backend have entirely different scenarios for security and use cases. What they have written for use in Node.js is firebase-admin, but I cannot see the ref and child methods in its official documentation, nor any other way to name the file I am uploading, nor any method for making folders or descending into child directories. When I upload a file from my computer it gets saved in the bucket root with the same name it had on my computer. I can make folders manually from the Firebase console, but that does not fulfil my requirement; there must surely be a way to make folders programmatically.
I also tried using the Google Cloud Storage library: const {Storage} = require('@google-cloud/storage');
but it turned out firebase-admin and the Google Cloud library share the same documentation and have the same interface, at least for the upload-file part.
Well, I have spent my day (well, night too, since it is 4:46am) trying different libraries and digging into their documents, which I also found a little unorganised and lacking code examples.
Any help would be appreciated. My code snippet so far is the following, which is from their docs and uploads the file correctly:
import "firebase/firestore"
admin.initializeApp({
credential: admin.credential.cert("./../path-to-service account-cert.json"),
databaseURL: 'gs://bilal-assistant-xxxxx.appspot.com'
});
const quran_bucket = admin.storage().bucket("quran-bucket");
quran_bucket.upload("./my_computer_path/fatiha.mp3", {
gzip: true,
metadata: {
cacheControl: 'public, max-age=31536000',
}
}).then(uploadResponse => {
console.log(` uploaded complete.`);
}).catch((reason: any) => {
console.log("reason: ", reason);
})
All I want is to save the audio file in a folder inside the bucket, not in the bucket root.
According to the API documentation, upload() takes an UploadOptions object as the second parameter. You will want to use the documented destination property of that object to specify the path and name of the file in Storage:
quran_bucket.upload("./my_computer_path/fatiha.mp3", {
destination: 'audio/juz30/fatiha.mp3',
gzip: true,
metadata: {
cacheControl: 'public, max-age=31536000',
}
})
You probably don't want to bother gzipping an mp3, as it's already compressed and won't compress much further.
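It may also help to know that Cloud Storage has no real folders: the slashes in destination are simply part of the object name, and the console renders shared prefixes as folders. A small sketch of listing such a "folder" by prefix (getFiles is part of the same library interface):
// "Folders" are just name prefixes, so listing a folder is a prefix query.
quran_bucket.getFiles({ prefix: 'audio/juz30/' })
  .then(([files]) => {
    // e.g. logs audio/juz30/fatiha.mp3
    files.forEach(file => console.log(file.name));
  });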

Generate .ics url like Basecamp in Nodejs

I am working on Node.js (backend) and Angular.
I want to generate a .ics file URL for Google Calendar, and for Apple and Microsoft, as a downloadable file.
I know there is a node module ics and I am using that, but it only creates a .ics file. I want the file to be unique for each user and also want it to be deleted automatically.
Also, it should automatically sync with the events added.
Any suggestions?
I have been unable to find a way that would continually update someone's schedule, since this would require continuous access to their calendar. While building a scheduling site I got around your storage and unique-file problems with a brute-force solution.
First, as a client accesses the download endpoint, a unique .ics file is generated and stored as schedule_date_client-name.ics. This unique file is then sent to the user using res.download and promptly deleted using fs.unlink(path_to_file).
Here is an example of this in action:
try {
  res.download(path, function (error) {
    if (error) {
      console.log("Error : ", error);
    }
    fs.unlink(path, (error) => {
      // log any error
      if (error) {
        console.log(error);
      }
    });
  });
} catch (e) {
  next(e);
}
The best way I found around this is to generate the .ics file as the user accesses a /download URL endpoint, and send the file in a downloadable format using res.download in your controller. Here is more information on the packages I used for this solution:
Node .ics: https://www.npmjs.com/package/ical-generator
Node file: https://nodejs.dev/learn/the-nodejs-fs-module
Require the modules :
const ical = require('ical-generator');
const fs = require("fs");

Google Cloud Storage creating content links with inconsistent behavior

I'm working on a project using Google Cloud Storage to allow users to upload media files into a predefined bucket using Node.js. I've been testing with small .jpg files. I also used gsutil to set bucket permissions to public.
At first, all files generated links that downloaded the file. Upon investigation of the docs, I learned that I could explicitly set the Content-Type of each file after upload using the gsutil CLI. When I used this procedure to set the filetype to 'image/jpeg', the link behavior changed to display the image in the browser. But this only worked if the link had not been clicked prior to updating the metadata with gsutil. I thought this might be due to browser caching, but the behavior was duplicated in an incognito browser.
Using gsutil to set the mime type would be impractical at any rate, so I modified the code in my node server POST function to set the metadata at upload time using an npm module called mime. Here is the code:
app.post('/api/assets', multer.single('qqfile'), function (req, res, next) {
  console.log(req.file);
  if (!req.file) {
    return ('400 - No file uploaded.');
  }
  // Create a new blob in the bucket and upload the file data.
  var blob = bucket.file(req.file.originalname);
  var blobStream = blob.createWriteStream();
  var metadata = {
    contentType: mime.lookup(req.file.originalname)
  };
  blobStream.on('error', function (err) {
    return next(err);
  });
  blobStream.on('finish', function () {
    blob.setMetadata(metadata, function (err, response) {
      console.log(response);
      // The public URL can be used to directly access the file via HTTP.
      var publicUrl = format(
        'https://storage.googleapis.com/%s/%s',
        bucket.name, blob.name);
      res.status(200).send({
        'success': true,
        'publicUrl': publicUrl,
        'mediaLink': response.mediaLink
      });
    });
  });
  blobStream.end(req.file.buffer);
});
This seems to work, from the standpoint that it does actually set the Content-Type on upload, and that is correctly reflected in the response object as well as the Cloud Storage console. The issue is that some of the links returned as publicUrl cause a file download, and others cause a browser load of the image. Ideally I would like to have both options available, but I am unable to see any difference in the stored files or their metadata.
What am I missing here?
Google Cloud Storage makes no assumptions about the content-type of uploaded objects. If you don't specify, GCS will simply assign a type of "application/octet-stream".
The command-line tool gsutil, however, is smarter, and will attach the right Content-Type to files being uploaded in most cases, JPEGs included.
Now, there are two reasons why your browser is likely to download images rather than display them. First, if the Content-Type is set to "application/octet-stream", most browsers will download the results as a file rather than display them. This was likely happening in your case.
The second reason is if the server responds with a 'Content-Disposition: attachment' header. This doesn't generally happen when you fetch GCS objects from the host "storage.googleapis.com" as you are doing above, but it can if you, for instance, explicitly specified a contentDisposition for the object that you've uploaded.
For this reason I suspect that some of your objects don't have an "image/jpeg" content type. You could go through and set them all with gsutil like so: gsutil -m setmeta -h 'Content-Type:image/jpeg' gs://myBucketName/**
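Alternatively, you can avoid the window between upload and setMetadata entirely by setting the content type when you open the write stream; createWriteStream accepts a metadata option in the same library:
// Sketch: set the Content-Type at upload time so the object is never
// publicly visible without it. Replaces the createWriteStream +
// setMetadata pair in the question's code.
var blobStream = blob.createWriteStream({
  metadata: {
    contentType: mime.lookup(req.file.originalname)
  }
});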
