node.js fs ReadStream event handler never called in Node 12.3.1

In a custom CEF-74 build with Node.js 12.3.1, we load a web app that downloads images from Google Drive and Google Photos. The first download of an image works fine, but for images that have already been downloaded, the fs.createReadStream() event handler that computes the MD5 hash of the local file (to determine that the asset is already present) is never called, unless we enable --mixed-context.
What role might --mixed-context be playing here that makes this MD5-hash workflow work?
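For reference, the kind of check described above might look roughly like this; the function name, file path and expected hash are made up for illustration:
const fs = require('fs');
const crypto = require('crypto');

function isAlreadyDownloaded(localPath, expectedMd5, callback) {
    const hash = crypto.createHash('md5');
    const stream = fs.createReadStream(localPath); // the handlers below are the ones that never fire without --mixed-context

    stream.on('data', (chunk) => hash.update(chunk));
    stream.on('end', () => callback(null, hash.digest('hex') === expectedMd5));
    stream.on('error', (err) => callback(err));
}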

Related

Updating a deployment: uploaded images get deleted after redeployment to Google Cloud

So I have a Node.js web app that has a folder for storing images uploaded by users from a mobile app. I save an image to that folder by taking its base64 string and writing it with fs.writeFile, like this:
fs.writeFile(__dirname + '/../images/complaintImg/complaintcase_' + data.cID + '.jpg', Buffer.from(data.complaintImage, 'base64'), function (err) {
    if (err) {
        console.log(err);
    } else {
        console.log("success");
    }
});
The problem is that whenever the application is redeployed to Google Cloud, the images get deleted. This is because the image folder in the local version of the application is empty: when a user uploads an image, I don't get a local copy of it.
How do I prevent the images from getting deleted with every deployment? Because the app is constantly updated (changes to JS or HTML files), I can't have the images disappearing on every deploy. How do I update a deployment so that only certain files are deployed? The gcloud app deploy command seems to deploy the entire project. Or should I upload the images directly to Google Cloud Storage?
Please help. The mobile app isn't released to the public yet, so having the images deleted with every deployment isn't a big problem now, but it will be once the app is released, because the images users upload are very important. Thank you in advance!
It appears that the __dirname directory you chose may be under /tmp or, if you use the flexible environment, some other directory local to your instance. If so, the images will disappear whenever new instances are started (which always happens at a new deployment, but can happen between deployments as well). This is expected: instances are always started "from scratch".
You need to store the files that your app creates, and that you want to survive instance (re)starts, on a persistent storage product such as Cloud Storage; see Using Cloud Storage (or Using Cloud Storage for the flexible environment). Note that you can't use the regular filesystem calls with Cloud Storage; you need to use the documented client library.
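As a rough sketch of what that means for the code in the question, the same base64 image could be written through the Cloud Storage Node.js client library instead of fs.writeFile (the bucket name below is a placeholder):
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();                      // uses the App Engine default credentials
const bucket = storage.bucket('my-app-images');     // placeholder bucket name

function saveComplaintImage(data, callback) {
    const file = bucket.file('complaintImg/complaintcase_' + data.cID + '.jpg');
    file.save(Buffer.from(data.complaintImage, 'base64'),
              { contentType: 'image/jpeg' },
              callback);                            // same callback style as the fs.writeFile version
}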
As stated in Dan Cornilescu's answer, user-uploaded files should be stored in Cloud Storage, both for GAE Standard and for GAE Flexible.
Just as a reference, there is an alternative for those using Python 2.7, Java 8 or PHP 5: the Blobstore API.

Jimp: write image to Google Cloud Storage in Node.js

I'm currently writing images to my filesystem using jimp.
For example: image.write('path')
This cannot be used on Google App Engine because of its read-only filesystem.
How can I write the image to a Google Storage bucket instead? I've tried write streams but keep getting read-only errors, so I suspect jimp is still writing to the drive.
Thanks heaps
As I understand it, you are modifying some images on your App Engine instance and want to upload them to a bucket, but you didn't mention whether you are using the standard or flexible environment. To serve the files, you will also want to make your bucket publicly readable.
Following the Google Cloud Platform Node.js documentation, you can see that to upload a file to a bucket you first need to create a file object:
const blob = bucket.file(yourFileName);
Then, using createWriteStream, you can upload the file contents.
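A possible way to tie the two together, assuming blob is the file object created above; the source image path and MIME type are only illustrative:
const Jimp = require('jimp');

Jimp.read('input.jpg', (err, image) => {            // hypothetical source image
    if (err) return console.error(err);
    image.getBuffer(Jimp.MIME_JPEG, (err, buffer) => {
        if (err) return console.error(err);
        blob.createWriteStream({ metadata: { contentType: Jimp.MIME_JPEG } })
            .on('error', console.error)
            .on('finish', () => console.log('uploaded to bucket'))
            .end(buffer);                           // write the in-memory buffer, never touching the local disk
    });
});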

Gun.js: why do I get the error "You have no persistence layer to save to"?

I'm trying out gun.js. I have it installed in a Node.js project, I have configured the Amazon S3 bucket through dotenv, and I have tried adding a data.json file, but I still can't get gun.js to save the data locally or to the S3 bucket.
I know it's early days for gun, but I get the feeling I'm missing something obvious.
I'm expecting to find a .json file in the local file system and/or in the S3 bucket, but I get neither.
require('dotenv').config();
var Gun = require('gun');
var gun = Gun({
    file: 'data.json', // local testing and development
    s3: {
        key: process.env.AWS_KEY, // AWS Access Key
        secret: process.env.AWS_SECRET, // AWS Secret Token
        bucket: process.env.AWS_BUCKET // The bucket you want to save into
    }
});
gun.put({ hello: 'world' }).key('my/first/data');
@bill Just noticed this now, sorry for the late answer. Thanks to @paul-w for notifying me of this and for his response earlier today.
This question and answer assume you are running a version EARLIER than v0.4.x!
If you are in NodeJS and are getting the error “You have no persistence layer to save to”, it means the default storage drivers (S3, file.js) didn't get installed or were deactivated - which is unusual as this happens automatically.
Try installing gun (again?) via npm install gun in your local NodeJS project directory, not a git clone or a copy&paste.
I can only guess, given the context you explain, that you might have copied/moved gun (like the gun.js file) into your project. The browser will work with just the single file, but NodeJS needs more - it needs the S3/file.js modules, which will be included if installed with npm or properly git cloned.
Also unlikely (since your code doesn't show this): if you happen to call (this is bad) Gun({wire: {put: null, get: null}}) or something similar, it would intentionally break the persistence drivers.
If you are in the browser and getting the error (and assuming you're not overwriting the persistence drivers as in the previous paragraph), it could be because of some weird situation, like you are using an old version of IE or a browser that doesn't have JSON support. Again, all these things are unlikely, but I just want to be comprehensive.
Note: The above applies to the question in your title. However your actual question doesn't ask about the error, it asks about not seeing data in data.json or in S3. Answering that below.
To which @paul-w is more on track. If you are using S3, then the file.js module (data.json) automatically deactivates itself. If you are using the file.js module (data.json), then S3 does not get activated. As @paul-w mentioned, v0.4.x will easily support having multiple storage engines simultaneously. However, you should see your data in at least one or the other, unless you are getting the "no persistence layer" error, in which case you won't see your data anywhere because there isn't any persistence! But again, the default persistence layers are included with gun by default (unless installation was incorrect, or you explicitly overwrite them; both are unusual).
I hope this answers your question. Sorry I didn't see it till now. Please let me know if this works, and also join the conversation at https://gitter.im/amark/gun . Thank you for helping start the stackoverflow questions! We need more of these!
I think Mark is going to answer this more officially, but the quick answer is that in gun.js 0.3 (current) there is a single gun server peer or storage target, and when you run gun as a server (e.g. from node.js rather than a browser), S3 is preferred, if S3 credentials are specified. But gun is also saving your data changes in browser memory, or localStorage (up to the browser limit of 5MB), and S3 is there for a more permanent storage.
So in the example above, I think the problem is that the file entry will only be used if there is a problem saving changes to S3, and that's why you don't see the new data going there. Maybe try putting an error in the S3 credentials (e.g. add an 'x' for now) and see if it starts using the file path instead.
In gun.js 0.4 there are plans to make use of all peers specified in the constructor or dynamically, but that feature isn't here yet.
(And I probably butchered that answer, but hopefully Mark can correct any inaccuracies in this. I'm new to gun.js but had the same question.)

Temporary File Download

Is there a service that creates basically a one-time download of a file, preferably something I can use from NodeJS?
I've done some research on FilePicker, and haven't found anything about regenerating the link it gives you for a file. There may be a way to do this with NodeJS, but I'm using Meteor at the same time so many Node things probably will conflict.
You could build it with Meteor, using meteor-router (installed with Meteorite) and server-side routing to deliver the files.
You need a collection to keep track of downloaded files:
Server JS
var downloads = new Meteor.Collection("downloads");

//create a link
downloads.insert({url: "/mydownload.zip", downloaded: false});

Meteor.Router.add('/file/:id', 'GET', function(id) {
    var download = downloads.findOne(id);
    if (download) {
        if (download.downloaded) {
            this.response.send("You've already downloaded me");
        }
        else {
            //mark the file as used, then redirect (or stream the file for an extra layer of surety)
            downloads.update(download._id, {$set: {downloaded: true}});
            this.response.redirect(download.url);
        }
    }
});
On the client you can use /file/{{_id}}, with the _id of the file's document in downloads, as the download link.
My recommendation would also be to add custom server-side logic to count the number of downloads (or just flag a file as downloaded/not downloaded) and respond accordingly. The closest you could get with Filepicker.io would be using its security policies to restrict downloading the file to a specific time interval.
In addition to using the router package, in Meteor.startup you can add:
var require = __meteor_bootstrap__.require;
fs = require( 'fs' );
The fs variable should be declared on the server only. The fs package is used by Meteor itself and does not need to be added separately.
Once you have done this, you can create files with Meteor.uuid() as their name, which makes them unique and very difficult to guess. It is also possible to delete a file after a certain amount of time using Meteor.setTimeout, as sketched below.
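For illustration, a server-side sketch along those lines; the /tmp path, fileContents variable and ten-minute lifetime are made up for the example:
var name = Meteor.uuid() + '.zip';
var path = '/tmp/' + name;

fs.writeFile(path, fileContents, function (err) {
    if (err) return console.log(err);
    // remove the file again after ten minutes
    Meteor.setTimeout(function () {
        fs.unlink(path, function () {});
    }, 10 * 60 * 1000);
});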
The question is: where do the files to be downloaded come from?
Solution using Heroku Cloud and NodeJS Meteor Hooks
Heroku in particular is actually great for temporary file download links: they offer a "temporary scratchpad" filesystem that is reset every time the program restarts, and each running Node server cannot see the files other instances have created.
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
Taken from the Heroku documentation: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
Thus, any files written to the "filesystem" will be temporary.
This allows for a very easy solution to this problem: you can simply use NodeJS filesystem manipulation to create temporary files on the server, serve them once (or for a limited time), and then remove them so they cannot be downloaded again.
This, in combination with something like $.download(), makes for a seamless experience, which in turn prevents unauthorized downloads.
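A bare-bones sketch of that idea in plain Node (outside Meteor); the route handling and file location are invented for the example, and there is no path sanitisation:
var fs = require('fs');
var http = require('http');

http.createServer(function (req, res) {
    var path = '/tmp/' + req.url.slice(1);          // e.g. /tmp/<uuid>.zip
    fs.readFile(path, function (err, data) {
        if (err) {
            res.writeHead(404);
            return res.end('Gone');
        }
        res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
        res.end(data);
        fs.unlink(path, function () {});            // the file can only be fetched once
    });
}).listen(process.env.PORT || 3000);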

Upload filestream to cloud-files with Node.js

I know that it's possible to upload files to my cloud-files account in Node.js, using the following module: node-cloudfiles.
But is it also possible to upload a filestream directly?
In my case I am downloading an image from a certain location in Node.js and want to upload it directly to my Cloud Files account without saving the image temporarily on my server.
Of course it is possible: you can just read the documentation on the Rackspace Cloud Files API (http://docs.rackspacecloud.com/files/api/cf-devguide-latest.pdf) and implement the necessary parts yourself.
However, I'd suggest waiting until https://github.com/nodejitsu/node-cloudfiles/pull/11 gets merged into the trunk; then the node-cloudfiles library will support uploading files from streams, so you won't have to create files before uploading.
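If you do go the do-it-yourself route, a very rough sketch of piping a download straight into a Cloud Files PUT could look like this; the storage URL, auth token, container and object name are placeholders you would obtain from the authentication step:
var https = require('https');
var url = require('url');

function uploadStream(imageUrl, storageUrl, authToken, container, objectName) {
    https.get(imageUrl, function (download) {
        var target = url.parse(storageUrl + '/' + container + '/' + objectName);
        var headers = {
            'X-Auth-Token': authToken,
            'Content-Type': download.headers['content-type'] || 'application/octet-stream'
        };
        if (download.headers['content-length']) {
            headers['Content-Length'] = download.headers['content-length'];
        }
        var upload = https.request({
            method: 'PUT',
            hostname: target.hostname,
            path: target.path,
            headers: headers
        }, function (res) {
            console.log('Cloud Files responded with status', res.statusCode);
        });
        upload.on('error', console.error);
        download.pipe(upload);                      // stream the image straight through, no temporary file
    });
}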
