Files aren't written by fs.writeFile() on Heroku - node.js

JSON files aren't written by fs.writeFile on Heroku.
The console shows no errors.
fs.writeFile(`${__dirname}/config.json`, JSON.stringify(config), (err) => {
  if (err) console.log(err);
});

You can't persistently write files to Heroku's filesystem, which is ephemeral. Any changes you make will be lost the next time your dyno restarts, which happens frequently (at least once per day).
Use a client-server database like PostgreSQL (or choose another service), or store files on a third-party object storage service like Amazon S3 instead.
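For example, here is a minimal sketch of writing the same config to S3 with the AWS SDK instead of the local disk. The bucket name and the credential setup are assumptions, not part of the original question:
// Sketch only: assumes the aws-sdk v2 package is installed and AWS credentials
// are supplied via environment variables; the bucket name is a placeholder.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

function saveConfig(config) {
  return s3.putObject({
    Bucket: 'my-config-bucket',        // placeholder
    Key: 'config.json',
    Body: JSON.stringify(config),
    ContentType: 'application/json',
  }).promise();
}

saveConfig({ example: true })          // pass your real config object here
  .catch((err) => console.log(err));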

Two suggestions:
Try using "writeFileSync" function instead of "writeFile"
Make a function and include your line in the body. Add "await" to the first line. Then put "async" at the front. Then call the function. For example:
const fs = require('fs').promises;

const myWriteFunction = async (filename) => {
  await fs.writeFile(filename, 'Hello World')
}
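A quick usage sketch (the path is just an example, and the Heroku caveat above still applies, so the file won't survive a dyno restart):
myWriteFunction(`${__dirname}/config.json`)
  .then(() => console.log('written'))
  .catch((err) => console.log(err));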

Related

Heroku doesn't allow temporary downloads. How can I work around this limitation?

I am trying to download images from MongoDB every time my app starts, so they are available locally inside the app and load fast, but Heroku crashes. How can I solve this?
Here is the code I'm trying to use:
dir = "./public/media/"
function getAllImages() {
Image.find({}, function (err, allImages) {
if (err) {
console.log(err);
} else {
allImages.forEach(file => {
fs.writeFile(dir + file.name, file.img.data, function (err) {
if (err) throw err;
console.log('Sucessfully saved!');
});
});
};
});
I currently have 24 images which add up to approximately 10 MB. I will use them as static images in my application. I would like to access them via example.com/media/foo.jpg, etc.
User uploads can't be stored on Heroku's ephemeral filesystem. Any changes made to it will be lost whenever your dyno restarts, which happens frequently (at least once per day). Heroku recommends storing uploaded files on a service like Amazon S3.
You can have your users upload files directly from their browsers to S3 or you could use the AWS SDK to save files from your back-end. A higher-level library like multer-s3 might be helpful too.
It's not usually a good idea to store file contents in your database, but storing a pointer to each file is fine. For example, you might store https://domain.tld/path/to/some/image.jpg in your database if that's where the file actually lives.
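As an illustration of the back-end route, here is a minimal multer-s3 sketch. The bucket name, route, and field name are placeholders, not taken from the question:
// Sketch only: assumes express, multer, multer-s3 and aws-sdk v2 are installed.
const express = require('express');
const multer = require('multer');
const multerS3 = require('multer-s3');
const aws = require('aws-sdk');

const app = express();
const s3 = new aws.S3();

const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'my-media-bucket',                              // placeholder bucket
    key: (req, file, cb) => cb(null, 'media/' + file.originalname),
  }),
});

// POST /media with a multipart field named "image"
app.post('/media', upload.single('image'), (req, res) => {
  // req.file.location is the S3 URL you could store in MongoDB instead of the image data
  res.json({ url: req.file.location });
});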
I just learned that the problem was that the folder I use (./public/media) was empty, so Heroku did not create it (even though it appears in the Git repository). Because of this, fs.writeFile(dir + file.name, file.img.data, function (err) didn't work. Thanks for the answers.

NodeJS stream out of AWS Lambda function

We are trying to migrate our zip microservice from a regular Node.js Express application to AWS API Gateway integrated with AWS Lambda.
Our current application sends a request to our API, gets a list of attachments, then fetches those attachments and pipes their content back to the user as a zip archive. It looks something like this:
module.exports = function requestHandler(req, res) {
  //...
  //irrelevant code
  //...
  return getFileList(params, token).then(function(fileList) {
    const filename = `attachments_${params.id}`;
    res.set('Content-Disposition', `attachment; filename=${filename}.zip`);
    streamFiles(fileList, filename).pipe(res); // <-- here the magic happens
  }, function(error) {
    errors[error](req, res);
  });
};
I have managed to do everything except the part where I have to stream content out of the Lambda function.
I think one of the possible solutions is to use aws-serverless-express, but I'd like a more elegant solution.
Does anyone have any ideas? Is it even possible to stream out of Lambda?
Unfortunately, Lambda does not support streams as events or return values. (It's hard to find this stated explicitly in the documentation; you can only infer it from how invocation, contexts, and callbacks are described.)
In the case of your example, you will have to await streamFiles and then return the completed result.
(aws-serverless-express would not help here; if you check the code, they wait for your pipe to finish before returning: https://github.com/awslabs/aws-serverless-express/blob/master/src/index.js#L68)
N.B. There's a nuance here: a lot of the language SDKs support streaming for requests/responses, but this means streaming the transport, e.g. downloading the complete response from the Lambda as a stream, not listening to a stream emitted from the Lambda.
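For illustration, a minimal sketch of the "await the stream, then return the completed result" approach. getFileList and streamFiles are the helpers from the question; the event fields and the base64 proxy response are assumptions about an API Gateway Lambda proxy integration with binary media types enabled:
// Sketch only: getFileList and streamFiles are the same helpers used in the
// Express version; the event shape below is an assumption.
exports.handler = async (event) => {
  const params = { id: event.pathParameters.id };      // assumption about the event shape
  const token = event.headers.Authorization;           // assumption about where the token lives

  const fileList = await getFileList(params, token);
  const filename = `attachments_${params.id}`;

  // Buffer the whole archive in memory, then return it; mind Lambda's
  // memory and response-size limits for large archives.
  const chunks = [];
  await new Promise((resolve, reject) => {
    streamFiles(fileList, filename)
      .on('data', (chunk) => chunks.push(chunk))
      .on('end', resolve)
      .on('error', reject);
  });

  return {
    statusCode: 200,
    isBase64Encoded: true,
    headers: {
      'Content-Type': 'application/zip',
      'Content-Disposition': `attachment; filename=${filename}.zip`,
    },
    body: Buffer.concat(chunks).toString('base64'),
  };
};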
Had the same issue; not sure how you can stream/pipe via native Lambda + API Gateway directly... but it's technically possible.
We used Serverless Framework and were able to use XX.pipe(res) using this starter kit (https://github.com/serverless/examples/tree/v3/aws-node-express-dynamodb-api)
What's interesting is that this just wraps native Lambda + API Gateway, so technically it is possible, as they have done it.
Good luck

Azure Functions: Nodejs, What are restrictions / limitations when using file system?

I have not been able to get an Azure Function working that uses the Node file system module.
I created a brand new function app with the most basic HTTP trigger function and included the 'fs' module:
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    context.done();
}
This works fine. I see the full request in live streaming logs, and in the function invocation list.
However, as soon as I add the code which actually uses the file system, it seems to crash the Azure Function. It neither completes nor throws an error. It also doesn't show up in the function invocation list, which is scary since this is a loss of failure information: I might think my service was running fine when there were actually crashes.
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    fs.writeFile('message.txt', 'Hello Node.js', (err) => {
        if (err) throw err;
        console.log('It\'s saved!');
        context.done();
    });
}
The fs.writeFile code is taken directly from the Node.js website:
https://nodejs.org/dist/latest-v4.x/docs/api/fs.html#fs_fs_writefile_file_data_options_callback
I added the context.done() in the callback, but that snippet should work without issue in a normal development environment.
This brings up the questions:
Is it possible to use the file system when using Azure Functions?
If so, what are the restrictions?
If there are no restrictions, are developers required to keep track and perform cleanup, or is this taken care of by some sandboxing?
From my understanding, even though this is considered serverless computing, there is still a VM / Azure App Service underneath which has a file system.
I can use the Kudu console and navigate around and see all the files in /wwwroot and the /home/functions/secrets files.
Imagine a scenario where an Azure Function writes files with unique names and never cleans them up: it would eventually take up all the disk space on the host VM and degrade performance. This could happen accidentally and go unnoticed until it's too late.
This makes me wonder whether avoiding the file system is by design, or whether my function is just written wrong.
Yes, you can use the file system, with some restrictions as described here. That page describes some directories you can access, like D:\HOME and D:\LOCAL\TEMP. I've modified your code below to write to the temp dir, and it works:
var fs = require('fs');

module.exports = function (context, input) {
    fs.writeFile('D:/local/Temp/message.txt', input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('It\'s saved!');
        context.done();
    });
}
Your initial code was failing because it was trying to write to D:\Windows\system32 which is not allowed.
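As a variant, you can avoid hard-coding the path by asking Node for the temp directory. This is only a sketch, assuming os.tmpdir() resolves to the writable temp location (e.g. D:\local\Temp) on your plan:
var fs = require('fs');
var os = require('os');
var path = require('path');

module.exports = function (context, input) {
    // Build the target path from the platform's temp directory instead of hard-coding it.
    var target = path.join(os.tmpdir(), 'message.txt');
    fs.writeFile(target, input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('Saved to ' + target);
        context.done();
    });
}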

Best NodeJS Workflow for team development

I'm trying to implement Node.js and Socket.io for real-time communication between devices (PCs & smartphones) in my company's product.
Basically, what I want to achieve is sending a notification to all online users when somebody changes something in a file.
All the basic functionality for saving the updates is already there, so when everything is stored and calculated, I send a POST request to my Node server saying that something changed and it needs to notify the users.
The problem: as long as I work alone, when I want to change some code in the Node.js scripts I can just upload the new files via FTP and restart the pm2 service. But when my colleagues start working with me on this story, we will have problems merging our changes without overwriting each other.
Running a local server is also not possible because we need the connection between our current server and the Node machine, and since our server is online it cannot reach our localhosts.
Is there a way for a team to work together on the same Node server without overwriting each other?
Implement changes using some other option rather than FTP. For example:
You can use webdav-fs in authenticated or non-authenticated mode:
// Using authentication:
var wfs = require("webdav-fs")(
    "http://example.com/webdav/",
    "username",
    "password"
);

wfs.readdir("/Work", function(err, contents) {
    if (!err) {
        console.log("Dir contents:", contents);
    } else {
        console.log("Error:", err.message);
    }
});
putFileContents(remotePath, data [, options])
Put some data in a remote file at remotePath from a Buffer or String. data is a Buffer or a String. options has a property called format which can be "binary" (default) or "text".
var fs = require("fs");
var imageData = fs.readFileSync("someImage.jpg");
client
.putFileContents("/folder/myImage.jpg", imageData, { format: "binary" })
.catch(function(err) {
console.error(err);
});
Then use the callbacks to notify your team, or lock the files from within the callback.
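For the locking idea, a minimal sketch with the lockfile package (the lock path and wait time are illustrative):
var lockFile = require("lockfile");

// Take an exclusive lock before editing the shared script, then release it.
lockFile.lock("app.js.lock", { wait: 5000 }, function (err) {
    if (err) return console.error("Could not acquire lock:", err.message);

    // ... upload / edit the file here ...

    lockFile.unlock("app.js.lock", function (err) {
        if (err) console.error("Could not release lock:", err.message);
    });
});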
References
webdav-fs
webdav
lockfile
Choosing Secure Passwords

simple Azure website (built with nodejs), how to log http get request

I have a simple Azure website (free or shared) which is built with nodejs/expressjs. There is no local or database storage.
I'd like to save incoming HTTP GET requests for further analysis. I guess I can't just save req to the local drive as a JSON temp file.
Is there a way to save to some log file that I can download via FTP later?
The simpler and cheaper, the better.
Something like this:
var fs = require('fs');

function homePage(req, res) {
    var d = new Date();
    var logDate = d.getTime();
    // req is an object, so serialize the parts we care about before writing.
    var entry = JSON.stringify({ method: req.method, url: req.url, headers: req.headers });
    fs.writeFile(logDate + '.txt', entry, function (err) {
        if (err) return console.log(err);
        console.log('Logged');
    });
}
On the first line we require Node's file system module. Then we write a homepage route that creates a date variable to use as the log's file name. After that we serialize the interesting parts of the request and use fs to write them to the file.
You'll need to do some tinkering to optimize readability, but this will get you started. Files shouldn't overwrite each other since we used the time, but they might if you get huge traffic.
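If one file per request becomes unwieldy, an alternative sketch appends one JSON line per request to a daily log file that you can FTP down later (the file naming is just an example):
var fs = require('fs');

function logRequest(req) {
    // One file per day, one JSON line per request.
    var day = new Date().toISOString().slice(0, 10);          // e.g. "2016-05-01"
    var line = JSON.stringify({
        time: new Date().toISOString(),
        method: req.method,
        url: req.url,
        headers: req.headers
    }) + '\n';
    fs.appendFile(day + '.log', line, function (err) {
        if (err) console.log(err);
    });
}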
