Simple Azure website (built with Node.js): how to log HTTP GET requests

I have a simple Azure website (free or shared tier) built with Node.js/Express. There is no local or database storage.
I'd like to save incoming HTTP GET requests for further analysis. I guess I can't just save req to the local drive as a JSON temp file.
Is there a way to save to some log file that I can download over FTP later?
The simpler and cheaper, the better.

Something like this:
var fs = require('fs');

function homePage(req, res) {
  var logDate = new Date().getTime();
  // req itself isn't a string, so serialize the parts you care about
  var entry = JSON.stringify({ url: req.url, headers: req.headers, query: req.query });
  fs.writeFile(logDate + '.txt', entry, function (err) {
    if (err) return console.log(err);
    console.log('Logged');
  });
}
On the first line we require Node's file system module. Then we write a homepage route that creates a timestamp to use as the log file's name. After that we use fs to write the (serialized) request to the file.
You'll need to do some tinkering to optimize readability, but this will get you started. Files shouldn't overwrite each other since we used the time, but collisions are possible under heavy traffic.
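If you specifically want a log you can pull down over FTP later, here is a rough sketch (my assumption, not confirmed above: on Azure App Service the HOME environment variable points at the site's persistent D:\home share, whose LogFiles folder is visible over the FTP endpoint) that appends one line per request there:
var fs = require('fs');
var path = require('path');

// Fall back to the current directory when running locally.
var logDir = process.env.HOME ? path.join(process.env.HOME, 'LogFiles') : '.';

function logRequest(req) {
  var line = new Date().toISOString() + ' ' + req.method + ' ' + req.url + '\n';
  fs.appendFile(path.join(logDir, 'requests.log'), line, function (err) {
    if (err) console.log(err);
  });
}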

Related

Files don't get written by fs.writeFile() on Heroku

JSON files aren't written by fs.writeFile on Heroku.
The console shows no errors.
fs.writeFile(`${__dirname}/config.json`, JSON.stringify(config), (err) => {
  if (err) console.log(err);
});
You can't persistently write files to Heroku's filesystem, which is ephemeral. Any changes you make will be lost the next time your dyno restarts, which happens frequently (at least once per day).
Use a client-server database like PostgreSQL (or choose another service), or store files on a third-party object storage service like Amazon S3 instead.
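For example, here is a minimal sketch of writing the same JSON to S3 instead (the bucket name my-config-bucket is made up, and it assumes the aws-sdk v2 package with credentials provided through the usual AWS environment variables):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Upload the config JSON to S3 instead of the ephemeral dyno filesystem.
s3.upload({
  Bucket: 'my-config-bucket',  // hypothetical bucket name
  Key: 'config.json',
  Body: JSON.stringify(config),
  ContentType: 'application/json'
}, (err, data) => {
  if (err) console.log(err);
  else console.log('Uploaded to', data.Location);
});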
Two suggestions:
Try using the "writeFileSync" function instead of "writeFile".
Or wrap the call in a function: add "await" to the write line, mark the function "async", and then call it. For example (using the promise-based fs API):
const fs = require('fs').promises
const myWriteFunction = async (filename) => {
  await fs.writeFile(filename, 'Hello World')
}

How to code a simple sync REST API to check if a file exists on the server with node.js?

I need to code a REST API that can check whether a PDF file exists in a specific folder on the server.
The client sends a GET request and the server should hold the response until the PDF file exists.
When the PDF file appears in the folder, the server needs to respond with the filename to the client.
I'm thinking of using node.js with express and socket.io to do this.
Do you think that's the right way?
Do you have a code example for a synchronous wait and file-check response?
Thanks.
Before coding the REST API routes, I prefer as a first step to code the file-checking function.
I tested fs.existsSync, but it isn't a good fit (it doesn't expand glob patterns like the one below):
const fs = require('fs')
const path = './*.pdf'
if (fs.existsSync(path)) {
  // never reached: existsSync checks a literal path, it does not expand the '*' wildcard
}
and I am going to test with glob.sync or glob-fs instead.
I don't know what the good way is for this first step.
Update:
glob-fs seems to be OK, but I need to wait until the .PDF file has arrived on the server filesystem.
var glob = require('glob-fs')({ gitignore: true });
glob.readdir('**/*.pdf', function(err, files) {
  console.log(files);
});
A blocking REST API is not what you are looking for; you should not stall your node.js server.
Use WebSockets instead: register the client as interested in knowing when a file appears in a directory, and when that event occurs the server pushes a notification. No waiting.
Check https://www.tutorialspoint.com/websockets/index.htm for more info about WebSockets.
Check https://nodejs.org/api/fs.html#fs_fs_watchfile_filename_options_listener for watching file modifications.
Here is some code using Chokidar to watch for PDF file creation:
var fileWatcher = require("chokidar");

// Initialize the watcher on PDF files in the current directory.
var watcher = fileWatcher.watch("./*.pdf", {
  ignored: /[\/\\]\./,  // ignore dotfiles
  persistent: true
});

// Add event listeners.
watcher.on('add', function(path) {
  console.log('File', path, 'has been added');
});
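Putting the two ideas together, here is a rough sketch (the event name pdf-added is made up for illustration, and it assumes socket.io and chokidar are installed) that pushes the filename to connected clients as soon as the watcher sees a new PDF, instead of holding an HTTP request open:
var http = require('http');
var express = require('express');
var chokidar = require('chokidar');

var app = express();
var server = http.createServer(app);
var io = require('socket.io')(server);

// Watch for new PDF files in the current directory.
var watcher = chokidar.watch('./*.pdf', { persistent: true });

watcher.on('add', function (path) {
  // Notify every connected client with the filename, as the question asks.
  io.emit('pdf-added', { filename: path });
});

server.listen(3000);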

best way to save images on a mongoose domain

I'm new to node.js and I'm trying to make an application which saves users' photos, just like a normal application. A user can set a profile picture and could also add other pictures to their wall.
I'm done with the other parts of my application, but I'm trying to figure out the best way to save those images, since my application should be able to scale to a large number of users.
I referenced:
How to upload, display and save images using node.js and express (to save images on the server)
and also: http://blog.robertonodi.me/managing-files-with-node-js-and-mongodb-gridfs/ (to save images in Mongo via GridFS)
and I'm wondering which would be the best option.
So, could you please suggest which one I should be rooting for?
Thanks.
It depends on your application's needs. One thing I did for a similar application was to create an abstraction over the file storage logic of the server application.
var DiskStorage = require('./disk');
var S3Storage = require('./s3');
var GridFS = require('./gridfs');

// Pick a concrete storage backend based on configuration.
function FileStorage(opts) {
  if (opts.type === 'disk') {
    this.storage = new DiskStorage(opts);
  }
  if (opts.type === 's3') {
    this.storage = new S3Storage(opts);
  }
  if (opts.type === 'gridfs') {
    this.storage = new GridFS(opts);
  }
}

// Delegate store/serve to whichever backend was selected.
FileStorage.prototype.store = function(opts, callback) {
  this.storage.store(opts, callback);
};

FileStorage.prototype.serve = function(filename, stream, callback) {
  this.storage.serve(filename, stream, callback);
};

module.exports = FileStorage;
Basically you will have different implementations of the logic that stores user-uploaded content, and when you need to you can scale from local file storage or Mongo GridFS to, say, S3. For a seamless transition, when you store the user-file relationship in your database you could also store the file provider (local, GridFS, or S3).
Saving images directly to the local file system can get complicated once there is a lot of uploaded content; you can easily run into limitations like "How many files can I put in a directory?". GridFS should not have such a problem, and I've had pretty good experience using MongoDB for file storage, but this varies from application to application.
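A quick usage sketch of that facade (the option names, mongooseConnection and uploadStream are hypothetical, since the backend modules aren't shown):
var FileStorage = require('./file-storage');  // hypothetical path to the module above

// Start on GridFS; switching to S3 later only means changing this options object.
var storage = new FileStorage({ type: 'gridfs', db: mongooseConnection.db });

storage.store({ filename: 'avatar.png', stream: uploadStream }, function (err, fileInfo) {
  if (err) return console.log(err);
  // Persist fileInfo plus the provider ('gridfs') on the user document here.
});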

Azure Functions: Node.js, what are the restrictions / limitations when using the file system?

I have not been able to get an Azure Function working that uses the Node file system module.
I created a brand new function app with the most basic HTTP trigger function and included the 'fs' module:
var fs = require('fs');
module.exports = function (context, req, res) {
  context.log('function triggered');
  context.log(req);
  context.done();
};
This works fine. I see the full request in the live streaming logs and in the function invocation list.
However, as soon as I add code that actually uses the file system, it seems to crash the Azure Function. It neither completes nor throws an error. It also doesn't show up in the Azure Function invocation list, which is scary since that is a loss of failure information: I might think my service was running fine when there were actually crashes.
var fs = require('fs');
module.exports = function (context, req, res) {
  context.log('function triggered');
  context.log(req);
  fs.writeFile('message.txt', 'Hello Node.js', (err) => {
    if (err) throw err;
    console.log('It\'s saved!');
    context.done();
  });
};
The fs.writeFile code is taken directly from the node.js website:
https://nodejs.org/dist/latest-v4.x/docs/api/fs.html#fs_fs_writefile_file_data_options_callback
I added the context.done() in the callback, but that snippet should work without issue in a normal development environment.
This brings up the questions:
Is it possible to use the file system when using Azure Functions?
If so, what are the restrictions?
If there are no restrictions, are developers required to keep track and perform cleanup, or is this taken care of by some sandboxing?
From my understanding, even though this is considered serverless computing there is still a VM / Azure App Service underneath which has a file system.
I can use the Kudu console to navigate around and see all the files in /wwwroot and the /home/functions/secrets files.
Imagine a scenario where an Azure Function writes files with unique names and never performs cleanup: it would eventually take up all the disk space on the host VM and degrade performance. A developer could do this accidentally and it could go unnoticed until it's too late.
This makes me wonder whether not using the file system is by design, or whether my function is just written wrong.
Yes you can use the file system, with some restrictions as described here. That page describes some directories you can access like D:\HOME and D:\LOCAL\TEMP. I've modified your code below to write to the temp dir and it works:
var fs = require('fs');
module.exports = function (context, input) {
  fs.writeFile('D:/local/Temp/message.txt', input, (err) => {
    if (err) {
      context.log(err);
      throw err;
    }
    context.log('It\'s saved!');
    context.done();
  });
};
Your initial code was failing because it was trying to write to D:\Windows\system32, which is not allowed.
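As a side note (my assumption, not part of the answer above): Node's os.tmpdir() should resolve to the sandbox's writable local temp directory, so you can avoid hard-coding the Windows path:
var fs = require('fs');
var os = require('os');
var path = require('path');

module.exports = function (context, input) {
  // os.tmpdir() should point at the writable local temp directory on the Functions host.
  var file = path.join(os.tmpdir(), 'message.txt');
  fs.writeFile(file, input, (err) => {
    if (err) {
      context.log(err);
      throw err;
    }
    context.log('Saved to ' + file);
    context.done();
  });
};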

What is the most efficient way of sending files between NodeJS servers?

Introduction
Say that on the same local network we have two Node.js servers set up with Express: Server A for the API and Server F for the form.
Server A is an API server that takes the request and saves it to a MongoDB database (files are stored as a Buffer and their details as other fields).
Server F serves up a form, handles the form post and sends the form's data to Server A.
What is the most efficient way to send files between two Node.js servers where the receiving server is an Express API? Where does file size start to matter?
1. HTTP Way
If the files I'm sending are PDF files (that won't exceed 50 MB), is it efficient to send the whole contents as a string over HTTP?
The algorithm is as follows:
Server F handles the file upload using https://www.npmjs.com/package/multer and saves the file;
then Server F reads this file and makes an HTTP request via https://github.com/request/request along with some details about the file;
Server A receives this request, turns the file contents from a string back into a Buffer, and saves a record in MongoDB along with the file details.
In this algorithm, both Server A (when storing into MongoDB) and Server F (when sending it over to Server A) have read the file into memory, and the request between the two servers is about the same size as the file. (Are 50 MB requests alright?)
However, one thing to consider is that, with this method, I would be using the ExpressJS style of API for the whole process, and it would be consistent with the rest of the app where the /list and /details requests are also defined in the routes. I like consistency.
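A variation on this HTTP way would be to stream the saved upload instead of reading it fully into memory on Server F, using request's multipart formData support (rough sketch; the /api/files endpoint, host name and field names are made up):
var fs = require('fs');
var request = require('request');

// Stream the file from Server F to Server A as multipart/form-data,
// so Server F never buffers the whole 50 MB file.
request.post({
  url: 'http://server-a.local/api/files',  // hypothetical Server A endpoint
  formData: {
    name: 'some name',
    file: fs.createReadStream('./uploads/somefile.pdf')
  }
}, function (err, httpResponse, body) {
  if (err) return console.error('upload failed:', err);
  console.log('Server A replied:', body);
});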
2. Socket.IO Way
In contrast to this algorithm, I've explored the https://github.com/nkzawa/socket.io-stream way, which breaks away from the consistency of the HTTP API on Server A (the handlers for socket.io events are defined not in the routes but in the file that has var server = http.createServer(app);).
Server F handles the form data like this in routes/some_route.js:
router.post('/', multer({dest: './uploads/'}).single('file'), function (req, res) {
  var api_request = {};
  api_request.name = req.body.name;
  //add other fields to api_request ...
  var has_file = req.hasOwnProperty('file');
  var io = require('socket.io-client');
  var transaction_sent = false;
  var socket = io.connect('http://localhost:3000');
  socket.on('connect', function () {
    console.log("socket connected to 3000");
    if (transaction_sent === false) {
      var ss = require('socket.io-stream');
      var stream = ss.createStream();
      ss(socket).emit('transaction new', stream, api_request);
      if (has_file) {
        var fs = require('fs');
        var filename = req.file.destination + req.file.filename;
        console.log('sending with file: ', filename);
        fs.createReadStream(filename).pipe(stream);
      }
      if (!has_file) {
        console.log('sending without file.');
      }
      transaction_sent = true;
      //get the response via socket
      socket.on('transaction new sent', function (data) {
        console.log('response from 3000:', data);
        //there might be a better way to close socket. But this works.
        socket.close();
        console.log('Closed socket to 3000');
      });
    }
  });
});
I said I'd be dealing with PDF files that are < 50 MB. However, if I use this program to send larger files in the future, is socket.io a better way to handle 1 GB files, since it uses streams?
This method does send the file and the details across, but I'm new to this library and don't know whether it should be used for this purpose or whether there is a better way of utilizing it.
Final thoughts
What alternative methods should I explore?
Should I send the file over SCP and make an HTTP request with the file details, including where I've sent it, thus separating the protocols for files and API requests?
Should I always use streams because they don't store the whole file in memory? (That's how they work, right?)
What about https://github.com/liamks/Delivery.js?
References:
File/Data transfer between two node.js servers (this got me to try the socket-stream way)
transfer files between two node.js servers over http (for the HTTP way)
There are plenty of ways to achieve this, but not so many that do it right!
Socket.IO and WebSockets are useful when you are talking to a browser, but since you aren't, there is no need for them.
The first method you can try is the built-in net module of Node.js: basically it makes a TCP connection between the servers and passes the data.
You should also keep in mind that you need to send chunks of data, not the entire file; the socket.write method of the net module seems to be a good fit for your case. Check it out: https://nodejs.org/api/net.html
But depending on the size of your files and the concurrency, memory consumption can be quite large.
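A minimal sketch of that net-based approach (port 5000, the host name and the file paths are made up for illustration); piping the streams keeps memory usage bounded to the chunk size:
// Receiver (Server A): accept a TCP connection and stream the bytes to disk.
var net = require('net');
var fs = require('fs');

net.createServer(function (socket) {
  socket.pipe(fs.createWriteStream('./received.pdf'));
  socket.on('end', function () {
    console.log('file received');
  });
}).listen(5000);

// Sender (Server F): stream the file over the connection chunk by chunk.
var client = net.connect({ host: 'server-a.local', port: 5000 }, function () {
  fs.createReadStream('./uploads/somefile.pdf').pipe(client);
});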
If you are running Linux on both servers you could even send the files directly at the OS level with a simple Linux command called scp:
nohup scp -rpC /var/www/httpdocs/* remote_user@remote_domain.com:/var/www/httpdocs &
You can even do this from Windows to Linux or the other way around.
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
The scp client for Windows is pscp.exe.
Hope this helps!
