Store WebM file in Redis (NodeJS)

I'm looking for a way to store a WebM file in Redis.
Let me explain the situation:
The NodeJS server receives a WebM file from a client and saves it to the server's file system.
It then has to save this file in Redis, because I don't want to manage both Redis and the file system. That way I can delete the video with a single Redis command.
I was thinking of reading the file with fs.readFile() and saving it into a Buffer, but I don't know which encoding to use, and I don't know how to reverse the process to give the WebM video back to a client when it makes a request.
Is this a good way to proceed? Any suggestions?
PS: I use formidable to upload the file.
EDIT: I found a way to proceed, but there's another problem:
var fs = require('fs');

var file = fs.readFileSync("./video.webm");
client.set("video1", file, function() {
    client.get("video1", function(err, data) {
        // data comes back as a UTF-8 string, so the binary content gets corrupted
        var buffer = new Buffer(data, 'binary');
        // file ≠ buffer
    });
});
Is this an encoding problem? Something like Unicode/UTF-8/ASCII?
Maybe Node and Redis use different encodings?

Solution found!
The problem comes from how the client object is created.
Usually this is what is done:
var client = redis.createClient();
and then the return_buffers param defaults to false.
If you create the client like this instead:
var client = redis.createClient(6379, '127.0.0.1', {
    return_buffers: true,
    auth_pass: null
});
everything works fine! ;)
This is the issue page that helped me.

I don't know much about NodeJS and WebM files.
Redis strings are sequences of 8-bit chars (the C char type), so they are binary safe. Check the JS code and configuration to make sure your Redis client sends/receives the data as a byte array and not as a UTF-8 string; the data is probably being corrupted by a bad conversion on the JS side.
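To double-check that the binary data survives the round trip, and to send the video back to a client, something like the sketch below should work. This is a minimal sketch assuming the node_redis 2.x API with return_buffers: true and an existing Express app; the /video/:key route is only an example, not part of the original answers.

var fs = require('fs');
var redis = require('redis');

var client = redis.createClient(6379, '127.0.0.1', { return_buffers: true });

var file = fs.readFileSync('./video.webm');
client.set('video1', file, function (err) {
    if (err) throw err;
    client.get('video1', function (err, data) {
        if (err) throw err;
        // with return_buffers: true, data is a Buffer identical to the original file
        console.log('round trip ok:', Buffer.compare(file, data) === 0);
    });
});

// Serving the stored video back (hypothetical Express route):
app.get('/video/:key', function (req, res) {
    client.get(req.params.key, function (err, data) {
        if (err || !data) return res.sendStatus(404);
        res.set('Content-Type', 'video/webm');
        res.send(data); // data is a Buffer, so it is sent unchanged
    });
});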

Related

How to encrypt a stream in node js

How do I encrypt and decrypt a stream in Node.js without saving the file locally or converting it into a buffer?
If there is no way, then please suggest a memory-efficient, low-storage way to encrypt and decrypt a stream in Node.js so that I can upload the stream directly to Google Drive through its API.
Finally I found a solution to this question. I am posting the answer here for everyone who needs it.
I am using a library called aes-encrypt-stream, but you can use the built-in crypto library as well, or any other.
The idea is to use stream.PassThrough. Here file.stream is my input stream and stream is my output stream.
With this code I was able to reduce the server's memory consumption for encryption:
const { createEncryptStream, createDecryptStream, setPassword } = require('aes-encrypt-stream');
const PassThroughStream = require('stream').PassThrough;

setPassword(Buffer.from('your key here', 'hex'));

const stream = new PassThroughStream();
// pipe the input stream through the encryptor into the PassThrough output stream
createEncryptStream(file.stream).pipe(stream);
Now stream is encrypted and can be piped onwards (for example straight into the upload) without buffering the whole file.
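Since the answer mentions the built-in crypto library as an alternative, here is a minimal sketch of the same idea using crypto's cipher streams. The aes-256-cbc choice and the key/IV handling are illustrative assumptions, not part of the original answer:

const crypto = require('crypto');
const { PassThrough } = require('stream');

// 32-byte key and 16-byte IV for aes-256-cbc (placeholders: derive and store these properly)
const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);

function encryptStream(inputStream) {
    const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
    const output = new PassThrough();
    inputStream.pipe(cipher).pipe(output);
    return output; // encrypted stream, ready to be uploaded
}

function decryptStream(encryptedStream) {
    const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
    const output = new PassThrough();
    encryptedStream.pipe(decipher).pipe(output);
    return output; // decrypted stream
}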

How to code a simple sync REST API to check if a file exists on the server with node.js?

I need to code a REST API that can check if a PDF file exists in a specific folder on the server.
The client sends a GET request and the server should wait to send the response until the PDF file exists.
When the PDF file appears in the folder, the server needs to respond with the filename to the client.
I am thinking of using node.js with express and socket.io to do this.
Do you think that's the right way?
Do you have a code example for this kind of wait-and-check-file response?
Thanks
Before coding the REST API routes, as a first step I prefer to code the file-checking function.
I tested fs.existsSync, but it is not really good for this:
const fs = require('fs')
// fs.existsSync does not expand glob patterns, so a wildcard path like this never matches
const path = './*.pdf'
if (fs.existsSync(path)) {
    // file exists
}
and I am going to test with glob.sync or glob-fs instead.
I don't know what the good way is for this first step.
Update:
glob-fs seems to be OK, but I still need a way to wait until the .PDF file has arrived on the server's file system.
var glob = require('glob-fs')({ gitignore: true });
glob.readdir('**/*.pdf', function(err, files) {
    console.log(files);
});
A REST API is not what you are looking for: you should not stall your node.js server.
You should use WebSockets instead: the client registers its interest in knowing when a file appears in a directory, and when that event occurs the server sends it a notification. No waiting.
Check https://www.tutorialspoint.com/websockets/index.htm for more info about WebSockets.
Check https://nodejs.org/api/fs.html#fs_fs_watchfile_filename_options_listener for watching file modifications.
Here is some code using chokidar to watch for PDF file creation:
var fileWatcher = require("chokidar");

// Initialize watcher.
var watcher = fileWatcher.watch("./*.pdf", {
    ignored: /[\/\\]\./,
    persistent: true
});

// Add event listeners.
watcher.on('add', function(path) {
    console.log('File', path, 'has been added');
});
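To tie this back to the question (an HTTP route that only responds once the PDF exists), the watcher can be combined with a long-polling Express handler. This is only a sketch: the /wait-for-pdf route, the ./incoming folder and the 30-second timeout are assumptions for illustration:

var express = require('express');
var chokidar = require('chokidar');
var path = require('path');

var app = express();

// Respond as soon as a .pdf appears in ./incoming, or give up after 30 seconds.
app.get('/wait-for-pdf', function(req, res) {
    var watcher = chokidar.watch('./incoming/*.pdf', { persistent: true });

    var timer = setTimeout(function() {
        watcher.close();
        res.status(404).json({ error: 'no PDF arrived in time' });
    }, 30000);

    watcher.on('add', function(filePath) {
        clearTimeout(timer);
        watcher.close();
        res.json({ filename: path.basename(filePath) });
    });
});

app.listen(3000);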

What is the most efficient way of sending files between NodeJS servers?

Introduction
Say that on the same local network we have two Node JS servers set up with Express: Server A for API and Server F for form.
Server A is an API server that takes the request and saves it to a MongoDB database (files are stored as a Buffer and their details as other fields).
Server F serves up a form, handles the form post and sends the form's data to Server A.
What is the most efficient way to send files between two NodeJS servers where the receiving server is an Express API? And at what point does file size start to matter?
1. HTTP Way
If the files I'm sending are PDF files (that won't exceed 50 MB), is it efficient to send the whole contents as a string over HTTP?
The algorithm is as follows:
Server F handles the file request using https://www.npmjs.com/package/multer and saves the file
then Server F reads this file and makes an HTTP request via https://github.com/request/request along with some details on the file
Server A receives this request, turns the file contents from string back into a Buffer and saves a record in MongoDB along with the file details.
In this algorithm, both Server A (when storing into MongoDB) and Server F (when sending it over to Server A) read the whole file into memory, and the request between the two servers is about the same size as the file. (Are 50 MB requests alright?)
However, one thing to consider is that, with this method, I would be using the ExpressJS style of API for the whole process, which is consistent with the rest of the app where the /list and /details requests are also defined in the routes. I like consistency.
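Note that with the HTTP route the file does not actually have to be read fully into memory on Server F: it can be streamed from disk straight into the outgoing request. A rough sketch with the request library (the http://server-a.local/api/files URL and the header names are made up for illustration):

var fs = require('fs');
var request = require('request');

// On Server F, after multer has written the upload to disk:
function forwardToApi(req, callback) {
    fs.createReadStream(req.file.path).pipe(
        request.post({
            url: 'http://server-a.local/api/files', // hypothetical Server A endpoint
            headers: {
                'content-type': req.file.mimetype,
                'x-file-name': req.file.originalname // file details passed as headers (one option)
            }
        }, callback)
    );
}

On Server A the incoming request can likewise be streamed (for example into GridFS) instead of being built up as a 50 MB string.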
2. Socket.IO Way
In contrast to this algorithm, I've explored the https://github.com/nkzawa/socket.io-stream way, which breaks away from the consistency of the HTTP API on Server A (the handlers for socket.io events are defined not in the routes but in the file that has var server = http.createServer(app);).
Server F handles the form data as such in routes/some_route.js:
router.post('/', multer({dest: './uploads/'}).single('file'), function (req, res) {
    var api_request = {};
    api_request.name = req.body.name;
    //add other fields to api_request ...
    var has_file = req.hasOwnProperty('file');
    var io = require('socket.io-client');
    var transaction_sent = false;
    var socket = io.connect('http://localhost:3000');
    socket.on('connect', function () {
        console.log("socket connected to 3000");
        if (transaction_sent === false) {
            var ss = require('socket.io-stream');
            var stream = ss.createStream();
            ss(socket).emit('transaction new', stream, api_request);
            if (has_file) {
                var fs = require('fs');
                var filename = req.file.destination + req.file.filename;
                console.log('sending with file: ', filename);
                fs.createReadStream(filename).pipe(stream);
            }
            if (!has_file) {
                console.log('sending without file.');
            }
            transaction_sent = true;
            //get the response via socket
            socket.on('transaction new sent', function (data) {
                console.log('response from 3000:', data);
                //there might be a better way to close socket. But this works.
                socket.close();
                console.log('Closed socket to 3000');
            });
        }
    });
});
I said I'd be dealing with PDF files that are < 50 MB. However, if I use this program to send larger files in the future, is socket.io a better way to handle 1 GB files, since it uses streams?
This method does send the file and the details across, but I'm new to this library and don't know if it should be used for this purpose or if there is a better way of using it.
Final thoughts
What alternative methods should I explore?
Should I send the file over SCP and make an HTTP request with the file details, including where I've sent it, thus separating the protocols for file transfer and API requests?
Should I always use streams because they don't load the whole file into memory? (that's how they work, right?)
What about this: https://github.com/liamks/Delivery.js ?
References:
File/Data transfer between two node.js servers (this is what got me to try the socket.io-stream way)
transfer files between two node.js servers over http (for the HTTP way)
There are plenty of ways to achieve this, but not so many to do it right!
Socket.io and WebSockets are efficient when you use them with a browser, but since you don't, there is no need for them.
The first method you can try is the built-in net module of Node.js: basically it makes a TCP connection between the servers and passes the data over it.
You should also keep in mind that you need to send chunks of data, not the entire file; the socket.write method of the net module seems to be a good fit for your case, check it out: https://nodejs.org/api/net.html
But depending on the size of your files and the concurrency, memory consumption can still be quite large.
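A minimal sketch of that net-module approach is below (port 7000, the host name and the file paths are arbitrary example values; a real implementation would also need to send the file name and size alongside the bytes):

// receiver.js (Server A)
var net = require('net');
var fs = require('fs');

net.createServer(function (socket) {
    // stream the incoming bytes straight to disk, chunk by chunk
    socket.pipe(fs.createWriteStream('./received.pdf'));
    socket.on('end', function () {
        console.log('file received');
    });
}).listen(7000);

// sender.js (Server F)
var net = require('net');
var fs = require('fs');

var socket = net.connect(7000, 'server-a.local', function () {
    // pipe reads the file in chunks and calls socket.write for each chunk
    fs.createReadStream('./upload.pdf').pipe(socket);
});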
If you are running Linux on both servers you could even send the files at the OS level with a simple command called scp:
nohup scp -rpC /var/www/httpdocs/* remote_user@remote_domain.com:/var/www/httpdocs &
You can even do this from Windows to Linux or the other way around.
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
The scp client for Windows is pscp.exe.
Hope this helps!

Best NodeJS Workflow for team development

I'm trying to implement NodeJS and Socket.io for real-time communication between two devices (PC & smartphones) in my company's product.
Basically, what I want to achieve is sending a notification to all online users when somebody changes something in a file.
All the basic functionality for saving the updates is already there, so when everything is stored and calculated I send a POST request to my Node server saying that something changed and it needs to notify the users.
The problem now is changing code in the NodeJS scripts: as long as I work alone, I can just upload the new files via FTP and restart the pm2 service, but when my colleagues start working with me on this we will have problems merging our changes without overwriting each other.
Running a local server is also not possible, because we need the connection between our current server and the Node machine, and since our server is online it cannot reach our localhosts.
Is there a way for a team to work together on the same Node server without overwriting each other?
Implement changes using some option other than FTP. For example:
You can use webdav-fs in authenticated or non-authenticated mode:
// Using authentication:
var wfs = require("webdav-fs")(
    "http://example.com/webdav/",
    "username",
    "password"
);
wfs.readdir("/Work", function(err, contents) {
    if (!err) {
        console.log("Dir contents:", contents);
    } else {
        console.log("Error:", err.message);
    }
});
putFileContents(remotePath, format, data [, options])
Put some data in a remote file at remotePath from a Buffer or String. data is a Buffer or a String. options has a property called format which can be "binary" (default) or "text".
var fs = require("fs");
var imageData = fs.readFileSync("someImage.jpg");
client
.putFileContents("/folder/myImage.jpg", imageData, { format: "binary" })
.catch(function(err) {
console.error(err);
});
And use callbacks to notify your team, or lock the files via the callback.
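For the locking part, the lockfile package from the references can be used so that two people don't push the same scripts at once. A rough sketch (the deploy.lock name and the wait/stale values are just illustrative, and the lock file would have to live somewhere shared to actually coordinate the team):

var lockFile = require('lockfile');

// Try to take the lock before uploading; wait up to 10 s, ignore locks older than 60 s.
lockFile.lock('deploy.lock', { wait: 10000, stale: 60000 }, function (err) {
    if (err) {
        return console.error('Someone else is deploying right now:', err.message);
    }

    // ... upload the changed files (e.g. via webdav-fs) here ...

    lockFile.unlock('deploy.lock', function (err) {
        if (err) console.error('Failed to release the lock:', err);
    });
});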
References
webdav-fs
webdav
lockfile
Choosing Secure Passwords

What "streams and pipe-capable" means in pkgcloud in NodeJS

My issue is getting image uploading to Amazon working.
I was looking for a solution that doesn't save the file on the server and then upload it to Amazon.
Googling around I found pkgcloud, and its README.md says:
Special attention has been paid so that methods are streams and
pipe-capable.
Can someone explain what that means and whether it is what I am looking for?
Yupp, that means you've found the right kind of s3 library.
What it means is that this library exposes "streams". Here is the API that defines a stream: http://nodejs.org/api/stream.html
Using node's stream interface, you can pipe any readable stream (in this case the POST's body) to any writable stream (in this case the S3 upload).
Here is an example of how to pipe a file upload directly to another kind of library that supports streams: How to handle POSTed files in Express.js without doing a disk write
EDIT: Here is an example
var pkgcloud = require('pkgcloud'),
    fs = require('fs');

var s3client = pkgcloud.storage.createClient({ /* ... */ });

app.post('/upload', function(req, res) {
    var s3upload = s3client.upload({
        container: 'a-container',
        remote: 'remote-file-name.txt'
    });

    // pipe the image data directly to S3
    req.pipe(s3upload);
});
EDIT: To finish answering the questions that came up in the chat:
req.end() will automatically call s3upload.end() thanks to stream magic. If the OP wants to do anything else when req ends, he can do so easily: req.on('end', function() { res.send("done!"); })
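To answer the HTTP request only once the upload has actually finished, the upload stream's events can be used. Recent pkgcloud versions emit 'success' and 'error' events on the upload stream (check the README of the version in use); a sketch building on the example above:

app.post('/upload', function(req, res) {
    var s3upload = s3client.upload({
        container: 'a-container',
        remote: 'remote-file-name.txt'
    });

    s3upload.on('error', function(err) {
        res.status(500).send('upload failed');
    });

    s3upload.on('success', function(file) {
        // file describes the stored object
        res.send('done!');
    });

    req.pipe(s3upload);
});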
