How to encrypt a stream in Node.js

How can I encrypt and decrypt a stream in Node.js without saving the file locally or converting it into a buffer?
If there is no way to do that, please suggest a memory-efficient, low-storage way to encrypt and decrypt a stream in Node.js, so that I can upload the stream directly to Google Drive through its API.

Finally I found a solution to this question. I am posting the answer here for everyone who needs it.
I am using a library called aes-encrypt-stream; you could also use the built-in crypto module or any other library.
The key is to use stream.PassThrough. Here file.stream is my input stream and stream is my output stream. With this code I was able to reduce the server's memory consumption during encryption:
const { createEncryptStream, createDecryptStream, setPassword } = require('aes-encrypt-stream');
const PassThroughStream = require('stream').PassThrough;

setPassword(Buffer.from('your key here', 'hex'));

// file.stream is the input stream; stream is the output (encrypted) stream
const stream = new PassThroughStream();
createEncryptStream(file.stream).pipe(stream);
Now stream contains the encrypted data.
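As mentioned, the built-in crypto module works just as well. Below is a minimal sketch using crypto.createCipheriv with stream.pipeline; the aes-256-ctr algorithm and the key/IV handling are illustrative assumptions, not part of the original answer.

const crypto = require('crypto');
const { pipeline } = require('stream');

// Illustrative only: in practice derive the key securely and store the IV alongside the ciphertext.
const key = crypto.randomBytes(32); // 256-bit key
const iv = crypto.randomBytes(16);  // 128-bit IV

function encryptStream(input, output) {
  const cipher = crypto.createCipheriv('aes-256-ctr', key, iv);
  pipeline(input, cipher, output, (err) => {
    if (err) console.error('Encryption pipeline failed:', err);
  });
}

function decryptStream(input, output) {
  const decipher = crypto.createDecipheriv('aes-256-ctr', key, iv);
  pipeline(input, decipher, output, (err) => {
    if (err) console.error('Decryption pipeline failed:', err);
  });
}

Either function can write into a PassThrough stream, exactly as in the snippet above, before piping the result to the upload.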

Related

Stream files uploaded by client

Background: Stream Consumption
This is how you can consume and read a stream of data bytes received on the client (a sketch follows these steps):
Get a Response object (for example, a fetch response).
Retrieve the ReadableStream from its body field, i.e. response.body.
As body is an instance of ReadableStream, we can call body.getReader().
And use the reader as reader.read().
That is simple enough: we can consume the stream of bytes coming from the server, and we can consume it the same way in Node.js and in the browser through the Web Streams API.
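A minimal sketch of those steps (the URL is a placeholder assumption); it runs in the browser and in Node.js 18+, where fetch and the Web Streams API are available:

async function consumeResponseStream() {
  const response = await fetch('https://example.com/some-resource'); // placeholder URL
  const reader = response.body.getReader();

  let received = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;          // stream finished
    received += value.length; // value is a Uint8Array chunk
  }
  console.log(`Received ${received} bytes`);
}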
Stream from path in NodeJS
To create a stream from a file path in Node.js is quite easy (you just pass the path to fs.createReadStream). You can also switch from a Node stream to a Web stream with ease (stream.Readable.toWeb()).
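For example, a minimal sketch (the file path is a placeholder; Readable.toWeb is available in recent Node.js versions, 17+):

const fs = require('fs');
const { Readable } = require('stream');

// Node stream from a file path
const nodeStream = fs.createReadStream('./some-file.bin'); // placeholder path

// Convert it to a Web ReadableStream
const webStream = Readable.toWeb(nodeStream);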
Problem: Stream from user upload action
Can we get a stream directly from a user's file upload in the browser? Can you give a simple example?
The point of it would be to process the files as the user is uploading them, not to use the Blob stream.
Is it possible to analyze user data as it is being uploaded, i.e. is it possible for it to be a stream?

Download file from S3 without writing it to the file system in Node.js

I have a Node.js server running with Hapi.
One of the server's jobs is to send files to a servicer API (the API only accepts streams; when I send a buffer it returns an error) at the user's request.
All the files are stored in S3.
When I download them using promise(), I get a buffer in the body.
And I can get a PassThrough if I'm using createReadStream().
My problem is that when I try to convert the buffer to a stream and send it, the API rejects it, and the same happens when I use the createReadStream() result.
But when I use fs to save the file and then fs to read it, the API accepts the stream and it works.
So I need help: how can I produce the same result without saving and reading the file?
Edit:
Here is my code. I know it's the wrong way, but it works; I need a better way that also works:
static async downloadFile(Bucket, Key) {
  const result = await s3Client
    .getObject({
      Bucket,
      Key
    })
    .promise();
  fs.writeFileSync(`${Path.basename(Key)}`, result.Body);
  const file = await fs.createReadStream(`${Path.basename(Key)}`);
  return file;
}
If I understand it correctly, you want to get the object from the S3 bucket and stream it to your HTTP response as a stream.
Getting the data into buffers and then figuring out a way to convert it to a stream is complicated and has its limitations. If you really want to leverage the power of streams, don't try to convert the data to a buffer and load the entire object into memory; instead, create a request that streams the returned data directly into a Node.js Stream object by calling the createReadStream method on the request.
Calling createReadStream returns the raw HTTP stream managed by the request. The raw data stream can then be piped into any Node.js Stream object.
This technique is useful for service calls that return raw data in their payload, such as calling getObject on an Amazon S3 service object to stream data directly into a file, as shown in this example:
// I imagine you have something similar.
server.get('/image', (req, res) => {
  let s3 = new AWS.S3({ apiVersion: '2006-03-01' });
  let params = { Bucket: 'myBucket', Key: 'myImageFile.jpg' };
  let readStream = s3.getObject(params).createReadStream();

  // When the stream is done being read, end the response
  readStream.on('close', () => {
    res.end();
  });
  readStream.pipe(res);
});
When you stream data from a request using createReadStream, only the raw HTTP data is returned. The SDK does not post-process the data; this raw HTTP data can be returned directly.
Note:
Because Node.js is unable to rewind most streams, if the request initially succeeds, retry logic is disabled for the rest of the response. In the event of a socket failure while streaming, the SDK won't attempt to retry or send more data to the stream. Your application logic needs to identify such streaming failures and handle them, for example as sketched below.
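A minimal sketch of catching such a failure on the stream from the example above (the handling shown is an illustrative assumption):

readStream.on('error', (err) => {
  // The SDK will not retry once streaming has started, so handle the failure here,
  // e.g. by destroying the response or retrying the whole request yourself.
  console.error('S3 stream failed:', err);
  res.destroy(err);
});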
Edits:
After the edits to the original question, I can see that S3 sends a PassThrough stream object, which is different from a FileStream in Node.js. So, to get around the problem, use memory (if your files are not very big and/or you have enough memory).
Use the package memfs; it will replace the native fs in your app:
https://www.npmjs.com/package/memfs
Install the package with npm install memfs and require it as follows:
const {fs} = require('memfs');
and your code will look like this:
static async downloadFile(Bucket, Key) {
  const result = await s3
    .getObject({
      Bucket,
      Key
    })
    .promise();
  fs.writeFileSync(`/${Key}`, result.Body);
  const file = await fs.createReadStream(`/${Key}`);
  return file;
}
Note that the only change I have made to your function is the path: ${Path.basename(Key)} becomes /${Key}, because you no longer need a path on your real file system; the files are stored in memory. I have tested this solution and it works.

Download Video Streams from Remote url using NightmareJs

I am trying to build a scraper to download video streams and save them in a private cloud instance using NightmareJs (http://www.nightmarejs.org/).
I have seen the documentation, and it shows how to download simple files like this:
.evaluate(function ev() {
  var el = document.querySelector("[href*='nrc_20141124.epub']");
  var xhr = new XMLHttpRequest();
  xhr.open("GET", el.href, false);
  xhr.overrideMimeType("text/plain; charset=x-user-defined");
  xhr.send();
  return xhr.responseText;
}, function cb(data) {
  var fs = require("fs");
  fs.writeFileSync("book.epub", data, "binary");
})
-- based on the SO post here -> Download a file using Nightmare
But I want to download video streams using the Node.js streams API. Is there a way to open a stream from a remote URL and pipe it to a local or other remote writable stream using Node.js's built-in stream APIs?
You can check whether the server sends the "Accept-Ranges" (14.5) and "Content-Length" (14.13) headers by making a HEAD request for that file, then request smaller chunks of the file you're trying to download using the "Range" (14.35) request header and write each chunk to the target file (you can open the file in append mode to reduce management of the file stream).
Of course, this will be quite slow if you request very small chunks sequentially. You could build a pool of requesters (e.g. 4) and only write the next correct chunk to the file (so the other requesters would not take on future chunks if they are already done downloading).
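A minimal sketch of the sequential version using only built-in modules (the URL, output file name, and chunk size are illustrative assumptions; totalLength would come from the Content-Length header returned by the HEAD request):

const https = require('https');
const fs = require('fs');

const url = 'https://example.com/video.webm'; // placeholder URL
const CHUNK_SIZE = 1024 * 1024;               // 1 MiB per request (illustrative)

// Request one byte range and append it to the output stream.
function fetchRange(start, end, output) {
  return new Promise((resolve, reject) => {
    https.get(url, { headers: { Range: `bytes=${start}-${end}` } }, (res) => {
      res.pipe(output, { end: false }); // keep the file stream open for the next chunk
      res.on('end', resolve);
      res.on('error', reject);
    }).on('error', reject);
  });
}

async function download(totalLength) {
  const output = fs.createWriteStream('video.webm', { flags: 'a' }); // append mode
  for (let start = 0; start < totalLength; start += CHUNK_SIZE) {
    const end = Math.min(start + CHUNK_SIZE - 1, totalLength - 1);
    await fetchRange(start, end, output);
  }
  output.end();
}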

Store WebM file in Redis (NodeJS)

I'm searching for a solution to store a WebM file in Redis.
Let me explain the situation:
The Node.js server receives a WebM file from a client and saves it to the server's file system.
Then it has to save this file in Redis, because I don't want to manage both Redis and the file system. That way I can delete the video with a Redis command alone.
I thought of reading the file with fs.readFile() and then saving it into a Buffer, but I don't know which encoding format to use, and I don't know how to reverse the process to give the WebM video back to a client when it makes a request.
Is this a good way to proceed? Any suggestions?
PS: I use formidable to upload the file.
EDIT: I found a way to proceed, but there's another problem:
var file = fs.readFileSync("./video.webm");
client.set("video1", file, function() {
  client.get("video1", function(err, data) {
    var buffer = new Buffer(data, 'binary');
    // file ≠ buffer
  });
});
Is this an encoding problem? Like Unicode/UTF-8/ASCII?
Maybe Node and Redis use different encodings?
Solution found!
The problem arises when you create the client object.
Usually this is what is done:
var client = redis.createClient();
and the return_buffers param then defaults to false.
If instead you create it this way:
var client = redis.createClient(6379, '127.0.0.1', {
  return_buffers: true,
  auth_pass: null
});
everything works! ;)
This is the issue page where they helped me.
I don't know much about Node.js and WebM files.
Redis stores values as binary-safe strings of 8-bit C chars, so it should be binary friendly. Check the JS code and configuration to ensure your JS Redis client sends and receives data as a byte array and not as a UTF-8 string; there is probably a bad conversion of the data in JS.
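A minimal sketch of a binary-safe round trip with the node_redis client, assuming return_buffers is set as in the accepted solution above:

var redis = require('redis');
var fs = require('fs');

var client = redis.createClient(6379, '127.0.0.1', { return_buffers: true });

var file = fs.readFileSync('./video.webm');

client.set('video1', file, function() {
  client.get('video1', function(err, data) {
    // data comes back as a Buffer, so it can be compared byte-for-byte with the original
    console.log(Buffer.compare(file, data) === 0); // true
  });
});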

What "streams and pipe-capable" means in pkgcloud in NodeJS

My issue is getting image uploading to Amazon working.
I was looking for a solution that doesn't save the file on the server before uploading it to Amazon.
Googling, I found pkgcloud, and its README.md says:
Special attention has been paid so that methods are streams and pipe-capable.
Can someone explain what that means and whether it is what I am looking for?
Yup, that means you've found the right kind of S3 library.
What it means is that this library exposes "streams". Here is the API that defines a stream: http://nodejs.org/api/stream.html
Using Node's stream interface, you can pipe any readable stream (in this case the POST's body) to any writable stream (in this case the S3 upload).
Here is an example of how to pipe a file upload directly to another kind of library that supports streams: How to handle POSTed files in Express.js without doing a disk write
EDIT: Here is an example:
var pkgcloud = require('pkgcloud'),
    fs = require('fs');

var s3client = pkgcloud.storage.createClient({ /* ... */ });

app.post('/upload', function(req, res) {
  var s3upload = s3client.upload({
    container: 'a-container',
    remote: 'remote-file-name.txt'
  });

  // pipe the image data directly to S3
  req.pipe(s3upload);
});
EDIT: To finish answering the questions that came up in the chat:
req.end() will automatically call s3upload.end() thanks to stream magic. If the OP wants to do anything else when req ends, he can do so easily: req.on('end', function() { res.send("done!"); })
