I am trying to load building models into three.js. The models are JSON files generated by RvtVa3c, a Revit add-in that exports JSON output. I then used THREE.ObjectLoader() to load the model into three.js, just like the three.js JSON loader example.
Everything is fine with models below 100MB. When I tried to load a 200MB model, Chrome showed its "Aw, snap" error page and Firefox threw "allocation size overflow". This is because THREE.ObjectLoader() uses XHR to read the whole JSON file into a single string at once, and I guess that string is too large for JavaScript: the string length is already over 200,000,000 with a 100MB JSON file.
So I am looking for a way to load the JSON file as a stream. JSONStream in Node.js can handle the 200MB JSON; a code example is shown below.
var fs = require('fs'),
    JSONStream = require('JSONStream'),
    es = require('event-stream');

var getStream = function () {
    var jsonData = 'buildingModel.js',
        stream = fs.createReadStream(jsonData, {encoding: 'utf8'}),
        parser = JSONStream.parse('*');
    return stream.pipe(parser);
};

getStream()
    .pipe(es.mapSync(function (data) {
        console.log(data);
    }));
Since the browser cannot use require(), I tried browserify to bundle JSONStream into my code. However, fs cannot be browserified.
Here are my questions:
Is using a stream the best way to load extremely large JSON models in three.js? If not, what is a better solution?
Can fs be browserified? Or can I create a readable stream in the browser some other way?
Thank you for answering my questions.
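For reference, a minimal sketch of how the browser itself can hand out a response in chunks via the Fetch API's streaming body, so no single giant string has to be built up front; this only shows receiving chunks, and an incremental JSON parser would still be needed on top (the file name is reused from the Node example above):

fetch('buildingModel.js').then(async function (response) {
    const reader = response.body.getReader();
    const decoder = new TextDecoder('utf-8');

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // Each chunk arrives as a Uint8Array; decode it piece by piece.
        const chunk = decoder.decode(value, { stream: true });
        console.log('received', chunk.length, 'characters');
        // Feed the chunk into a streaming JSON parser here.
    }
});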
In my code, a function is returning a protobuf object and I want to save it in a file xyz.pb.
When I try to save it using fs.writeFileSync, it is not saved.
The object is circular in nature, so I tried saving it with the circular-json module to confirm there is anything inside it, and it does have data.
But since I used circular-json in the first place, the output doesn't have the proper information (it is not properly formatted) and is of no use.
How can I save this protobuf object to a file using Node.js?
Thanks!
You can try to use streams, as mentioned in the documentation, like this:
const crypto = require('crypto');
const fs = require('fs');
const wstream = fs.createWriteStream('fileWithBufferInside');
// creates random Buffer of 100 bytes
const buffer = crypto.randomBytes(100);
wstream.write(buffer);
wstream.end();
Or you can convert the buffer to JSON and save it in a file like this:
const crypto = require('crypto');
const fs = require('fs');
const wstream = fs.createWriteStream('myBinaryFile');
// creates random Buffer of 100 bytes
const buffer = crypto.randomBytes(100);
wstream.write(JSON.stringify(buffer));
wstream.end();
And if your application logic doesn't require synchronous behaviour, you should not use writeFileSync, because it will block your code until it finishes, so be careful.
Try using writeFile or streams instead; it's more convenient.
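For completeness, a minimal sketch of the non-blocking variant with a callback (file name reused from above, the buffer contents are just a placeholder):

const crypto = require('crypto');
const fs = require('fs');

// creates random Buffer of 100 bytes, as in the examples above
const buffer = crypto.randomBytes(100);

// writeFile does not block the event loop; the callback fires when the write finishes
fs.writeFile('fileWithBufferInside', buffer, function (err) {
    if (err) throw err;
    console.log('buffer written');
});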
The purpose of Protocol Buffers is to serialize strongly typed messages to binary format and back into messages. If you want to write a message from memory into a file, first serialize the message into binary and then write binary to a file.
NodeJS Buffer docs
NodeJS write binary buffer into a file
Protocol Buffers JavaScript SDK Docs
It should look something like this:
const buffer = messageInstance.serializeBinary()
fs.writeFile("filename.pb", buffer, "binary", callback)
I found an easy way to save a protobuf object in a file: convert the protobuf object into a buffer and then save it.
const protobuf = somefunction(); // returning protobuf object
const buffer = protobuf.toBuffer();
fs.writeFileSync("filename.pb", buffer);
I'm trying to analyze a file I'll be uploading from React; I need to know if it can be uploaded based on several factors.
I found https://github.com/TooTallNate/node-wav
It works great on Node.js and I'm trying to use it in React. The sample creates a readable stream and pipes it to the wav reader.
var fs = require('fs');
var wav = require('wav');

var file = fs.createReadStream('track01.wav');
var reader = new wav.Reader();

// the "format" event gets emitted at the end of the WAVE header
reader.on('format', function (format) {
    // Format of the file
    console.log(format);
});

file.pipe(reader);
Using the FilePond controller I'm able to get a base64 string of the file, but I can't figure out how to pass it to the reader.
This is what I have so far in ReactJS:
import wav from 'wav';
import { Readable } from 'stream';

const reader = new wav.Reader();
reader.on('format', function (format) {
    // Format of file
    console.log('format', format);
});

// base64String is the string obtained from FilePond
const buffer = new Buffer(base64String, 'base64');
const readable = new Readable();
readable._read = () => { };
readable.push(buffer);
readable.push(null);
readable.pipe(reader);
But I get Error: bad "chunk id": expected "RIFF" or "RIFX", got "u+Zj"
Since this file works on Node.js with the same lib, it's obvious I'm doing something wrong.
EDIT:
This was a problem with my Base64 string; this method works if anyone needs to analyze a wav on the frontend.
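For anyone running into the same "bad chunk id" error: one common cause is a data-URL prefix (e.g. data:audio/wav;base64,) still attached to the string, so the decoded bytes no longer start with RIFF. A sketch of cleaning it up before decoding (the exact prefix is an assumption about what was wrong here):

// base64String comes from FilePond; strip a possible data-URL prefix
// before decoding, otherwise the bytes won't start with "RIFF".
const cleaned = base64String.replace(/^data:audio\/wav;base64,/, '');
const buffer = Buffer.from(cleaned, 'base64');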
I have a paginated request that gives me a list of objects, which I later concat to get the full list of objects.
If I attempt to JSON.stringify this, it fails for large objects with a range error. I was looking for a way to use zlib.gzip to handle large JSON objects.
Try installing stream-json; it will solve your problem. It's a great wrapper around streams and JSON parsing.
// require the stream-json module
const StreamArray = require('stream-json/utils/StreamArray');
// require fs if you're reading from a file
const fs = require('fs');
const zlib = require('zlib');

// Create an instance of StreamArray
const streamArray = StreamArray.make();

fs.createReadStream('./YOUR_FILE.json.gz')
    .pipe(zlib.createUnzip())   // unzip
    .pipe(streamArray.input);   // feed the stream into the parser

// Here you can do whatever you want with the stream,
// e.g. pipe it to the response.
streamArray.output.pipe(process.stdout);
In the example I'm using a JSON file, but you can use a collection and pass it to the stream.
Hope that helps.
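Going the other way (producing gzipped JSON from an in-memory collection instead of parsing a file) can also be done without one giant JSON.stringify call, by writing the items into a gzip stream one at a time. A minimal sketch, assuming the concatenated pages are already in an array called bigList:

const fs = require('fs');
const zlib = require('zlib');

const bigList = []; // stands in for the concatenated list of objects

const gzip = zlib.createGzip();
gzip.pipe(fs.createWriteStream('bigList.json.gz'));

// Stringify one element at a time so no single huge string is ever built.
gzip.write('[');
bigList.forEach(function (item, i) {
    gzip.write((i ? ',' : '') + JSON.stringify(item));
});
gzip.end(']');
// For very large arrays you would also want to respect back-pressure
// (the return value of write() and the 'drain' event).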
My app needs to create a PDF file and then upload it to another server. The upload happens down the line via the post method from the request NPM package. Everything works fine if I pass in an fs.createReadStream:
const fs = require('fs');
const params = {file: fs.createReadStream('test.pdf')};
api.uploadFile(params);
Since PDFKit instantiates a read stream as well, I'm trying to pass that directly into the post params like this:
const PDFDocument = require('pdfkit');
const doc = new PDFDocument();
doc.text('steam test');
doc.end();
const params = {file: doc};
api.uploadFile(params);
However, this produces an error:
TypeError: Path must be a string. Received [Function]
If I look at the PDFKit source code I see (in CoffeeScript):
class PDFDocument extends stream.Readable
I'm new to streams and it's clear I'm not understanding the difference here. To me if they are both readable streams, they should both be able to be passed in the same way.
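For what it's worth, one way to sidestep the difference (a sketch, not something confirmed by this thread) is to consume the PDFKit readable stream yourself and hand the upload a plain Buffer instead, assuming the upload API accepts buffers as well as file streams:

const PDFDocument = require('pdfkit');

const doc = new PDFDocument();
doc.text('stream test');
doc.end();

// Collect the readable stream's chunks into a single Buffer.
const chunks = [];
doc.on('data', function (chunk) { chunks.push(chunk); });
doc.on('end', function () {
    const params = {file: Buffer.concat(chunks)};
    api.uploadFile(params); // api.uploadFile is the wrapper from the question
});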
I am a newbie to JavaScript.
What I am trying to do is fetch data from the database and then transmit it over the internet.
Right now I can only read one entry at a time, but I want to compress all the entries together rather than compressing one entry at a time.
I could store all of them in an array and then pass that array to the zlib function, but this takes up a lot of time and memory.
Is it somehow possible in Node.js with the Express API to compress the data while transmitting it, at the same time as it is being read, sort of like streaming servers that compress data in real time while retrieving it from memory and then transmit it to the client?
It's certainly possible. You can play around with this example:
var express = require('express')
  , app = express()
  , zlib = require('zlib')

app.get('/*', function(req, res) {
  res.status(200)

  var stream = zlib.createGzip()
  stream.pipe(res)

  var count = 0
  stream.write('[')

  ;(function fetch_entry() {
    if (count > 10) return stream.end(']')
    stream.write((count ? ',' : '') + JSON.stringify({
      _id: count,
      some_random_garbage: Math.random(),
    }))
    count++
    setTimeout(fetch_entry, 100)
  })()
})

app.listen(1337)

console.log('run `curl http://localhost:1337/ | zcat` to see the output')
I assume you're streaming JSON, and setTimeout calls would need to be replaced with actual database calls of course. But the idea stays the same.
I'd recommend using Node.js's pipe.
Here is an example of pipe streaming with zlib (compression): it reads a file, compresses it, and writes it to a new file.
var zlib = require('zlib');
var fs = require('fs');

var gzip = zlib.createGzip();
var inp = fs.createReadStream('input.txt');
var out = fs.createWriteStream('input.txt.gz');

inp.pipe(gzip).pipe(out);
You can change the input to come from your database and change the output to be the HTTP response.
ref : http://nodejs.org/api/stream.html
ref : http://nodejs.org/api/zlib.html
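For example, wiring that same pipe into an Express handler, so the compressed output streams straight to the client, might look roughly like this (the file read stream stands in for your database read stream):

const express = require('express');
const fs = require('fs');
const zlib = require('zlib');

const app = express();

app.get('/data', function (req, res) {
    // Tell the client the body is gzip-compressed JSON.
    res.set({'Content-Type': 'application/json', 'Content-Encoding': 'gzip'});

    // input.txt stands in for a readable stream coming from the database.
    fs.createReadStream('input.txt')
        .pipe(zlib.createGzip())
        .pipe(res);
});

app.listen(1337);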