Sending compressed data from AS3 to a Node.js Express server

I'm trying to send a compressed string to a node.js express server from Flash AS3. In AS3 I have something like this:
// AS3 Code
function save(data:ByteArray, theURL:String, compress:Boolean):void {
    if (compress == true)
    {
        data.compress();
    }
    try
    {
        var request:URLRequest = new URLRequest(theURL);
        request.data = data;
        request.method = URLRequestMethod.POST;
        request.contentType = "application/octet-stream";
        var loader:URLLoader = new URLLoader();
        loader.load(request);
    }
    catch (ex)
    {
    }
}
Testing it like this (strictly speaking, save() expects a ByteArray, so the string would first be written into one, e.g. with writeUTFBytes()):
save("This is a test string", "http://example.com/flash/", true);
Node Code
app.post('/flash/', bodyParser.raw({limit: '5mb'}), function (req, res) {
    console.log(req.body);
    zlib.gunzip(req.body, function (err, unzipped_body) {
        if (!err)
        {
            console.log(unzipped_body);
        }
        console.log(err);
    });
});
req.body ends up containing a byte buffer, but when I call zlib.gunzip on it I get an incorrect header check:
<Buffer 78 da 0b c9 c8 2c 56 00 a2 dc 4a 85 94 c4 92 44 00 2a 56 05 55>
x?♂??,V ??J??ED *V♣U
{ [Error: incorrect header check] errno: -3, code: 'Z_DATA_ERROR' }
If I don't do the data.compress() in AS3, then req.body in Node shows the original string. What am I not understanding about the zlib compression, or Node, or whatever? :)

Turns out it should be zlib.unzip and not zlib.gunzip. The AS3 compress() function doesn't produce the "gzip" format but rather the "zlib" format (deflate data wrapped in a zlib header); the leading 78 da bytes in the buffer above are a zlib header, not the gzip magic 1f 8b.
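For reference, a minimal sketch of the corrected handler (same route as above; zlib.unzip in place of zlib.gunzip, and zlib.inflate would work too since the payload is plain zlib):

const express = require('express');
const bodyParser = require('body-parser');
const zlib = require('zlib');

const app = express();

app.post('/flash/', bodyParser.raw({limit: '5mb'}), function (req, res) {
    // AS3's ByteArray.compress() emits zlib-wrapped deflate data,
    // which zlib.unzip auto-detects and decompresses.
    zlib.unzip(req.body, function (err, unzippedBody) {
        if (err) {
            return res.status(400).send('bad compressed payload');
        }
        res.send(unzippedBody.toString('utf8'));
    });
});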

Related

How to concatenate/join audio buffer arrays (text-to-speech results) into one on nodejs?

I want to convert many texts into one audio file, but I'm not sure how to concatenate the resulting audio clips into a single file (you can't convert a long text in one request because of the 5K chars/request limit).
My current code is below. It generates multiple audio byte arrays, but merging the MP3 buffers fails because it ignores the header/meta information. Would it be better to use LINEAR16 as the TTS encoding? I'm happy to hear any suggestion. Thank you.
const client = new textToSpeech.TextToSpeechClient();
const promises = ['hi', 'world'].map(text => {
    const requestBody = {
        audioConfig: {
            audioEncoding: 'MP3'
        },
        input: {
            text: text,
        },
        voice: {
            languageCode: 'en-US',
            ssmlGender: 'NEUTRAL'
        },
    };
    return client.synthesizeSpeech(requestBody);
});
const responses = await Promise.all(promises);
console.log(responses);
const audioContents = responses.map(res => res[0].audioContent);
const audioContent = audioContents.join(); // this line has a problem
standard output
[
  [
    {
      audioContent: <Buffer ff f3 44 c4 00 12 a0 01 24 01 40 00 01 7c 06 43 fa 7f 80 38 46 63 fe 1f 00 33 3f c7 f0 03 03 33 1f c1 f0 0c eb fa 3f 03 20 7e 63 f3 78 03 ba 64 73 e0 ... 2638 more bytes>
    },
    null,
    null
  ],
  [
    {
      audioContent: <Buffer ff f3 44 c4 00 12 58 05 24 01 41 00 01 1e 02 23 9e 1f e0 1f 83 83 df ef 80 e8 ff 99 f0 0c 00 e8 7f c3 68 03 cf fd f8 8f ff 0f 3c 7f 88 f8 8c 87 e0 23 ... 2926 more bytes>
    },
    null,
    null
  ]
]
Workaround-1
As I mentioned in the comment, there is a google-text-to-speech-concat package on npm for your requirement, which is not an official Google package. It automatically makes multiple requests to the API based on the 5K character limit and concatenates the resulting audio into a single audio file. Before executing the code, install the following client libraries:
npm install @google-cloud/text-to-speech
npm install google-text-to-speech-concat --save
Try the below code, adding less than 5K characters between each <p></p> tag; each paragraph becomes one request to the API. For example, if you have 9K characters, split them across two or more requests: put the first 5K characters between one <p></p> tag and the remaining 4K between a new <p></p> tag. The google-text-to-speech-concat package then concatenates the audio returned for each request into a single audio file.
const textToSpeech = require('@google-cloud/text-to-speech');
const testSynthesize = require('google-text-to-speech-concat');
const fs = require('fs');
const path = require('path');

(async () => {
    const request = {
        voice: {
            languageCode: 'en-US',
            ssmlGender: 'FEMALE'
        },
        input: {
            ssml: `
            <speak>
                <p>add less than 5k chars between paragraph tags</p>
                <p>add less than 5k chars between paragraph tags</p>
            </speak>`
        },
        audioConfig: {
            audioEncoding: 'MP3'
        }
    };
    try {
        // Create your Text To Speech client
        // More on that here: https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
        const textToSpeechClient = new textToSpeech.TextToSpeechClient({
            keyFilename: path.join(__dirname, 'google-cloud-credentials.json')
        });
        // Synthesize the text, resulting in a single audio buffer
        const buffer = await testSynthesize.synthesize(textToSpeechClient, request);
        // Handle the buffer:
        // for example write it to a file or directly upload it to storage, like S3 or Google Cloud Storage
        const outputFile = path.join(__dirname, 'Output.mp3');
        // Write the file
        fs.writeFile(outputFile, buffer, 'binary', (err) => {
            if (err) throw err;
            console.log('Got audio!', outputFile);
        });
    } catch (err) {
        console.log(err);
    }
})();
Workaround-2
Try the below code to split the entire text into chunks of fewer than 5K characters and send each chunk to the API for conversion. This creates multiple audio files, as you know. Before executing the code, create a folder named result in your current working directory to store the output audio files.
const textToSpeech = require('@google-cloud/text-to-speech');
const fs = require('fs');
const util = require('util');

// Creates a client
const client = new textToSpeech.TextToSpeechClient();

(async function () {
    // The text to synthesize, split at sentence boundaries
    var text = fs.readFileSync('./text.txt', 'utf8');
    var newArr = text.match(/[^\.]+\./g);
    var charCount = 0;
    var textChunk = "";
    var index = 0;
    for (var n = 0; n < newArr.length; n++) {
        charCount += newArr[n].length;
        textChunk = textChunk + newArr[n];
        console.log(charCount);
        if (charCount > 4600 || n == newArr.length - 1) {
            console.log(textChunk);
            // Construct the request
            const request = {
                input: {
                    text: textChunk
                },
                // Select the language and SSML voice gender (optional)
                voice: {
                    languageCode: 'en-US',
                    ssmlGender: 'MALE',
                    name: "en-US-Wavenet-B"
                },
                // Select the type of audio encoding
                audioConfig: {
                    effectsProfileId: [
                        "headphone-class-device"
                    ],
                    pitch: -2,
                    speakingRate: 1.1,
                    audioEncoding: "MP3"
                },
            };
            // Performs the text-to-speech request
            const [response] = await client.synthesizeSpeech(request);
            console.log(response);
            // Write the binary audio content to a local file
            const writeFile = util.promisify(fs.writeFile);
            await writeFile('result/Output' + index + '.mp3', response.audioContent, 'binary');
            console.log('Audio content written to file: Output' + index + '.mp3');
            index++;
            charCount = 0;
            textChunk = "";
        }
    }
}());
For merging the output audio files into a single audio file, the audioconcat package can be used, which is not an official Google package. You can also use other similar packages to concat the audio files.
The audioconcat library requires the ffmpeg application (not the ffmpeg npm package) to be installed and built with --enable-libmp3lame. So install the ffmpeg tool for your OS first, then install the following client library before executing the concatenation code:
npm install audioconcat
Try the below code. It concatenates all the audio files from your output directory and stores the single concatenated output.mp3 audio file in your current working directory.
const audioconcat = require('audioconcat');
const fs = require('fs');

const testFolder = 'result/';
var array = [];
fs.readdirSync(testFolder).forEach(songs => {
    array.push(testFolder + songs);
    console.log(songs);
});

audioconcat(array)
    .concat('output.mp3')
    .on('start', function (command) {
        console.log('ffmpeg process started:', command);
    })
    .on('error', function (err, stdout, stderr) {
        console.error('Error:', err);
        console.error('ffmpeg stderr:', stderr);
    })
    .on('end', function (output) {
        console.log('Audio successfully created:', output);
    });
For both workarounds, I tested code from various GitHub projects and modified it to fit your requirement. Here are the links for your reference.
Workaround-1
Workaround-2
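As for the LINEAR16 idea raised in the question: LINEAR16 output is raw 16-bit PCM wrapped in a WAV container, so clips with the same sample rate and channel count can be concatenated byte-wise once the per-file headers are dealt with. A minimal sketch, assuming each buffer carries the canonical 44-byte PCM WAV header:

// Concatenate LINEAR16 (WAV) buffers returned by the Text-to-Speech API.
// Assumes every buffer has a canonical 44-byte PCM header and identical
// sample rate / channel count.
function concatWavBuffers(buffers) {
    const HEADER_LENGTH = 44;
    const header = Buffer.from(buffers[0].slice(0, HEADER_LENGTH)); // copy, not a view
    const pcm = Buffer.concat(buffers.map(b => b.slice(HEADER_LENGTH)));
    header.writeUInt32LE(36 + pcm.length, 4); // patch RIFF chunk size
    header.writeUInt32LE(pcm.length, 40);     // patch data sub-chunk size
    return Buffer.concat([header, pcm]);
}

// Usage with the responses from the question's code:
// const audioContents = responses.map(res => res[0].audioContent);
// fs.writeFileSync('combined.wav', concatWavBuffers(audioContents));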

Uploading and Deserializing protobuf data in AWS Lambda

I have a requirement to send protobuf data to an AWS Lambda written in Node.js.
I am experimenting with a "Hello World" example where I serialize and deserialize a Person message.
Example:
person.proto
syntax = "proto3";
message Person {
    // proto3 fields take no "required"/"optional" labels
    int32 id = 1;
    string name = 2;
    string email = 3;
}
Using Node.js and the package protobufjs I can generate the code from the proto file and serialize and deserialize a Person object to a file:
let person = personProtos.Person.create();
person.id = 42;
person.name = 'Fred';
person.email = 'fred@foo.com';
console.log("Person BEFORE Serialize=" + JSON.stringify(person, null, 2));

// Serialize
let buffer = personProtos.Person.encode(person).finish();
console.log(buffer);
fs.writeFileSync("person.pb", buffer, "binary");

// Deserialize
let bufferFromFile = fs.readFileSync("person.pb");
let decodedPerson = personProtos.Person.decode(bufferFromFile);
console.log("Decoded Person=\n" + JSON.stringify(decodedPerson, null, 2));
Output:
Person BEFORE Serialize={
  "id": 42,
  "name": "Fred",
  "email": "fred@foo.com"
}
<Buffer 08 2a 12 04 46 72 65 64 1a 0c 66 72 65 64 40 66 6f 6f 2e 63 6f 6d>
Decoded Person=
{
  "id": 42,
  "name": "Fred",
  "email": "fred@foo.com"
}
Using Postman, I want to upload the binary protobuf data to an AWS Lambda from the person.pb file and deserialize it in the Lambda.
When I specify the body as "binary" type and specify the person.pb file, the person data shows up in the Lambda event body as:
"body": "\b*\u0012\u0004Fred\u001a\ffred#foo.com"
It looks like it got transformed into Unicode and encoded?
How can I take the body string value and turn it back into the Node.js buffer:
<Buffer 08 2a 12 04 46 72 65 64 1a 0c 66 72 65 64 40 66 6f 6f 2e 63 6f 6d>
so that I can deserialize it back to the JSON object in my Lambda code?
I put the generated code from the .proto file into my Lambda so I can call:
let bufferFromEvent = event.body; <== how do I get a buffer from this?
let decodedPerson = personProtos.Person.decode(bufferFromEvent);
Thanks
The answer is what Daniel mentioned in the comment below your question. You need to use the Buffer class provided by Node.
Your lambda function will then look something like this.
const personProtos = require("./personProtos");

module.exports.handler = async event => {
    // Rebuild a Buffer from the event body string before decoding
    const buffer = Buffer.from(event.body);
    console.log(personProtos.Person.decode(buffer).toObject());
    return {
        statusCode: 200,
        body: "Success"
    };
};
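One caveat worth checking as an assumption: if the function sits behind API Gateway with binary media types enabled, the body arrives base64-encoded and event.isBase64Encoded is set, so the decode would look like this:

// Hedged sketch: pick the decoding based on how the body was delivered
const buffer = event.isBase64Encoded
    ? Buffer.from(event.body, 'base64')   // API Gateway delivered base64
    : Buffer.from(event.body, 'binary');  // raw string body, one byte per character

The 'binary' (latin1) encoding maps each character back to a single byte, which matches the escaped string shown in the question.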

Using gridfs to store uploaded file with its metadata in Node / Express

I know there are a few threads about this, but I couldn't find my answer exactly. Using POST, I have managed to get this file object to the server side:
{ fileToUpload:
   { name: 'resume.pdf',
     data: <Buffer 25 50 44 46 2d 31 2e 33 0a 25 c4 e5 f2 e5 eb a7 f3 a0 d0 c4 c6 0a 34 20 30 20 6f 62 6a 0a 3c 3c 20 2f 4c 65 6e 67 74 68 20 35 20 30 20 52 20 2f 46 69 ... >,
     encoding: '7bit',
     mimetype: 'application/pdf',
     mv: [Function] } }
How do I save this along with the metadata using mongoose & gridfs? In most threads I've looked at so far, gridfs-stream was used given a temporary path of the file, which I don't have. Could someone help me save this file by streaming the data along with its metadata, and give an example of how I would retrieve it and send it back to the client side?
I must've been tired: I was using express-fileupload as middleware but not using it to save the file, which is done with the mv function on the file object. The code below saves the file locally and then streams it into Mongo using gridfs-stream:
var file = req.files.fileToUpload;
file.mv('./uploads/' + file.name, function (err) {
    if (err) {
        res.status(500).send(err);
    }
    else {
        res.send('File uploaded!');
        // conn is an open mongoose connection; Grid.mongo = mongoose.mongo
        // must have been set beforehand (per the gridfs-stream docs)
        var gfs = Grid(conn.db);
        // streaming to gridfs, with the original name as the filename in mongodb
        var writestream = gfs.createWriteStream({
            filename: file.name
        });
        fs.createReadStream('./uploads/' + file.name).pipe(writestream);
        writestream.on('close', function (file) {
            // do something with `file`
            console.log(file.filename + ' Written To DB');
        });
    }
});
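For the retrieval half of the question, a minimal sketch using the same gridfs-stream setup (the route path is hypothetical):

// Stream a stored file back to the client by filename
app.get('/files/:name', function (req, res) {
    var gfs = Grid(conn.db);
    gfs.findOne({ filename: req.params.name }, function (err, file) {
        if (err || !file) {
            return res.status(404).send('Not found');
        }
        res.set('Content-Type', file.contentType || 'application/octet-stream');
        gfs.createReadStream({ filename: file.filename }).pipe(res);
    });
});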

NodeJS TypeError argument should be a Buffer only on Heroku

I am trying to upload an image to store on MongoDB through Mongoose.
I am using multiparty to get the uploaded file.
The code works 100% perfectly on my local machine, but when I deploy it on Heroku, it gives the error:
TypeError: argument should be a Buffer
Here is my code:
exports.create = function (req, res) {
    'use strict';
    var form = new multiparty.Form();
    form.parse(req, function (err, fields, files) {
        var file = files.file[0],
            contentType = file.headers['content-type'],
            body = {};
        _.forEach(fields, function (n, key) {
            var parsedField = Qs.parse(n)['0'];
            try {
                parsedField = JSON.parse(parsedField);
            } catch (err) {}
            body[key] = parsedField;
        });
        console.log(file.path);
        console.log(fs.readFileSync(file.path));
        var news = new News(body);
        news.thumbnail = {
            data: new Buffer(fs.readFileSync(file.path)),
            contentType: contentType
        };
        news.save(function (err) {
            if (err) {
                return handleError(res, err);
            }
            return res.status(201);
        });
    });
};
These are the console logs from the above code on HEROKU:
Sep 26 17:37:23 csgowin app/web.1: /tmp/OlvQLn87yfr7O8MURXFoMyYv.gif
Sep 26 17:37:23 csgowin app/web.1: <Buffer 47 49 46 38 39 61 10 00 10 00 80 00 00 ff ff ff cc cc cc 21 f9 04 00 00 00 00 00 2c 00 00 00 00 10 00 10 00 00 02 1f 8c 6f a0 ab 88 cc dc 81 4b 26 0a ... >
These are the console logs on my LOCAL MACHINE:
C:\Users\DOLAN~1.ADS\AppData\Local\Temp\TsfwadjjTbJ8iT-OZ3Y1_z3L.gif
<Buffer 47 49 46 38 39 61 5c 00 16 00 d5 36 00 bc d8 e4 fe fe ff ae cf df dc ea f1 fd fe fe db e9 f1 ad ce de 46 5a 71 2b 38 50 90 b8 cc 4a 5f 76 9a c3 d7 8f ... >
Does Heroku need any settings or configurations or something?
Sounds like the object passed is not a Buffer when data: new Buffer(fs.readFileSync(file.path)) is executed. It is probably a difference in how your local environment handles file writes, or in how multiparty handles streams.
This code works flawlessly for me:
news.thumbnail = {
    media: fs.createReadStream(fileLocation),
    contentType: contentType
};
But you also have to make sure the file has finished uploading before you use it in the createReadStream call above. Things can be inconsistent in Node: sometimes this happens synchronously and sometimes not. I've used Busboy to handle the file upload, since it handles streams and fires an event when the file stream is complete. Sorry, based on the above I can't tell exactly where your issue is, so I've included two solutions for you to try :))
Busboy: https://www.npmjs.com/package/busboy
I've used this after the file has been uploaded to the temp directory in Busboy:
// Handles the file upload and stores the file to a more permanent location.
// This handles streams; `request` is given by express.
var busboy = new Busboy({ headers: request.headers });
var writeStream;
busboy.on('file', function (fieldname, file, filename, encoding, mimetype) {
    writeStream = file.pipe(fs.createWriteStream(saveTo));
    writeStream.on('close', function () {
        // the file has been fully written; use it here
    });
});
busboy.on('finish', function () {
    // all parts of the multipart body have been parsed
});
// Wire the incoming request into busboy (the original snippet omitted this)
request.pipe(busboy);

encoding is ignored in fs.readFile

I am trying to read the contents of a properties file in Node. This is my call:
fs.readFile("server/config.properties", {encoding: 'utf8'}, function(err, data ) {
console.log( data );
});
The console prints a buffer:
<Buffer 74 69 74 69 20 3d 20 74 6f 74 6f 0a 74 61 74 61 20 3d 20 74 75 74 75>
when I replace the code with this:
fs.readFile("server/config.properties", function(err, data ) {
console.log( data.toString('utf8') );
});
it works fine. But the Node documentation says the data is returned as a utf8 string if that encoding is passed in the options.
the output of node --version is v0.10.2
What am I missing here?
thank you for your support
Depending on the version of Node you're running, the argument may be just the encoding:
fs.readFile("server/config.properties", 'utf8', function(err, data ) {
console.log( data );
});
The 2nd argument changed to an options object with v0.10; from the changelog:
"FS readFile(), writeFile(), appendFile() and their Sync counterparts now take an options object (but the old API, an encoding string, is still supported)"
For the former documentation:
v0.8.22
v0.6.21
You should change {encoding: 'utf8'} to {encoding: 'utf-8'}, for example:
fs.readFile("server/config.properties", {encoding: 'utf-8'}, function(err, data ) {
console.log( data );
});
