I am having some issues converting a received payload (req.body) to a correctly formatted base64 string.
The payload arrives looking like the example below. I know it's encrypted coming in, but I'm wondering whether anything could be happening on the Node server side that makes it look like this; it seems malformed, not how it should be:
body: '&R۽5�l|L�\u001b�\u0014햱����\u00020#��[cV[AD&P�\u0001��˯���\n' +
`#B軉�6Y¤�\u0010�l�\u0012"D�dʦ�nb�g���\u0017����߉�{�a\u000e�:��\u0014\u0005�4\u0018!��u\u001e��s!վ]�\u0011KɆ�<!\u001d��#a�Ӿǥ+\f�iWEź�����:^�Վߎ�NP�M�G�_x�}�b1W�t�\u000f?*�2N�s��\u0000\u0015\u001e��o��� |y.\u0004n�e��64z�eu3\u0007(��j�R�\u0001 jzO\u0012�IF\u0002��w_����%�\u001b\u0010��\u0010��5�\u0016�.1�\u0006�\f\u0014�$�|\u000e�E�5�����o�MΆA\u001a��\u0010������-ܹ��\u0003�jV�0b\u0002�\u001f��\\^"\\���\u0000��%�̓B�TfI��3��2U���[#�ۍ�'bT�]�\u0007�������\u0016 �P��x?\u0014�ly*8\u00134�NR����<��\u0012^�"#�V���!\u0010=�\u0006�"r�c�a�/L���vq�<\u0015�\u0006H��\u0014�\u001f�m�~�Ֆ�\u0011>L+����Yw���٘�\u0007ur�&�i�B4\n` +
If I convert the payload to a base64 string I get something like this (the endlessly repeating 77+9 runs do not seem right; + and / by themselves are valid base64 characters):
var encryptedBytes = Buffer.from(req.body);
var encryptedStr = encryptedBytes.toString('base64');
console.log({encryptedStr});
{ encryptedStr: 'JlLbvTXvv71sfEzvv70b77+9fxTtlrHvv73vv73vv73vv70CMCPvv73vv71bY1ZbQUQmUO+/vQHvv73vv73Lr++/ve+/ve+/vQojQui7ie+/vTZZwqTvv70Q77+9bO+/vRIiRO+/vWTKpu+/vW5i77+9Z++/ve+/ve+/vRfvv73vv73vv73vv73fie+/vXvvv71hDu+/vTrvv73vv70UBe+/vTQYIe+/ve+/vXUe77+977+9cyHVvl3vv70RS8mG77+'}
If I compare this to a base64 string grabbed from the same request on an iOS device, for example, the two look rather different. The iOS base64 string below also decrypts successfully, which implies the issue could be within Node:
{ base64data: 'ZGRIUUJhR0dMc3BTVFdQSHppS3BZUDY1UmJWSkFmbnRpekg1a29nUnlFMGtZemExU0RwS1h0VHlNd1lHMnhRcEZiMjEzNEwwYXduNllHR1p0aU1HM3YzcWlyTnlSd2RWSmNHQldMRVVMWklWaGRpNzBNWHVPNkZaSnJBUWZ6YnBJbERESzBiTEpoUGVCS3ZiU1d2NnRIcktIb'}
So my question here is: is there something I need to do on the Node server side to correctly parse or translate req.body before converting it to a base64 string?
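One clue: the repeated 77+9 runs in the broken base64 are the encoding of the bytes EF BF BD, i.e. the UTF-8 replacement character U+FFFD. That means the raw ciphertext bytes were already replaced with placeholders before Buffer.from ever ran, most likely by body-parsing middleware decoding the request as UTF-8 text, and at that point the data is unrecoverable. A minimal sketch of one way to receive the body as raw bytes instead, assuming an Express app (the route path and content type here are made up for illustration):

const express = require('express');
const app = express();

// express.raw() captures the body as a Buffer for matching content types,
// so no UTF-8 decoding ever touches the ciphertext.
app.post('/decrypt', express.raw({ type: 'application/octet-stream', limit: '1mb' }), (req, res) => {
    const encryptedStr = req.body.toString('base64'); // lossless Buffer -> base64
    console.log({ encryptedStr });
    res.sendStatus(200);
});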
Related
I created a NodeJS application which is supposed to get some data from an external API server. That server provides its data only as 'Content-Type: text/plain;charset=ISO-8859-1'; I got that information from the server's response headers.
Now the problem for me is that special characters like 'ä', 'ö' or 'ü' are shown as �.
I tried to convert them with Iconv to UTF-8, but then I got things like '�' instead...
My question is, what am I doing wrong?
For testing I use Postman. These are the steps I do to test everything:
Use Postman to trigger my NodeJS application
The App requests data from the API-Server
API-Server sends Data to NodeJS App
My App prints out the raw response-data of the API, which already has those strange characters �
The App then tries to convert them with Iconv to UTF-8, which now shows these '�' characters
Another strange thing:
When I connect Postman directly to the API server, the special characters are shown correctly without problems. Therefore I guess my application causes the problem, but I cannot see where or why...
// JavaScript code:
const axios = require('axios');
const { Iconv } = require('iconv');

try {
    const response = await axios.get(
        URL,
        {
            params: params,
            headers: headers
        }
    );
    var iconv = new Iconv('ISO-8859-1', 'UTF-8');
    // convert() is synchronous and returns a Buffer
    var converted = iconv.convert(response.data);
    return converted.toString('utf-8');
} catch (error) {
    throw new Error(error);
}
So after some deeper research I came up with the solution to my problem.
The cause of all the trouble seems to lie in the post-processing axios does (or something similar): the step right after the data is received, where it is converted to text, shortly before the response object is handed to my NodeJS application.
What I did was set the responseType of the axios GET call to 'arraybuffer'. That required an adjustment to the axios call, like so:
var resArBuffer = await axios.get(
    URL,
    {
        responseType: 'arraybuffer',
        params: params,
        headers: headers
    }
);
Conveniently, with that setting axios in Node delivers the data as a Buffer, whose toString() method decodes the bytes with an encoding of your choice:
var response = resArBuffer.data.toString("latin1");
Another thing worth mentioning is that I used "latin1" instead of "ISO-8859-1". Node's Buffer API simply doesn't know the name "ISO-8859-1"; "latin1" is its name for that encoding (some sources even recommend "cp1252" instead, but "latin1" worked for me here).
Unfortunately that was not enough, since I needed the text in UTF-8 format. Using toString('utf-8') directly on the Buffer was the wrong way too, since it would still print the "�" symbols. The workaround was simple: I used Buffer.from(...) on the "latin1"-decoded text:
var text = Buffer.from(response, 'utf-8').toString();
Now I get the desired UTF-8 text I needed. I hope this thread helps anyone else out there, since this information was spread across many different threads for me.
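For reference, here is the whole flow in one place, as a minimal sketch assuming axios in Node, where responseType: 'arraybuffer' yields a Buffer (URL, params and headers stand in for your own values):

const axios = require('axios');

async function fetchLatin1Text(URL, params, headers) {
    // Ask for raw bytes so axios doesn't decode the body as UTF-8 itself.
    const res = await axios.get(URL, {
        responseType: 'arraybuffer',
        params: params,
        headers: headers
    });
    // res.data is a Buffer in Node; 'latin1' is Node's name for ISO-8859-1.
    // The resulting JavaScript string holds the correct characters and
    // serializes as UTF-8 when sent onwards in a response.
    return res.data.toString('latin1');
}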
So, I'm trying to pass an image to a Node Lambda through API Gateway and this is automatically base64 encoded. This is fine, and my form data all comes out correct, except somehow my image is being corrupted, and I'm not sure how to decode this properly to avoid this. Here is the relevant part of my code:
const multipart = require('aws-lambda-multipart-parser');

exports.handler = async (event) => {
    console.log({ event });
    const buff = Buffer.from(event.body, 'base64');
    // using utf-8 appears to lose some of the data
    const decodedEventBody = buff.toString('ascii');
    const decodedEvent = { ...event, body: decodedEventBody };
    const jsonEvent = multipart.parse(decodedEvent, false);
    const asset = Buffer.from(jsonEvent.file.content, 'ascii');
};
First off, it would be good to know whether the aws-sdk has a way of parsing the multipart form data rather than relying on this unsupported third-party code. Next, asset ends up as a buffer that is exactly the same size as the original file, but some of the byte values are off. My assumption is that the data is being encoded one way and decoded in a slightly different way, so some of the characters are interpreted differently.
Just an update in case anybody else runs into a similar problem: updating 'ascii' to 'latin1' in both places made it start working fine.
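That fix makes sense: when decoding a Buffer to a string, Node's 'ascii' encoding clears the highest bit of each byte, so every byte >= 0x80 in the image gets silently altered, whereas 'latin1' maps all 256 byte values one-to-one and round-trips losslessly. Applied to the handler above:

const multipart = require('aws-lambda-multipart-parser');

exports.handler = async (event) => {
    const buff = Buffer.from(event.body, 'base64');
    // 'latin1' preserves every byte value 0x00-0xFF, unlike 'ascii'
    const decodedEventBody = buff.toString('latin1');
    const decodedEvent = { ...event, body: decodedEventBody };
    const jsonEvent = multipart.parse(decodedEvent, false);
    // decoding back with the same encoding makes the round-trip lossless
    const asset = Buffer.from(jsonEvent.file.content, 'latin1');
    return asset.length; // placeholder: hand the file off to storage here
};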
I have the following code:
const notifications = await axios.get(url)
const ctype = notifications.headers["content-type"];
ctype comes back as "text/json; charset=iso-8859-1".
And my string is like this: "'Ol� Matheus, est� pendente.',"
How can I decode from iso-8859-1 to utf-8 without getting those errors?
Thanks
text/json; charset=iso-8859-1 is not a valid standard content type: text/json is wrong (the registered type is application/json), and JSON must be UTF-8.
So the best way to get around this, at least on the server, is to first get a buffer (axios can return one via responseType: 'arraybuffer'), convert it to a proper JavaScript string, and only then run JSON.parse on it.
For example:

// Ask axios for raw bytes; in Node, res.data will be a Buffer.
const res = await axios.get(url, { responseType: 'arraybuffer' });
// 'latin1' is Node's name for ISO-8859-1; decode first, then parse.
const notifications = JSON.parse(res.data.toString('latin1'));
I'm developing my understanding of servers by writing a webdev framework in vanilla Node.js. For the first time a situation arose where a French character was included in a JSON response from the server, and this character showed up as an unrecognized symbol (a question mark within a diamond in Chrome).
The problem was the encoding, which was being specified here:
/*
At this stage we have access to "response", which is an object of the
following format:

response = {
    data: 'A string which contains the data to send',
    encoding: 'The encoding type of the data, e.g. "text/html", "text/json"'
}
*/
var encoding = response.encoding;
var length = response.data.length;
var data = response.data;

res.writeHead(200, {
    'Content-Type': encoding,
    'Content-Length': length
});

res.end(data, 'binary'); // Everything is encoded as binary
The problem was that everything sent by the server was encoded as binary, which ruins the ability to display certain characters. The fix seemed simple: include a boolean binary value in response, and adjust the second parameter of res.end accordingly:
/*
At this stage we have access to "response", which is an object of the
following format:

response = {
    data: 'A string which contains the data to send',
    encoding: 'The encoding type of the data, e.g. "text/html", "text/json"',
    binary: 'a boolean value determining the transfer encoding'
}
*/
.
.
.
var binary = response.binary;
.
.
.
res.end(data, binary ? 'binary' : 'utf8'); // Encode responses appropriately
Here is where I have produced some very strange behavior. This modification causes French characters to appear correctly, but occasionally causes the last character of a response to be omitted on the client side!
This bug only happens once I host my application on Heroku; locally, the last character is never missing.
I noticed this bug because certain responses (not all of them!) now break the JSON.parse call on the client side, although they are only missing the final } character.
I have a horrible band-aid solution right now, which works:
var length = response.data.length + 1;
var data = response.data + ' ';
I am simply appending a space to every single response sent by the server. This happens to work for text/html, text/css, text/json, and application/javascript responses because they can tolerate the extra whitespace, but I hate this solution and it will break other Content-Types!
My question is: can anyone give me some insight into this problem?
If you're going to explicitly set a Content-Length, you should always use Buffer.byteLength() on the body to calculate it, since that method returns the actual number of bytes in the string, not the number of characters like the string .length property will. A character such as 'é' is one character but two bytes in UTF-8, so a Content-Length computed from .length is too small, and the client stops reading before the final bytes (your missing }) arrive.
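A minimal sketch of the corrected header logic, using the same response shape as above:

var data = response.data;

res.writeHead(200, {
    'Content-Type': response.encoding,
    // Count bytes, not characters: 'é' is 1 character but 2 bytes in UTF-8.
    'Content-Length': Buffer.byteLength(data)
});

res.end(data); // for string bodies; send Buffers directly for binary content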
I'm having trouble serving binary data from Node. I worked on a node module called node-speak which does TTS (text to speech) and returns a base64-encoded audio file.
So far I'm doing this to convert from base64 to Buffer/binary and then serve it:
// var src = Base64 data
var binAudio = new Buffer(src.replace("data:audio/x-wav;",""), 'base64');
Now I'm trying to serve this audio from node with the headers like so:
res.writeHead(200, {
'Content-Type': 'audio/x-wav',
'Conetnt-Length': binAudio.length
});
And serving it like so:
res.end(binAudio, "binary");
But it's not working at all. Is there something I haven't quite understood, or am I doing something wrong? This is not serving a valid audio/x-wav file.
Note: the Base64 data is valid; I can serve it like so [see below] and it works fine:
// assume proper headers sent and "src" = base64 data
res.end("<!DOCTYPE html><html><body><audio src=\"" + src + "\"/></body></html>");
So why can I not serve the binary file? What am I doing wrong?
Two things are wrong:
It's Content-Length, not Conetnt-Length.
res.end(binAudio, "binary") is wrong; use res.end(binAudio). With "binary", res.end expects a string: "binary" is a deprecated string encoding in Node (an alias for "latin1"). Pass no encoding when you already have a Buffer.
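Putting both fixes together, a minimal sketch of the corrected serving code (assuming src is a data URI like "data:audio/x-wav;base64,..."; the replace below strips the whole prefix including the "base64," marker so no stray characters reach the decoder):

var binAudio = Buffer.from(src.replace("data:audio/x-wav;base64,", ""), 'base64');

res.writeHead(200, {
    'Content-Type': 'audio/x-wav',
    'Content-Length': binAudio.length // a Buffer's length is already in bytes
});

res.end(binAudio); // no encoding argument: binAudio is already a Buffer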