AWS Lambda corrupts image buffer - Serverless Framework - node.js

I'm having an issue where AWS Lambda is corrupting my image buffer when I try to send it to a Discord webhook. It works locally with SLS Offline and I can see the image in the Discord channel with no issues, but when I deploy it to AWS the attachment arrives corrupted instead of the image itself. Looking around at other people with this issue, I've tried adding this to my serverless.yml:
plugins:
  - serverless-apigw-binary
with apigwBinary under the custom section:
apigwBinary:
  types: # list of mime-types
    - 'image/png'
I also saw another post about adding AwsApiGateway under provider in serverless.yml, like so:
provider:
  AwsApiGateway:
    binaryMediaTypes:
      - 'image/png'
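For what it's worth, recent Serverless Framework releases spell this setting provider.apiGateway.binaryMediaTypes (lowercase apiGateway), so the AwsApiGateway key above may be silently ignored — a hedged sketch, depending on the framework version in use:

provider:
  apiGateway:
    binaryMediaTypes:
      - 'image/png'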
When I console.log('Sending Buffer', buffer); to make sure the buffer is actually there, in CloudWatch I see:
Sending Buffer <Buffer fd 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 04 fd 00 00 02.... 162236 more bytes>
So the buffer is definitely making it to the Lambda, but it gets corrupted by the time it reaches Discord. But again, it works locally with no issues, so Lambda is corrupting it somehow. I even tried adding a 3 second delay in case it was a race condition or something, even though I can see the buffer in the console log, but nope.
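Worth noting: a valid PNG begins with the signature bytes 89 50 4e 47, while the logged buffer begins with fd 50 4e 47, which hints the payload was already re-encoded before the handler saw it. If the image arrives through an API Gateway Lambda proxy integration, the event body is a string (base64 when a binary media type matches), so here is a minimal sketch of decoding it first — the handler shape is an assumption, not taken from the question:

// Hedged sketch: assumes the image arrives via an API Gateway Lambda proxy event
exports.handler = async (event) => {
  // With binary media types registered, API Gateway base64-encodes the body
  // and sets isBase64Encoded accordingly
  const buffer = event.isBase64Encoded
    ? Buffer.from(event.body, 'base64')
    : Buffer.from(event.body, 'binary');
  console.log('Sending Buffer', buffer);
  // ...then hand `buffer` to webhook.send(...) as in the snippet below
};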
try {
  webhook.send({
    files: [{
      attachment: buffer,
      name: newFileName
    }]
  })
  .then((res) => {
    console.log('FINISHED', res);
  });
}
catch (e) {
  console.log('ERROR', e);
}
How can I get the image buffer to make it to Discord as an image without being corrupted? I appreciate any help!

Have you tried setting the Content-Type header of the HTTP response to 'image/png'?
.then((res) => {
  console.log('FINISHED', res);
  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'image/png'
    },
    body: ''
  };
})
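A related point, hedged on the setup: when a Lambda proxy integration returns binary through API Gateway, the response body generally has to be base64-encoded and flagged, roughly like this:

return {
  statusCode: 200,
  headers: { 'Content-Type': 'image/png' },
  // API Gateway converts this back to raw bytes when the media type is registered
  body: buffer.toString('base64'),
  isBase64Encoded: true
};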

Related

How to convert base64 encoded string to an audio file or any file in flutter

I have an HTTP request coming from my backend Node.js server which sends me a base64/binary string. I want to convert that into a file (mp3).
This is a portion of the string I am receiving in Flutter (not the whole string):
{¸;áÜù|çH?/ÿ?aö§ÌI"ví¯XEÿõÂWû!1î8YÚm¾æàõ¤åù¯F>®*ý&ÿ
This is the request I am sending:
final res = await http
    .post(
      Uri.parse('*****can't expose the link here sorry*******'),
      headers: {"responseType": "arraybuffer"},
      body: {'type': 'TTS', 'text': 'Hello from flutter.'},
    );
This is how I am processing the response:
if (res.statusCode == 200) {
  debugPrint(res.body);
  var decoded = base64.decode(res.body);
  print('Decoded: $decoded');
  // Converting the decoded result to string
  print(utf8.decode(decoded));
}
And I am receiving this in the console:
. (Same weird characters here)
.
.
I/flutter (23563): dQ¢b²º:KëV<&ùsG(tIPÑI$&e;ÈÑãÛÿÿÿÿÿÎôjuì.ïµJÐ6I
E/flutter (23563): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: FormatException: Invalid character (at character 1)
E/flutter (23563): ÿóDÄ
I am new to file conversion in Flutter, so can anyone please help me with this? It works locally in my Node.js project (on my computer, not the server). But I am not able to understand how I can convert that base64 encoded string (the Google documentation says it is a base64 encoded string) into an audio file in Flutter.
How I did it in Node.js:
await axios
  .request({
    responseType: "arraybuffer",
    url: "******Sorry can't share the link*********",
    method: "post",
    data: {
      type: "TTS",
      text: "Hello, I am Sam.",
    },
  })
  .then(async function (response) {
    //console.log(response.data);
    await fs.writeFileSync(
      "D:\\nodejss\\nodejsss\\TTSfile.mp3",
      Buffer.from(response.data)
    );
  });
Also, this is the buffer logged in the Node.js console:
<Buffer ff f3 44 c4 00 11 d0 05 3c 01 40 18 01 c1 c1 e5 83 f0 11 ff .....(more 00 11 kinda stuff)
Thanks in advance.
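The FormatException above is what base64.decode throws when it is handed raw bytes rather than base64 text. One way to reconcile the two sides is for the server to return an explicitly base64-encoded string that the Flutter client can then decode — a minimal sketch, where the route name and the generateSpeech helper are hypothetical placeholders:

const express = require('express');
const app = express();
app.use(express.json());

app.post('/tts', async (req, res) => {
  // generateSpeech is a hypothetical helper resolving to a Buffer of mp3 bytes
  const audio = await generateSpeech(req.body.text);
  // Send base64 text so the client-side base64.decode() receives valid input
  res.type('text/plain').send(audio.toString('base64'));
});

app.listen(3000);

On the Flutter side, the decoded bytes could then be written out with File('out.mp3').writeAsBytes(base64.decode(res.body)) from dart:io.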

Sending image from node.js server to ESP32 using ESP-IDF

I'm trying to send an image from a Node.js server to an ESP32 chip over Wi-Fi using ESP-IDF. I believe this transfer happens over a TCP/IP connection. I'm able to send regular text data using an HTTP GET request or the http_perform_as_stream_reader function shown below, but when it comes to transferring an image from the Node.js server, I'm unable to do so; I seem to be receiving garbage on the client side. The ultimate goal is to save the image into a SPIFFS file on the ESP32. Here's my code:
Server side:
const {imageprocess, convertToBase64} = require('../canvas');
const fs = require('fs');
const express = require('express');
const router = express.Router();
const path = require('path');

const imageFile = path.resolve(__dirname, "../images/file31.png");

router.get('/', funcAllStream);
module.exports = router;

function funcAllStream(req, res, next) {
  newStream(res, imageFile);
}

function newStream(res, imageFile) {
  var readStream = fs.createReadStream(imageFile);
  readStream.on('data', chunk => {
    res.send(chunk.toString('hex'));
  });
}
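As an aside, res.send() ends the HTTP response, so calling it inside the 'data' handler ships only the first chunk as a complete response. A hedged sketch of streaming the raw bytes instead, assuming the same Express route:

function newStream(res, imageFile) {
  // Tell the client it is receiving raw PNG bytes
  res.setHeader('Content-Type', 'image/png');
  // pipe() forwards every chunk and ends the response exactly once, at EOF
  fs.createReadStream(imageFile).pipe(res);
}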
Client side (C++):
static void http_perform_as_stream_reader(void)
{
    char *buffer = malloc(MAX_HTTP_RECV_BUFFER + 1);
    esp_http_client_config_t config = {
        .url = "http://192.168.1.155:8085/api?file=image1.png"
    };
    // calls esp_http_client_init from the standard ESP32 component esp_http_client
    esp_http_client_handle_t client = esp_http_client_init(&config);
    esp_err_t err;
    if ((err = esp_http_client_open(client, 0)) != ESP_OK) {
        return;
    }
    // calls esp_http_client_fetch_headers from the standard ESP32 component esp_http_client
    int content_length = esp_http_client_fetch_headers(client);
    int total_read_len = 0, read_len;
    // MAX_HTTP_RECV_BUFFER is defined to be larger than the image1.png file on the server side
    if (total_read_len < content_length && content_length <= MAX_HTTP_RECV_BUFFER) {
        // calls esp_http_client_read from the standard ESP32 component esp_http_client
        read_len = esp_http_client_read(client, buffer, content_length);
        if (read_len <= 0) {
            ESP_LOGE(TAG, "Error read data");
        }
        buffer[read_len] = 0;
    }
    esp_http_client_close(client);
    esp_http_client_cleanup(client);
    free(buffer);
}
http_perform_as_stream_reader();
On the client side, I've made sure the buffer has been allocated more space than the image file size. When I print out what's been stored in the buffer on the client side, I see absolute garbage. The server side sends a buffer stream that looks like this:
<Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 00 64 00 00 00 64 08 06 00 00 00 70 e2 95 54 00 00 00 06 62 4b 47 44 00 ff 00 ff 00 ff a0 bd a7 ...>
This is what the client buffer looks like before receiving data:
���?Lv�~sN\
L��8J��s-�z���a�{�;�����\���~�Y���Di�2���]��|^,�x��N�݁�����`2g����n�w��b�Y�^���a���&��wtD�>n#�PQT�(�.z��(9,�?İ�
This is what client buffer looks like after receiving data:
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/octet-stream
Content-Length: 4517
ETag: W/"11a5-IhqwFPYLgC+NRfikTwS2exLtCWQ"
Date: Mon, 19 Apr 2021 16:15:48 GMT
Connection: keep-alive
�PNG
So how do I ensure that the client-side buffer actually receives correct data in the correct format from the server side? Ultimately, I want to store the data in a SPIFFS file on the client side and open the file to display the image transmitted from the Node.js server.
Update:
After converting the data to hex format on the server side, I can confirm that the client receives the correct hex string. This is what I see on both the server and the client side:
89504e470d0a1a0a0000000d4948445200000064000000 ... 0fcc3c0ff03b8c85cc0643044ae0000000049454e44ae426082
The starting and ending signatures are consistent with those of a PNG file.
I've set the client-side buffer to 10000 bytes (the content length is 9037 bytes). Still, I receive the hex string in two chunks. In my client-side code, http_perform_as_stream_reader calls esp_http_client_fetch_headers from the ESP component esp_http_client, which in turn calls the lwip_rec_tcp function from lwip/src/api/sockets.c. Since I've set the buffer capacity to 10000 (a rather large amount), esp_http_client_fetch_headers fetches a large chunk of the hex string along with the headers. Then, when http_perform_as_stream_reader calls esp_http_client_read, it again calls lwip_rec_tcp, which now runs a do ... while loop until it retrieves the remaining server-side data. Since lwip_rec_tcp stores all the data in the buffer, whose effective capacity is low (certainly not the 10000 I set in the code), the first chunk gets overwritten by the final chunk of data. So how do I ensure that the client->response->buffer pointer captures all the chunks of data without modifying lwip_rec_tcp to upload data to the SPIFFS file inside the do ... while loop?
The application should not assume that esp_http_client_read reads the same number of bytes specified in the length argument. Instead, the application should check the return value of this API, which indicates the number of bytes read by the corresponding call. If the return value is zero, the complete data has been read.
To work correctly, the application should call esp_http_client_read in a while loop, checking its return value on every iteration.
You can also use esp_http_client_read_response, a helper API for esp_http_client_read that handles the while loop internally and reads the complete data in one go.
Please refer to the http_native_request example, which uses the esp_http_client_read_response() API.

Node Promise Seems To Be Trickling Back Data, Some Looks Like A Buffer?

I'm making an async HTTPS request. I'm getting data back, but it seems like it's trickling in rather than coming back as a single response from the server. Some of it even seems to be coming back as some kind of buffer. Has anyone seen anything like this?
I'm trying to understand why the async 'on data' events are arriving in pieces. The server is set up to deliver a single response back.
//Start JS File
var port = process.env.PORT || 3000,
    http = require('http'),
    fs = require('fs'),
    https = require('https'),
    html = fs.readFileSync('index.html');

const server = http.createServer();

server.on('request', async (req, res) => {
  let myPromise = new Promise((resolve, reject) => {
    const data = JSON.stringify({
      someData: 'someData'
    });
    const options = {
      hostname: 'gs.aURL.com',
      port: 443,
      path: '/openrtb2/auction',
      method: 'POST',
      headers: {
        'Content-Type': 'text/plain',
        'Content-Length': data.length,
        Accept: '*/*',
        pragma: 'no-cache',
        'User-Agent':
          'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
        Cookie:
          'uids=eyJ0ZW1wVUlEcyI6eyIzM2Fjcm9zcyI6eyJ1aWQiOiIyMTAyNDE4OTIxOTc2MTciLCJleHBpcmVzIjoiMjAyMC0wMy0yN1QyMTo1MToyOC42MDkxNjEyODZaIn0sImFkbnhzIjp7InVpZCI6IjY3ODE0NTEzMjgzOTg1NTE3MDkiLCJleHBpcmVzIjoiMjAyMC0wMy0yN1QyMTo1MTozMy41Mjg0OTgzMjhaIn0sInB1bHNlcG9pbnQiOnsidWlkIjoiOE1WYlRlMTRQSFhVIiwiZXhwaXJlcyI6IjIwMjAtMDMtMjdUMjE6NTE6MzMuMTI5NDI2NjIyWiJ9fSwiYmRheSI6IjIwMjAtMDMtMTNUMjE6NTE6MjguNjA5MTU1MThaIn0=',
        origin: 'https://lab.fizz.org'
      }
    };
    const adRequest = https.request(options, (adResponse) => {
      //console.log(`statusCode: ${res.statusCode}`);
      adResponse.on('data', (d) => {
        console.log(typeof d);
        res.write(d);
        resolve(d);
      });
    });
    adRequest.on('error', (error) => {
      reject(error);
    });
    adRequest.write(data);
    adRequest.end();
  })
    .then((result) => {
      console.log(result);
      res.write(result);
      req.end();
    })
    .catch((error) => {
      //res.write(error);
      //req.end();
    });
});

// Listen on port 3000, IP defaults to 127.0.0.1
server.listen(port);
// Put a friendly message on the terminal
console.log('Server running at http://127.0.0.1:' + port + '/');
//END JS File
Output to terminal (from console.log(typeof d)):
====================================================================
object
<Buffer 7b 22 69 64 22 3a 22 66 39 34 35 64 34 64 65 2d 33 36 61 37 2d 34 38 65 37 2d 38 64 65 64 2d 35 62 63 63 61 66 32 35 39 37 66 36 22 2c 22 73 65 61 74 ... >
object
object
object
object
object
object
object
object
object
object
object
object
object
object
object
object
object
object
object
data events on streams (that are not in object mode) contain an arbitrary amount of data. You may get all your data from the stream in one data event, or it may come in a whole bunch of data events. This is somewhat analogous to reading a TCP stream (since the underlying HTTP protocol is just using a TCP stream), where data arrives in arbitrary chunks.
How many chunks it arrives in depends on a whole bunch of factors, none of which you control, including:
How the sender sends the data
How fast the sender's network connection is
How fast the internet link between you and the sender is
How fast your computer can get data off the network
What might happen to the data as it traverses the internet to get to you
And so on...
So, if you really just want ALL the data, then you need to collect the data from every data event, combine it together, and then in the end event you will know that you now have ALL the data, as in the sketch below.
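Applied to the request code above, that collect-and-combine pattern might look like this (illustrative, not a drop-in fix):

const adRequest = https.request(options, (adResponse) => {
  const chunks = [];
  // Each 'data' event delivers an arbitrary slice of the response
  adResponse.on('data', (d) => chunks.push(d));
  // 'end' fires once the full response has arrived
  adResponse.on('end', () => resolve(Buffer.concat(chunks)));
});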
If you want meaningful pieces of data along the way, then you have to parse the data as it comes in so that you can find the boundaries of some meaningful amount of data (such as lines or some other object boundary). And, you have to be prepared that you may receive a partial piece of data, have to buffer that partial data until the rest of it comes in on the next data event. This is how you do incremental parsing of incoming streams.
That data event is coming from the incoming stream. Depending upon how the stream is configured, the data event will either offer a Buffer object (which it looks like yours is) or a String.

Proper request template mapping or process in order to upload a photo to s3 using Serverless Framework

I am using Serverless version 1.0.0 and Node version 5.7.1.
I have an endpoint for updating the photo of a table in a MySQL database. So prior to inserting said photo, I upload the form data I get from the browser to S3, and then update the photo URL using the returned image URL.
The problem is I don't know the proper way to define the request mapping template in serverless.yml so that I can extract the photo, AND the path parameters, AND the $context variable for the principal id.
Here is my current serverless.yml function definition
updateBoardPhoto:
  handler: handler.updateBoardPhoto
  events:
    - http:
        path: boards/{boardId}/photo
        method: PUT
        integration: lambda
        parameters:
          headers:
            Authorization: false
          body:
            photo: true
          paths:
            boardId: true
        request:
          passThrough: WHEN_NO_TEMPLATES
          template:
            multipart/form-data: '{"principalId" : "$context.authorizer.principalId", "body" : $input.json("$"), "boardId": "$input.params(''boardId'')"'
Now here is the handler I have:
function updateBoardPhoto(event, context, callback) {
  var photo = event.body.photo;
  var boardId = event.boardId;
  var principal = utils.processPrincipalId(event.principalId);
  var s3FileName = randomstring.generate(10) + ".jpg";
  var s3 = new AWS.S3();
  s3.putObject({
    Bucket: config.UPLOAD_BUCKET,
    Key: s3FileName,
    Body: photo,
    ACL: 'public-read',
  }, function (err, data) {
    if (err) {
      throw err;
    } else {
      console.log(data);
      context.succeed(data);
    }
  });
}
Attempt 1
I tried the WHEN_NO_TEMPLATES passthrough option and defined no template, but I only get the photo buffer input variable and no boardId. BUT I do successfully upload the photo to S3.
request:
  passThrough: WHEN_NO_TEMPLATES
// the event variable I get:
{ photo: <Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 02 1c 00 00 00 ac 08 06 00 00 00 30 ab 75 20 00 00 00 19 74 45 58 74 53 6f 66 74 77 61 72 65 00 ... >,
  isOffline: true,
  stageVariables: {} }
Attempt 2
Using the following request definition:
request:
  passThrough: WHEN_NO_MATCH
  template:
    multipart/form-data: '{"principalId" : "$context.authorizer.principalId", "body" : $input.json("$"), "boardId": "$input.params(''boardId'')"'
// the event variable I get:
{ isOffline: true, stageVariables: {} }
I see no variables in my event variable at all! No photo, no boardId.
Can anyone tell me what I'm doing wrong? I am using Postman to test.
I think you are passing the image in JSON format, so you can use $input.body to access the entire body in the template. Also, you should put a single single-quote around the parameter name instead of a doubled single-quote.
Like:
multipart/form-data: '{"principalId" : "$context.authorizer.principalId", "body" : "$input.body", "boardId": "$input.params('boardId')"'
FYI:
API Gateway doesn't support binary data currently. I recommend sending a base64 encoded string of the image to API Gateway, then base64-decoding it before you put it into S3 in your Lambda function.
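Following that suggestion, the handler above could decode the base64 payload before calling putObject — a sketch under the assumption that event.body.photo now carries a base64 string:

// Hedged sketch: event.body.photo is assumed to be a base64-encoded string
var buffer = Buffer.from(event.body.photo, 'base64'); // on Node <= 5.9, new Buffer(..., 'base64')
s3.putObject({
  Bucket: config.UPLOAD_BUCKET,
  Key: s3FileName,
  Body: buffer, // raw image bytes recovered from the base64 payload
  ACL: 'public-read',
}, function (err, data) {
  if (err) throw err;
  context.succeed(data);
});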

Accepting a wav audio file over HTTP POST in an Express/Node server

I'm trying to send an audio file, audio.wav, via cURL to my Express server. I'm using the following cURL request:
curl -X POST --data-binary @"audio.wav" -H "Content-Type: audio/wav" localhost:3000/extract_indicators/audio/darksigma
On my server, I use the following line at the top:
app.use(bodyParser.json());
This lets me parse the body of the incoming request as JSON by default. In my appropriate Express routing handler, I have:
app.post('/extract_indicators/audio/:user_id', function (req, res) {
  app.use(bodyParser.raw({ type: 'audio/wav' }));
  console.log("RECIEVED AUDIO TO EXTRACT INDICATORS: ", req.body);
  <do stuff with audio and send result back>
  app.use(bodyParser.json());
});
My call to console.log prints:
RECIEVED AUDIO TO EXTRACT INDICATORS: {}
What am I doing wrong? Why does req.body not contain my data?
It turns out this is fixed by using the following declarations (outside of the route handler):
app.use(bodyParser.json({ limit: '50mb' }));
app.use(bodyParser.raw({ type: 'audio/wav', limit: '50mb' }));

app.post('/extract_indicators/audio/:user_id', function (req, res) {
  console.log("RECIEVED AUDIO TO EXTRACT INDICATORS: ", req.body);
  <do stuff with audio and send result back>
});
The console output is now:
RECIEVED AUDIO TO EXTRACT INDICATORS: <Buffer 52 49 46 46 54 b0 01 00 57 41 56 45 66 6d 74 20 10 00 00 00 01 00 01 00 80 3e 00 00 00 7d 00 00 02 00 10 00 64 61 74 61 30 b0 01 00 00 00 00 00 00 00 ... >
