I'm trying out the Transloadit API. The template works when I use the testing mode on the Transloadit website, but when I try to use it in Node.js with the SDK I get an error:
INVALID_FORM_DATA - https://api2.transloadit.com/assemblies - INVALID_FORM_DATA: The form contained bad data, which cannot be parsed.
The relevant code (_asset.content is a Buffer object):
import { Readable } from 'stream';

async function getThumbnailUrl(_assetkey: string, _asset: I.FormFile): Promise<string> {
    const tOptions = {
        waitForCompletion: true,
        params: {
            template_id: process.env.THUMB_TRANSLOADIT_TEMPLATE,
        },
    };

    // Wrap the Buffer in a readable stream so the SDK can upload it.
    const stream = new Readable({
        read() {
            this.push(_asset.content);
            this.push(null);
        },
    });

    console.log(_asset.content);
    util.transloadit.addStream(_assetkey, stream);

    return new Promise((resolve, reject) => {
        util.transloadit.createAssembly(tOptions, (err, status) => {
            if (err) {
                return reject(err);
            }
            console.log(status);
            resolve(status);
        });
    });
}
I noticed that you also posted this question on the Transloadit forums, so in case anyone else runs into this problem, you can find more information on this topic here.
Here's a work-around that the OP found that may be useful:
Just to provide some closure to this topic, I just tested my workaround (upload to S3, then use the S3 import Robot to grab the file) and got it to work with the Node.js SDK, so I should be good using that.
I have a suspicion the error I was getting was not to do with the Transloadit API, but rather with the form-data library for Node.js (https://github.com/form-data/form-data), which somehow doesn't build the form data in the way the Transloadit API is expecting.
But as there aren't alternatives to that library that I could find, I wasn't really able to test that hypothesis.
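For anyone who wants to try the same route, here is a rough sketch of that workaround. It is illustrative only: the bucket name, the s3_key field, and the assumption that the template's first step is an /s3/import Robot that reads ${fields.s3_key} are mine, not from the original post.
// Workaround sketch: upload the Buffer to S3 yourself, then let the
// Transloadit template import it with an /s3/import step.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function getThumbnailUrlViaS3(assetKey, asset) {
    // 1. Put the file into S3 first (bucket name is a placeholder).
    await s3.upload({
        Bucket: process.env.UPLOAD_BUCKET,
        Key: assetKey,
        Body: asset.content,
    }).promise();

    // 2. Create the assembly without attaching any stream; the template's
    //    /s3/import step picks the file up via the s3_key field.
    const options = {
        waitForCompletion: true,
        params: {
            template_id: process.env.THUMB_TRANSLOADIT_TEMPLATE,
            fields: { s3_key: assetKey }, // referenced as ${fields.s3_key} in the template
        },
    };

    return new Promise((resolve, reject) => {
        util.transloadit.createAssembly(options, (err, status) => {
            if (err) return reject(err);
            resolve(status);
        });
    });
}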
The Transloadit core team also gave this response regarding the issue:
It may try to set his streams to be tus streams, which would mean that they're not uploaded as multipart/form-data.
In either case it seems like the error sent to his callback would be originating from the error out of _remoteJson.
These could be the problem areas:
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L146
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L606
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L642
It is also possible that the form-data library could be the source of the error.
To really test this further, we're going to need to try using the library he was using, make sure its output is good, and then debug the node-sdk to see where the logic failure is in it, or whether the logic failure is on the API side.
I have a Remix action function that accepts a file as a formData request object and then uploads it to Supabase. After that, I get the URL of the uploaded image and return it.
My function:
const fileExt = filename.split(".").pop();
const fileName = `${Math.random().toFixed(10)}.${fileExt}`;
const filePath = `${fileName}`;

const { error: uploadError } = await supabaseClient.storage
  .from("public")
  .upload(`misc/${filePath}`, stream);

if (uploadError) {
  console.error(uploadError);
  throw new Error(uploadError.message);
}

const { publicURL, error } = await supabaseClient.storage
  .from("public")
  .getPublicUrl(`misc/${filePath}`);

if (error) {
  console.error(error);
  throw new Error(error.message);
}

!publicURL && console.error(`No public URL for ${filePath}`);

return publicURL;
Because the formData is multipart/form-data, I need to parse it, which I handled by putting the code above in an uploadHandler function and then calling:
const formData = await parseMultipartFormData(
  request,
  uploadHandler
);
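For reference, here is a rough sketch of how these pieces might fit together. It assumes the older stream-based Remix uploadHandler signature and a hypothetical uploadToSupabase helper wrapping the Supabase code above; the field name "image" is also an assumption.
// Sketch only: uploadToSupabase() stands in for the Supabase code above.
import { unstable_parseMultipartFormData as parseMultipartFormData } from "@remix-run/node";

const uploadHandler = async ({ name, stream, filename }) => {
  if (name !== "image") {
    // Drain fields we don't handle so the parser doesn't stall.
    stream.resume();
    return;
  }
  // Upload the stream and resolve with the public URL (the code from above).
  return uploadToSupabase(stream, filename);
};

export const action = async ({ request }) => {
  const formData = await parseMultipartFormData(request, uploadHandler);
  return { imageUrl: formData.get("image") };
};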
The code works sometimes, but at other times it fails with an ECONNRESET error. From what I understand, that may have to do with Node's asynchronous code, but I have not been able to solve it. How can I avoid these random ECONNRESET errors that Supabase keeps giving?
I've been through this with file uploads and Remix. While everyone's use case is different, in many cases where an authenticated user is uploading to a service like Supabase Storage or Cloudflare, it's better to upload from the client. This is especially true if we are using a serverless environment.

With Cloudflare, I grab a unique signed upload URL with useFetcher(). With Supabase, the team have set up their JS library to authenticate the user, and we write policies on the database to protect the upload, so it's just a lot easier to use the client. This becomes even more relevant when we're uploading large video files, for example, and want the ability to pause and resume uploading; that's much easier from the client than via isolated serverless functions.

If we're worried about data from the client, we can put sensitive data encrypted in a cookie, so when the client completes it sends (e.g.) {completed: true} to an action, which grabs the data from the cookie and persists it to the database.
I'm sure this doesn't solve your problem if you really need to do it via the backend, but I just wanted to share that, in my experience, it's not always the best idea to do everything The Remix Way. Sometimes the client is better.
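To illustrate the client-side route, here is a sketch only; it mirrors the storage calls from the question and assumes supabase-js is initialised in the browser with the signed-in user's session, with a storage policy that allows authenticated uploads to the "public" bucket.
// Client-side upload sketch: the browser talks to Supabase Storage directly.
async function uploadFromBrowser(file) {
  const fileExt = file.name.split(".").pop();
  const filePath = `misc/${Math.random().toFixed(10)}.${fileExt}`;

  const { error: uploadError } = await supabaseClient.storage
    .from("public")
    .upload(filePath, file);
  if (uploadError) throw new Error(uploadError.message);

  const { publicURL, error } = await supabaseClient.storage
    .from("public")
    .getPublicUrl(filePath);
  if (error) throw new Error(error.message);

  // Hand the URL back to a Remix action via useFetcher(), so only metadata,
  // not the file itself, goes through the server.
  return publicURL;
}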
I have created an API endpoint with Firebase Functions using Node.js. This API endpoint collects JSON data from the client browser, and I am saving that JSON data to the Firebase Firestore database using Firebase Functions.
While this works fine, when I look at the Firestore usage tab it shows a really high number of read operations, even though I have not created any read function so far.
My API is in production and the current usage data is: Reads 9.7K, Writes 1K, Deletes 0.
I have already checked the Firebase Firestore documentation and pricing pages, but I can't seem to find anything on this issue.
I am using the Firestore add function to create a document with an auto-generated document id. ValidateSubscriberData() is a simple function that validates the client's req.body input, which is JSON data.
app.post('/subscribe', (req, res) => {
  let subscriber = {};
  ValidateSubscriberData(req.body)
    .then(data => {
      subscriber = data;
      //console.log(data);
      subscriber.time = Date.now();
      return subscriber;
    })
    .then(subscriber => {
      //console.log(subscriber);
      // noinspection JSCheckFunctionSignatures
      return db.collection(subscriber.host).add(subscriber);
    })
    .then(document => {
      console.log(document.id);
      res.json({id: document.id, iid: subscriber.iid});
      return 0;
    })
    .catch(error => {
      console.log({SelfError: error});
      res.json(error);
    });
});
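(ValidateSubscriberData() isn't shown above; purely for illustration, a minimal version might look like the following. The email field is an assumption; only host and iid appear in the original code.)
// Hypothetical validator (illustrative only), returning a Promise like the original.
function ValidateSubscriberData(body) {
  return new Promise((resolve, reject) => {
    if (!body || typeof body.host !== 'string' || body.host.length === 0) {
      return reject(new Error('host is required'));
    }
    if (typeof body.email !== 'string' || body.email.indexOf('@') === -1) {
      return reject(new Error('a valid email address is required'));
    }
    // Pass through only the fields the rest of the handler expects.
    resolve({ host: body.host, email: body.email, iid: body.iid });
  });
}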
I don't know whether this is an issue with Firestore or whether I am doing something in a way that triggers read operations internally, but I want to find a way to optimize my code.
English is not my first language and I am trying my best to explain my issue.
I think Firestore is working perfectly fine, and my code too. I assume Firebase is counting the reads I made through the Firebase console.
To verify this, I clicked the Data tab on the Firestore page and scrolled down to make all document names/ids visible. After that, I saw 1K reads added to my old stats. So it's confirmed that Firestore counts all reads, even those made from the Firebase console. That's obvious in hindsight, but my bad: I had not thought about it before.
I don't know how relevant this question is, but maybe people like me will find it helpful before posting a similar question on this helpful platform.
We are trying to migrate our zip microservice from a regular Node.js Express application to AWS API Gateway integrated with AWS Lambda.
Our current application sends a request to our API, gets a list of attachments, then visits those attachments and pipes their content back to the user as a zip archive. It looks something like this:
module.exports = function requestHandler(req, res) {
  //...
  //irrelevant code
  //...
  return getFileList(params, token).then(function(fileList) {
    const filename = `attachments_${params.id}`;
    res.set('Content-Disposition', `attachment; filename=${filename}.zip`);
    streamFiles(fileList, filename).pipe(res); // <-- here the magic happens
  }, function(error) {
    errors[error](req, res);
  });
};
I have managed to do everything except the part where I have to stream content out of Lambda function.
I think one of possible solutions is to use aws-serverless-express, but I'd like a more elegant solution.
Does anyone have any ideas? Is it even possible to stream out of Lambda?
Unfortunately, Lambda does not support streams as events or return values. (It's hard to find this mentioned explicitly in the documentation, except by noting how invocations and contexts/callbacks are described in the working documentation.)
In the case of your example, you will have to await streamFiles and then return the completed result.
(aws-serverless-express would not help here; if you check the code, they wait for your pipe to finish before returning: https://github.com/awslabs/aws-serverless-express/blob/master/src/index.js#L68)
N.B. There's a nuance here: a lot of the language SDKs support streaming for requests/responses. However, this means connecting to the stream transport, e.g. the stream downloading the complete response from the Lambda, not listening to a stream emitted from the Lambda.
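As a sketch of that buffer-then-return approach (illustrative only; it assumes the zip fits in memory and within API Gateway's payload limit, and that binary media types are enabled on the API so the base64 body is decoded for the client):
// Collect the zip stream into a Buffer and return it as a base64 response.
exports.handler = async (event) => {
  // Placeholders: derive params and token from the event, as in the Express handler.
  const params = event.pathParameters;
  const token = event.headers.Authorization;

  const fileList = await getFileList(params, token);
  const filename = `attachments_${params.id}`;

  const chunks = [];
  await new Promise((resolve, reject) => {
    streamFiles(fileList, filename)
      .on('data', (chunk) => chunks.push(chunk))
      .on('end', resolve)
      .on('error', reject);
  });

  return {
    statusCode: 200,
    isBase64Encoded: true,
    headers: {
      'Content-Type': 'application/zip',
      'Content-Disposition': `attachment; filename=${filename}.zip`,
    },
    body: Buffer.concat(chunks).toString('base64'),
  };
};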
I had the same issue; I'm not sure how you can stream/pipe via native Lambda + API Gateway directly... but it's technically possible.
We used the Serverless Framework and were able to use XX.pipe(res) using this starter kit (https://github.com/serverless/examples/tree/v3/aws-node-express-dynamodb-api).
What's interesting is that this just wraps native Lambda + API Gateway, so technically it is possible, as they have done it.
Good luck
I want to create a Google Sheets spreadsheet within my Alexa skill, which is written in Node.js. I have enabled the Google API, set the required scope in the Amazon dev portal, and I can actually log into the Google account (so the first few lines of the posted code seem to work), and I do not get any error messages. But the sheet is never created.
Now the main question is whether anyone can see the problem in my code.
But I also have an additional question I would be very interested in: since I use account linking, I cannot try this code in the Alexa test simulator, but have to upload it to Alexa before running it, where I cannot get any debug messages. What is the best way to debug in that situation?
if (this.event !== undefined) {
    if (this.event.session.user.accessToken === undefined) {
        this.emit(':tellWithLinkAccountCard', 'to start using this skill, please use the companion app to authenticate on Google');
        return;
    }
} else {
    this.emit(':tellWithLinkAccountCard', 'to start using this skill, please use the companion app to authenticate on Google');
    return;
}

var oauth2Client = new google.auth.OAuth2('***.apps.googleusercontent.com', '***', '***');
oauth2Client.setCredentials({
    access_token: this.event.session.user.accessToken,
    refresh_token: this.event.session.user.refreshToken
});

var services = google.sheets('v4');
services.spreadsheets.create({
    resource: {properties: {title: "MySheet"}},
    auth: oauth2Client
}, function (err, response) {
    if (err) {
        console.log('Error : unable to create file, ' + err);
        return;
    } else {
        console.dir(response);
    }
});
Edit: I tried just the lower part manually and could create a spreadsheet. So the problem does indeed seem to be retrieving the access token with "this.event.session.user.accessToken".
I find it is much easier to debug issues like this using unit tests, which allow you to rerun the code locally. I use NPM and Mocha, and it makes it easier to debug both custom and smart home skills. There is quite a bit of information available online about how to use NPM and Mocha to test Node.js code, so I won't repeat that here; for example, refer to the Big Nerd Ranch article. It makes your project a bit more complex to set up initially, but you'll be glad you did every time you hit a bug.
In this example, I would divide the code in half:
The first half would handle the request coming from Alexa and extract the token.
The second half would use the token to create the Google doc. I would also pass the name of the doc to create.
I would test the 2nd part first, passing in a valid token (for testing only) and a test doc name. When that is working, at least you'd know that the doc creation code was working, and any issues would have to be with the token or how you're getting it.
Once that was working, I would then create a test for the first part.
I would use a hardcoded JSON object to pass in as the 'event', with event.session.user.accessToken set to the working test token used in the first test:
'use strict';
var token = '<valid token obtained from google account>';

let testEvent = {
    'session': {
        'user': {
            'accessToken': token
        }
    }
};
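A minimal Mocha test for the second half might then look like this (a sketch: createSpreadsheet is a hypothetical helper that wraps the sheet-creation code from the question and returns a Promise):
'use strict';
const assert = require('assert');
// Hypothetical module containing the oauth2Client + spreadsheets.create logic.
const createSpreadsheet = require('../lib/create-spreadsheet');

describe('createSpreadsheet', function () {
    // Real Google API call, so allow a generous timeout.
    this.timeout(10000);

    it('creates a sheet with the given title', async function () {
        const token = '<valid token obtained from google account>';
        const response = await createSpreadsheet(token, 'MySheet');
        assert.ok(response.spreadsheetId, 'expected a spreadsheetId in the response');
    });
});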
I have not been able to get an Azure Function working that uses the Node file system module.
I created a brand-new function app with the most basic HTTP trigger function and included the 'fs' module:
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    context.done();
};
This works fine. I see the full request in the live streaming logs and in the function invocation list.
However, as soon as I add code that actually uses the file system, it seems to crash the Azure Function. It neither completes nor throws an error. It also doesn't show up in the Azure Function invocation list, which is scary since this is a loss of failure information; I might think my service was running fine when there were actually crashes.
var fs = require('fs');

module.exports = function (context, req, res) {
    context.log('function triggered');
    context.log(req);
    fs.writeFile('message.txt', 'Hello Node.js', (err) => {
        if (err) throw err;
        console.log('It\'s saved!');
        context.done();
    });
};
The fs.writeFile code is taken directly from the Node.js website:
https://nodejs.org/dist/latest-v4.x/docs/api/fs.html#fs_fs_writefile_file_data_options_callback
I added the context.done() call in the callback, but that snippet should work without issue in a normal development environment.
This brings up the questions:
Is it possible to use the file system when using Azure Functions?
If so, what are the restrictions?
If no restrictions, are developers required to keep track and perform cleanup, or is this taken care of by some sandboxing?
From my understanding, even though this is considered serverless computing, there is still a VM / Azure App Service website underneath, which has a file system.
I can use the Kudu console to navigate around and see all the files in /wwwroot and the /home/functions/secrets files.
Imagine a scenario where an Azure Function writes a file with a unique name and never performs cleanup; it would eventually take up all the disk space on the host VM and degrade performance. A developer could do this accidentally, and it could go unnoticed until it's too late.
This makes me wonder whether not using the file system is by design, or whether my function is just written wrong.
Yes, you can use the file system, with some restrictions as described here. That page describes some directories you can access, like D:\HOME and D:\LOCAL\TEMP. I've modified your code below to write to the temp dir, and it works:
var fs = require('fs');

module.exports = function (context, input) {
    fs.writeFile('D:/local/Temp/message.txt', input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('It\'s saved!');
        context.done();
    });
};
Your initial code was failing because it was trying to write to D:\Windows\system32, which is not allowed.
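A slightly more portable variant (my own suggestion, not part of the original answer) is to resolve the temp directory at runtime rather than hard-coding D:/local/Temp; on the Windows hosting plans os.tmpdir() should point at the same sandboxed temp folder:
var fs = require('fs');
var os = require('os');
var path = require('path');

module.exports = function (context, input) {
    // os.tmpdir() picks up the sandbox's TEMP directory for us.
    var target = path.join(os.tmpdir(), 'message.txt');
    fs.writeFile(target, input, (err) => {
        if (err) {
            context.log(err);
            throw err;
        }
        context.log('Saved to ' + target);
        context.done();
    });
};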