I have a Remix action function that accepts a file from a formData request object and then uploads it to Supabase. After that, I get the URL of the uploaded image and return it.
My function:
const fileExt = filename.split(".").pop();
const fileName = `${Math.random().toFixed(10)}.${fileExt}`;
const filePath = `${fileName}`;

const { error: uploadError } = await supabaseClient.storage
  .from("public")
  .upload(`misc/${filePath}`, stream);

if (uploadError) {
  console.error(uploadError);
  throw new Error(uploadError.message);
}

const { publicURL, error } = await supabaseClient.storage
  .from("public")
  .getPublicUrl(`misc/${filePath}`);

if (error) {
  console.error(error);
  throw new Error(error.message);
}

if (!publicURL) {
  console.error(`No public URL for ${filePath}`);
}

return publicURL;
Because the request is multipart/form-data, I need to parse it, which I handled by wrapping the code above in an uploadHandler function and then calling:
const formData = await parseMultipartFormData(
  request,
  uploadHandler
);
The code works most of the time, but at other times it fails with an ECONNRESET error. From what I understand, that may have to do with Node's asynchronous behaviour, but I have not been able to solve it. How can I avoid these random ECONNRESET errors that Supabase keeps giving?
I've been through this with file uploads and Remix. While everyone's use case is different, in many cases where an authenticated user is uploading to a service like Supabase Storage or Cloudflare, it's better to upload from the client. This is especially true in a serverless environment. With Cloudflare I grab a unique signed upload URL with useFetcher(). With Supabase, the team has set up their JS library to authenticate the user, and we write policies on the database to protect the upload, so it's just a lot easier to use the client. This becomes even more relevant when uploading large video files, for example, where we want the ability to pause and resume uploading; that's much easier from the client than via isolated serverless functions. If we're worried about data from the client, we can put sensitive data encrypted in a cookie, so when the client completes it sends (e.g.) {completed: true} to an action, which grabs the data from the cookie and persists it to the database.
I'm sure this doesn't solve your problem if you really do need to do it via the backend, but I just wanted to share that in my experience it's not always the best idea to do everything The Remix Way. Sometimes the client is better.
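For what it's worth, here is a minimal sketch of that client-side approach with supabase-js. The bucket name, file input and the final fetcher call are assumptions on my part, not from the question; storage policies on the bucket handle the authorization.

// Sketch only: assumes a browser-side supabase client created with createClient(url, anonKey)
// and a signed-in user whose uploads are allowed by the bucket's storage policies.
const file = fileInput.files[0]; // from an <input type="file">
const filePath = `misc/${crypto.randomUUID()}.${file.name.split(".").pop()}`;

const { error } = await supabase.storage.from("public").upload(filePath, file);
if (error) throw error;

const { publicURL } = await supabase.storage.from("public").getPublicUrl(filePath);

// Then tell the server the upload finished, e.g.
// useFetcher().submit({ completed: "true", url: publicURL }, { method: "post" })
// so the action can persist the URL server-side.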
Trying out the Transloadit API: the template works when I use the testing mode on the Transloadit website, but when I try to use it in Node.js with the SDK I'm getting an error:
INVALID_FORM_DATA - https://api2.transloadit.com/assemblies - INVALID_FORM_DATA: The form contained bad data, which cannot be parsed.
The relevant code (_asset.content is a Buffer object):
import { Readable } from "stream"; // needed to wrap the Buffer in a stream

async function getThumbnailUrl(_assetkey: string, _asset: I.FormFile): Promise<string> {
  let tOptions = {
    waitForCompletion: true,
    params: {
      template_id: process.env.THUMB_TRANSLOADIT_TEMPLATE,
    },
  };

  // Wrap the Buffer in a Readable stream so it can be attached to the assembly
  const stream = new Readable({
    read() {
      this.push(_asset.content);
      this.push(null);
    },
  });

  console.log(_asset.content);
  util.transloadit.addStream(_assetkey, stream);

  return new Promise((resolve, reject) => {
    util.transloadit.createAssembly(tOptions, (err, status) => {
      if (err) {
        return reject(err);
      }
      console.log(status);
      resolve(status);
    });
  });
}
I noticed that you also posted this question on the Transloadit forums, so in case anyone else runs into this problem, you can find more information on this topic here.
Here's a workaround that the OP found that may be useful:
Just to provide some closure to this topic, I just tested my workaround (upload to S3, then use the S3 import Robot to grab the file) and got it to work with the Node.js SDK, so I should be good using that.
I have a suspicion the error I was getting was not to do with the Transloadit API, but rather with the form-data library for Node.js (https://github.com/form-data/form-data) somehow not submitting the form data in the way that the Transloadit API is expecting.
But as there aren't alternatives to that library that I could find, I wasn't really able to test that hypothesis.
The Transloadit core team also gave this response regarding the issue:
It may try to set his streams to be tus streams, which would mean that they're not uploaded as multipart/form-data.
In either case it seems like the error to his callback would be originating from the error out of _remoteJson.
These could be the problem areas:
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L146
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L606
https://github.com/transloadit/node-sdk/blob/master/src/TransloaditClient.js#L642
It is also possible that the form-data library could be the source of the error.
To really test this further we're going to need to try using the library he was using, make sure the output of it is good, and then debug the node-sdk to see where the logic failure is in it, or if the logic failure is on the API side.
I'm making a simple audio-creation web app using a Node.js server. I would like to create audio using the Cloud Text-to-Speech API and then upload that audio to Cloud Storage.
(I use Windows 10, Windows Subsystem for Linux, Debian 10.3 and the Google Chrome browser.)
This is the code in Node.js server.
// Imports the Google Cloud Text-to-Speech client library
const textToSpeech = require('@google-cloud/text-to-speech');

const client = new textToSpeech.TextToSpeechClient();

async function quickStart() {
  // The text to synthesize
  const text = 'hello, world!';

  // Construct the request
  const request = {
    input: {text: text},
    // Select the language and SSML voice gender (optional)
    voice: {languageCode: 'en-US', ssmlGender: 'NEUTRAL'},
    // Select the type of audio encoding
    audioConfig: {audioEncoding: 'MP3'},
  };

  // Performs the text-to-speech request
  const [response] = await client.synthesizeSpeech(request);
  console.log(response);
}
I would like to upload the response to Cloud Storage.
Can I upload the response to Cloud Storage directly, or do I have to save it on the Node.js server first and then upload it to Cloud Storage?
I searched the Internet but couldn't find a way to upload the response to Cloud Storage directly, so if you have a hint, please tell me. Thank you in advance.
You should be able to do that with all your code in the same file. The best way to achieve it is with a Cloud Function, which will be the one sending the file to Cloud Storage. But yes, you will need to write the file with Node.js, and then upload it to Cloud Storage.
To achieve that, you will need to save your file locally and then upload it to Cloud Storage. As you can check in the complete tutorial in this other post, you need to construct the file, save it locally and then upload it. The code below is the main part you will need to add to your code.
...
// "file" is a reference to the Cloud Storage object to write (constructed earlier in the tutorial code)
const options = { // construct the file to write
  metadata: {
    contentType: 'audio/mpeg',
    metadata: {
      source: 'Google Text-to-Speech'
    }
  }
};

// copied from https://cloud.google.com/text-to-speech/docs/quickstart-client-libraries#client-libraries-usage-nodejs
const [response] = await client.synthesizeSpeech(request);
// Write the binary audio content to a local file
// response.audioContent is the downloaded file
return await file.save(response.audioContent, options)
  .then(() => {
    console.log("File written to Firebase Storage.")
    return;
  })
  .catch((error) => {
    console.error(error);
  });
...
Once you have this part implemented, the file that was saved locally will be ready to be uploaded. I recommend taking a closer look at the other post I mentioned in case you have more doubts about how to achieve it.
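In that snippet, file is a reference to an object in your bucket. Here is a minimal sketch of how it might be constructed with the @google-cloud/storage client; the bucket and object names are placeholders of mine, not from the original answer:

// Sketch only: bucket and object names are assumptions.
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('my-audio-bucket');
const file = bucket.file('synthesized/hello-world.mp3');

// file.save(data, options) writes the contents directly to the bucket object,
// so response.audioContent can be passed to it as shown above.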
Let me know if the information helped you!
I have a Node.js server running with hapi.
One of the server's jobs is to send files to a service API on the user's request (the API only accepts streams; when I send a buffer it returns an error).
All the files are stored in s3.
When I download them, using promise() gives me a buffer in the body, and using createReadStream() gives me a PassThrough stream.
My problem is that when I try to convert the buffer to a stream and send it, the API rejects it, and the same happens when I use the createReadStream() result. But when I use fs to save the file and then fs to read it back, the API accepts the stream and it works.
So I need help: how can I get the same result without saving and reading the file?
Edit:
Here is my code. I know it's the wrong way, but it works; I need a better way that will also work.
static async downloadFile(Bucket, Key) {
  const result = await s3Client
    .getObject({
      Bucket,
      Key
    })
    .promise();

  fs.writeFileSync(`${Path.basename(Key)}`, result.Body);
  const file = await fs.createReadStream(`${Path.basename(Key)}`);
  return file;
}
If I understand correctly, you want to get the object from the S3 bucket and stream it to your HTTP response.
Getting the data into buffers and then figuring out how to convert them to a stream can be complicated and has its limitations. If you really want to leverage the power of streams, don't try to convert the data to a buffer and load the entire object into memory; you can create a request that streams the returned data directly into a Node.js Stream object by calling the createReadStream method on the request.
Calling createReadStream returns the raw HTTP stream managed by the request. The raw data stream can then be piped into any Node.js Stream object.
This technique is useful for service calls that return raw data in their payload, such as calling getObject on an Amazon S3 service object to stream data directly into a file, as shown in this example.
// I imagine you have something similar.
server.get('/image', (req, res) => {
  let s3 = new AWS.S3({apiVersion: '2006-03-01'});
  let params = {Bucket: 'myBucket', Key: 'myImageFile.jpg'};
  let readStream = s3.getObject(params).createReadStream();

  // When the stream is done being read, end the response
  readStream.on('close', () => {
    res.end();
  });

  readStream.pipe(res);
});
When you stream data from a request using createReadStream, only the raw HTTP data is returned. The SDK does not post-process the data; the raw HTTP data can be returned directly.
Note:
Because Node.js is unable to rewind most streams, if the request initially succeeds, retry logic is disabled for the rest of the response. In the event of a socket failure while streaming, the SDK won't attempt to retry or send more data to the stream. Your application logic needs to identify such streaming failures and handle them.
Edits:
After the edits to the original question, I can see that S3 sends a PassThrough stream object, which is different from a FileStream in Node.js. So to get around the problem, use memory (if your files are not very big and/or you have enough memory).
Use the package memfs; it will replace the native fs in your app:
https://www.npmjs.com/package/memfs
Install the package with npm install memfs and require it as follows:
const {fs} = require('memfs');
and your code will look like
static async downloadFile(Bucket, Key) {
  const result = await s3
    .getObject({
      Bucket,
      Key
    })
    .promise();

  fs.writeFileSync(`/${Key}`, result.Body);
  const file = await fs.createReadStream(`/${Key}`);
  return file;
}
Note that the only change I have made in your function is that I changed the path ${Path.basename(Key)} to /${Key}, because you no longer need to know the path on your original filesystem; we are storing files in memory. I have tested this and the solution works.
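As a usage illustration, here is a rough sketch of serving that stream from a hapi route (hapi v17+); the route path, class name and bucket name are placeholders of mine:

// Sketch only: FileService stands in for the class holding the static downloadFile method above.
server.route({
  method: 'GET',
  path: '/files/{key}',
  handler: async (request, h) => {
    const stream = await FileService.downloadFile('my-bucket', request.params.key);
    // hapi streams the returned stream to the client
    return h.response(stream).type('application/octet-stream');
  }
});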
Our system needs to apply our own internal security checks when interacting with Dropbox, so we cannot use the client-side SDK for Dropbox.
We would rather upload to our own endpoint, apply security checks, and then stream the incoming request to Dropbox.
I am coming up short here, as there was an older Node.js Dropbox SDK which supported pipes, but the new SDK does not.
Old SDK:
https://www.npmjs.com/package/dropbox-node
We want to take the incoming upload request and forward it to Dropbox as it comes in (and thus prevent the upload from taking twice as long as it would if we first uploaded the entire thing to our server and then uploaded it to Dropbox).
Is there any way to solve this?
My Dropbox npm module (dropbox-v2-api) supports streaming. It's based on the HTTP API, so you can take advantage of streams. An example? I see it this way:
const fs = require('fs');
const dropboxV2Api = require('dropbox-v2-api');

// create a session with your access token
const dropbox = dropboxV2Api.authenticate({ token: 'your access token' });

const contentStream = fs.createReadStream('file.txt');
const securityChecks = ... // your security checks (a Transform stream)

const uploadStream = dropbox({
  resource: 'files/upload',
  parameters: { path: '/target/file/path' }
}, (err, result, response) => {
  // upload finished
});

contentStream
  .pipe(securityChecks)
  .pipe(uploadStream);
Full stream support example here.
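If the goal is to forward the incoming browser upload as it arrives (rather than a file on disk), the same uploadStream can be fed from the multipart request. Here is a rough sketch assuming Express and busboy 1.x; the route, field handling and target path are placeholders of mine:

// Sketch only: assumes an authenticated dropbox instance as above and busboy 1.x.
const busboy = require('busboy');

app.post('/upload', (req, res) => {
  const bb = busboy({ headers: req.headers });

  bb.on('file', (name, fileStream, info) => {
    const uploadStream = dropbox({
      resource: 'files/upload',
      parameters: { path: `/incoming/${info.filename}` }
    }, (err, result) => {
      if (err) return res.status(500).send(err);
      res.json(result);
    });

    // security checks could be piped in between here
    fileStream.pipe(uploadStream);
  });

  req.pipe(bb);
});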
I'm developing an app using Nginx + Node.js + Express + Firebase that simply takes input from a mobile app and stores it in Firebase, optionally uploading files to S3.
In its simplest terms, the "create" function does this:
1. Validates input
2. Formats the input
3. Checks if there is a file uploaded (via the multer plugin) and stores it
4. If there was a file, uploads it to Amazon S3 and deletes the source file (it's important to note I was encountering this issue before the inclusion of S3)
5. Creates the item by pushing into the items reference on Firebase
6. Creates the item for the user by pushing into the user_items reference on Firebase
There are a few other functions that I have implemented as an API.
My trouble is coming from an intermittent spike in CPU usage, which is causing the nginx server to report a gateway timeout from the Node.js application.
Sometimes the server will fall over when performing authentication against a MongoDB instance; other times it will fall over when I'm receiving the input from the mobile app. There doesn't seem to be any consistency in when it falls over. Sometimes it works fine for 15+ various requests (upload/login/list, etc.), but sometimes it will fall over after just one request.
I have added error checking in the form of:
process.on('uncaughtException', function(err) {
  console.error(err.stack);
});
This will catch errors if I mistype a variable, for example, but when the server crashes there are no exceptions thrown. Similarly, checking my logs shows me nothing. I've tried profiling the application, but the output doesn't make any sense to me at all; it doesn't point to a particular function or plugin.
I appreciate this is a long-winded problem, but I'd really appreciate it if you could point me in a direction for debugging this issue; it's causing me such a headache!
This may be a bug in the Firebase library. What version are you using?
I've been having a very similar issue that has had me frustrated for days: Node.js + Express + Firebase on Heroku. The process will run for a seemingly random amount of time, then I start getting timeout errors from Heroku without the process ever actually crashing or showing an error. Higher load doesn't seem to make it happen sooner.
I just updated from Firebase 1.0.14 to latest 1.0.19 and I think it may have fixed the problem for me. Process has been up for 2 hours now where it would only last for 5-30 min previously. More testing to do, but thought I'd share my in-progress results in case they were helpful.
It seems the answer was to do with the fact that my Express app was reusing one Firebase connection for every request, and for some reason this was causing the server to lock up.
My solution was to create some basic middleware that provides a new reference to the Firebase on each API request, see below:
var Middleware = {

  /*
   * Initialise Firebase refs per connection
   */
  initFireBase: function(req, res, next) {
    console.log('Initialising Firebase for user');

    // We need an authToken
    var authToken = req.param('authToken');

    // Validate the auth token
    if (!authToken || authToken.length === 0) {
      return res.send(500, {code: 'INVALID_TOKEN', message: 'You must supply an authToken to this method.'});
    }
    else {
      // Attempt to parse the auth token
      try {
        var decodedToken = JWTSimple.decode(authToken, serverToken);
      }
      catch (e) {
        return res.send(500, {code: 'INVALID_TOKEN', message: 'Supplied token was not recognised.'});
      }

      // Bail out if the token is invalid
      if (!decodedToken) {
        return res.send(500, {code: 'INVALID_TOKEN', message: 'Supplied token was not recognised.'});
      }
      // Otherwise send the decoded token with the request
      else {
        req.auth = decodedToken.d;
      }
    }

    // Create a root reference
    var rootRef = new Firebase('my firebase url');

    // Apply the references to each request
    req.refs = {
      root: rootRef,
      user: rootRef.child('users'),
      inbox: rootRef.child('inbox')
    };

    // Carry on to the calling function
    next();
  }
};
I then simply call this middleware on my routes:
/*
 * Create a post
 */
router.all('/createPost', Middleware.initFireBase, function(req, res) {
  var refs = req.refs;
  refs.inbox.push({}); // etc
  ....
This middleware will soon be extended to call Firebase.auth() on the connection to ensure that any API call made with a valid authToken is signed as that user on Firebase's side. However, for development this is acceptable.
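For illustration, that extension might look roughly like this, assuming the legacy Firebase client used above, where a reference exposes auth(token, callback) (later renamed authWithCustomToken). This is only a sketch, not tested code:

// Sketch only: replaces the plain next() call at the end of initFireBase.
rootRef.auth(authToken, function(error) {
  if (error) {
    return res.send(500, {code: 'INVALID_TOKEN', message: 'Firebase rejected the supplied token.'});
  }
  // Carry on to the calling function once the connection is authenticated
  next();
});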
Hopefully this helps someone.