I am having an issue where, when I upload a file to Cloudinary, it gets the filename 'new' that I have given it, like below:
const storage = new CloudinaryStorage({
  cloudinary: Cloudinary,
  params: {
    folder: 'linkedIn',
    format: async (req, file) => 'png', // supports promises as well
    public_id: (req, file) => `new`,
  },
})
The issue is that it replaces any picture with that name with the most recently uploaded one.
I'd appreciate any help greatly!
You have a bunch of options, Paul.
Firstly, you don't need to specify the public_id at all because Cloudinary can assign one automatically for your image, which is guaranteed to be unique.
That being said, if you'd like some control over the name, you can set the public_id to the name of the file you're uploading (potentially useful for SEO). You can also pass in an option that appends a randomly generated string to that filename, guaranteeing uniqueness:
cloudinary.v2.uploader.upload("sample_file.jpg", {
  use_filename: true,
  unique_filename: true // true (the default) appends random characters for uniqueness
});
This is also covered in this training course (which you can access for free): https://training.cloudinary.com/courses/cloudinary-fundamentals-for-developers.
You can read more about this here: https://cloudinary.com/documentation/upload_images#public_id.
ps: what is the npm package that you're using?
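If that package is multer-storage-cloudinary (a guess, based on the CloudinaryStorage constructor in the question), the same upload options can be passed straight through `params` — a minimal sketch:

```javascript
// Sketch, assuming multer-storage-cloudinary: any Cloudinary upload
// option (including use_filename/unique_filename) can go in `params`.
const params = {
  folder: 'linkedIn',
  format: async (req, file) => 'png',
  use_filename: true,    // base the public_id on the uploaded file's name
  unique_filename: true, // let Cloudinary append random characters
};

// const storage = new CloudinaryStorage({ cloudinary: Cloudinary, params });
```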
Each asset that is uploaded to Cloudinary is given a unique identifier in the form of a Public ID. You're using 'new' as a static field on all of your uploads, which causes them to be replaced. Consider the following options:
Using a timestamp as the public_id, or
not supplying a Public ID in the upload API call at all, in which case you will receive a randomly assigned Public ID in the response.
For more information, visit the Cloudinary documentation: https://cloudinary.com/documentation/upload.
Related
I need to send an email with Nodemailer, but in the email I need to attach a PDF that I generate using jsPDF. The problem is that I cannot attach an object to an email; I can attach a file by its path, a string, and a lot of other things, but not an object.
I thought of saving the PDF and using its path, but this is all running on a VM, so I don't want to use too much CPU power or RAM.
I also tried using JSON.stringify() on the PDF, but it didn't work, and the file attached to the email was empty.
You can attach your PDF file by using the content property of the attachments object. It supports many formats: a string, a path to a file, a buffer, an fs read stream, etc.
See the docs.
In the case of jsPDF, you can use the output() method:
const message = {
  // ...
  attachments: [
    {
      filename: "mypdf.pdf",
      content: pdfObject.output('arraybuffer')
    }
  ]
};
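One gotcha: output('arraybuffer') returns an ArrayBuffer, which is easiest to hand to nodemailer as a Buffer. A small sketch (toAttachment is a hypothetical helper; pdfObject stands in for your jsPDF instance):

```javascript
// Convert a jsPDF document into a nodemailer attachment entry by
// wrapping its raw bytes in a Node Buffer.
function toAttachment(pdfObject, filename) {
  return {
    filename,
    content: Buffer.from(pdfObject.output('arraybuffer')),
  };
}

// Usage sketch:
// const message = { /* from, to, subject, ... */
//   attachments: [toAttachment(doc, 'mypdf.pdf')] };
// transporter.sendMail(message);
```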
I am making a GET request (using axios) to a PDF hosted at a URL endpoint, which is returning an encoded stream as shown below:
'%PDF-1.4\n%����\n1 0 obj\n<</Creator (Chromium)\n/Producer (Skia/PDF m78)\n/CreationDate (D:20211115122641+00\'00\')\n/ModDate (D:20211115122641+00\'00\')>>\nendobj\n3 0 obj\n<</ca 1\n/BM /Normal>>\nendobj\n4 0 obj\n<</Type /XObject\n/Subtype /Image\n/Width 1245\n/Height 249\n/ColorSpace /DeviceRGB\n/BitsPerComponent 8\n/Filter /DCTDecode\n/ColorTransform 0\n/Length 35434>> stream\n����\u0000\u0010JFIF\u0000\u0001\u0001\u0000\u0000\u0001\u0000\u0001\u0000\u0000��\u0002(ICC_PROFILE\u0000\u0001\u0001\u0000\u0000\u0002\u0018\u0000\u0000\u0000\u0000\u0002\u0010\u0000\u0000mntrRGB XYZ
I am then uploading this to our storage bucket using the following:
const file = bucket.file(path);
const fileOptions = {contentType: 'application/pdf'};
return file.save(data, fileOptions)
  .then(() => {
    return {
      url: file.getSignedUrl({action: 'read', expires: expiryDate}),
      path
    }
  });
However, the returned URL, when opened, just displays a blank PDF (with the correct number of pages) but without any content.
What is the best way to download a PDF from a URL and upload it into storage? Would it be this approach (though I am not sure what I am missing), or is there a way to upload the URL I have to storage directly?
The code that you are using sets the content type to 'application/pdf'. However, it is unclear how you are downloading the object, which could have caused this behaviour.
const file = bucket.file(path);
const fileOptions = {contentType: 'application/pdf'};
return file.save(data, fileOptions)
In general, the File.createWriteStream() method is used to upload arbitrary data to a file. The details can be found in the documentation.
Also, you could use Multer to upload files from a function. Multer is a node.js middleware for handling multipart/form-data, which is primarily used for uploading files. Multer adds a body object and a file or files object to the request object. The body object contains the values of the text fields of the form, the file or files object contains the files uploaded via the form. You can also refer to the documentation for more information on Multer.
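One common cause of blank pages in this situation (an assumption, since the download code isn't shown) is that axios decodes the response body as text by default, which corrupts the PDF's binary streams. A sketch of requesting the raw bytes instead; the httpClient parameter is hypothetical, injected so the function is easy to test — pass axios itself in real code:

```javascript
// Fetch a PDF as raw bytes. Letting the HTTP client decode the body
// as a string mangles the binary streams and yields blank pages.
async function downloadPdf(httpClient, url) {
  const res = await httpClient.get(url, { responseType: 'arraybuffer' });
  const data = Buffer.from(res.data);
  // Sanity check before uploading: every PDF begins with '%PDF'.
  if (data.slice(0, 4).toString('ascii') !== '%PDF') {
    throw new Error('Response does not look like a PDF');
  }
  return data;
}

// Usage sketch:
// const data = await downloadPdf(axios, pdfUrl);
// await bucket.file(path).save(data, { contentType: 'application/pdf' });
```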
Screenshot of the error:
I'm trying to store the image inside the uploads folder.
As you guys can see I have configured the multer as well.
But still it is giving the same error.
Can anyone suggest please?
Remove the backslash after uploads in the destination, i.e. ('/uploads').
Based on your code, I saw that you're using the diskStorage setting with the destination field.
destination is used to determine within which folder the uploaded files should be stored. This can also be given as a string (e.g. '/tmp/uploads'). If no destination is given, the operating system's default directory for temporary files is used.
Note: You are responsible for creating the directory when providing destination as a function. When passing a string, multer will make sure that the directory is created for you.
You can solve this by putting the following block of code before the multer configuration:
const fs = require('fs');

fs.mkdir('./uploads', { recursive: true }, (err) => {
  if (err) throw err;
});
I'm trying to save a remote image file into a database, but I'm having some issues with it since I've never done it before.
I need to download the image and pass it along (with node-request), together with a few other properties, to another node API that saves it into a MySQL database (using sequelize). I've managed to get some data to save, but when I download it manually and try to open it, it's not really usable and no image shows up.
I've tried a few things: getting the image with node-request, converting it to a base64 string (read about that somewhere) and passing it along in a JSON payload, but that didn't work. Tried sending it as multipart, but that didn't work either. I haven't worked with streams/buffers/multipart much before, and never in Node. I've tried looking into node-request pipes, but I couldn't really figure out how to apply them in this context.
Here's what I currently have (it's part of an ES6 class, so there are no 'function' keywords; also, request is promisified):
function getImageData(imageUrl) {
  return request({
    url: imageUrl,
    encoding: null,
    json: false
  });
}

function createEntry(entry) {
  return getImageData(entry.image)
    .then((imageData) => {
      entry.image_src = imageData.toString('base64');
      var requestObject = {
        url: 'http://localhost:3000/api/entry',
        method: 'post',
        json: false,
        formData: entry
      };
      return request(requestObject);
    });
}
I'm almost 100% certain the problem is in this part because the api just takes what it gets and gives it to sequelize to put into the table, but I could be wrong. Image field is set as longblob.
I'm sure it's something simple once I figure it out, but so far I'm stumped.
This is not a direct answer to your question, but it is rarely necessary to actually store an image in the database. What is usually done is storing the image on storage like S3, behind a CDN like CloudFront, or even just in the file system of a static file server, and then storing only the file name or some ID of the image in the actual database.
If there is any chance that you are going to serve those images to some clients then serving them from the database instead of a CDN or file system will be very inefficient. If you're not going to serve those images then there is still very little reason to actually put them in the database. It's not like you're going to query the database for specific contents of the image or sort the results on the particular serialization of an image format that you use.
The simplest thing you can do is save the images with a unique filename (either a random string, UUID or a key from your database) and keep the ID or filename in the database with other data that you need. If you need to serve it efficiently then consider using S3 or some CDN for that.
I have a chrome extension that saves a bunch of data to chrome.storage.local. I'm trying to find easy ways to export this data and package it into a file. I'm not constrained on what type of file it is (JSON, CSV, whatever), I just need to be able to export the contents into a standalone (and send-able) file. The extension is only run locally and the user would have access to all local files.
First, you need to get all data.
Then serialize the result.
Finally, offer it as a download to the user.
chrome.storage.local.get(null, function(items) { // null implies all items
  // Convert the object to a string.
  var result = JSON.stringify(items);
  // Save as a file. Note: btoa() only handles Latin-1 characters; if your
  // data may contain others, percent-encode the JSON instead, e.g.
  // 'data:application/json,' + encodeURIComponent(result).
  var url = 'data:application/json;base64,' + btoa(result);
  chrome.downloads.download({
    url: url,
    filename: 'filename_of_exported_file.json'
  });
});
To use the chrome.downloads.download method, you need to declare the "downloads" permission in addition to the storage permission in the manifest file.
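For example, a minimal manifest declaring both permissions might look like this (manifest_version 3 is an assumption; merge the permissions into your extension's existing manifest):

```json
{
  "name": "My extension",
  "version": "1.0",
  "manifest_version": 3,
  "permissions": ["storage", "downloads"]
}
```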
You should look here: https://groups.google.com/a/chromium.org/forum/#!topic/chromium-extensions/AzO_taH2b7U
It shows how to export Chrome local storage to JSON.
Hope it helps