Access uploaded image in Sails.js backend project - node.js

I am trying to upload an image and then access it. The upload goes well, saving the image to assets/images, but when I try to access the image from the browser at http://localhost:1337/images/image-name.jpg it gives me a 404. I use Sails.js only for backend purposes (for the API), and the project was created with the --no-front-end option. My front end is AngularJS.
My upload function:
avatarUpload: function(req, res) {
  req.file('avatar').upload({
    // don't allow the total upload size to exceed ~10MB
    maxBytes: 10000000,
    dirname: '../../assets/images'
  }, function whenDone(err, uploadedFiles) {
    console.log(uploadedFiles);
    if (err) {
      return res.negotiate(err);
    }
    // If no files were uploaded, respond with an error.
    if (uploadedFiles.length === 0) {
      return res.badRequest('No file was uploaded');
    }
    // Save the "fd" and the url where the avatar for a user can be accessed
    User
      .update(req.userId, {
        // Generate a unique URL where the avatar can be downloaded.
        avatarUrl: require('util').format('%s/user/avatar/%s', sails.getBaseUrl(), req.userId),
        // Grab the first file and use its `fd` (file descriptor)
        avatarFd: uploadedFiles[0].fd
      })
      .exec(function (err) {
        if (err) return res.negotiate(err);
        return res.ok();
      });
  });
}
I see the image in the assets/images folder - something like this - 54cd1fc5-89e8-477d-84e4-dd5fd048abc0.jpg
http://localhost:1337/assets/images/54cd1fc5-89e8-477d-84e4-dd5fd048abc0.jpg - gives 404
http://localhost:1337/images/54cd1fc5-89e8-477d-84e4-dd5fd048abc0.jpg - gives 404

This happens because your application does not serve resources directly from the assets directory, but from the .tmp directory in the project root.
The assets are copied to .tmp when Sails is lifted, so anything added after the lift isn't present in .tmp.
What I usually do is upload to .tmp and copy the file to assets on completion. That way assets isn't polluted in case the upload fails for any reason.
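Roughly, a minimal sketch of that approach (the .tmp/public and assets/images paths are assumptions based on Sails' default static-file setup, where .tmp/public is the served folder):

var fs = require('fs');
var path = require('path');

req.file('avatar').upload({
  maxBytes: 10000000,
  // Upload straight into the folder Sails actually serves...
  dirname: path.resolve(sails.config.appPath, '.tmp/public/images')
}, function (err, uploadedFiles) {
  if (err) return res.negotiate(err);
  if (uploadedFiles.length === 0) return res.badRequest('No file was uploaded');

  var uploaded = uploadedFiles[0].fd;
  var persistent = path.resolve(sails.config.appPath, 'assets/images',
    path.basename(uploaded));

  // ...then copy it into assets/ so it survives the next lift.
  fs.createReadStream(uploaded)
    .pipe(fs.createWriteStream(persistent))
    .on('error', function (copyErr) { return res.negotiate(copyErr); })
    .on('finish', function () {
      return res.ok({ url: '/images/' + path.basename(uploaded) });
    });
});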
Let us know if this works. Good luck!
Update
Found a relevant link for this.

Related

Cancel File Upload: Multer, MongoDB

I can't seem to find any up-to-date answers on how to cancel a file upload using Mongo, NodeJS & Angular. I've only come across some tutorials on how to delete a file, but that is NOT what I am looking for. I want to be able to cancel the file upload process by clicking a button on my front-end.
I am storing my files directly in MongoDB in chunks using the Mongoose, Multer & GridFSBucket packages. I know that I can stop a file's upload process on the front-end by unsubscribing from the subscription responsible for the upload, but the upload process keeps going in the back-end when I unsubscribe. (Yes, I have double and triple checked: all the chunks keep getting uploaded until the file is fully uploaded.)
Here is my Angular code:
ngOnInit(): void {
  // Upload the file.
  this.sub = this.mediaService.addFile(this.formData).subscribe((event: HttpEvent<any>) => {
    console.log(event);
    switch (event.type) {
      case HttpEventType.Sent:
        console.log('Request has been made!');
        break;
      case HttpEventType.ResponseHeader:
        console.log('Response header has been received!');
        break;
      case HttpEventType.UploadProgress:
        // Update the upload progress!
        this.progress = Math.round(event.loaded / event.total * 100);
        console.log(`Uploading! ${this.progress}%`);
        break;
      case HttpEventType.Response:
        console.log('File successfully uploaded!', event.body);
        this.body = 'File successfully uploaded!';
    }
  },
  err => {
    this.progress = 0;
    this.body = 'Could not upload the file!';
  });
}
Cancel the upload:
cancel() {
  // Unsubscribe from the upload method.
  this.sub.unsubscribe();
}
Here is my NodeJS (Express) code:
...
// Configure a strategy for uploading files.
const multerUpload = multer({
  // Set the storage strategy.
  storage: storage,
  // Limit uploads to 120MB (multer's `limits` expects an options object).
  limits: { fileSize: 1024 * 1024 * 120 },
  // Set the file filter.
  fileFilter: fileFilter
});

// Add new media to the database.
router.post('/add', [multerUpload.single('file')], async (req, res) => {
  return res.status(200).send();
});
What is the right way to cancel the upload without leaving any chunks in the database?
So I have been trying to get to the bottom of this for two days now, and I believe I have found a satisfying solution:
First, in order to cancel the file upload and delete any chunks that have already been uploaded to MongoDB, you need to adjust the fileFilter in your multer configuration so that it detects when the request has been aborted and the upload stream has ended, and then rejects the upload by throwing an error through fileFilter's callback:
// Adjust what files can be stored.
const fileFilter = function (req, file, callback) {
  console.log('The file being filtered', file);
  req.on('aborted', () => {
    file.stream.on('end', () => {
      console.log('Cancel the upload');
      callback(new Error('Cancel.'), false);
    });
    file.stream.emit('end');
  });
  // Accept the file in the normal case; the 'aborted' handler above
  // rejects it if the client cancels mid-stream.
  callback(null, true);
};
NOTE THAT: when canceling a file upload, you must wait for the changes to show up in your database. The chunks that have already been sent will first have to finish uploading before the canceled file gets deleted from the database. This might take a while, depending on your internet speed and how many bytes were sent before the upload was canceled.
Finally, you might want to set up a route in your backend to delete any chunks from files that were never fully uploaded to the database (due to some error that occurred during the upload). To do that, you'll need to fetch all the file IDs from your .chunks collection (by following the method specified in this link) and separate the IDs of files whose chunks were only partially uploaded from the IDs of files that were fully uploaded. Then call GridFSBucket's delete() method on those IDs to get rid of the redundant chunks. This step is purely optional, for database maintenance reasons; a sketch follows.
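Here's a rough sketch of that optional cleanup route (the bucket variable, the default fs.files/fs.chunks collection names, and a promise-returning driver version are all assumptions):

router.delete('/cleanup', async (req, res) => {
  const db = mongoose.connection.db; // assumes a mongoose connection is open
  // Every file ID referenced from the chunks collection...
  const chunkFileIds = await db.collection('fs.chunks').distinct('files_id');
  // ...and every ID that has a completed files document.
  const completeIds = new Set(
    (await db.collection('fs.files').distinct('_id')).map(String)
  );
  const orphaned = chunkFileIds.filter((id) => !completeIds.has(String(id)));
  for (const id of orphaned) {
    // Per the GridFS spec, delete() drops any orphaned chunks and then
    // raises "file not found" (there is no files document for them),
    // so that error is swallowed here.
    await bucket.delete(id).catch(() => {});
  }
  res.status(200).send({ removed: orphaned.length });
});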
Try handling it with a try/catch approach. There are two ways it can be done:
1. Call an API that takes the file currently being uploaded as its parameter, and on the backend delete and clear the chunks that are already present on the server.
2. Handle it as an exception, by sending the file size as a validation: if the backend API has received the file at its full size, keep it; if the received file is smaller than declared (because the upload was cancelled in between), do the cleanup steps, i.e. take the ID and clear that file's chunks from the Mongo database. A sketch of this idea is below.
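A rough sketch of the second idea, assuming the front-end sends a declared expectedSize field and that the storage engine populates req.file.id and req.file.size (both assumptions, based on multer-gridfs-storage):

router.post('/add', [multerUpload.single('file')], async (req, res) => {
  const expected = Number(req.body.expectedSize); // declared by the front-end
  if (req.file && req.file.size < expected) {
    // Fewer bytes arrived than declared: treat it as a cancelled upload
    // and clear the stored file (and its chunks) from GridFS.
    await bucket.delete(req.file.id).catch(() => {});
    return res.status(400).send('Upload incomplete; chunks cleared.');
  }
  return res.status(200).send();
});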

Saving image in local file system using node.js

I was working on a simple script to download missing images from a site and save them on the local system, given their complete URLs. I am able to get the data in binary format as a response and save it properly. But when I try to open the image, the system reports that the format is not supported. I tried saving some JS and CSS files and they are saved properly; I am able to view them as well. But I am having this problem with all image formats.
Here is the code I wrote:
try {
  response = await axios.get(domain + pathOfFile);
  console.log(response);
  fs.writeFile(localBasePath + pathOfFile, response.data, "binary", (error) => {
    if (error) console.log("error while writing file", error.message);
  });
} catch (error) {
  console.log("error in getting response", error.message);
}
domain: contains the base domain of the site
pathOfFile: contains the path of file on that domain
localBasePath: the base folder where I need to store the image
I even tried storing the response in a buffer and then saving the image, but I am still facing the same problem.
Any suggestions would be appreciated.
You need to set responseEncoding when calling the axios.get method.
Change your line to:
response = await axios.get(domain + pathOfFile, {responseEncoding: "binary"});
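For context, here is the whole flow from the question with that option applied (a minimal sketch; requesting responseType: 'arraybuffer' and writing the resulting Buffer is a common alternative that avoids string encodings entirely):

const axios = require("axios");
const fs = require("fs");

async function downloadFile(domain, pathOfFile, localBasePath) {
  // responseEncoding tells the Node http adapter to keep the body as raw
  // binary instead of decoding it as UTF-8 (which corrupts image bytes).
  const response = await axios.get(domain + pathOfFile, {
    responseEncoding: "binary",
  });
  fs.writeFile(localBasePath + pathOfFile, response.data, "binary", (error) => {
    if (error) console.log("error while writing file", error.message);
  });
}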

Upload file onto AWS S3 with specific path in NodeJS

I've been taking a crack at uploading files onto S3 via NodeJS, but with a specific path where they have to be stored.
return s3fsImpl.writeFile(file_name.originalFilename, stream).then(function () {
  fs.unlink(file_name.path, function (err) {
    if (err) {
      console.error(err);
    } else { /* success */ }
  });
});
I'm not sure how I give a path like /project_name/file_name.
I have been following this tutorial
In this scenario you are using a stream as the source of the file contents; the destination path is the first argument to writeFile, so you should specify it at that point.
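For what it's worth, S3 has no real directories; the "path" is just a prefix of the object key. So a sketch like this (the project_name prefix is illustrative) should do it:

// Prefix the key with the desired "folder"; S3 treats the slash
// as part of the object name.
var key = 'project_name/' + file_name.originalFilename;

return s3fsImpl.writeFile(key, stream).then(function () {
  fs.unlink(file_name.path, function (err) {
    if (err) console.error(err);
  });
});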

Concatenate express route parameter and file extension

In my Express app I create snapshots whose details I store in MongoDB. The actual snapshot files are stored in the snapshots folder under their _id, eg /snapshots/575fe038a84ca8e42f2372da.png.
These snapshots can currently be loaded by a user by navigating to the folder and ID in their browser, i.e. /snapshots/575fe038a84ca8e42f2372da, which returns the image file. However, I think that to be more intuitive the URL path should include the file extension; i.e. the user should have to put in /snapshots/575fe038a84ca8e42f2372da.png to get the file.
This is what I have currently:
router.get('/:shotID', function(req, res, next) {
  // Checks if shot exists in DB
  Shots.findOne({
    _id: req.params.shotID // More conditions might get put here, e.g. user restrictions
  }, (err, result) => {
    if (err) {
      res.status(404).end(); // was res.status(404).res.end(), which throws
      return;
    }
    var file = fs.createReadStream(`./snapshots/${req.params.shotID}.png`);
    file.pipe(res);
  });
});
How can I incorporate the user putting in the file extension in this path?
You can provide a custom regular expression to match a named parameter, which can also contain a file extension:
router.get('/:shotID(?:([a-fA-F0-9]{24})\.png$)', ...);
For the URL path /snapshots/575fe038a84ca8e42f2372da.png, req.params.shotID will be 575fe038a84ca8e42f2372da.
If you want to match both with and without .png, you can use this:
router.get('/:shotID(?:([a-f0-9]{24})(?:\.png)?$)', ...);
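Putting it together with the handler from the question (a sketch; Shots and fs come from your original code):

// GET /snapshots/575fe038a84ca8e42f2372da.png
// -> req.params.shotID === '575fe038a84ca8e42f2372da'
router.get('/:shotID(?:([a-fA-F0-9]{24})\.png$)', function (req, res) {
  Shots.findOne({ _id: req.params.shotID }, (err, result) => {
    if (err || !result) return res.status(404).end();
    fs.createReadStream(`./snapshots/${req.params.shotID}.png`).pipe(res);
  });
});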

Unable to upload an image file to PUBLISHED node.acs application

I am trying to build a front end to my ACS (Appcelerator Cloud Service) database. As part of the admin front end, users will upload images, and I am using the Photos object to save them. I am using the following code to upload the photos to the cloud DB, and it works very well on my local system/PC.
var data = {
  session_id: req.session.session_id,
  photo: req.files.photo_file
};
data['photo_sizes[medium_500]'] = '500x333';
data['photo_sync_sizes[]'] = 'medium_500';

ACS.Photos.create(data, function(e) {
  if (e.success && e.success === true) {
    // Update custom object with this photo
    ACS.Objects.update({
      session_id: req.session.session_id,
      classname: objname,
      id: objid,
      fields: {
        photo_id: e.photos[0].id,
        photo_url: e.photos[0].urls.medium_500
      }
    }, function(data) {
      if (data.success) {
        // console.log('Updated successfully:' + JSON.stringify(data));
        res.send(data);
      } else {
        console.log('Error:\n' +
          ((data.error && data.message) || JSON.stringify(data)));
      }
    });
    //res.send(data);
  } else {
    logger.debug('Error: ' + JSON.stringify(e));
    req.session.flash = {msg: e.message, r: 0};
    res.redirect('/');
  }
});
What's happening here is that a multipart HTML form uploads the file; the file is read on the server and passed to the ACS.Photos.create call. However, when I publish the app to the cloud, it gives the following error and the application crashes.
[ERROR] [1233] Error: EACCES, open '/tmp/292fb15dcab44f58a315515bd9e70a8a'
Looking at the error, it's clear that the server is not able to access the /tmp directory.
Node.acs is built on top of Node.js, and I've seen several Node.js examples using this approach. How is this issue handled when the application/website is published or goes live on a web server?
Thanks,
Niranjan
Looks like there was indeed some file permission issue. Take a look at this post on the node.acs group.
https://groups.google.com/forum/#!topic/node-acs/XrRxBTtwiO4
The problem is now SOLVED !
