Serve static files from google-cloud-storage through express middleware - node.js

I have an Express app hosted on Google App Engine which uses the express.static middleware. I'd like to store the static files on Google Cloud Storage, and to be able to switch from the regular filesystem to Cloud Storage without too much modification.
I was thinking of writing a middleware:
using the Google Cloud client library for Node.js (something like "Express caching image stream from Google Cloud Storage");
or acting as a proxy (mapping pathnames to raw Google Cloud Storage URLs).
Is there an easier/cleaner way to do that?

I made this work using http-proxy-middleware. Essentially, since GCS files can be accessed over HTTP, this is all we need.
Ideally, the files could be served directly out of GCS itself, by making the bucket public and accessing files at a URL like https://storage.googleapis.com/<bucket-name>/file. But my requirement was that the files needed to be served from the same domain as my app, while not being part of the app itself (they are generated separately). So I had to implement it as a proxy.
import proxy from 'http-proxy-middleware';
...
app.use('/public', proxy({
  target: `https://storage.googleapis.com/${process.env.GOOGLE_CLOUD_PROJECT}.appspot.com`,
  changeOrigin: true,
}));
Note that the bucket based on the project ID is automatically created by GAE, but it needs to be given public access. This can be done by
gsutil defacl set public-read gs://${GOOGLE_CLOUD_PROJECT}.appspot.com
After setting up the proxy, all requests to https://example.com/public/* will be served from the bucket <GOOGLE_CLOUD_PROJECT>.appspot.com/public/*.
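Note that newer versions of http-proxy-middleware (1.x and later, if I recall the changelog correctly) removed the default export in favour of a named createProxyMiddleware factory. A rough sketch of the same setup against the newer API, assuming the same GAE default bucket as above, would be:

// Sketch only: same proxy as above, but using the named export from
// http-proxy-middleware v1+/v2. Bucket naming is unchanged from the answer.
import { createProxyMiddleware } from 'http-proxy-middleware';

app.use('/public', createProxyMiddleware({
  target: `https://storage.googleapis.com/${process.env.GOOGLE_CLOUD_PROJECT}.appspot.com`,
  changeOrigin: true,
}));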

I needed the same thing and came across the example used in google-cloud-node (see also the example in another question):
Pipe the read stream of the file contents to your response.
Set the file name and content type.
// bucket comes from the Google Cloud Storage client, e.g.:
// const { Storage } = require('@google-cloud/storage');
// const bucket = new Storage().bucket('<bucket-name>');

// Add headers to describe the file
let headers = {
  'Content-disposition': 'attachment; filename="giraffe.jpg"',
  'Content-Type': 'image/jpeg'
};
// Streams are supported for reading files.
let remoteReadStream = bucket.file('giraffe.jpg').createReadStream();
// Set the response code & headers and pipe the content to the response
res.status(200).set(headers);
remoteReadStream.pipe(res);
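Wrapped in an Express route with basic error handling, this might look roughly like the following sketch (the /files/:name route is an illustrative assumption, not part of the original answer; it reuses the bucket object from above):

// Sketch: stream a GCS object through Express.
app.get('/files/:name', (req, res) => {
  bucket.file(req.params.name)
    .createReadStream()
    .on('error', (err) => {
      // e.g. the object does not exist
      res.status(404).send(err.message);
    })
    .pipe(res);
});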

You could configure a GCS bucket to host a static website and then use an existing express middleware to proxy requests to that bucket.

You might also be able to use an s3 express middleware like s3-proxy. By following the 'simple' migration steps to move an s3 client application to google cloud storage you should be able to derive the necessary config parameters for the middleware. The key step will be generating some 'access' and 'secret' developer keys.

Related

use fastify to download big files as multipart

I have code that downloads big objects from S3 using multipart download. I am now working on a microservice that will hide all the S3 operations and, in the future, give me the flexibility to switch to any object store. I have created a Node.js service using Fastify; how can I add support for multipart download using Fastify?
You need to reimplement the parts logic of the AWS S3 service on both the server and the client side.
GetObject accepts a Range header that limits the download to one piece of the file.
So the client needs to know how many pieces compose the file (usually via the ListParts API), and can then call GetObject with the range parameter:
var params = {
  Bucket: "examplebucket",
  Key: "SampleFile.txt",
  Range: "bytes=0-9"
};
s3.getObject(params, function(err, data) {
  // data.Body contains the requested byte range
});
So your Fastify server should proxy at least those two operations to let the client download many pieces of the file simultaneously and then merge them.
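A minimal sketch of such a Fastify route, assuming the AWS SDK v2 and an /object/:key path (both illustrative, not from the original question), could look like this:

// Sketch: proxy a ranged GetObject through Fastify so clients can fetch
// individual byte ranges and merge them. Route and bucket name are assumptions.
const fastify = require('fastify')();
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

fastify.get('/object/:key', (request, reply) => {
  const params = {
    Bucket: 'examplebucket',        // hypothetical bucket
    Key: request.params.key,
    Range: request.headers.range,   // e.g. "bytes=0-9", forwarded from the client
  };
  // In production you would also forward Content-Range/Content-Length from S3.
  const stream = s3.getObject(params).createReadStream();
  reply
    .code(request.headers.range ? 206 : 200)
    .send(stream);                  // Fastify supports sending streams directly
});

fastify.listen({ port: 3000 });     // Fastify v4-style listen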

Serve remote url with node.js stream

I have a video stored in amazon s3.
Now I'm serving it to the client with node.js stream
return request(content.url).pipe(res)
But this is not working with Safari: Safari is unable to play the streamed data, while the same code works in Chrome and Firefox.
I did some research and found out that Chrome's request range looks like
[0-]
whereas Safari requests a series of content ranges:
[0-10][11-20][21-30]
Now, if the content were stored on my server, I could have broken the file into chunks with
fs.createReadStream(path).pipe(res)
to serve Safari with its requested content range, as mentioned in this blog: https://medium.com/better-programming/video-stream-with-node-js-and-html5-320b3191a6b6
How can I do the same with a remote URL stored in S3?
FYI, it's not feasible to download the content temporarily on the server and delete it after serving, as the website is expected to receive good traffic.
How can I do the same with a remote URL stored in S3?
Don't.
Let S3 serve the data. Sign a URL to temporarily allow access to the client. Then, you don't have to serve or proxy anything and you save a lot of bandwidth. An example from the documentation:
var params = {Bucket: 'bucket', Key: 'key'};
var url = s3.getSignedUrl('getObject', params);
console.log('The URL is', url);
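In an Express route this can be as simple as signing the URL and redirecting the client to it. The sketch below assumes an Express app; the route, bucket name and expiry are illustrative assumptions:

// Sketch: redirect the client to a time-limited signed S3 URL instead of
// proxying the video. Route, bucket and expiry below are assumptions.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

app.get('/video/:key', (req, res) => {
  const url = s3.getSignedUrl('getObject', {
    Bucket: 'my-video-bucket',  // hypothetical bucket
    Key: req.params.key,
    Expires: 60 * 10,           // URL valid for 10 minutes
  });
  res.redirect(url);            // S3 then serves Safari's Range requests itself
});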
...as the website is expected to receive good traffic.
You'll probably also want to use a CDN to further reduce your costs and enhance the performance. If you're already using AWS, CloudFront is a good choice.

Should I use express static dirname or use Node.js as a remote server?

My Node.js folder hierarchy looks like the following image:
[image: folder hierarchy]
app.js is the Node.js main file, routes holds the Node.js routes, and src holds the client's public HTML files.
This is the code in app.js:
var express = require('express');
var app = express();
var server = require('http').createServer(app);
global.io = require('socket.io').listen(server);
var compression = require('compression');
var helmet = require('helmet');
var session = require('express-session');
var bodyParser = require('body-parser');
app.use(bodyParser.json()); // support json encoded bodies
app.use(bodyParser.urlencoded({ extended: true })); // support encoded bodies
app.use(express.static(__dirname + '/src'));
app.use(helmet());
app.use(compression());
app.use('/rides', require('./routes/ridesServer'));
app.use('/user', require('./routes/userServer'));
app.use('/offers', require('./routes/offersServer'));
app.use('/notifications', require('./routes/notificationsServer'));
server.listen("8080", function() {
console.log("Connected to db and listening on port 8080");
});
This is another API in routes/userServer.js file:
router.post('/verifytoken', function(req, res, next) {
  // some functions here
});
And this is another HTTP request I am making from the client side, in the page ride.js:
$.ajax({
  method: "POST",
  headers: {
    "Content-Type": "application/json"
  },
  url: "user/verifytoken",
  data: JSON.stringify(something),
  success: function(response) {
    // some code
  },
  error: function(error) {
    // some code
  }
});
As you can see, client files and Node.js server files are on the same server, and Node.js serves those static files via this command:
app.use(express.static(__dirname + '/src'));
I think that this should be avoided, and that there is a better way!
If you are a Node.js expert and familiar with best practices, please tell me if the following way of working is correct; if it is not, please correct me:
I thought about putting the static files in a public_html directory
and the Node.js files in a server directory which is under public_html.
Then run pm2 start app.js --watch or node app.js on the app.js which is located in the server directory, not in public_html.
As a result, the index.html file will be served just as another static file, with no relation to the Node.js server, and Node.js will live in its own folder, not dealing with any kind of client side.
In other words, separate Node.js and the static files, and put the Node.js files in a subdirectory rather than the main directory.
Then the HTTP REQUEST will be looking like this:
$.ajax({
  method: "POST",
  headers: {
    "Content-Type": "application/json"
  },
  url: "server/user/verifytoken",
  data: JSON.stringify(something),
  success: function(response) {
    // some code
  },
  error: function(error) {
    // some code
  }
});
Please note that I have added the server directory.
Furthermore, I can exchange the
url: "server/user/verifytoken",
to an IP from a remote app (like Ionic):
url: "123.123.123.123:443/server/user/verifytoken",
And then my HTTP requests will be served via HTTPS (because I am sending to port 443), I can create multiple apps on the same server, and I have no struggles with any Express static folders.
What do you think?
Thanks!
First, let me say I'm not an expert. But I have 3 years of continuous development of Node.js-based solutions.
In the past I have created solutions mixing client-side code and server-side code in the same project, and it has worked, at least for a while. But in the long run it is a bad idea, for many possible reasons. Some of them are:
Client-side code and server-side code may require different processes to produce working code. For example, client-side code may require transpiling from ES6 to the more compatible ES5 using something like gulp or webpack. This is normally not the case for server-side code, because the runtime is more targeted.
Mixing client side code and an API server may prevent you from horizontally scaling one of them without the other.
This is like a monorepo, and having a monorepo without a CI process tailored for this scenario may produce very long development times.
What we currently do at my work is as follow:
Create a separate API server project. This way you can concentrate on developing a good API while working on this specific project. Leave cross-cutting concerns (like authentication) outside the API server.
Create a separate project for your client-side code (an SPA, maybe). Set up your dev environment to proxy API requests to a running API server (which may be running locally).
Create a separate project for the deployment of the entire solution. This project puts together serving the client code, proxying requests to the API, and implementing cross-cutting concerns like authentication, etc.
Having your code separated in this way makes each piece easier to develop and lets it evolve fast. But it may introduce some complexities:
This multi-project structure requires you to be able to trigger testing of the whole product each time one of the projects changes.
It surfaces the need for integration testing.
Some other considerations are:
API server and Website server may run on the same machine but in different ports.
You may secure your API server using SSL (on Node, using the standard https module), but notice that in all cases you need another actor in front of the API server (a website proxying requests to the actual API server, or an API gateway that implements cross-cutting concerns like authentication, rate limiting, etc.). In the past I posed the same question you are asking yourself regarding the appropriateness of using SSL in this scenario, and the answer is here. My answer is: it depends on the deployment conditions.
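As a rough illustration of that setup, the "deployment" project could be a small Express server that serves the built client code and proxies API calls to the separately running API server. This is only a sketch under assumptions: the ports, the /dist folder, and the /api prefix are not from the answer above.

// Sketch: front server serving static client files and proxying /api to a
// separate API server process. Ports, paths and the /api prefix are assumptions.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Serve the built client-side code
app.use(express.static(__dirname + '/dist'));

// Forward API calls to the API server running on another port
app.use('/api', createProxyMiddleware({
  target: 'http://localhost:4000',
  changeOrigin: true,
}));

app.listen(8080, () => console.log('Front server listening on 8080'));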

How to use aws s3 image url in node js lambda?

I am trying to use an AWS S3 image in a Lambda Node.js function, but it throws the error 'no such file or directory'. I have made that image public and all permissions are granted.
fs = require('fs');
exports.handler = function( event, context ) {
  var img = fs.readFileSync('https://s3-us-west-2.amazonaws.com/php-7/pic_6.png');
  res.writeHead(200, {'Content-Type': 'image/png' });
  res.end(img, 'binary');
};
fs is the Node.js file system core module. It is for writing and reading files on the local machine. That is why it gives you that error.
There are multiple things wrong with your code.
fs is a core module used for file operations and can't be used to access S3.
You seem to be using Express.js code in your example. In Lambda, there is no built-in res defined (unless you define it yourself) that you can use to send a response.
You need to use the methods on context or the newer callback mechanism. The context methods are used on the older Lambda Node.js version (0.10.42). You should be using a newer Node.js version (4.3.2 or 6.10), which returns the response using the callback parameter.
It seems like you are also using the API gateway, so assuming that, I'll give a few suggestions. If the client needs access to the S3 object, these are some of your options:
Read the image from S3 using the AWS sdk and return the image using the appropriate binary media type. AWS added support for binary data for API gateway recently. See this link OR
Send the public S3 URL to client in your json response. Consider whether the S3 objects need to be public. OR
Use the S3 sdk to generate pre-signed URLs that are valid for a configured duration back to the client.
I like the pre-signed URL approach. I think you should check that out. You might also want to check the AWS Lambda documentation.
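A rough sketch of the pre-signed URL option in a Lambda handler behind API Gateway (proxy integration), using the bucket and key from the question, might look like this; the expiry and the 302 redirect are illustrative choices, not from the answer:

// Sketch: return a time-limited pre-signed URL by redirecting the client to it.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
  const url = s3.getSignedUrl('getObject', {
    Bucket: 'php-7',
    Key: 'pic_6.png',
    Expires: 300,   // URL valid for 5 minutes
  });
  // With API Gateway proxy integration, a 302 sends the client straight to S3
  callback(null, {
    statusCode: 302,
    headers: { Location: url },
  });
};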
To get a file from S3, you need to use the path that S3 gives you. The base path is https://s3.amazonaws.com/{your-bucket-name}/{your-file-name}.
In your code, you must replace the following line:
var img = fs.readFileSync('https://s3.amazonaws.com/{your-bucket-name}/pic_6.png');
If you don't have a bucket, you should create one and give it the right permissions.
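In practice the replacement means reading the object through the AWS SDK rather than fs. A sketch, using the bucket and key from the question and assuming API Gateway binary support is enabled (see option 1 in the previous answer):

// Sketch: fetch the image from S3 with the SDK instead of fs.readFileSync.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
  s3.getObject({ Bucket: 'php-7', Key: 'pic_6.png' }, function(err, data) {
    if (err) return callback(err);
    callback(null, {
      statusCode: 200,
      headers: { 'Content-Type': 'image/png' },
      body: data.Body.toString('base64'),
      isBase64Encoded: true, // requires binary media types enabled in API Gateway
    });
  });
};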

How to accept a file and then store it in cloud storage

I am using Express (the newest versions of Express, Node, and npm). I have created a route such as this:
router.post("/", function(req, res, next) {
});
This route will need to accept an uploaded file (image/video/docx, etc.), which then needs to be stored on a cloud storage service (Google Cloud Storage). I do not want to store anything on the server that Express is running on; I just want to receive the file and pass it on to Google Cloud Storage. I see there are some libraries which do this in addition to Express, but I could not find how to do it using just Express.
I think your clients might be able to upload directly to GCS by constructing an HTML form. Basically, you can create a signed URL and embed it in the form; then, on submit, the upload goes straight to GCS and your app doesn't need to handle it at all.
See: https://cloud.google.com/storage/docs/xml-api/post-object
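A related variant, as a sketch only: instead of the XML API POST-object policy linked above, the Node.js client library can generate a short-lived signed URL that the browser PUTs the file to directly. The route, bucket name and request fields below are assumptions for illustration.

// Sketch: hand the browser a signed URL it can PUT the file to directly,
// so the Express app never touches the file bytes. Names below are assumptions.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();
const bucket = storage.bucket('my-upload-bucket'); // hypothetical bucket

router.post('/', async (req, res) => {
  const [url] = await bucket.file(req.body.filename).getSignedUrl({
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
    contentType: req.body.contentType,
  });
  // The client then uploads with: PUT <url>, Content-Type: <contentType>
  res.json({ url });
});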
