I am having trouble serving my static files on Elastic Beanstalk using NodeJS deployed on Amazon Linux 2. My local environment works, but my deployment is unable to serve the static files located in a top-level folder called 'public'.
My configuration is as follows:
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /images: public/images
    /javascripts: public/javascripts
    /stylesheets: public/stylesheets
I am certain that the configuration is processed correctly, because I can see the results of the static file configuration in the AWS console. When I navigate to the home page of my site (using the http:// protocol), the HTML page loads, but the CSS and JS under the public directory do not. The error I get is as follows:
GET https://<domain name>/stylesheets/layout.css net::ERR_CONNECTION_TIMED_OUT
Note that the https:// protocol is used. From my understanding, the reason my local environment works is that my application serves the static files with the correct protocol. Here are my questions:
Why are my static files being served with protocol https:// when I request my home directory using http://?
I don't want to serve my static files through the application, in order to reduce the number of requests to it, as noted here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-environmentproxystaticfiles. Is there actually anything wrong with the configuration?
The issue was resolved. I am using Helmet JS for Content Security Policy (CSP), and it has a directive that converts insecure requests to secure ones: upgrade-insecure-requests. Make sure to remove that directive during development for a site that relies on http:// for content. Best practice is still to use https:// whenever possible.
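For reference, a minimal sketch of dropping that directive, assuming Helmet 4 or later (getDefaultDirectives() is part of Helmet's CSP module) and an existing Express app:

const helmet = require('helmet');

// Start from Helmet's default CSP directives and remove upgrade-insecure-requests,
// so http:// asset URLs are not upgraded to https:// during development.
const directives = helmet.contentSecurityPolicy.getDefaultDirectives();
delete directives['upgrade-insecure-requests'];

app.use(helmet({ contentSecurityPolicy: { directives } }));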
Related
There is a stack that uses Next.js as the main dependency. Each page is an independent application. The index page loads its assets from www.domain.com/_next/*.js, so its base path is configured as the / root. Another application has the same base path as the homepage, with one difference: it is selected by a query string in the URL.
If the URL is www.domain.com/ then it is the homepage; if the URL is www.domain.com?key=value it is a different page, and the request is routed to the associated application by Nginx and the load balancers. So, the problem is:
the main page is serving its static assets under www.domain.com/_next/*.js
the page that parses query strings is serving its static assets under the same location as the main page, www.domain.com/_next/*.js
These applications have different static assets created by different pipelines, and there is also a cache mechanism. Is there a way to solve that conflict just with some configuration in Next.js?
You can use the basePath setting introduced in Next.js 9.5 (https://nextjs.org/docs/api-reference/next.config.js/basepath).
But the path your _next/ assets (and your page routes) are served from would change as well, to something like yourdomain.com/yourbasepath/_next.
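A minimal next.config.js sketch, with a hypothetical /querypage base path:

// next.config.js
module.exports = {
  // Page routes and /_next assets both move under this prefix (Next.js 9.5+).
  basePath: '/querypage',
};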
If you want to change only the location the _next/ assets are served from, assetPrefix can be used. Take a look at assetPrefix here: https://github.com/vercel/next.js/issues/5602#issuecomment-673382891
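A sketch of that approach (the prefix here is just an example; assetPrefix is mostly used for CDNs, but it also accepts a path, and your Nginx config would then need to route that prefix to the right application):

// next.config.js
module.exports = {
  // Only the /_next asset URLs get this prefix; page routes stay where they are.
  assetPrefix: '/querypage-assets',
};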
Or you can set up a custom server, for example Express, and customize the base asset path there (https://expressjs.com/en/starter/static-files.html):
app.use('/static', express.static('public'))
P.S.: The asker chose the assetPrefix solution.
I'm running NodeJS for my server-side JavaScript, but serving my pages with Apache.
My pages currently reference Socket.IO locally, in that they load node_modules/socket.io-client/dist/socket.io.js from /var/www/html.
The NodeJS index.js file also resides in /var/www/html which has become a problem for me.
Can I move my NodeJS index.js file to /var/www so it is no longer publicly accessible, without moving node_modules out of /var/www/html, which Socket.IO relies on being publicly accessible?
When using Node.js to serve the Webpage:
Your server's root directory is typically not publicly visible. Requests to your server get handled by the routes you set up in the index.js file. By default, no files are accessible. However, if you need a public folder (e.g. for a favicon or an index.html file), I would recommend creating a subfolder in your root directory and using, for example, Express to make it available, as in the sketch below.
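A minimal sketch, assuming an Express app and a subfolder named public in the project root:

const express = require('express');
const app = express();

// Only files inside ./public are reachable from the browser;
// index.js and the rest of the project directory stay private.
app.use(express.static('public'));

app.listen(3000);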
When not using Node.js to serve the Webpage:
If you need Node.js for client-side logic, you should just use normal JavaScript (for example, the browser's built-in WebSocket support in the case of WebSockets). Node.js is a server-side runtime for executing JavaScript on the server, so on the client side there is no need for Node.js. If you need certain npm packages, substack on GitHub has a module called node-browserify. It will compress and bundle your modules and deliver them as a single js file, while you write the code just like you would for Node.js.
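A minimal sketch of that workflow (file names are hypothetical; the bundling command follows browserify's standard CLI usage):

// client.js - written with Node-style require(), resolved by browserify at bundle time
const EventEmitter = require('events').EventEmitter;

const bus = new EventEmitter();
bus.on('ping', () => console.log('pong'));
bus.emit('ping');

// Bundle for the browser, then include bundle.js in your HTML with a script tag:
//   browserify client.js -o bundle.js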
If you need Node.js for server-side logic, then there is no need to make it publicly available and you should change your current server configuration to not make it accessible from the browser.
This may seem like an odd/broad question, but how does a server know not to render Express.js files and not to expose their content, the way anyone can view a client-side JavaScript file and read the script being executed? Do Node hosts like Heroku protect them? Sorry, I'm just new to Express and Node. Is it similar to how PHP scripts are hidden and protected on an Apache server?
It depends on the server configuration. On a poorly configured server, the .js files might be accessible.
With a nodejs/expressjs server you define a base folder that contains public files, e.g. public, and files outside of that folder are not visible, because the server doesn't serve them to the outside. If you configure the wrong directory, e.g. . (the project root), then the expressjs code files would be available to browsers and served as-is, potentially revealing sensitive data like configuration, passwords and so on; see the sketch below. Since the default configuration and all code examples make sure that public is defined as the public folder, the risk of accidental misconfiguration is low.
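A short sketch of the difference, assuming a typical Express project layout:

const express = require('express');
const app = express();

// Safe: only the contents of ./public are exposed as static files.
app.use(express.static('public'));

// Dangerous (do not do this): exposes the project root, including server code,
// package.json and any configuration files, as plain static files.
// app.use(express.static('.'));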
If you run an apache httpd or other webserver on the same host, you have to make sure that the node application is not inside the webroot of any vhost, otherwise the files might also be visible, because to the apache httpd they also look like simple static files, ready to be sent as-is to the browser.
It is different from PHP files, at least in the case of apache httpd or nginx, because those are usually configured so that PHP files are files to be executed, not static files to be served to the outside. However, if the apache httpd or nginx doesn't know about PHP, either because it isn't installed or isn't configured, then PHP files inside the webroot would also be shown to the public as-is. Display of files for the apache httpd can be prevented using .htaccess files.
I have managed to configure my Nginx (in front of Nodejs) to serve static files without the .html extension (e.g. going to site.com/about serves the about.html page), with help from these past questions: "how to serve html files in nginx without showing the extension in this alias setup" and https://serverfault.com/questions/346994/hide-html-file-extensions-using-nginx-rewrites
But I am unable to figure out how to set up Cloudflare page rules to work with this setup (the current page rules are set up to include static html files as well as js, css, etc.).
How do I configure Cloudflare to serve the about.html page when the user goes to site.com/about, and also serve the team.html page when the user goes to site.com/about/team? Do I need to do anything special, or is the Nginx setup sufficient?
If CloudFlare caching of your static pages isn't required, there's no need for you to do anything; everything should work out of the box.
If you want CloudFlare to also cache those static pages, try setting up page rules to Cache Everything on your site:
Domain > Page Rules
Pattern: *site.com/*
Custom Caching > Cache everything
Once you set up the page rules, CloudFlare should cache your static pages and site.com/page1 should work. To clarify, your server is still serving the pages, not CloudFlare. With the page rules, you are simply instructing CF to cache what your server sends for site.com/page1, as opposed to fetching the page from your server for every visitor.
You can then add other Page Rules with higher priorities should you want to exclude certain endpoints from caching (e.g. an admin section). You won't need to do this if you're just hosting static HTML.
If this doesn't work, or if you need more control over what's being cached, check this CloudFlare support doc for more options.
Good luck!
I am new to HAProxy as well as OpenShift. The following is the setup I am trying to achieve: serve a blog through Ghost (a NodeJS app), static website files through a PHP cartridge (I assume this is the best way to serve static HTML/JS on OpenShift), and the actual application. I would like to route requests to a specific gear based on the URL.
I want to confirm if this is the correct way to set it up. Could you please give some pointers about the HAProxy configuration for this?
I think that rather than doing that in HAProxy, it would be worth either running a separate gear for your static assets, or using Amazon S3 or CloudFront for them.