I'm curious about an aspect of Google Cloud static hosting (via a bucket) and wonder if anybody here has knowledge of it.
Before I moved my website over to GCP, I used an .htaccess file to rewrite URLs so that a file such as index.html would be served when requested without the .html extension.
I found that this looked better and would display my site as www.domain.com/index rather than www.domain.com/index.html
I also used my .htaccess file to force HTTP requests to HTTPS.
I know this is impossible with GCP, as .htaccess files won't be read in a bucket. I read that an app.yaml file will do the same thing; however, it's my understanding that app.yaml is used by App Engine. I host my website in a bucket and use a load balancer to allow HTTPS requests.
If I create an app.yaml file and place it in my bucket, is it possible to get the same results I had with .htaccess? Anybody have any suggestions?
Thanks.
app.yaml is used to configure an App Engine app's settings [1]; it is not related to hosting a website in a bucket. To achieve your goal, you can host your website on App Engine and configure the app.yaml accordingly.
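For reference, both of your original goals (extensionless URLs and forced HTTPS) can be expressed there. A minimal app.yaml sketch, assuming your static files live in a www/ directory (the runtime and paths are illustrative, not from your setup):

runtime: python39
handlers:
# Serve the index document for the root URL
- url: /
  static_files: www/index.html
  upload: www/index\.html
  secure: always
# Serve /index, /about, etc. from the matching .html file
- url: /([^.]+)
  static_files: www/\1.html
  upload: www/(.*)\.html
  secure: always   # redirects http:// requests to https://
# Serve everything else (css, js, images) as-is
- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
  secure: always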
As a workaround, you can include a JavaScript snippet in your pages to redirect to another URL [2].
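A minimal sketch of such a redirect, placed in each page served from the bucket (the target is rebuilt from the current host purely for illustration):

// Client-side fallback: force http:// visitors over to https://
if (window.location.protocol !== 'https:') {
  window.location.replace(
    'https://' + window.location.host + window.location.pathname
  );
}

Note that this runs only after the page has already been fetched over plain HTTP, so it is a cosmetic fallback rather than a real substitute for a server-side redirect.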
Related
I am having trouble serving my static files on Elastic Beanstalk using Node.js deployed on Amazon Linux 2. My local environment works, but my deployment is unable to serve the static files located in a top-level static folder called 'public'.
My configuration is as follows:
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /images: public/images
    /javascripts: public/javascripts
    /stylesheets: public/stylesheets
I am certain that the configuration is processed correctly because I can view the results of the static file configuration in the AWS console. When I navigate to the home directory of my site (using the http:// protocol), the HTML page is loaded, but the CSS and JS under the public directory are not. The error I get is as follows:
GET https://<domain name>/stylesheets/layout.css net::ERR_CONNECTION_TIMED_OUT
Note that the https:// protocol is used. From my understanding, the reason my local environment works is that my application serves the static files with the correct protocol. Here are my questions:
Why are my static files being served with protocol https:// when I request my home directory using http://?
I don't want to serve my static files through the application, in order to reduce the number of requests to it, as noted here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-environmentproxystaticfiles. Is there anything actually wrong with the configuration?
The issue was resolved. I am using Helmet JS for Content Security Policy (CSP), and it has a directive for converting insecure requests to secure ones: upgrade-insecure-requests. Make sure to remove that during development for a site that relies on http:// for content. Best practice is to use https:// when possible.
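For anyone hitting the same thing, here is a sketch of how that directive can be dropped during development, assuming Helmet v4+ on an Express app (the rest of the setup is omitted):

const express = require('express');
const helmet = require('helmet');

const app = express();
app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        // Keep Helmet's default CSP, but drop upgrade-insecure-requests
        // so http:// assets still load while developing.
        ...helmet.contentSecurityPolicy.getDefaultDirectives(),
        'upgrade-insecure-requests': null,
      },
    },
  })
);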
I am trying to create a serverless Node.js application. I deployed my app to S3 and it is running, but the URL is https://xxxxxxxx.xxxxx-api.us-west-2.amazonaws.com/prod. Whenever my app tries to load CSS files, video files, or even follow links, the requests go to URL/, which would be perfect on a plain domain. But here the base URL is URL/prod, so the requests should go to URL/prod/whichever-route-it-is. Because of that, my static files are not rendered and even the links don't work. Is there any way to reroute to URL/prod or to remove prod from the API endpoint?
AWS API Gateway follows this pattern in the URL:
www.example.com/my-base-path/MyStage
You must configure each part accordingly. For example, you may create a base path "prod" and a stage "whichever-route-it-is" for something like:
www.example.com/prod/whichever-route-it-is
Base Path Mappings are configured under Custom Domain Names.
Move the folders holding your static files into a /prod/ subfolder in S3.
Or, use CloudFront to route /css/, /js/, or whatever static paths to the S3 bucket and skip API Gateway.
Or, configure API Gateway to rewrite the path for /css/, /js/, or whatever.
I prefer moving the files to the subfolder, or the CloudFront solution. These allow control over different stages of the API. For example, when you are ready to launch v2 but not completely take down v1, you would want to have prod/ and prod_v2/ available. This way, your folder structure maps to the API Gateway stages nicely.
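As a sketch, the bucket layout I have in mind looks like this (the object keys are illustrative):

mybucket/
  prod/
    css/all.min.css
    javascripts/app.js
  prod_v2/
    css/all.min.css
    javascripts/app.js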
I have a React SPA that is being hosted as an Azure Static Website. The configuration is rather simple - html, js etc files are deployed to Azure Storage. I then enable the static website feature and expose this via a Verizon Premium CDN Endpoint.
The Static Website is configured to serve index.html as both the index and the error document. The issue I am seeing is that when a route such as /faqs is requested, the response is a 404 with the index.html document as the response body. This works fine in the browser, but Google will not crawl it because it sees the response as a 404.
I wonder if there is any way around this? Is there any way to force 2xx response codes?
Well, after messing around trying to configure Azure to force status codes, I found a solution. It's not ideal, but it works and will be fine for now.
SOLUTION: I cloned my index.html as faqs (no extension, so I set the content type manually) so that the respective version is served when requested. Happy days! Glad I only have a small number of public pages.
Since you have the CDN layer in front of your website, you can have the CDN deliver the index.html via a URL rewrite rather than relying on the static website's "error page" delivery mechanism. This holds up even if you have a variable number of routes in your application.
Configure a rule in your CDN's Rules Engine that takes any path without a file extension (since we want normal requests for assets or script/style files to return those actual files) and rewrites it to /index.html. A rewrite means the URL of the actual request remains the same, but the file that gets delivered comes from the rewritten URL.
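As a rough sketch of such a rule (the exact matching syntax depends on the Rules Engine version, so treat the pattern as an assumption to adapt):

# IF   the request path contains no dot, i.e. no file extension:  ^/[^.]*$
# THEN rewrite the source to /index.html (a rewrite, not a redirect, so the browser URL is unchanged)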
See this article for more.
I'm trying to limit access to a directory based on the results of a PHP script. I have the following in the .htaccess file in the folder where the files are located:
RewriteCond %{REQUEST_URI} !=league_access.php
RewriteRule .* league_access.php
I have also tried:
RewriteEngine on
RewriteRule .* league_access.php
If you go to the directory http://www.bowling-tracker.com/bowl/league_documents/1/ you will note that it fires the league_access.php script (as it currently only prints "Running the Test Script Restricted access" to the page).
So that is acting correctly.
But if you go to http://www.bowling-tracker.com/bowl/league_documents/1/test.html you will see that you're granted access to the page (rather than being sent to the league_access.php script).
This website is on FastComet (a public hosting company), so I cannot change server settings or files, except for the .htaccess file.
Any help to resolve this would be greatly appreciated.
Thanks....
FastComet Team here! Part of our shared hosting environment is utilizing Nginx as a reverse proxy in front of the Apache web service. This configuration gets the advantages of both services at the same time and ensures better performance for your project. Nginx processes all requests for static content, such as PDF files or HTML pages. Here's a list of all file types that will be processed by the Nginx service:
3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso
However, if the request is for dynamic content, such as a PHP script, it will be passed from Nginx to the Apache service. You are correctly setting the rule in question in the .htaccess file of your website, but this file is only read by the Apache service, not by Nginx. In other words, if there is a request for static content, such as a PDF file here:
http://www.bowling-tracker.com/bowl/league_documents/1/Rules_Thurs_Night_Mixed.pdf
or an HTML page here:
http://www.bowling-tracker.com/bowl/league_documents/1/test.html
it will be processed by Nginx without considering the .htaccess rules that you have set. There is an easy way of resolving that: exclude the HTML, HTM, and PDF file types from Nginx processing for your domain, or even for your entire hosting account. This way, those requests will be processed by the Apache web server instead of Nginx, so the .htaccess rules that you apply will be taken into consideration and will work without any issues.
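Once those extensions are handled by Apache, a rule along these lines should gate the whole directory (a sketch; the exclusion of the gatekeeper script and the [L] flag are our suggestion, so test it before relying on it):

RewriteEngine On
# Send every request except the gatekeeper script itself to league_access.php
RewriteCond %{REQUEST_URI} !league_access\.php$
RewriteRule .* league_access.php [L]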
I'm using S3 and CloudFront to store the images, CSS, and JS files of my web site, which is not static and is hosted on a proper web server.
Since the CSS file changes frequently, I'm using a version number to make sure the user's browser reloads it when it changes. When I was hosting the CSS file on my Apache web server, I used the following rewrite rule:
RewriteEngine On
# CSS Redirection (whatever.min.5676.css is redirected to whatever.min.css)
RewriteRule ^(.*)\.min\.[0-9]+\.css$ $1.min.css
With this simple rule, http://www.example.com/all.min.15.css was rewritten to http://www.example.com/all.min.css
How can I reproduce such a rule with Amazon S3 and/or CloudFront?
i.e. to have http://example.amazonaws.com/mybucket/css/all.min.3.css or http://example.amazonaws.com/mybucket/css/all.min.42.css redirected to http://example.amazonaws.com/mybucket/css/all.min.css
(Note: my S3 bucket is NOT configured as a website; should it be, in order to enable redirect rules?)
NOTE: this answer does not use any rewrite rule, so it might not be the proper answer.
I would use a query parameter to handle different versions, like:
http://example.amazonaws.com/mybucket/css/all.min.css?ver42
http://example.amazonaws.com/mybucket/css/all.min.css?42
http://example.amazonaws.com/mybucket/css/all.min.css?ver=42
http://example.amazonaws.com/mybucket/css/all.min.css?20141014
To be exact, in my dynamic web page the version parameter is stored in a variable and appended to the URLs (both CSS and JS). During development I only have to bump or set one variable to force the browser to load a new version. This way, there is no need for rewrite rules, even on Apache.
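As a sketch of what that looks like in the page template (the variable names are illustrative):

// One place to bump the asset version after each deploy.
var ASSET_VERSION = 42;

// Append it as a query string so browsers re-fetch the new file,
// while S3/CloudFront keep serving the same object.
var cssUrl = 'https://example.amazonaws.com/mybucket/css/all.min.css?ver=' + ASSET_VERSION;
var jsUrl = 'https://example.amazonaws.com/mybucket/js/all.min.js?ver=' + ASSET_VERSION;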
Caching also works, as the Last-Modified and ETag headers are kept intact.
Hope this helps.