I'm sure this question is very basic but it's confusing me.
Currently, in development mode, my backend runs at localhost:1337 (using Strapi CMS, to be precise).
But what will it turn into when the website is deployed to a web server?
Specifically, the website will reside in a subdirectory rather than at the root; will that affect this?
Currently this is a URL I'm using inside my project as an image src:
localhost:1337/uploads/image-example.jpg
I'm trying to understand what the new URL will be once the project has been deployed, especially since it will be deployed in a subdirectory rather than at the root.
Thanks in advance
You shouldn't have to do anything.
Most, if not all, web hosts use the PORT environment variable to configure the application server. I just did a quick code search through Strapi and found this:
"port": "${process.env.PORT || 1337}",
to read the environment variable or use a default value (instead of asking for a random port with 0, for predictability while developing). If you used their starter script, I would assume your application has the same.
In most cases this is sufficient; there are two exceptions.
If your application server doesn't know its URL, it can only generate relative URLs. For example, it won't be able to compose an email containing a link to itself.
HTTP redirects required an absolute URL once upon a time, but that has since been amended.
Setting just the port assumes that the application will be mounted on /. If, for example, you were configuring your own web server (instead of Heroku), and you wished to mount your application on /blog, you would need some way to tell your application so that it can put the right prefix on paths.
It might be possible to get around this by restricting the URLs to relative paths only, but that sounds tedious and I don't think it's usually done. Besides, an incoming request doesn't actually have enough information to discern the URL that the user is looking at. Instead, you supply the URL to your app out of band.
For example, Grafana has a root_url variable in its config file where you put the full URL prefix, Prometheus takes a -web.external-url argument, etc.
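To make that concrete, here is a minimal sketch in Express showing both ideas: reading the externally visible URL out of band and mounting the app under a configurable prefix. The BASE_PATH and PUBLIC_URL names are made up for illustration; they are not a Strapi or Heroku convention.
const express = require("express");

// Hypothetical configuration, supplied out of band via environment variables.
// BASE_PATH is the mount point (e.g. /blog); PUBLIC_URL is the full externally
// visible prefix (scheme, host, and any path).
const BASE_PATH = process.env.BASE_PATH || "/";
const PUBLIC_URL = process.env.PUBLIC_URL || `http://localhost:${process.env.PORT || 1337}`;

const router = express.Router();
router.get("/hello", (req, res) => {
  // Absolute links (emails, etc.) come from the configured URL, not the request.
  res.send(`This page lives at ${PUBLIC_URL}/hello`);
});

const app = express();
app.use(BASE_PATH, router); // everything is served under the configured prefix
app.listen(process.env.PORT || 1337);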
However, I don't think Heroku can mount your application anywhere other than /, so the port should be sufficient.
Here's a complete server that runs on Heroku and shows its environment to you:
const app = require("express")();
app.set("json spaces", 2); // pretty-print the JSON response
app.get("/", (req, res) => {
  res.send(process.env); // dump the process environment back to the caller
});
const server = app.listen(process.env.PORT, () => {
  console.log("server is up on", server.address());
});
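Locally, something along the lines of PORT=3000 node server.js should start it (the port value and filename are just examples), and visiting http://localhost:3000/ would then print the environment.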
The PORT variable is all the configuration it needs; the environment does not include the app name.
Hardcoding the production URL of your app can become a maintenance burden, so make sure it remains configurable with a command line argument or environment variable if you do end up teaching your app what its URL is.
So to answer your question specifically, your image src should begin with /uploads (for example, /uploads/image-example.jpg) rather than hardcoding localhost:1337.
Related
I'm developing a full-stack web app, which has both a frontend and a backend. In production, my frontend will be hosted at https://frontend.com, and my backend at https://backend.com. It seems reasonable to hardcode the address https://backend.com in my frontend code.
However, when I am testing the app locally, I'll want to spin up the backend at, say, localhost:8000, and redirect my frontend to this address instead. My question is: what's the easiest/most elegant way to do this?
One approach is to manually change the frontend code to point to localhost:8000 for local testing, and then change it back before deploying. However, this is quite annoying, and it's too easy to forget to change it back.
An approach I've used in the past is:
Create a file server.js containing:
const LOCAL_SERVER = "http://localhost:8000"
Import this file in my frontend HTML:
<script src="server.js"></script>
Set a fallback to the remote/production server in my JS scripts:
let SERVER = typeof LOCAL_SERVER === 'undefined' ? 'https://backend.com' : LOCAL_SERVER
Add server.js to my .gitignore.
This works, but it feels kind of hacky, and it pollutes my production code with references to this (possibly non-existent) server.js and LOCAL_SERVER constant. I'm wondering if anyone has a better way.
If your real backend & local backend use the same port, a simple solution is to add the following line to your hosts file:
127.0.0.1 backend.com
This seems a bit more challenging than just getting the URL + path in general, because Express is being served from a secondary path.
I'm trying to dynamically create redirect/callback URLs for Passport-based OpenID integrations.
So far, I've successfully gotten this working using the request host, as directed in other Stack Overflow threads:
callbackURL: "https://" + req.get("host") + "/auth/apple/redirect",
I don't need to detect the protocol, as I'm only ever dealing with HTTPS.
The issue is, my function is being served from:
https://europe-west2-projectname.cloudfunctions.net/MY_FUNCTION_NAME/
and therefore, using just the host, the URL above will not be a valid redirect URL that OAuth can use.
I'm doing this because I have a single app running off of two different functions, depending on whether I'm in staging or production.
To add insult to injury, Firebase functions within the same app seem to share their environment variables, so I can't really count on those either.
According to the Google Cloud documentation:
Node 6 and 8 offer FUNCTION_NAME as an environment variable that indicates which function is being run, while Node 10 offers FUNCTION_TARGET.
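Putting those pieces together, a hedged sketch might look like the following. The fallback order of the variables and the idea of inserting the function name between the host and the route are my assumptions, not something the documentation prescribes.
// Sketch only: pick whichever variable the runtime provides.
const functionName = process.env.FUNCTION_TARGET || process.env.FUNCTION_NAME;

// Assumes the function name belongs between the host and the route.
const callbackURL = `https://${req.get("host")}/${functionName}/auth/apple/redirect`;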
I've followed the answer to this: Redirect from http to https in google cloud, but it no longer seems to be accurate. The anchor it references ( https://cloud.google.com/appengine/docs/flexible/nodejs/configuring-your-app-with-app-yaml#security ) seems to have been removed, with no note of a replacement.
For reference, I am serving Node.js on Google App Engine (flex). As per that answer, I have this in my app.yaml:
handlers:
- url: /.*
  script: IGNORED
  secure: always
Since HTTPS is obviously terminated before it reaches my Express app (so a redirect there would be useless), how is this correctly implemented these days?
Possibly helpful: I have an external domain attached via the "Custom domains" tab in the console, and there is indeed an SSL certificate configured (so if a user manually goes to https://.com, everything is fine).
The flexible environment does not currently support handlers in the app.yaml. If you want https:// redirection, you have a few options:
Use helmet to handle the HSTS headers for you, and implement your own initial redirect (see the sketch after this list).
I wrote a happy little library that always forces SSL on all Express routes: yes-https.
We are considering auto-redirecting all traffic to SSL by default. Do you think that would be a good thing for your apps?
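Here is a minimal sketch of the first option, assuming Express behind a proxy that sets X-Forwarded-Proto (as App Engine does) and the helmet package's hsts middleware; the max-age value is only an example.
const express = require("express");
const helmet = require("helmet");

const app = express();

// Send a Strict-Transport-Security header on every response.
app.use(helmet.hsts({ maxAge: 60 * 60 * 24 * 365 }));

// Redirect plain-HTTP requests based on the proxy's X-Forwarded-Proto header.
app.use((req, res, next) => {
  if (req.get("X-Forwarded-Proto") === "http") {
    return res.redirect(301, `https://${req.hostname}${req.originalUrl}`);
  }
  next();
});

app.listen(process.env.PORT || 8080);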
Borrowing from Justin's yes-https library, I was able to get this to work:
var app = express();
app.use(function (req, res, next) {
  // Skip the redirect in local development, where no proxy sets the header.
  if (req.hostname !== 'localhost' && req.get('X-Forwarded-Proto') === 'http') {
    res.redirect(`https://${req.hostname}${req.url}`);
    return;
  }
  next();
});
At first I thought I had to do that since I was on an appengine subdomain and couldn't use HSTS. Then I learned HSTS works fine for subdomains. :) Regardless, I thought people might want to see what the magic bit to use was if they didn't want to use yes-https for some reason.
Justin, auto-redirecting all traffic to SSL by default sounds great to me. I just spent hours trying to figure out how to do so before I found this post because I was trying to get my app to get Chrome's add to homescreen install banner as per https://developers.google.com/web/fundamentals/engage-and-retain/app-install-banners/.
On GCP this should be as easy as using the gcloud app CLI to configure a header (Strict-Transport-Security) or a redirect rule. Perhaps the push is to move us to Firebase Hosting instead, which already forces HTTPS. For a quick solution for single-page apps (static content) with React, Angular, etc., we can use this JS snippet.
It ignores localhost environments (you can replace localhost with any host name you'd like to exclude) and then redirects using https as the protocol.
// Outside localhost, force the https: protocol by redirecting to the secure origin.
if (location.host.indexOf("localhost") < 0 && location.protocol.toLowerCase() !== "https:") {
  const url = `https://${location.host}`;
  location.replace(url);
}
No matter which HTTP library I use for Node, they all require absolute URLs.
This means I either need to build a curried fetch and pass it down through function chains to make requests, or I need to define a constant (after figuring out whether I'm in production or development) and then update that URL whenever my environments change.
Regardless, why does Node require the hostname in URLs?
Is there a more seamless way to do server side requests anywhere in my app (including deep in function chains)?
A code example:
node index.js
index.js
const express = require('express');
const foo = require('./foo');
const app = express();
app.use('/ace', (req, res) => foo());
app.use('/somedata.json', (req, res) => res.status(200).send('{"hello": "world"}'));
app.listen(3000);
foo.js
const bar = require('./bar');
module.exports = () => bar();
bar.js
const bat = require('./bat');
module.exports = () => bat();
bat.js
// This is where it fails: in Node, fetch needs an absolute URL.
module.exports = () => fetch('/somedata.json').then(console.log);
There is no such thing as a relative HTTP request. Any actual HTTP request must be a fully qualified URL. That's the HTTP specification. Because HTTP is stateless, there is no "base path" state associated with a given host.
Relative requests exist in a browser-type environment, but that's only because the browser pre-processes the request and, if it's not absolute, the browser adds on a default base path to make it absolute. All requests coming from a browser to a server are actually absolute URLs.
You could write your own function that does the same thing the browser does: you set a base URL on it and direct all your requests through it; it checks the URL of each request and, if it isn't fully qualified, prepends the base.
We could help you more specifically if you showed a real code example of the problem you're trying to solve. If it's a matter of dev versus production environments, then you would typically establish some variables at startup based on which type of environment you're running in, and then have your code use those variables whenever it forms a request.
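For example, here is a hedged sketch of such a wrapper; the BASE_URL name and its default are assumptions made for illustration.
// Chosen once at startup, e.g. from the environment.
const BASE_URL = process.env.BASE_URL || "http://localhost:3000";

// Resolves relative paths against the base, the same way a browser would.
const apiFetch = (path, options) =>
  fetch(new URL(path, BASE_URL).toString(), options);

// bat.js could then call apiFetch('/somedata.json') without knowing the host.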
I have frontend content that needs backend REST APIs to function. The APIs allow for cross-origin resource sharing (CORS). Typically we run the complete stack locally, including a user-mode Nginx instance tailored for development use, which serves the frontend content. The full stack, however, is a bit too much to expect part-time contractors to wrangle, so I'd like a very basic approach they can use to be effective and get stuff done.
Their current solution is horrible:
var port = location.port;
// base url of backend API
var url = window.location['origin'];
if (port != '443') {
  // assume we're running in "development" mode against a staging server
  url = "https://staging-server.somewhere.com";
}
Apart from the fact that this adds to frontend content that is already a bit kludgey, it precludes the static content from being hosted in a variety of other ways, including under a suite of functional and integration tests.
I have some ideas, like having them run a small web server that proxies to the backend APIs, but what I would really like is something simpler that lets me default url in a less kludgey way. Ideally, there would be some way of configuring url from a file ignored by version control (e.g., via .gitignore).
I was able to create a solution that works for all manner of local development as well as production releases.
I created some JavaScript, apiurl.js, that sits alongside all of our other JavaScript content. If the apiurl.js file is present, I fetch it and pass its responseText to eval(). The frontend can therefore change the URL based on the content of that file.
E.g., apiurl.js has:
var apiurl = "https://staging-server.somewhere.com";
And the JavaScript to handle the content:
eval(responseText);
if (typeof apiurl !== "undefined") {
  url = apiurl;
}
The apiurl.js file is untracked by version control and not used in production.
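For completeness, here is a hedged sketch of the loading step that feeds responseText to the snippet above; the synchronous XMLHttpRequest is my assumption for simplicity, not necessarily what was actually used.
// Assumed loader: request apiurl.js and hand its responseText to eval().
var request = new XMLHttpRequest();
request.open("GET", "apiurl.js", false); // synchronous, to keep the sketch short
request.send();
if (request.status === 200) {
  var responseText = request.responseText;
  eval(responseText);                    // defines apiurl if the file exists
  if (typeof apiurl !== "undefined") {
    url = apiurl;
  }
}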