To build the URLs in my XML sitemaps and RSS feeds correctly, I want to determine whether the page is currently being served over http or https, in a way that also works locally in development.
export default function handler(req, res) {
  const host = req.headers.host;
  const proto = req.connection.encrypted ? "https" : "http";
  // construct url for xml sitemaps
}
However, with the code above, the page still reports being served over http even on Vercel. I would expect it to be https there. Is there a better way to figure out http vs https?
Next.js API routes run behind a proxy that terminates TLS (offloading https to http), so the protocol the handler sees is http.
By changing the code to the following, I was able to check which protocol the proxy itself reports.
const proto = req.headers["x-forwarded-proto"];
However, this breaks in development, where you are not running behind a proxy, as well as in other deployment setups that do not involve a proxy. To support both use cases, I eventually ended up with the following code.
const proto =
  req.headers["x-forwarded-proto"] ||
  (req.connection.encrypted ? "https" : "http");
Whenever the x-forwarded-proto header is not present (undefined), we fall back to req.connection.encrypted to decide between http and https.
Now it works on localhost as well as on a Vercel deployment.
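For completeness, here is a minimal sketch (not the original handler) of how the detected protocol and host could be combined into absolute URLs, for example in a sitemap API route:

export default function handler(req, res) {
  const host = req.headers.host;
  const proto =
    req.headers["x-forwarded-proto"] ||
    (req.connection.encrypted ? "https" : "http");
  const baseUrl = `${proto}://${host}`;

  // Serve a tiny sitemap that uses the absolute base URL
  res.setHeader("Content-Type", "application/xml");
  res.end(
    `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>${baseUrl}/</loc></url>
</urlset>`
  );
}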
My solution:
export const getServerSideProps: GetServerSideProps = async (context: any) => {
  // Derive the protocol from the referer header of the incoming request
  // (note: the referer header may be absent, e.g. on direct navigation)
  const reqUrl = context.req.headers["referer"];
  const url = new URL(reqUrl);
  console.log(url.protocol); // "http:" or "https:"

  // Fetch data from an external API using the detected origin
  // const res = await fetch(`${url.origin}/api/projets`)
  // const data = await res.json()
  const data = { protocol: url.protocol };

  // Pass data to the page via props
  return { props: { data } };
};
I am using Ghost. I made an integration and I would like to hide the API key from the front end. I do not believe I can set restrictions on the Ghost CMS (that would also work), and I believe +page.js files also run in the browser, so I'm a little confused about how to achieve this.
The internal SvelteKit module $env/static/private (docs) is how you use secret API keys. SvelteKit will not allow you to import this module into client code, so it provides an extra layer of safety. Vite automatically loads your environment variables from .env files and process.env at build time and injects your key into your server-side bundle.
import { API_KEY } from '$env/static/private';
// Use your secret
SvelteKit has 4 modules for accessing environment variables:
$env/static/private (covered)
$env/static/public accessible by both server and client, injected at build time (docs)
$env/dynamic/private provided by your runtime adapter; only includes variables that do not start with your public prefix (which defaults to PUBLIC_) and can only be imported by server files (docs; see the sketch after this list)
$env/dynamic/public provided by your runtime adapter; only includes variables that do start with your public prefix (which defaults to PUBLIC_) (docs)
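As a minimal sketch of the dynamic private variant (assuming an API_KEY variable is set in the deployment environment), in a server-only +server.js file:

import { env } from '$env/dynamic/private';

export function GET() {
  // env.API_KEY is read at runtime and is never bundled into client code
  return new Response(JSON.stringify({ keyConfigured: Boolean(env.API_KEY) }), {
    headers: { 'Content-Type': 'application/json' }
  });
}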
You don't need to hide the key.
Ghost Content API Docs:
These keys are safe for use in browsers and other insecure environments, as they only ever provide access to public data.
One common way to hide your third-party API key(s) from public view is to set up proxy API routes.
The general idea is to have your client (browser) query a proxy API route that you provide/host, have that proxy route query the third-party API using your credentials (API key), and pass on the results from the third-party API back to the client.
Because the query to the third-party API takes place exclusively on the back-end, your credentials are never exposed to the client (browser) and thus not visible to the public.
In your use case, you would have to create 3 dynamic endpoint routes to replicate the structure of Ghost's API:
src/routes/api/[resource]/+server.js to match /posts/, /authors/, /tags/, etc.:
const API_KEY = <your_api_key>; // preferably pulled from ENV
const GHOST_URL = `https://<your_ghost_admin_domain>/ghost/api/content`;
export function GET({ params, url }) {
  const { resource } = params;
  const queryString = url.searchParams.toString();

  return fetch(`${GHOST_URL}/${resource}/?key=${API_KEY}${queryString ? `&${queryString}` : ''}`, {
    headers: {
      'Accept-Version': '5.0' // Ghost API Version setting
    }
  });
}
src/routes/api/[resource]/[id]/+server.js to match /posts/{id}/, /authors/{id}/, etc.:
const API_KEY = <your_api_key>; // preferably pulled from ENV
const GHOST_URL = `https://<your_ghost_admin_domain>/ghost/api/content`;
export function GET({ params, url }) {
  const { resource, id } = params;
  const queryString = url.searchParams.toString();

  return fetch(`${GHOST_URL}/${resource}/${id}/?key=${API_KEY}${queryString ? `&${queryString}` : ''}`, {
    headers: {
      'Accept-Version': '5.0' // Ghost API Version setting
    }
  });
}
src/routes/api/[resource]/slug/[slug]/+server.js to match /posts/slug/{slug}/, /authors/slug/{slug}/, etc.:
const API_KEY = <your_api_key>; // preferably pulled from ENV
const GHOST_URL = `https://<your_ghost_admin_domain>/ghost/api/content`;
export function GET({ params, url }) {
  const { resource, slug } = params;
  const queryString = url.searchParams.toString();

  return fetch(`${GHOST_URL}/${resource}/slug/${slug}/?key=${API_KEY}${queryString ? `&${queryString}` : ''}`, {
    headers: {
      'Accept-Version': '5.0' // Ghost API Version setting
    }
  });
}
Then all you have to do is call your proxy routes in place of your original third-party API routes in your app:
// very barebones example
<script>
  let uri;
  let data;

  async function get() {
    const res = await fetch(`/api/${uri}`);
    data = await res.json();
  }
</script>

<input name="uri" bind:value={uri} />
<button on:click={get}>GET</button>

{data}
Note that using proxy API routes will also have the additional benefit of sidestepping potential CORS issues.
I'm using the fetch API module in my Philips Hue project, and when I make a call to the local IP address (my hub) it produces the error in the title.
const fetch = require('node-fetch');
const gateway = "192.168.0.12";
const username = "username";
let getLights = function () {
  fetch(`https://${gateway}/api/${username}/lights`, {
    method: 'GET'
  }).then((res) => {
    return res.json();
  }).then((json) => {
    console.log(json);
  });
};
module.exports = {getLights};
Is there any SECURE fix? This will eventually go onto the public internet so I can access my lights from anywhere.
To skip the SSL certificate checks (note that this disables certificate verification entirely, so it is not a secure fix for anything exposed to the public internet), you can use this:
process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = 0;
It seems like you tried to access it using HTTPS. Most likely on your local network it is going to be HTTP, so changing https://${gateway}/api/${username}/lights to http://${gateway}/api/${username}/lights should work.
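For illustration, a minimal sketch of the same call over plain http (gateway and username as defined in the question; the helper name here is just for the example):

const fetchLightsOverHttp = () =>
  fetch(`http://${gateway}/api/${username}/lights`)
    .then((res) => res.json())
    .then((json) => console.log(json));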
If you're trying to keep it HTTPS, you will have to set up an SSL certificate authority on your network.
These may be useful sources if you're trying to get that done:
https://www.freecodecamp.org/news/how-to-get-https-working-on-your-local-development-environment-in-5-minutes-7af615770eec/
https://letsencrypt.org/docs/certificates-for-localhost/
I am trying to connect my ReactiveSearch application to an external Elasticsearch provider (not AWS). They don't allow making changes to the Elasticsearch cluster, and they are also running nginx in front of the cluster.
As per the ReactiveSearch documentation, I have cloned the proxy code and only made changes to the target and the authentication settings (as per the code below).
https://github.com/appbaseio-apps/reactivesearch-proxy-server/blob/master/index.js
The proxy starts successfully and is able to connect to the remote cluster. However, when I connect the ReactiveSearch app through the proxy I get the following error.
Access to XMLHttpRequest at 'http://localhost:7777/testing/_msearch?' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource
I repeated the same steps with my local Elasticsearch cluster using the same proxy code and got the same error.
I was wondering whether we need to make any extra changes to make sure the proxy is sending the right request to the Elasticsearch cluster. I am using the code below for the proxy.
const express = require('express');
const proxy = require('http-proxy-middleware');
const btoa = require('btoa');
const app = express();
const bodyParser = require('body-parser')
/* This is where we specify options for the http-proxy-middleware
* We set the target to appbase.io backend here. You can also
* add your own backend url here */
const options = {
  target: 'http://my_elasticsearch_cluster_address:9200/',
  changeOrigin: true,
  onProxyReq: (proxyReq, req) => {
    proxyReq.setHeader(
      'Authorization',
      `Basic ${btoa('username:password')}`
    );
    /* transform the req body back from text */
    const { body } = req;
    if (body) {
      if (typeof body === 'object') {
        proxyReq.write(JSON.stringify(body));
      } else {
        proxyReq.write(body);
      }
    }
  }
}
/* Parse the ndjson as text */
app.use(bodyParser.text({ type: 'application/x-ndjson' }));
/* This is how we can extend this logic to do extra stuff before
* sending requests to our backend for example doing verification
* of access tokens or performing some other task */
app.use((req, res, next) => {
  const { body } = req;
  console.log('Verifying requests ✔', body);
  /* After this we call next to tell express to proceed
   * to the next middleware function which happens to be our
   * proxy middleware */
  next();
})
/* Here we proxy all the requests from reactivesearch to our backend */
app.use('*', proxy(options));
app.listen(7777, () => console.log('Server running at http://localhost:7777 🚀'));
Regards
Yep, you need to apply CORS settings to your local elasticsearch.yml as well as to your ES service provider's cluster.
Are you using Elastic Cloud by any chance? They do allow you to modify Elasticsearch settings.
If so:
Login to your Elastic Cloud control panel
Navigate to the Deployment Edit page for your cluster
Scroll to your '[Elasticsearch] Data' deployment configuration
Click the User setting overrides text at the bottom of the box to expand the settings editor.
There are some example ES CORS settings about halfway down the ReactiveBase page that provide a great starting point.
https://opensource.appbase.io/reactive-manual/getting-started/reactivebase.html
You'll need to update the provided http.cors.allow-origin: setting based on your needs.
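For reference, the settings in question look roughly like the following in elasticsearch.yml (a hedged example only; the exact origin and headers depend on where your ReactiveSearch app is served from and what your provider allows):

http.cors.enabled: true
http.cors.allow-origin: "http://localhost:3000"
http.cors.allow-headers: X-Requested-With, X-Auth-Token, Content-Type, Content-Length, Authorization
http.cors.allow-credentials: true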
I was looking through my codebase today, the portion which sets up the server and found the following lines:
var https = require('https');
https.globalAgent.options.secureProtocol = 'TLSv1_2_method';
function createHttpsServer(app) {
  var https = require('https');
  var fs = require('fs');
  const options = {
    secureProtocol: 'TLSv1_2_method',
    // ...
  };
  var server = https.createServer(options, app);
  return server;
}
It looked like code duplication to me and I am not sure why these do different things (or do they?).
A colleague of mine told me that the top one is for controlling TLS in HTTPS requests made from Node.js, which in turn gives us access to the https agent used for all client HTTP requests.
This was also compared to the ServicePointManager in the .NET world.
So do these methods both do different things? At some point, our code does:
var server = protocol === 'https' ? createHttpsServer(app) : createHttpServer(app);
Wouldn't that be using the same server at the end of the day?
var server = protocol === 'https' ? createHttpsServer(app) : createHttpServer(app);
The above line serves the same app either way; the only difference is that if the protocol is 'https' it will run on an HTTPS server (which requires an SSL certificate), whereas if the protocol is 'http' it will run on a plain HTTP server.
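As a rough sketch of the distinction (the key/cert paths and the createHttpServer counterpart below are hypothetical, since they are referenced but not shown above): the globalAgent line constrains TLS for outgoing requests this process makes as a client, while the options passed to https.createServer constrain TLS for incoming connections to your own server.

var https = require('https');
var http = require('http');
var fs = require('fs');

// (1) Client side: affects OUTGOING requests made by this process,
// e.g. https.get('https://example.com', ...) will negotiate TLSv1.2.
https.globalAgent.options.secureProtocol = 'TLSv1_2_method';

// (2) Server side: affects INCOMING connections to the server you create.
// Key/cert paths here are placeholders.
function createHttpsServer(app) {
  const options = {
    secureProtocol: 'TLSv1_2_method',
    key: fs.readFileSync('key.pem'),
    cert: fs.readFileSync('cert.pem')
  };
  return https.createServer(options, app);
}

// The plain-HTTP counterpart referenced above; no TLS is involved at all.
function createHttpServer(app) {
  return http.createServer(app);
}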
This question already has answers here:
How do I host multiple Node.js sites on the same IP/server with different domains?
I would like to create a system of multiple web applications (probably around 3-5 Node.js + Express applications) on one server. I also only have one domain, so I figured I need to create subdomains for each of my applications apart from the main one.
My question is - how do I redirect users coming to certain subdomains to the right application? Do I need to use virtual machines and then redirect each user to a different VM (ip address) depending on their subdomain? How would I even do that?
Or could I just run every application on the same server just with a different port number? Or is there any other way that I'm not really thinking of?
Which way would be the cleanest and how would I implement it?
A very common way to achieve this is to run each of your node servers on a different port, and then set up a reverse proxy like nginx to have it forward requests based on matching the host header of incoming HTTP requests.
You could of course handle this manually with node, by checking the host header yourself and forwarding each request to the proper node server on the associated port.
Here is some Node code which illustrates what I'm referring to:
const http = require('http')
const url = require('url')

const port = 5555
const sites = {
  exampleSite1: 544,
  exampleSite2: 543
}

const proxy = http.createServer((req, res) => {
  const { pathname: path } = url.parse(req.url)
  const { method, headers } = req
  const hostname = headers.host.split(':')[0].replace('www.', '')

  // Respond with an error instead of throwing, which would crash the process
  if (!sites.hasOwnProperty(hostname)) {
    res.writeHead(404)
    return res.end(`invalid hostname ${hostname}`)
  }

  const proxiedRequest = http.request({
    hostname,
    path,
    port: sites[hostname],
    method,
    headers
  })

  proxiedRequest.on('response', remoteRes => {
    res.writeHead(remoteRes.statusCode, remoteRes.headers)
    remoteRes.pipe(res)
  })

  proxiedRequest.on('error', () => {
    res.writeHead(500)
    res.end()
  })

  req.pipe(proxiedRequest)
})

proxy.listen(port, () => {
  console.log(`reverse proxy listening on port ${port}`)
})