How to Connect Reactivesearch to an external Elasticsearch cluster? - node.js

I am trying to connect my ReactiveSearch application to an external Elasticsearch provider (not AWS). They don't allow making changes to the Elasticsearch cluster, and they also run nginx in front of the cluster.
As per the ReactiveSearch documentation I have cloned the proxy code and only changed the target and the authentication settings (as per the code below).
https://github.com/appbaseio-apps/reactivesearch-proxy-server/blob/master/index.js
The proxy starts successfully and is able to connect to the remote cluster. However, when I connect the ReactiveSearch app through the proxy I get the following error:
Access to XMLHttpRequest at 'http://localhost:7777/testing/_msearch?' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource
I repeated the same steps with my local Elasticsearch cluster using the same proxy code and get the same error.
I was just wondering whether we need to make any extra changes to make sure the proxy sends the right request to the Elasticsearch cluster. I am using the code below for the proxy.
const express = require('express');
const proxy = require('http-proxy-middleware');
const btoa = require('btoa');
const app = express();
const bodyParser = require('body-parser');

/* This is where we specify options for the http-proxy-middleware
 * We set the target to the appbase.io backend here. You can also
 * add your own backend url here */
const options = {
  target: 'http://my_elasticsearch_cluster_adddress:9200/',
  changeOrigin: true,
  onProxyReq: (proxyReq, req) => {
    proxyReq.setHeader(
      'Authorization',
      `Basic ${btoa('username:password')}`
    );
    /* transform the req body back from text */
    const { body } = req;
    if (body) {
      if (typeof body === 'object') {
        proxyReq.write(JSON.stringify(body));
      } else {
        proxyReq.write(body);
      }
    }
  }
};

/* Parse the ndjson as text */
app.use(bodyParser.text({ type: 'application/x-ndjson' }));

/* This is how we can extend this logic to do extra stuff before
 * sending requests to our backend, for example doing verification
 * of access tokens or performing some other task */
app.use((req, res, next) => {
  const { body } = req;
  console.log('Verifying requests ✔', body);
  /* After this we call next to tell express to proceed
   * to the next middleware function, which happens to be our
   * proxy middleware */
  next();
});

/* Here we proxy all the requests from reactivesearch to our backend */
app.use('*', proxy(options));

app.listen(7777, () => console.log('Server running at http://localhost:7777 🚀'));
Regards

Yes, you need to apply CORS settings in your local elasticsearch.yml as well as with your ES service provider.
Are you using Elastic Cloud by any chance? They do allow you to modify Elasticsearch settings.
If so:
Log in to your Elastic Cloud control panel
Navigate to the Deployment Edit page for your cluster
Scroll to your '[Elasticsearch] Data' deployment configuration
Click the User setting overrides text at the bottom of the box to expand the settings editor.
There are some example ES CORS settings about halfway down the ReactiveBase page that provide a great starting point:
https://opensource.appbase.io/reactive-manual/getting-started/reactivebase.html
You'll need to update the provided http.cors.allow-origin: setting based on your needs.
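For reference, the example settings on that page look roughly like this in elasticsearch.yml (the allowed origin here is an assumption; point it at wherever your ReactiveSearch app is served from):

http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "http://localhost:3000"
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true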

Related

How does one secure API keys on SvelteKit 1.0

I am using Ghost; I made an integration and I would like to hide the API key from the front-end. I do not believe I can set restrictions on the Ghost CMS (that would also work). And I do believe +page.js files are run in the browser as well, so I'm a little confused about how to achieve this.
The internal SvelteKit module $env/static/private (docs) is how you use secure API keys. SvelteKit will not allow you to import this module into client code, so it provides an extra layer of safety. Vite automatically loads your environment variables from .env files and process.env at build time and injects your key into your server-side bundle.
import { API_KEY } from '$env/static/private';
// Use your secret
SvelteKit has 4 modules for accessing environment variables:
$env/static/private (covered above)
$env/static/public: accessible by server and client, injected at build time (docs)
$env/dynamic/private: provided by your runtime adapter; only includes variables that do not start with your public prefix (defaults to PUBLIC_) and can only be imported by server files (docs)
$env/dynamic/public: provided by your runtime adapter; only includes variables that do start with your public prefix (defaults to PUBLIC_) (docs)
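As a minimal sketch of how the private module is used, a server-only route might look like this (the route path, PUBLIC_API_BASE variable, and query parameter are assumptions for illustration, not from the question):

// src/routes/api/posts/+server.js (hypothetical route)
import { API_KEY } from '$env/static/private';        // server-only; importing this from client code fails the build
import { PUBLIC_API_BASE } from '$env/static/public'; // assumed public variable, e.g. your Ghost URL

export async function GET() {
  // The secret is only read on the server; the browser only ever calls /api/posts
  const res = await fetch(`${PUBLIC_API_BASE}/ghost/api/content/posts/?key=${API_KEY}`);
  return new Response(await res.text(), {
    headers: { 'content-type': 'application/json' }
  });
}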
You don't need to hide the key.
Ghost Content API Docs:
These keys are safe for use in browsers and other insecure environments, as they only ever provide access to public data.
One common way to hide your third-party API key(s) from public view is to set up proxy API routes.
The general idea is to have your client (browser) query a proxy API route that you provide/host, have that proxy route query the third-party API using your credentials (API key), and pass on the results from the third-party API back to the client.
Because the query to the third-party API takes place exclusively on the back-end, your credentials are never exposed to the client (browser) and thus not visible to the public.
In your use case, you would have to create 3 dynamic endpoint routes to replicate the structure of Ghost's API:
src/routes/api/[resource]/+server.js to match /posts/, /authors/, /tags/, etc.:
const API_KEY = <your_api_key>; // preferably pulled from ENV
const GHOST_URL = `https://<your_ghost_admin_domain>/ghost/api/content`;

export function GET({ params, url }) {
  const { resource } = params;
  const queryString = url.searchParams.toString();
  return fetch(`${GHOST_URL}/${resource}/?key=${API_KEY}${queryString ? `&${queryString}` : ''}`, {
    headers: {
      'Accept-Version': '5.0' // Ghost API Version setting
    }
  });
}
src/routes/api/[resource]/[id]/+server.js to match /posts/{id}/, /authors/{id}/, etc.:
const API_KEY = <your_api_key>; // preferably pulled from ENV
const GHOST_URL = `https://<your_ghost_admin_domain>/ghost/api/content`;

export function GET({ params, url }) {
  const { resource, id } = params;
  const queryString = url.searchParams.toString();
  return fetch(`${GHOST_URL}/${resource}/${id}/?key=${API_KEY}${queryString ? `&${queryString}` : ''}`, {
    headers: {
      'Accept-Version': '5.0' // Ghost API Version setting
    }
  });
}
src/routes/api/[resource]/slug/[slug]/+server.js to match /posts/slug/{slug}/, /authors/slug/{slug}/, etc.:
const API_KEY = <your_api_key>; // preferably pulled from ENV
const GHOST_URL = `https://<your_ghost_admin_domain>/ghost/api/content`;

export function GET({ params, url }) {
  const { resource, slug } = params;
  const queryString = url.searchParams.toString();
  return fetch(`${GHOST_URL}/${resource}/slug/${slug}/?key=${API_KEY}${queryString ? `&${queryString}` : ''}`, {
    headers: {
      'Accept-Version': '5.0' // Ghost API Version setting
    }
  });
}
Then all you have to do is call your proxy routes in place of your original third-party API routes in your app:
// very barebones example
<script>
  let uri;
  let data;

  async function get() {
    const res = await fetch(`/api/${uri}`);
    data = await res.json();
  }
</script>

<input name="uri" bind:value={uri} />
<button on:click={get}>GET</button>

{data}
Note that using proxy API routes will also have the additional benefit of sidestepping potential CORS issues.

How to integrate OIDC Provider in Node.js

I tried to integrate oidc-provider into Node.js and I have some sample code. When I run this sample code it throws an error: unrecognized route or not allowed method (GET on /api/v1/.well-known/openid-configuration). The problem is the Issuer: with https://localhost:3000 it works fine, but when I change the Issuer to https://localhost:3000/api/v1/ it stops working. How do I fix this? I am also facing another issue when I implement oidc-provider in Node.js: the routes get overridden. How do I fix that?
Sample.js
const { Provider } = require('oidc-provider');

const configuration = {
  // ... see available options /docs
  clients: [{
    client_id: 'foo',
    client_secret: 'bar',
    redirect_uris: ['http://localhost:3000/api/v1/'],
    true_provider: "pcc"
    // + other client properties
  }],
};

const oidc = new Provider('http://localhost:3000/api/v1/', configuration);

// express/nodejs style application callback (req, res, next) for use with express apps, see /examples/express.js
oidc.callback()

// or just expose a server standalone, see /examples/standalone.js
const server = oidc.listen(3000, () => {
  console.log('oidc-provider listening on port 3000, check http://localhost:3000/api/v1/.well-known/openid-configuration');
});
Defining the Issuer Identifier with a path component does not affect anything route-wise.
You have two options: either mount the provider to a path (see docs), or define the actual paths you want each endpoint to be prefixed with (see docs).
I think you're looking for a way to mount, so the first one.
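A minimal sketch of the mounting approach, assuming a recent oidc-provider version where callback() returns the middleware function:

const express = require('express');
const { Provider } = require('oidc-provider');

const app = express();
const configuration = { /* same configuration object as in Sample.js */ };
const oidc = new Provider('http://localhost:3000/api/v1', configuration);

// Mount the provider under the /api/v1 prefix instead of baking the path into the issuer's routing.
// Discovery is then served at http://localhost:3000/api/v1/.well-known/openid-configuration
app.use('/api/v1', oidc.callback());

app.listen(3000);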

How to determine http vs https in nodejs / nextjs api handler

In order to properly build the URLs in my XML sitemaps and RSS feeds, I want to determine whether the webpage is currently served over http or https, so that it also works locally in development.
export default function handler(req, res) {
  const host = req.headers.host;
  const proto = req.connection.encrypted ? "https" : "http";
  // construct url for xml sitemaps
}
With the above code, however, even on Vercel the page still shows as being served over http. I would expect it to run as https. Is there a better way to figure out http vs https?
Next.js API routes run behind a proxy that offloads TLS and forwards plain http, so the protocol the handler sees is http.
By changing the code to the following I was able to first check at what protocol the proxy runs.
const proto = req.headers["x-forwarded-proto"];
However, this breaks in development where you are not running behind a proxy, or with a different way of deploying the solution that might also not involve a proxy. To support both use cases I eventually ended up with the following code.
const proto =
  req.headers["x-forwarded-proto"] ||
  (req.connection.encrypted ? "https" : "http");
Whenever the x-forwarded-proto header is not present (undefined) we fall back to req.connection.encrypted to determine if we should serve on http vs https.
Now it works on localhost as well as on a Vercel deployment.
my solution:
import { GetServerSideProps } from "next";

export const getServerSideProps: GetServerSideProps = async (context: any) => {
  // Read the protocol from the referring request's URL
  const reqUrl = context.req.headers["referer"];
  const url = new URL(reqUrl);
  console.log(url.protocol); // "http:" or "https:"
  // Fetch data from an external API using the detected origin, e.g.:
  // const res = await fetch(`${url.origin}/api/projets`)
  // const data = await res.json()
  // Pass data to the page via props
  return { props: { protocol: url.protocol } };
}

How to remove authentication for introspection query in GraphQL

Maybe this is a very basic question, so please bear with me. Let me explain what I am doing and what I really need.
EXPLANATION
I have created a GraphQL server using Apollo (the apollo-server-express npm module).
Here is the code snippet to give you an idea.
api.js
import express from 'express'
import rootSchema from './root-schema'
import { ApolloServer } from 'apollo-server-express'
.... // some extra code

const app = express.Router()
app.use(jwtaAuthenticator) // --> this code authenticates the Authorization header
.... // some more middlewares added

const graphQLServer = new ApolloServer({
  schema: rootSchema, // --> this is the root schema object
  context: context => context,
  introspection: true,
})
graphQLServer.applyMiddleware({ app, path: '/graphql' })
server.js
import http from 'http'
import express from 'express'
import apiRouter from './api' // --> the above file

const app = express()
app.use([some middlewares])
app.use('/', apiRouter)
....
....

export async function init () {
  try {
    const httpServer = http.createServer(app)
    httpServer
      .listen(PORT)
      .on('error', (err) => { setTimeout(() => process.exit(1), 5000) })
  } catch (err) {
    setTimeout(() => process.exit(1), 5000)
  }
  console.log('Server started --- ', PORT)
}

export default app
index.js
require('babel-core')
require('babel-polyfill')
require = require('esm')(module/* , options */)
const server = require('./server.js') // --> the above file
server.init()
PROBLEM STATEMENT
I am using node index.js to start the app. The app expects the Authorization header (JWT token) to be present at all times, even for the introspection query. That is not what I want: I want the introspection query to be resolvable even without a token, so that anyone can see the documentation.
Please shed some light and guide me on the best approach to do this. Happy coding :)
.startsWith('query Introspection') is insecure because any query can be named Introspection.
The better approach is to check the whole query.
First import graphql and prepare introspection query string:
const { parse, print, getIntrospectionQuery } = require('graphql');
// format introspection query same way as apollo tooling do
const introspectionQuery = print(parse(getIntrospectionQuery()));
Then in Apollo Server configuration check query:
context: ({ req }) => {
  // allow introspection query
  if (req.body.query === introspectionQuery) {
    return {};
  }
  // continue
}
There's a ton of different ways to handle authorization in GraphQL, as illustrated in the docs:
Adding middleware for express (or some other framework like hapi or koa)
Checking for authorization inside individual resolvers
Checking for authorization inside your data models
Utilizing custom directives
Adding express middleware is great for preventing unauthorized access to your entire schema. If you want to allow unauthenticated access to some fields but not others, it's generally recommended you move your authorization logic from the framework layer to the GraphQL or data model layer using one of the methods above.
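For instance, a resolver-level check could look like the sketch below; context.user here is a hypothetical value attached by your own authentication middleware, not something Apollo provides out of the box:

const resolvers = {
  Query: {
    // Public field: reachable without a token, so documentation tooling keeps working
    serverInfo: () => ({ version: '1.0.0' }),
    // Protected field: reject the request unless the auth middleware attached a user
    me: (parent, args, context) => {
      if (!context.user) {
        throw new Error('Not authenticated');
      }
      return context.user;
    },
  },
};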
So finally I found the solution and here is what I did.
Let me first tell you that there were 2 middlewares added on the base path, like this:
app // --> this is express.Router()
  .use(jwtMw) // ---> these are middlewares
  .use(otherMw)
jwtMw is the one that checks the authentication of the user, and since even the introspection query goes through this middleware, it used to authenticate that as well. So, after some research, I found this solution:
jwtMw.js
function addJWTMeta (req, res, next) {
  // check for null/undefined etc., then check for the introspection query,
  // ideally with a stronger condition (e.g. ignoring case)
  if (req.body.query.trim().startsWith('query Introspection')) {
    req.isIntrospection = true
    return next()
  }
  ...
  ...
  // ---> extra code to do authentication of the USER based on the Authorization header
}

export default addJWTMeta
otherMw.js
function otherMw (req, res, next) {
  if (req.isIntrospection) return next()
  ...
  ...
  // ---> extra code to do some other context creation
}

export default otherMw
So here in jwtMw.js we check whether the query is the introspection query; if so, we just add a flag to the req object and move on. In any middleware after jwtMw.js, whoever wants to check for the introspection query just checks for that flag (isIntrospection in this case), and if it is present and true, moves on. We can extend this to every middleware: if req.isIntrospection is set, just carry on, otherwise do the actual processing.
Happy coding :)

Wildcard subdomain info sharing between node server and Nuxt/Vue client

We are building a multi-tenant solution with NodeJS/Express for the back end and VueJS/Nuxt for the front end. Each tenant will get their own subdomain, like x.mysite.com, y.mysite.com, etc.
How can we make both our back end and front end read the subdomain name and share it with each other?
I have some understanding that in the Vue client we can read the subdomain using window.location. But I think that's too late. Is there a better way? And what about the Node/Express setup? How do we get the subdomain info there?
Note that the Node/Express server is primarily an API to interface with the database and handle authentication.
Any help or insight to put us on the right path is appreciated.
I'm doing something similar in my app. My solution looks something like this...
Front End: In router.vue, I check the subdomain using window.location.host to decide which routes to return. There are 3 options:
no subdomain loads the original routes (mysite.com)
portal subdomain loads the portal routes (portal.mysite.com)
any other subdomain loads the routes for the custom client subdomain, which can be anything and is dynamic
My routes for situation #3 looks like this:
import HostedSiteHomePage from 'pages/hostedsite/hosted-site-home'

export const hostedSiteRoutes = [
  { path: '*', component: HostedSiteHomePage }
]
The asterisk means that any unmatched route will fall back to it.
In your fallback page (or any page), you will want this (beforeMount is the important part here):
beforeMount: function () {
  var host = window.location.host
  this.subdomain = host.split('.')[0]
  if (this.subdomain === 'www') this.subdomain = host.split('.')[1]
  this.fetchSiteContent()
},
methods: {
  fetchSiteContent() {
    if (!this.subdomain || this.subdomain === 'www') {
      this.siteContentLoaded = true
      this.errorLoadingSite = true
      return
    }
    // send subdomain to the server and get back configuration object
    http.get('/Site/LoadSite', { params: { site: this.subdomain } }).then((result) => {
      if (result && result.data && result.data.success == true) {
        this.siteContent = result.data.content
      } else {
        this.errorLoadingSite = true
      }
      this.siteContentLoaded = true
    }).catch((err) => {
      console.log("Error loading " + this.subdomain + "'s site", err)
      this.errorLoadingSite = true
      this.siteContentLoaded = false
    })
  },
}
I store a configuration object as JSON in the database for each subdomain, return it to the client side for a matching subdomain, and then update the site to match the information/options in the config object.
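On the Express side, the endpoint the client calls is just a handler that looks up that config by subdomain; a rough sketch (the route name mirrors the /Site/LoadSite call above, and loadSiteConfigFromDb is a placeholder for your own database lookup):

const express = require('express');
const router = express.Router();

router.get('/Site/LoadSite', async (req, res) => {
  // subdomain sent by the client, falling back to the Host header (x.mysite.com -> "x")
  const subdomain = req.query.site || req.hostname.split('.')[0];
  const config = await loadSiteConfigFromDb(subdomain); // placeholder for your DB lookup
  if (!config) {
    return res.json({ success: false });
  }
  res.json({ success: true, content: config });
});

module.exports = router;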
Here is my router.vue
These domain names are supported:
mysite.com (loads main/home routes)
portal.mysite.com (loads routes specific to the portal)
x.mysite.com (loads routes that support dynamic subdomain, fetches config from server)
y.mysite.com (loads routes that support dynamic subdomain, fetches config from server)
localhost:5000 (loads main/home routes)
portal.localhost:5000 (loads routes specific to the portal)
x.localhost:5000 (loads routes that support dynamic subdomain, fetches config from server)
y.localhost:5000 (loads routes that support dynamic subdomain, fetches config from server)
import Vue from 'vue'
import VueRouter from 'vue-router'
// 3 different routes objects in routes.vue
import { portalRoutes, homeRoutes, hostedSiteRoutes } from './routes'

Vue.use(VueRouter);

function getRoutes() {
  let routes;
  var host = window.location.host
  var subdomain = host.split('.')[0]
  if (subdomain === 'www') subdomain = host.split('.')[1]
  console.log("Subdomain: ", subdomain)
  // check for localhost to work in dev environment
  // another viable alternative is to override /etc/hosts
  if (subdomain === 'mysite' || subdomain.includes('localhost')) {
    routes = homeRoutes
  } else if (subdomain === 'portal') {
    routes = portalRoutes
  } else {
    routes = hostedSiteRoutes
  }
  return routes;
}

let router = new VueRouter({
  mode: 'history',
  routes: getRoutes()
})

export default router
As you can see, I have 3 different sets of routes, one of which supports dynamic subdomains. I send a GET request to the server once I load the dynamic subdomain page and fetch a configuration object that tells the front end what that site should look like.
