Next.js Protect Routes Which Use Static Site Generation - security

In Next.js you have the option of server-side rendering (SSR) or static site generation (SSG). Throughout the Next.js docs and community, SSG is recommended over SSR for performance reasons.
I have a Next.js build that uses SSG throughout the application, using getStaticProps() etc. to generate the content/pages at build time by integrating with an external CMS (Prismic). I prefer this because, as mentioned, it gives a performance boost, and most of the codebase can then use the same data-fetching strategy (at build time).
However, some of these pages need to be protected, meaning they should only be accessible to authenticated users. We are using Auth0 to issue a JWT and have a React context provider store the status of the user (logged in or not) after validating the token in an API call.
But I am stuck: I don't seem to have a good way to protect SSG pages with this token. The recommended way here strikes me as odd, because as far as I can tell it is a client-side redirect that could be manipulated by the client (for example, the client could manipulate its local state/context or tamper with whatever is returned from notloggedincondition) to show the static content or otherwise short-circuit the redirect.
For reference, here is a paste of that code:
import {useEffect} from 'react'
import {useRouter} from 'next/router'

export async function getStaticProps() {
  return {
    props: {
      hello: 'Hello World'
    }
  }
}

export default (props) => {
  const router = useRouter()
  useEffect(() => {
    if (notloggedincondition) {
      router.push('/login')
    }
  }, [])
  return <h1>Rest of the page</h1>
}
Note that the <h1>Rest of the page</h1> could still be accessed by manipulating the client, so I want to secure the SSG content at the request/response level and do a server-side redirect (if need be), or something like that.
Short of something like this, is there no way to securely protect an SSG page without relying on client-side routing? Do I need to SSR the content even though it is really no different from the rest of the content, save for the requirement that only authenticated users can see it?
Perhaps I am missing something obvious, but it seems to me that even with a static site there should be a way to protect it without relying on client-side routing. That is to say, it does not seem intrinsic to the concept of a statically generated site that every page must be public, so I'm wondering about a secure way to do this in Next.js.

The best way I could find to accomplish this is via SWR fetches: statically generate a skeleton of the page with initial, unprotected data, then hydrate it with the protected content once the client-side fetch returns it.
This does require that you move the logic for gathering the protected page's data behind an API or CMS (anything that can verify the caller's permissions), and converting existing routes to use API calls isn't a trivial task, so YMMV.
Important note: your redirect would still need to be client-side, but you can avoid showing anything protected to an unauthorized user, since that is still controlled at the server level. Since your biggest concern appears to be a user actively trying to access protected content by manipulating client-side code, this seems to meet your risk-remediation criteria (they would still be unable to access protected content).
Example page code:
import {useEffect} from 'react'
import {useRouter} from 'next/router'
import useSWR from 'swr'

// Example fetcher: returns the JSON body of the API response
const fetcher = (url) => fetch(url).then((res) => res.json())

export async function getStaticProps() {
  return {
    props: {
      hello: 'Hello World'
    }
  }
}

export default (props) => {
  const router = useRouter()
  // Access the protected content via an API route,
  // provide the initial unprotected static content via the initialData param
  const { data } = useSWR('/api/protected-content', fetcher, { initialData: props })
  useEffect(() => {
    if (notloggedincondition) {
      router.push('/login')
    }
  }, [])
  return <h1>{ data.hello }</h1>
}
Then, an example API implementation at pages/api/protected-content:
export default async function ProtectedContent(req, res) {
  // Get a session object based on request cookies
  let session = await initUserSession(req, res);

  // If a session is available, return the protected content
  if (session.props.userSession) {
    return res.status(200).json({hello: 'This is protected content'});
  } else {
    return res.status(401).send("UNAUTHENTICATED");
  }
}

Next.js now has middleware that can redirect requests based on the request context.
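A minimal sketch of that approach (the appSession cookie name and the /protected matcher are assumptions, not something prescribed by the answer):
// middleware.js (project root) – rough sketch, not a drop-in solution
import { NextResponse } from 'next/server'

export function middleware(request) {
  // 'appSession' is just an assumed session-cookie name
  const session = request.cookies.get('appSession')

  if (!session) {
    // Server-side redirect: the statically generated HTML is never sent
    return NextResponse.redirect(new URL('/login', request.url))
  }
  return NextResponse.next()
}

// Only run the middleware for the protected section of the site
export const config = {
  matcher: ['/protected/:path*'],
}
Because the check runs on the server (or edge) before the response is sent, the client cannot short-circuit it the way it could with a client-side redirect.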

Encountered the same problem at work, and opted for a custom server that sits in front of Next.js.
We're using Express, but any Node framework would work. This lets us move authentication concerns to a separate server layer, which in turn forwards requests to Next.js.
https://nextjs.org/docs/advanced-features/custom-server
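As a rough sketch of that setup (verifyAuth0Jwt is an assumed helper; how you validate the token depends on your Auth0 configuration):
// server.js – custom server sketch; only requests that pass the auth check
// ever reach the statically generated pages under /protected
const express = require('express')
const next = require('next')

const app = next({ dev: process.env.NODE_ENV !== 'production' })
const handle = app.getRequestHandler()

app.prepare().then(() => {
  const server = express()

  server.use('/protected', (req, res, nextMiddleware) => {
    if (!verifyAuth0Jwt(req)) {        // assumed helper: validates the JWT cookie/header
      return res.redirect('/login')    // server-side redirect
    }
    nextMiddleware()
  })

  // Everything else falls through to Next.js as usual
  server.all('*', (req, res) => handle(req, res))

  server.listen(3000)
})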

Related

How to make a private call when using SSR Nuxt?

I am writing a headless solution for a WordPress website and noticed that, for one particular endpoint, I need to authenticate in order to pull some data that will be used publicly. But I'm concerned that where I'm using the credentials will expose them to the web.
In my store/index.js I use the nuxtServerInit action method to execute some actions and I pass them some objects they need to fulfill their tasks:
async nuxtServerInit ({ dispatch }, { $axios, app }) {
  await dispatch('initialize', { $axios, app })
},
$axios is passed because it will be used to query the API, and app is passed to help build the options to authenticate the request.
Is this a security vulnerability in Nuxt SSR? I think it is. If so, what are the only valid areas where you can use secrets? asyncData()?
If you're using SSR, you can use the privateRuntimeConfig runtime object and pass your secret in the nuxt.config.js file
export default {
  privateRuntimeConfig: {
    apiSecret: process.env.API_SECRET
  }
}
If you read the documentation of nuxtServerInit, you can see that it is a "Vuex action that is called only on server-side to pre-populate the store".
Since this method is server-side only, you can use apiSecret (in my example) and it should be totally fine security-wise.
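As a sketch of what that can look like (the endpoint URL and header are assumptions), the secret is read from $config inside nuxtServerInit, so it never reaches the client bundle:
// store/index.js – minimal sketch; this action runs on the server only
export const actions = {
  async nuxtServerInit({ commit }, { $axios, $config }) {
    // $config.apiSecret comes from privateRuntimeConfig, so it is only
    // available here on the server, never in client-side code
    const data = await $axios.$get('https://example.com/wp-json/my/endpoint', {
      headers: { Authorization: `Bearer ${$config.apiSecret}` }
    })
    commit('setPublicData', data)
  }
}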
PS: keep in mind that everything beyond what is generated on the server (that is, with Node.js or nuxtServerInit) is "public". So your Vue client-side lifecycle hooks are public: mounted(), fetch(), asyncData(), because they will be visible in your browser's devtools.
Also, does your endpoint really need to be that critical? If so, nuxtServerInit is the way to go. If you need to fetch some more data in a "private" way, you'll need to proxy it through some backend to hide the sensitive info and return only the useful public data.
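One way to do that proxying in Nuxt is a serverMiddleware route that keeps the secret on the server and returns only the public fields; a minimal sketch (the endpoint and field names are assumptions):
// server-middleware/wp-proxy.js – registered in nuxt.config.js, e.g.
// serverMiddleware: [{ path: '/api/wp', handler: '~/server-middleware/wp-proxy.js' }]
const axios = require('axios')

module.exports = async function (req, res) {
  const { data } = await axios.get('https://example.com/wp-json/my/endpoint', {
    headers: { Authorization: `Bearer ${process.env.API_SECRET}` }
  })

  // Only forward the fields that are meant to be public
  res.setHeader('Content-Type', 'application/json')
  res.end(JSON.stringify({ items: data.items }))
}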

How do I properly proxy a request to the Google Maps Embed API?

I'm attempting to use the Google Maps Embed API to embed a map matching an address that a user enters. Following the developer guide, I acquired an API Key to add to my API requests. I'm able to successfully pull up a map when I request it via the "src" attribute of an iframe in my React client, like so:
<iframe
...
src={`https://www.google.com/maps/embed/v1/${mode}?key=${apiKey}&q=${encodedAddressQuery}`}
>
But this leaves my API key exposed to the client.
In Google's API Key Best Practices, it's recommended that the API key be stored in an environment variable and that a proxy server be used to safeguard keys. However when I try proxying the request (code below), it seems to return the appropriate map for a split second, but then the iframe replaces the map with an "Oops! Something went wrong" message, and the browser console displays this error:
Google Maps JavaScript API error: UnauthorizedURLForClientIdMapError
https://developers.google.com/maps/documentation/javascript/error-messages#unauthorized-url-for-client-id-map-error
Your site URL to be authorized: http://127.0.0.1:3001/api/maps?parameters=[my_encoded_parameters]
I'm just developing locally at the moment, so I've tried registering http://127.0.0.1:3001/* as an authorized URL as the error documentation suggests, and I've also tried removing all website restrictions on the API key just to see if I was authorizing the wrong URL, but both of those attempts still produced the same error.
I haven't found many resources on setting up a proxy other than this Google Codelabs project, but I haven't been able to pick anything out of it to help with this issue.
Here's the code I'm using in my React front end:
<iframe
...
src={`http://127.0.0.1:3001/api/maps?parameters=${encodedAddressQuery}`}
>
And in my Express Node.js back end:
router.get('/api/maps', async (req: Request, res: Response) => {
  try {
    const parameters = req.query.parameters;
    const map = await MapDataAccessObject.getOne(parameters);
    return res.status(OK).send(map.data);
  } catch (err) {
    ...
  }
});

export class MapDataAccessObject {
  public static async getOne(parameters: string) {
    const apiUrl = this.getMapsEmbedUrl(parameters);
    const map = await axios.get(apiUrl);
    return map;
  }

  private static getMapsEmbedUrl(parameters: string) {
    const encodedParams = encodeURI(parameters);
    return `https://www.google.com/maps/embed/v1/place?key=${process.env.MAPS_API_KEY}&q=${encodedParams}`;
  }
}
I'm running my React front end and my Node.js back end in Docker containers.
My best guess is that I'm missing a request header that the Maps API is expecting, but I haven't found any documentation on that, or maybe I'm misunderstanding something more fundamental. Any guidance would be very much appreciated.
There is no way to protect your Google Maps key on a website from being "spied on", because it is always public. The only way to protect your key from foreign usage is to restrict its access to only the allowed domains/IP addresses - so if it is used by someone else, it will not work or consume any of your credits (Google will show an error message).
https://developers.google.com/maps/documentation/javascript/get-api-key

Angular 2 (4/5) check if user authenticated, best practices

I am new to Angular and am implementing authentication for users.
Most suggestions on the web propose saving the session/username in local storage and, when the user is back in the app, checking local storage in order to display the proper navigation (in my navbar I have different nav buttons for private and public views).
However, I found this solution to have some drawbacks. For example, if the session on the server expired, or local storage was manually added, then when the app initialises it will show the wrong buttons in the navbar.
After that I came to a solution: before showing the navbar buttons, use a service to send a request to the server to verify whether the user from local storage still has an active session. Only after that would I display the nav buttons.
Here comes the question: is this the most efficient way to check whether the user from local storage is logged in and the session is active?
There is another way I was thinking of, but I didn't find a solution for it.
Since my Angular web app and Node.js server are located in two different places, is it possible for the web app to check the authentication status (make a request from the web app server to my Node.js server) when index.html is requested, and respond with a pre-rendered navbar and user status (logged in or not)?
Thanks.
P.S. I am using PassportJS and Express on server side.
The best practice is to use an AuthGuard and implement CanActivate to check whether the user can view a particular part of the application. Normally an authentication service is also used to let the user log in to the system and gain an access token.
This access token is then sent as an Authorization header on each request to the server (this is how client and server stay in sync).
On load you will need to check for the JWT (or any other type of token), which contains the user information plus the session timeout.
If the token is invalid you simply redirect the user to login; otherwise the user is allowed to go where they wanted.
A practical example can be found here.
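A minimal guard along those lines might look like this (AuthenticationService.isLoggedIn() is an assumed method, not part of the answer):
// auth.guard.ts
import { Injectable } from '@angular/core';
import { CanActivate, Router } from '@angular/router';
import { AuthenticationService } from './authentication.service';

@Injectable()
export class AuthGuard implements CanActivate {
  constructor(private auth: AuthenticationService, private router: Router) {}

  canActivate(): boolean {
    // isLoggedIn() is assumed to mean "a valid, non-expired token exists"
    if (this.auth.isLoggedIn()) {
      return true;
    }
    this.router.navigate(['/login']);
    return false;
  }
}
The guard is then attached to the private routes, e.g. { path: 'private', component: PrivateComponent, canActivate: [AuthGuard] }.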
To have the navbar showing different elements for authenticated and non-authenticated users, one possible solution would be:
Use an "/auth-check" request in authentication.service.ts, which emits an event with the result of the authorisation check for the current user every time it runs:
...
interface ShareObj { [id: string]: any; }
...
currentUserId: ShareObj = {};
currentUserUsername: ShareObj = {};
public authenticatedBehavior = new ReplaySubject(1);

authCheck(): any {
  return this.http.get('/api/auth-check')
    .map((resp: any) => {
      if (resp.authanticated) {
        this.currentUserId['global'] = resp.user.id;
        this.currentUserUsername['global'] = resp.user.username;
        this.authenticatedBehavior.next(true);
      } else {
        this.authenticatedBehavior.next(false);
        this.currentUserId['global'] = null;
        this.currentUserUsername['global'] = null;
      }
      return resp;
    })
    .catch(e => {
      this.authenticatedBehavior.next(false);
      this.currentUserId['global'] = null;
      this.currentUserUsername['global'] = null;
    });
}
So, in navbar.component.ts there should be a listener for this event:
ngOnInit() {
  this.authService.authenticatedBehavior
    .subscribe(
      data => {
        // do change of UI of navbar depending if user logged in or not
      }
    );
}
Have an error-interceptor.ts file, where you "catch" all failed requests and check them for an Unauthorised (401) response. If you catch such a response, call authCheck() in authentication.service.ts to confirm that the current user's session expired and to notify all components that listen to authenticatedBehavior.
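A rough sketch of such an interceptor, written with the HttpClient/pipeable-operator syntax (the exact service wiring is an assumption):
// error-interceptor.ts
import { Injectable } from '@angular/core';
import { HttpInterceptor, HttpRequest, HttpHandler, HttpEvent, HttpErrorResponse } from '@angular/common/http';
import { Observable, throwError } from 'rxjs';
import { catchError } from 'rxjs/operators';
import { AuthenticationService } from './authentication.service';

@Injectable()
export class ErrorInterceptor implements HttpInterceptor {
  constructor(private auth: AuthenticationService) {}

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    return next.handle(req).pipe(
      catchError((err: HttpErrorResponse) => {
        if (err.status === 401) {
          // Re-run the auth check so every subscriber of
          // authenticatedBehavior learns that the session expired
          this.auth.authCheck().subscribe();
        }
        return throwError(err);
      })
    );
  }
}
It still has to be registered under HTTP_INTERCEPTORS in the module providers.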

How to implement Google API with React, Redux and Webpack

I'm trying to get Google Calendar events into my React Redux app.
I've tried using googleapis and google-auth-library, but webpack throws errors because googleapis was built to run server-side and bundle.js is referenced from the client.
So I've read a few forums about these errors and they all point to using Google's JS client library instead.
I understand how to implement this in a Java or PHP app (I'm old... 35 ;) but I'm new to React/Redux and I'm looking for the best way to implement this.
I'm trying to fetch the events from my calendar in my actions.js. I tried including <script src="https://apis.google.com/js/api.js"></script> in my HTML header and then using gapi.load() from actions.js. I also tried creating an api.js file and referencing that with require('./api'). I also tried using the CLI commands from the Node.js Quickstart guide to get an access_token and then just calling the Google API directly with axios, but I'm getting a 403. I'm thinking I'm just not providing the proper headers, but that wouldn't be best practice anyway.
My question is basically: how do I reference Google's JS client library from my actions.js file while adhering to Redux standards?
You're on the right track by including the official Google client API in the HTML header. It's less than ideal -- it would be nice if Google provided the (browser) client API as an npm module that you could import. But they don't (that I see), so I think what you're doing is fine.
Then there's the question of "how do I use it in a way that's React/Redux friendly?" Redux is a mechanism for managing the state of your application. The Google API is not part of your application (though what you do with it may inform the state of your application).
It's easy to verify that you have access to the Google API: you can just make a call from the componentDidMount method of one of your components, and do a console log:
class MyComp extends React.Component {
  componentDidMount() {
    // this is taken directly from Google documentation:
    // https://developers.google.com/api-client-library/javascript/start/start-js
    function start() {
      // 2. Initialize the JavaScript client library.
      gapi.client.init({
        'apiKey': 'YOUR_API_KEY',
        // clientId and scope are optional if auth is not required.
        'clientId': 'YOUR_WEB_CLIENT_ID.apps.googleusercontent.com',
        'scope': 'profile',
      }).then(function() {
        // 3. Initialize and make the API request.
        return gapi.client.request({
          'path': 'https://people.googleapis.com/v1/people/me',
        })
      }).then(function(response) {
        console.log(response.result);
      }, function(reason) {
        console.log('Error: ' + reason.result.error.message);
      });
    }
    // 1. Load the JavaScript client library.
    gapi.load('client', start);
  }
}
If you don't see what you expect on the console, somehow gapi isn't getting loaded as you expect. If that happens, you'll have a more specific question you can ask!
If you do get a response, you now know how to call GAPI...but then how to make use of it in a Redux-friendly way?
When you make a GAPI call, you probably want to modify your application's state in some way (otherwise why would you be doing it?) For example, you might invoke the auth flow, and when GAPI returns success, your application state now has loggedIn: true or similar (possibly with lots of other state changes). Where you make the GAPI call is up to you. If you want to do it when the component loads, you should do it in componentDidMount. You also may commonly be making the GAPI call in response to a user action, such as clicking on a button.
So the typical flow would be something like this:
// either in componentDidMount, or a control handler, usually:
someGapiCall()
  .then(result => {
    this.props.onGapiThing(result.whatever)
  })
Where this.props.onGapiThing is a function that dispatches an appropriate action, which modifies your application state.
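For illustration, one hypothetical way to wire up onGapiThing with react-redux (the action type and prop names are assumptions, not part of GAPI):
// Container wiring for the hypothetical onGapiThing prop
import { connect } from 'react-redux'
import MyComp from './MyComp'

const mapDispatchToProps = (dispatch) => ({
  onGapiThing: (whatever) =>
    dispatch({ type: 'GAPI_THING_RECEIVED', payload: whatever })
})

export default connect(null, mapDispatchToProps)(MyComp)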
I hope this overview helps...feel free to follow up with more specific questions.
Can you try this library, which I used to load external libraries and modules in my React app when I couldn't find an npm module for them:
https://github.com/ded/script.js/
So your code will be like this:
import $script from 'scriptjs';

$script('https://apis.google.com/js/api.js', function () {
  // Put your Google API functions here as the callback
});
I'm going to answer my own question despite some very good correct answers.
#MattYao answered my actual question of how to get a js script available for reference in my actions.js file.
#Ethan Brown gave a very detailed answer that showed some excellent flow possibilities.
#realseanp changed the scope, but it is still a valid answer.
I tried all of the above and they worked.
So I'm not sure what I was doing wrong but I was finally able to access the gapi object from actions.js by just adding <script src="https://apis.google.com/js/api.js"></script> to my index head.
I'm using pug so it looks like this:
doctype
html
  head
    title MyTitle
    link(rel='stylesheet' href='/static/css/main.css')
    link(rel='stylesheet' href='/static/css/react-big-calendar.css')
    script(src='https://apis.google.com/js/api.js' type='text/javascript')
  body
    div(id='app')
    script(src='/static/bundle.js' type='text/javascript')
Here is my component file:
import React from 'react'
import BigCalendar from 'react-big-calendar';
import moment from 'moment';
import { connect } from 'react-redux'
import { fetchEvents } from '../actions/actions'

BigCalendar.momentLocalizer(moment);

@connect((store) => {
  return {
    events: store.events.events
  }
})
export default class Calendar extends React.Component {
  componentWillMount() {
    this.props.dispatch(fetchEvents())
  }

  render() {
    return (
      <div>
        <BigCalendar
          events={this.props.events}
          startAccessor='startDate'
          endAccessor='endDate'
          style={{height: 800}}
        />
      </div>
    )
  }
}
And then I was able to get this working in my actions.js file
export function fetchEvents() {
  return (dispatch) => {
    function start() {
      // 2. Initialize the JavaScript client library.
      gapi.client.init({
        'apiKey': API_KEY,
        // clientId and scope are optional if auth is not required.
        'clientId': CLIENT_ID,
        'scope': 'profile',
      }).then(function() {
        // 3. Initialize and make the API request.
        return gapi.client.request({
          'path': 'https://www.googleapis.com/calendar/v3/calendars/MY_EMAIL@gmail.com/events?timeMax=2017-06-03T23:00:00Z&timeMin=2017-04-30T00:00:00Z',
        })
      }).then((response) => {
        let events = response.result.items
        dispatch({
          type: 'FETCH_EVENTS_FULFILLED',
          payload: events
        })
      }, function(reason) {
        console.log(reason);
      });
    }
    // 1. Load the JavaScript client library.
    gapi.load('client', start)
  }
}
I had to make my calendar public to access it this way. So now I'm going to work on the oauth2 stuff :/
I would load all the Google stuff in my index file before loading my webpack bundle (option 1). Then I would use redux-saga to call the Google APIs. Loading the Google code before your webpack bundle will ensure everything is ready to go when you call the API from the saga.
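A rough redux-saga sketch of that idea (loadGapiEvents is an assumed helper that wraps gapi.client.request in a Promise):
// sagas.js
import { call, put, takeEvery } from 'redux-saga/effects'

function* fetchEventsSaga() {
  try {
    const events = yield call(loadGapiEvents) // assumed helper around gapi.client.request
    yield put({ type: 'FETCH_EVENTS_FULFILLED', payload: events })
  } catch (err) {
    yield put({ type: 'FETCH_EVENTS_REJECTED', payload: err })
  }
}

export function* rootSaga() {
  yield takeEvery('FETCH_EVENTS', fetchEventsSaga)
}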
Try this package.
It looks like it is updated.
https://github.com/QuodAI/tutorial-react-google-api-login

redux onEnter dispatch could work on server rendering?

I have a working client-side app which uses redux-router.
I dispatch the initial state of the user page from my API.
My routes file:
export default function ({ dispatch, getState }) {
  function getUser(nextState, replaceState) {
    dispatch(getUserData(nextState.params.id));
  }

  return (
    <Route path="/" component={App}>
      <Route path="user/:id" component={User} onEnter={getUser}/>
      <Route path="*" component={NoMatch}/>
    </Route>
  );
}
So on the client it works great.
I would like to get my markup rendered to string after the getUserData dispatch comes back with data.
That's my server matching and rendering (from the official server-rendering example):
app.use((req, res) => {
  const store = reduxReactRouter({ routes, createHistory: createMemoryHistory })(createStore)(reducer);
  const query = qs.stringify(req.query);
  const url = req.path + (query.length ? '?' + query : '');

  store.dispatch(match(url, (error, redirectLocation, routerState) => {
    if (error) {
      console.error('Router error:', error);
      res.status(500).send(error.message);
    } else if (redirectLocation) {
      res.redirect(302, redirectLocation.pathname + redirectLocation.search);
    } else if (!routerState) {
      res.status(400).send('Not Found');
    } else {
      res.status(200).send(getMarkup(store));
    }
  }));
});
Is it possible to get this working while using onEnter, or does it only fire in the browser?
Would appreciate any help, guys!
Thanks.
I don't understand - I assume getUser is async, and rendering is always synchronous in React. You'd have to get the data ahead of time. Relying on onEnter is a nice idea, but it wouldn't work.
Basically...
First: you need to expose the methods that grab data so they can be called on the server, outside of Flux. The action/dispatcher/store pattern doesn't work well in a request/response cycle, so make sure you have a file with the API calls.
Second: put all your routes in a JSON file. React Router (or whatever router) reads that JSON and adds the routes in a loop with their handlers, and the server-side code reads the same routes and adds Express routes for the same paths pointing to the handler method. Each route in the JSON also contains a reference to the data the component needs for a real initial render (i.e. which async calls). You create an empty copy of the initial state object, each handler on the Express side performs all the calls to the relevant API methods (from the JSON), and when a Promise.all over "get the data and fill it into the state" resolves, you render.
The render now contains the relevant data.
Third: you need to figure out how to pass state to the server - whether the user is logged in, what they can do, and so on. I recommend having a second server act as a cache, since rendering is CPU-bound in React, so you need caching. We cache based on route, user status, device and a few other basic things.
Fourth: you point your non-rendering server (the caching one) to the rendering one and forward requests. Hopefully requests hit the cache; the non-rendering server should "fall back" to client-side rendering only if the rendering server fails. This also lets you "hot swap" the rendering server on deploy.
You also need some way to deliver the JavaScript itself from the rendering server (so there is a single source of truth); that shouldn't be too big of an issue.
There are a lot more delicate parts in the flow, but that's pretty much how we do it, and we have sub-second full renders, which I think is nice. That went down from 5 seconds when our site was mostly Angular and rendered on the client.
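A condensed sketch of the "fetch ahead of time, then render" part, reusing getMarkup from the question and ignoring the router wrapper (fetchUserData is an assumed helper):
// Rough sketch: resolve every async dependency of the route, then render
app.get('/user/:id', async (req, res) => {
  const store = createStore(reducer)

  await Promise.all([
    fetchUserData(req.params.id).then(user =>
      store.dispatch({ type: 'USER_LOADED', payload: user })
    ),
  ])

  // Rendering is synchronous, so by now the store already holds the data
  res.status(200).send(getMarkup(store))
})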
