Hide secret keys in network payload - node.js

I have a Vue Storefront which, out of the box, consists of a Nuxt.js front-end and an Express.js back-end.
In this project I created a custom Server Middleware (the Express.js part) that makes an Axios call. My entire Vue Storefront project is hosted and deployed on a server where I also store the secret keys for the Axios call as environment variables. Whenever I request data via the Axios call on the deployed website, I can still see my secret keys in the payload in the browser console.
Can these keys be hidden? The call is made in the VSF Server Middleware (which is an Express.js server under the hood), and my secret keys are defined on the server itself, not in a .env file.
The official docs also state the following about the server middleware:
Securely store credentials on the server without exposing them to the end-users of your application.
I also have Server Side Rendering enabled, in case that has any effect on this.

As explained in my previous answer here, you cannot really hide anything on a client side app.
The fact that you have ssr: true and target: 'server' enables the use of a serverMiddleware. Nuxt also runs an Express server under the hood, so you could technically still hide things there; the configuration is detailed here. Please pay attention to the whole answer (especially the gotcha at the end).
The TL;DR is as mentioned above: you'll need some kind of proxy to hide the keys (a rough sketch follows the list below). You could do that:
directly with Nuxt2, but it's kinda tricky and hard to work with overall, on top of paying for a whole Node.js server and risking a mistake that exposes those tokens at some point
with another Node.js server, to properly separate the concerns and use it as a proxy; it can be pretty light (no need for a beefy configuration) and not as expensive price-wise
with a serverless function, which could be the best idea here because it's light, cheap and you don't need to manage anything (send your query there and it will proxy the request with the secret token), but it can be a bit annoying regarding cold starts
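To make the proxy idea concrete, here is a minimal sketch of an Express-style handler (the same shape a Nuxt serverMiddleware takes) that keeps the secret on the server. The route, the upstream URL and the API_SECRET_KEY variable name are illustrative placeholders, not VSF's actual API.

```js
// Minimal sketch: the browser calls this endpoint with its query only, and the
// secret key is attached here, server-side, so it never appears in the client's
// network payload. Names and URLs are placeholders.
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

app.post('/api/products', async (req, res) => {
  try {
    const response = await axios.post(
      'https://third-party.example.com/v1/products', // placeholder upstream API
      req.body,
      { headers: { Authorization: `Bearer ${process.env.API_SECRET_KEY}` } }
    );
    res.json(response.data);
  } catch (err) {
    res.status(502).json({ error: 'Upstream request failed' });
  }
});

module.exports = app; // e.g. mounted as a serverMiddleware, or run standalone
```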

Related

How can i hide API Key in an Electron JS Project?

I'm working on an Electron JS app and I need to connect it to an API.
Knowing that the source code of an Electron JS app is visible, it's a huge security risk to leave the API key there!
How can I solve this problem?
Instead of having the electron app (whether from the page or from the main process) make a request to the API directly, you can have it make a request to your own server instead - then, your server can make the request to the API, so that the key is only visible to your server, and isn't exposed publicly anywhere.
This will also let you gate requests from clients - if, for example, the credentials a client sends don't match what you need, or if they make too many requests in too short a time, you can cut them off.
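A rough sketch of that idea, assuming an Express server you control: the x-client-token check, the naive in-memory rate limit, and the environment variable names are all illustrative placeholders, not a production-ready gate.

```js
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

const hits = new Map(); // naive per-IP request counter, for illustration only

app.post('/proxy/search', async (req, res) => {
  // Gate the request: reject unknown clients and overly chatty ones.
  if (req.get('x-client-token') !== process.env.CLIENT_TOKEN) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  const count = (hits.get(req.ip) || 0) + 1;
  hits.set(req.ip, count);
  if (count > 100) {
    return res.status(429).json({ error: 'Too many requests' });
  }

  // Forward to the real API; the key only ever lives on this server.
  try {
    const { data } = await axios.get(process.env.THIRD_PARTY_URL, {
      params: req.body,
      headers: { 'x-api-key': process.env.API_KEY },
    });
    res.json(data);
  } catch (err) {
    res.status(502).json({ error: 'Upstream request failed' });
  }
});

app.listen(3000);
```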
You can't. If an API key is shared with you, it's probably designed to be used in the backend.
The solution is to create a backend API that proxies the API calls. Such a proxying API should use authentication, so each user must send individual credentials.
How about obfuscating it using a utility like js-beautify?

How to forward user sessions in Next.js + Graphql decoupled architecture?

I'm building a next.js project and while I usually would just use the "Custom Express Server" method to implement my graphql API (using apollo-server-express), I thought that it might be a good idea if I decoupled the next.js project from the graphql API so that each of the servers are hosted on different machines.
But usually I would implement any session-related logic in the graphql API, using something like graphql-passport; I figured that's good practice because if I ever choose to add another frontend (maybe a mobile app or something) they can share the same session logic. But given that I'm server side rendering content with next.js, how do I forward the user's session info to the graphql server? Because the next.js server shouldn't have to redo authentication, right?
Let me know if there are any flaws in the architecture too, I'm kind of new to this.
Using the Next server to run the GraphQL service is certainly not a good idea, so yes, do separate the two.
Letting the Next server SSR-render pages with user-specific content using the user's session is probably not a good idea either, unless you have some specific use case that requires the served HTML pages to already contain the user-specific data. The reasons for this are:
SSR will require lots of server-side computation, since every page always has to be re-rendered.
NextJS is moving away (since v9.3) from the getInitialProps() way of doing things towards using getStaticProps() to generate a page that is common for all users and which can load its session-dependent stuff straight from the GraphQL API once it is displayed on the client device.
This approach will generally have higher performance and scale much better.
Should you really want to go the "SSR with user session data" route, you start in the getServerSideProps(context) method, where context.req is the actual request, which will have all your session data in it (cookies or headers).
You can then extract this session data from the req and pass it on to the GraphQL requests that require authentication.
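As a hedged sketch of that forwarding step (the GraphQL URL and query are placeholders, and this assumes a server-side fetch is available, which Next.js polyfills in recent versions):

```js
// Forward the browser's session cookie from the Next server to the separate
// GraphQL API, so the GraphQL server can authenticate the request itself.
export async function getServerSideProps(context) {
  const response = await fetch('https://graphql.example.com/graphql', { // placeholder URL
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      cookie: context.req.headers.cookie || '', // pass the session cookie straight through
    },
    body: JSON.stringify({ query: '{ me { id name } }' }), // placeholder query
  });
  const { data } = await response.json();
  return { props: { user: (data && data.me) || null } };
}
```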

Accessing Header values in React

I am new to React, and I believe this is a unique situation.
So, I am creating a React Website create-react-app. When I run npm start, it will run on localhost:3000.
My next requirement is to have Authentication & Authorization (AA). It is already created by another group.
What they give me is a small proxy website. Let's say it runs on localhost:4000.
When I want to go to localhost:3000/Home, I will type localhost:4000/Home.
localhost:4000 will take care of AA, and if everything works out, it will forward to localhost:3000/Home.
The login information is added to an HttpOnly & Secure cookie by localhost:4000.
My question is: how can React access it?
One of the solutions is to use react-cookie, but it cannot access HttpOnly & Secure cookies.
The workaround is to create another proxy. React with Express
So, I create another proxy at localhost:5000.
The workflow becomes: localhost:4000 handles AA, the result is forwarded to localhost:5000, and localhost:5000 decides whether to forward to localhost:3000 or redirect to an error page. This works as long as my original website doesn't need to use those result cookies.
But the React website needs to make some POST calls to a separate website at localhost:9000, and React needs to pass the same AA result cookies to that website. If not, it will ask the user to log in again.
I cannot change the cookies' properties to non-HttpOnly, or copy them to another place.
So, the question is how to pass those cookies to localhost:9000?
Am I on a completely wrong track? I couldn't find any similar posts online.
In short, can React access the header where some information like username is set, and cookies where actual tokens are stored? If not, what's the standard workaround?
Thanks so much.
Trying to work around HTTP-only authorization cookies is a huge security risk, because in this case they could be stolen by third party scripts. Using proxies to bypass this limitation is certainly an inappropriate solution.
You are working in an environment with several services (localhost 3000, 4000, 9000) that share one authentication system. In this case your options are:
1) Make your server (the 3000 one) communicate with the other servers on behalf of the client. If your client needs to POST to 9000, let it POST to a special endpoint of 3000. Your server will then query 9000 and return the answer to the client (see the sketch at the end of this answer).
In this case you need to establish server-to-server authentication, which is much simpler than client-server authentication. But you have to handle authorization separately if your clients may have different permissions.
2) If the 4000 server is the only source of AAA, make it a single sign-on server. There are a lot of solutions of this type; OAuth2 and JSON Web Tokens (JWT) are hot topics nowadays, with the advent of microservices.
You could also talk to the team that developed localhost:4000 and ask what their idea was.
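A rough sketch of option 1, assuming the 3000 server is an Express app: the /api/orders and /orders paths are made-up placeholders, and the only point is that the browser's HttpOnly cookies get relayed server-to-server.

```js
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

// The browser POSTs here (same origin as the React app's server), and this
// server forwards the call to localhost:9000 with the AA cookies attached.
app.post('/api/orders', async (req, res) => {
  const response = await axios.post('http://localhost:9000/orders', req.body, {
    headers: { cookie: req.headers.cookie || '' }, // relay the HttpOnly cookies
    validateStatus: () => true,                    // pass upstream errors through as-is
  });
  res.status(response.status).json(response.data);
});

app.listen(3000);
```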

Single Page Web Apps, CORS and security concerns

The situation
I am writing a Single-Page Web App (using Angular). Let's call it SPA.
Another teammate is writing some APIs (using Node.js). Let's call it Server.
My SPA is to log in to the Server using login/passwd, and do some stuff.
My teammate has decided to use cookies to track the session. Hence, upon a successful login, an http-only cookie is to be set in the web browser the SPA is loaded in.
The problem
If we put the SPA in the Server's public_html dir, all works well. This, however, makes the SPA a part of the API code. This breaks our build process, since every version upgrade to the SPA now requires upgrading the API too.
If we host the SPA on a separate webserver that only serves the static SPA files, I run into CORS issues. Since the SPA comes from a different origin than the APIs it is trying to access, the browser blocks the ajax calls. To overcome this, we will have to set Access-Control-Allow-Origin on the server side appropriately. I also understand that Access-Control-Allow-Credentials: true needs to be set, to instruct the browser to set/send the cookies.
Possible solutions
We create a build process which does a git-pull to the Server's public_html dir every time the SPA gets upgraded. I am trying to avoid this to keep the client and server upgrades separate.
We create a proxy kind of situation, where the Server doesn't store the SPA files but collects them on demand from another server that hosts the SPA files. In this case, the web browser will see the SPA files and subsequent ajax calls as coming from the same origin.
We code the server to set Access-Control-Allow-Origin: * in its responses. Firstly, this is too open and looks insecure. Is it really insecure, or is it just my perception? Also, since we are setting Access-Control-Allow-Credentials: true, Chrome complains: Cannot use wildcard in Access-Control-Allow-Origin when credentials flag is true. To overcome this, we will have to put exact origins (perhaps using a regex) in Access-Control-Allow-Origin. This may seriously restrict us from distributing our SPA to users on unknown domains.
For a Server API designer, is cookie-based authentication the recommended way to handle authentication for SPAs? OAuth 2.0 and JWT-based authentication seem to suggest that cookie-based authentication is not right for SPAs. Any pros/cons?
Kindly comment on the above options, or suggest any others that you may have used. Thanks in advance.
I think the issue is that your terminology is confusing. An API is not a server; it's an application that lives on a machine which can also be a server. If you make a Node.js API, I suggest you put an Nginx server in front of it as a reverse proxy. Assuming you want the Nginx server, the API and the SPA files all on the same machine, you can deploy your API to one directory and your SPA to another directory and have Nginx route the requests accordingly.
So I believe solution 2 is the way to go. From there you can easily scale by increasing the number of instances (if you use AWS) and load-balancing them, or by separating your API into its own application server.
As far as authentication goes, I have always preferred using the Authorization header with access tokens over cookies for SPA or API requests. The idea that each request is self-contained and does not require a persistent string kept in the browser is more appealing to me, though you can save the access token in local storage.
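A small client-side sketch of that Authorization-header approach, assuming the login response already returned an access token that was saved to local storage (the API URL and storage key are placeholders):

```js
// Every request carries its own credentials; nothing relies on browser cookies.
async function apiFetch(path, options = {}) {
  const token = localStorage.getItem('access_token'); // placeholder storage key
  const response = await fetch(`https://api.example.com${path}`, {
    ...options,
    headers: {
      ...(options.headers || {}),
      Authorization: `Bearer ${token}`,
    },
  });
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}

// usage: apiFetch('/profile').then(console.log);
```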
I would go with either solution 2 or 3.
2: you could host both (webpage and API) on the same server (or use reverse proxies) so that, from an outside perspective, they share the same origin.
3: in the case of an API, the same origin policy becomes less important. The API is to be consumed by clients that are not part of your web application anyways, no?
I would not see any issue in setting a more lax allow origin header. And by more lax I don't mean wildcard, just add the origin of your webpage. Why do you want to wildcard it?
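For what that looks like in practice, here is a minimal sketch using the cors middleware for Express; the origin value is a placeholder for the webpage's actual origin:

```js
const express = require('express');
const cors = require('cors');

const app = express();

app.use(cors({
  origin: 'https://spa.example.com', // explicit origin; cannot be '*' with credentials
  credentials: true,                 // sends Access-Control-Allow-Credentials: true
}));

app.get('/api/session', (req, res) => {
  res.json({ ok: true });
});

app.listen(8080);
```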

Single page applications, http or websockets, is connect/express done for?

This is a question involving single page web apps and my question is in bold.
WARNING:
I'm hardly an expert on this subject and please correct me if I'm wrong in part of my understanding of how I think HTTP and WebSockets work.
My understanding of how HTTP restful APIs work is that they are stateless. We use tools like connect.session() to interject some type of state into our apps at a higher level. Since every single request is new, we need a way to re-identify ourselves to the server, so we create a unique token that gets sent back and forth.
Connect's session middleware solves this for us in a pretty cool way. Drop it into your middleware stack and you have awesome-sauce sessions attached to each request for your entire application. Sprinkle in some handshaking and you can pass that session info to socket.io fairly easily, even more awesome. Use a RedisStore to hold the info to decouple it from your connect/express app and it's even more awesome. We're talking double rainbow awesome here.
So right now you could in theory have a single page application that doesn't depend on connect/sessions because you don't need more than 1 session (initial handshake) when it comes to dealing with websockets. socket.io already gives you easy access to this sessionId, problem solved.
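For context, the "share the session between connect/express and socket.io" pattern described above looks roughly like this. It is only an illustrative sketch: the middleware-sharing details and the RedisStore constructor vary between versions of express-session, connect-redis and socket.io, so the store is left as a commented placeholder.

```js
const express = require('express');
const http = require('http');
const session = require('express-session');
const { Server } = require('socket.io');

// One session middleware, shared by the HTTP app and the socket handshake.
const sessionMiddleware = session({
  secret: 'replace-me',
  resave: false,
  saveUninitialized: false,
  // store: new RedisStore({ client }), // plug in Redis here to decouple sessions from the app
});

const app = express();
app.use(sessionMiddleware);

const server = http.createServer(app);
const io = new Server(server);

// Run the same middleware during the socket.io handshake so handlers can read the session.
io.use((socket, next) => sessionMiddleware(socket.request, socket.request.res || {}, next));

io.on('connection', (socket) => {
  console.log('session for this socket:', socket.request.session.id);
});

server.listen(3000);
```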
Instead of this authentication work flow:
Get the email and password from a post request.
Query your DB of choice by email to get their password hash.
Compare the hashes.
Redirect to "OK!" or "NOPE!".
If OK, store the session info and let connect.session() handle the rest for the most part.
It now becomes:
Listen for a login event.
Get the email and password from the event callback.
Query your DB of choice by email and get their password hash.
Compare the hashes.
Emit an "OK!" or "NOPE!" event.
If OK, do some stuff I'm not going to think of right now, but the same effect should be possible? (Roughly as sketched below.)
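A hedged sketch of that event-based flow, using socket.io and bcrypt; findUserByEmail and the event names are hypothetical placeholders for whatever your DB layer and protocol actually use:

```js
const http = require('http');
const { Server } = require('socket.io');
const bcrypt = require('bcrypt');

const io = new Server(http.createServer().listen(3000));

io.on('connection', (socket) => {
  socket.on('login', async ({ email, password }) => {
    const user = await findUserByEmail(email); // hypothetical DB lookup
    if (user && await bcrypt.compare(password, user.passwordHash)) {
      socket.userId = user.id;                 // remember who this socket belongs to
      socket.emit('login:ok', { id: user.id });
    } else {
      socket.emit('login:failed');
    }
  });
});
```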
What else do we benefit from by using connect? Here's a list of what I commonly use:
logger for dev mode
favicon
bodyparser
static server
passport (an authentication library that depends on connect/express, similar to what everyauth offers)
The code that loads the initial single page app would handle setting up a static server and favicon. Something like passport might be more tricky to implement but certainly not impossible. Everything else that I listed doesn't matter, you could easily implement your own debug logger for websockets.
Right now, is there really anything stopping us from having a single HTTP-based index.html file that encapsulates a websocket connection and doesn't depend on connect at all? Would socket.io really be able to make that type of application architecture work without setting up your own HTTP restful API, if you wanted a single page app while offering cross-browser support through its auto-magical fallbacks?
The only real downside at this point is caching results on the client, right? Couldn't you incorporate local storage for that? I think creating indexable/crawlable content pages for search engines wouldn't be THAT big of a deal -- you would basically create a tool that creates static html files from your persistent database, right?
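To make the local-storage idea a bit more concrete, here is a tiny client-side sketch; the event names and the `${event}:result` reply convention are made up for illustration:

```js
// Show the last cached result immediately, then refresh it from the socket.
function cachedEmit(socket, event, payload, onData) {
  const key = `cache:${event}`;
  const cached = localStorage.getItem(key);
  if (cached) onData(JSON.parse(cached), /* fromCache */ true);

  socket.emit(event, payload);
  socket.once(`${event}:result`, (data) => {
    localStorage.setItem(key, JSON.stringify(data));
    onData(data, /* fromCache */ false);
  });
}
```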
Check out Derby and SocketStream.
I think what you're asking for is if it is plausible (using socket.io) to create a website that is a single static page with dynamically changing content.
The answer is "yes", it can work. Several node.js web frameworks already do this although I don't know of any that use socket.io.
