I'm having trouble authenticating a user in my Node.js/React application. I know the question isn't strictly about programming, but maybe someone can help. On to the question.
I built a work-schedule management application for the company's employees; the API is in Node.js and the front end in React. Since the employees are in Rio and São Paulo, they need access via the web, so we set up a server with a fixed IP for the Rio team. We installed Ubuntu 20.04 with Nginx, Node.js, and MongoDB.
So far so good: I installed the application on the server and it works perfectly there, but when it's accessed from another machine, inside or outside the network, there is an authentication error.
I've looked into how to pass the header and have read a lot of articles trying to configure Nginx to accept the header that carries the JSON Web Token.
In the application I send a header I called x-auth-token, which I can see in the browser when I'm on the server, but I can't find this header when I'm on another machine, on the network or outside it. So I believe that's the cause of the authentication error; I just don't know how to solve it.
I created this location block in Nginx so that visitors hitting the server are sent straight to the application. I've changed this code a little and will update it here too:
location / {
    proxy_pass https://xxx.xxx.x.xxx:3000;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass_header X-Auth-Token;
}
The xxx is a placeholder so as not to publish the server's external IP.
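In case it helps anyone hitting the same wall: `proxy_pass_header` controls which *response* headers Nginx passes back to the client; request headers such as x-auth-token are normally forwarded to the upstream as-is. A minimal sketch (the upstream address is a placeholder) that forwards the token request header explicitly:

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;  # placeholder upstream address
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
    # Forward the custom request header explicitly. Nginx normally passes
    # request headers through automatically, but being explicit makes the
    # intent visible and survives any header rewriting elsewhere:
    proxy_set_header X-Auth-Token $http_x_auth_token;
}
```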
Looking at the following scenario, I want to know whether it can be considered good practice in terms of architecture:
A React application uses the Next.js framework to render on the server. Because the application's data changes often, it uses server-side rendering ("SSR" or "dynamic rendering"). In code terms, it fetches data in the getServerSideProps() function, meaning that function is executed by the server on every request.
On the same server instance, an API application runs in parallel (it could be a Node.js API, a Python Flask app, ...). This app is responsible for querying a database, preparing models, and applying any transformations to the data. The API can be accessed from the outside.
My question is: how can Next.js communicate with the API app internally? Is it a correct approach for it to send requests via a localhost port? If not, is there an alternative that doesn't require Next.js to send external HTTP requests back to the same server?
One of the key requirements is that each app must remain independent. They currently run on the same server, but they come from different code repositories and each has its own SDLC and release process. They have their own domains (URLs), and in the future they might live on different server instances.
I know that in Next.js you can use libraries such as Prisma to query a database, but that is not an option: data modeling is managed by the API and I want to avoid duplicating effort. Also, once the Next.js app has rendered on the client side, React will keep calling the API app via normal HTTP requests, to keep the front-end experience dynamic.
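One common pattern for this split (a sketch under the assumption that the API is reachable on a local port; INTERNAL_API_URL and the URLs below are made-up names, not anything from the question) is to resolve a different API base URL on the server than in the browser:

```javascript
// Sketch: resolve the API base URL differently on the server and in the
// browser. Server-side code (e.g. getServerSideProps) talks to the API over
// the loopback interface; the browser keeps using the public URL.
function apiBaseUrl() {
  const onServer = typeof window === "undefined";
  if (onServer) {
    // INTERNAL_API_URL is a hypothetical environment variable; the
    // fallback port is an assumption about where the API listens.
    return process.env.INTERNAL_API_URL || "http://127.0.0.1:3000";
  }
  // Client-side: the public domain the API is served on (placeholder).
  return "https://api.example.com";
}

function apiUrl(path) {
  return `${apiBaseUrl()}${path}`;
}
```

This keeps the two apps independent: the only coupling is an environment variable, which can later point at another host when the apps move to separate instances.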
This is a very common scenario when a front-end application runs independently from the back end; a reverse proxy usually helps. Here is a simple way I'd suggest to achieve it (and it's also one of the better ways):
1. Use different ports for your frontend and backend applications.
2. All API routes should start with a specific prefix such as /api, and your frontend routes must not start with /api.
3. Use a web server (I mostly use Nginx, which helps me with virtual hosts, reverse proxying, load balancing, and serving static content).
So in your Nginx config file, add/update the following location blocks:
## for the backend/API, assuming the backend runs on port 3000 on the same server
location /api {
    proxy_pass http://localhost:3000;
    ## The following lines are required if you are using WebSocket;
    ## I usually add them even when not using it.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
## for the frontend, assuming it runs on port 5000 on the same server
location / {
    proxy_pass http://localhost:5000;
}
Objective: give my Django app (Python backend and React/Redux/JS frontend, both already written) a Smartsheet API OAuth page that redirects users off the main website until auth is done, and then back to the website (which uses the Smartsheet API in its features).
Crux: a hunch said the OAuth flow should be in Node.js to match the front end, and I found working sample code for doing Smartsheet OAuth in Node. It worked great on its own! But when I tried to integrate this Node.js page into Django, I got errors from whichever server I started second, saying there was already a server on the given local URL (127.0.0.1:{port}). Maybe the OAuth should be written in Python instead, but I couldn't find sample code for it, so it would be great if I could keep it in Node.
Question: is there a way to deploy components from both Node and Django to the same server/domain? It's weird to have users go to one site, finish their OAuth, only to be pushed to a totally separate domain; this may also pose security risks.
My attempt:
I thought I would just create a simple landing page and then, after the user logs in, push them forward on the website (to a redirect URL). This is what the Django urls.py looks like:
from django.urls import path
from . import views

urlpatterns = [
    path('', views.oauth),           # views.oauth is fairly blank; I wanted my Node.js
                                     # server to listen at that hostname:port
    path('loggedin/', views.index),  # when OAuth ended, I wanted it to send the user here
]
This attempt produced these errors:
Django Error
Node Error
Thanks for any ideas on my inquiry!
Microservices are basically small components interconnected by REST or another kind of API, and what you're describing IS a microservice architecture.
Now, you can deploy Django on some port, say 8090, and Node.js on another, say 8080. To connect them you need some kind of reverse proxy; the easiest is Nginx. The rules would look like this: if the URL is host/api, forward the traffic to Node.js on 127.0.0.1:8080; otherwise, forward it to 127.0.0.1:8090.
A example would be this question: NGINX - Reverse proxy multiple API on different ports
server {
    listen 443;
    server_name localhost;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;

    location /api/orders {
        proxy_pass https://localhost:5000;
    }
    location /api/customers {
        proxy_pass https://localhost:4000;
    }
}
So the rule says: if the location is /api/orders, go to localhost:5000, and if it's /api/customers, go to the other one on port 4000. From here you can research reverse proxies and come up with your own rules.
I am trying to scale my Socket.io Node.js server horizontally using Cloud Foundry (on IBM Cloud).
As of now, my manifest.yml for cf looks like this:
applications:
- name: chat-app-server
  memory: 512M
  instances: 2
  buildpacks:
  - nginx_buildpack
This way the deployment goes through, but of course the socket connections between client and server fail because the connections are not sticky.
The official Socket.IO documentation gives an example of using Nginx with multiple nodes.
When writing a custom nginx.conf file based on the Socket.IO template, I am missing some information (marked with ???).
events { worker_connections 1024; }

http {
    server {
        listen {{port}};
        server_name ???;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://nodes;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    upstream nodes {
        # enable sticky sessions based on IP
        ip_hash;
        server ???:???;
        server ???:???;
    }
}
I've tried to find out where Cloud Foundry runs the two instances specified in manifest.yml, with no luck.
How do I get the required server addresses/ports from Cloud Foundry?
Is there a way to obtain this information dynamically from CF?
I am deploying my application with cf push.
I haven't used Socket.IO before, so I may be off base, but from a quick read of the docs, it seems like things should just work.
Two points from the docs:
a.) When using WebSockets, this is a non-issue. Cloud Foundry fully supports WebSockets. Hopefully, most of your clients can do that.
b.) When falling back to long polling, you need sticky sessions. Cloud Foundry supports sticky sessions out of the box, so again, this should just work. There is one caveat, though, regarding CF's support of sticky sessions: it expects the session cookie name to be JSESSIONID.
Again, I'm not super familiar with Socket.IO, but I suspect it uses a different session cookie name by default (most things outside of Java do). You just need to change the session cookie name to JSESSIONID, and sticky sessions should work.
TIP: you can check the session cookie name by looking at your cookies in your browser's dev tools.
Final note: you don't need Nginx here at all. Gorouter, Cloud Foundry's routing layer, will handle the sticky-session support for you.
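Renaming the cookie is a server-option change. A minimal sketch of the options object, under the assumption that you run Socket.IO v4, where the engine's cookie option accepts a name (verify against the docs for your version):

```javascript
// Sketch: server options that rename Socket.IO's sticky-session cookie so
// Cloud Foundry's Gorouter recognizes it. Option shape assumes Socket.IO v4.
const serverOptions = {
  cookie: {
    name: "JSESSIONID", // Gorouter keys sticky sessions off this cookie name
    httpOnly: true,
    path: "/",
  },
};

// Usage (requires the socket.io package):
// const { Server } = require("socket.io");
// const io = new Server(httpServer, serverOptions);
```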
I'm a bit new to Node/React.
I have an API/Express Node app, and inside it a React app. The React app makes axios.get calls and other API requests, and forwards them to the proxy set up in the React app's package.json. In dev the proxy looked like this: "proxy": "http://localhost:3003/", but now that I'm going into production I'm trying to change it to the URL where I'm hosting my Node/Express app: "proxy": "http://168.235.83.194:83/".
When I moved the project to production I put the API Node app on port 83 and the React app on port 84 (with Nginx). For whatever reason, though, my React app just doesn't know how to make API requests to the Node app; I'm getting blank data.
After googling I've come to realize the 'proxy' setting only applies to requests made to the development server. Normally in production you have one server that serves the initial page HTML and also handles API requests, so requests to /api/foo naturally work; you don't need to specify a host.
This is the part I'm trying to figure out. If someone can tell me how to set up my app so that /api/foo naturally works, that would be greatly appreciated.
I took a stab at setting that up. This is probably a complete failure as an approach, but it's late and I'm going to fall asleep on this problem. Am I supposed to have Nginx serve both the static HTML and the API requests from one config file? I have this so far, but I could be way off:
server {
    listen 84;
    server_name 168.235.83.194;
    root /home/el8le/workspace/notes/client/build;
    index index.html index.htm;

    location / {
    }

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://168.235.83.194:83/; # Nginx hosts my API app on this port. Not even sure this should be like this?
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
Also, I'm actually hosting at those IP addresses, if you want to get a better sense of where I'm at:
http://168.235.83.194:84
http://168.235.83.194:83/customers
You have to supply the actual API URL when making data requests. The dev server can proxy to a different API URL: if the app loads at http://localhost:84 from the dev server, a data request like /api/customers goes to http://localhost:84/api/customers, and the dev proxy pipes it to the API URL configured in package.json.
But in production, the same request uses your app's base address and tries to get the data from http://PRODUCTION_SERVER:84/api/customers.
The correct way to handle this is to use absolute URLs instead of relative ones. Since production and development have different base URLs, keep the base in a config variable and append the specific API path to it, something like ${BASE_URL}/api/customers, where BASE_URL points at the API: e.g. http://localhost:3003 in dev and http://168.235.83.194:83 in production.
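A sketch of such a config variable, using the URLs from the question (in practice you would pass process.env.NODE_ENV as the argument; adjust the addresses to your environment):

```javascript
// Sketch: pick the API base URL per environment so the same code works in
// development and production. URLs are the ones from the question.
function apiBase(env) {
  return env === "production"
    ? "http://168.235.83.194:83" // production API
    : "http://localhost:3003";   // dev API (the dev proxy target)
}

function customersUrl(env) {
  return `${apiBase(env)}/api/customers`;
}
```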
The code that I am using right now is:
<!--Socket.io -->
<script src='http://172.16.17.185:3000/socket.io/socket.io.js'></script>
Where 172.16.17.185 is the IP address of the server on our LAN, and port 3000 is used for the socket.io communication.
The examples on the socket.io website use localhost in place of the server IP address, and I started with that. Using localhost, my page loaded fine on the server, but no other PC on the LAN could create the socket object (each was looking for socket.io on its own localhost, which wasn't there).
So I changed localhost to the server's IP address, and everything worked fine.
Until I tried accessing it from outside. Since this IP address is only valid on the LAN, it doesn't work as it should on the internet.
Is there a workaround for this problem, or do I need to find the server's public IP and use it? Does this create any security vulnerabilities for the server, given that I don't have authentication in place?
So far: http://www.ipsacademy.org/weather
Server code in web-server.js
If you want to access the socket.io server from outside your LAN, you must use a public IP (and preferably avoid exposing port 3000 directly).
In a production environment you should put a reverse proxy (like Nginx) in front to translate requests to your socket.io server's private LAN address.
Following the example in the Nginx blog, you can use something like this:
location /socket.io/ {
    proxy_pass http://172.16.17.185:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
This does not cause any security problem beyond those of any publicly accessible service on the internet.
Looking at your code, you are using separate HTTP instances for serving the page and for socket.io. If you change that to use the same HTTP server, you can use a relative URL for the socket.io connection.