I've been setting up my Raspberry Pi as a web server and everything worked fine until yesterday, when every Node process started timing out on outgoing requests.
For example:
npm i -g n (the Node version manager) times out while fetching the ideal tree, and my express applications can't make calls to external APIs, although requests to the server work perfectly. When making the same outgoing calls with curl, everything works fine. Only Node.js processes seem to have this problem.
Any tips on where to look for the problem? Is it firewall related?
Not working:
Expected npm i -g n to install the package; instead it times out.
All external calls made by a node process time out with reason connect ETIMEDOUT
Working:
curl to external resources works, so the connection is active
Requests to an express server from clients work as usual
Responses from the express server work if there are no external api calls involved
Cloudflare tunnels and Access for SSH work.
Related
So, I've been trying to set up a small example of a server with two-factor authentication features. I'm using nuxt as the frontend and flask as the backend.
While developing locally (using npm run dev) I was able to get the chain of communication to work:
The webpage sends a request to server/<some_request>/<some_param>
The proxy module redirects it to http://localhost:5000/<some_request>/<some_param>
The request is sent by the axios module
flask receives the request, processes it and answers.
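For reference, the dev-time chain above roughly corresponds to a nuxt.config.js like this (a sketch: the /server prefix and flask port are taken from the question, and the exact option names depend on the @nuxtjs/axios and @nuxtjs/proxy versions in use):

```javascript
// Sketch of the dev proxy wiring: requests to /server/** on the nuxt dev
// server are forwarded to the local flask backend on port 5000.
const config = {
  modules: ['@nuxtjs/axios', '@nuxtjs/proxy'],
  axios: { proxy: true },
  proxy: {
    '/server/': {
      target: 'http://localhost:5000',
      // strip the /server prefix before flask sees the path
      pathRewrite: { '^/server/': '/' },
    },
  },
};

module.exports = config;
```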
When trying to deploy this application to a dreamhost server, I used npm run build and npm run generate to serve the website statically. I was able to receive my webpage when browsing.
But when trying to login, the chain described above broke, and requests to server/<some_request>/<some_param> were answered with 404. In the server's command line I saw that flask didn't receive any request, so I assume that this is some issue with the proxy module.
I'm not really sure how to debug this problem, so any help or ideas will be appreciated.
Okay, so I got everything working, and here are my conclusions:
The proxy module performs the redirection on the client side, meaning that redirecting to localhost:5000 would point at my own machine rather than the server.
The proxy module can't be used when using npm run generate (there's a warning that says it's disabled).
Since I wanted to redirect to flask from the client side (browser), I couldn't just run it as is. I had to register another subdomain and use Passenger to deploy my app (a guide to enabling Passenger, getting started with Python in Passenger, and a great guide to deploying flask over Passenger).
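With the backend on its own subdomain, the client can call flask directly instead of going through the dev proxy. A sketch of that switch (the subdomain below is a placeholder, not the poster's actual domain):

```javascript
// Pick the API base per environment; the production subdomain is the one
// registered for the Passenger-deployed flask app (placeholder name here).
const API_BASE = process.env.NODE_ENV === 'production'
  ? 'https://api.example.com'   // flask behind Passenger on its own subdomain
  : 'http://localhost:5000';    // local flask during npm run dev

// Build a full URL for a backend route.
function apiUrl(path) {
  return `${API_BASE}/${String(path).replace(/^\/+/, '')}`;
}

// e.g. this.$axios.get(apiUrl('login/123'))
```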
I recently created an e-commerce site using express, and the Node server worked fine on my local machine. When I uploaded it to my VPS and ran it with pm2 and with nodemon, the server stopped responding to requests after a few minutes, even though the number of requests was low.
All the internal functionality other than request handling kept working, though. I used a lot of console.log() calls in my code; could this problem be due to the excessive use of console.log()?
So my friend and I are working on a MERN stack app; I am working on the backend (Node.js) and he is working on the frontend (React.js). We are in different places, and my question is how he can access my localhost server in order to hit my APIs.
Provide me with all the possible solutions so that my APIs are always available to him.
You need something like this: https://ngrok.com/
Ngrok is a tool that allows you to securely open a tunnel to your local machine while ngrok is running.
It has a free plan, or you can pay for extra features like setting a custom domain.
You can install ngrok as a global npm package with:
npm i -g ngrok
And then once your server is running locally, you can start ngrok in another terminal pane/window/session and point it to the port your server is running on; below we assume the port is 3000:
ngrok http 3000
This will open the tunnel, and print a url you can send to your friend to make requests against. Requests made to the url will be proxied to your localhost at the specified port. It supports HTTPS as well.
Hi, I'm running my frontend (create-react-app) and my backend server (express.js) on different ports but on the same host.
For example: frontend is on 127.0.0.1:3000 and backend on 127.0.0.1:3003.
in my package.json:
{...
"proxy": "http://localhost:3003",
...}
Everything worked fine until I migrated my app to a remote server.
My app started refreshing unexpectedly when I tried to send an HTTP request (axios) to the server (probably due to bad proxy settings).
So I have the frontend app running on 35.125.320:10:3000 and the server running on 35.125.320:10:3003. My HTTP requests were unexpectedly cancelled (I checked the network). So I changed my proxy settings to
{...
"proxy": "35.125.320:10:3003",
...}
but my app still refreshes when I try to make an HTTP request to the server. I think the problem is that I can't reach my express backend server, so the proxy is forwarding my requests badly.
UPDATE
Scenario (I'm making two POST requests):
1) The first request still passes (the app does not refresh).
2) The same request passes again (but sometimes the app refreshes).
3) The second request is still cancelled by the browser.
QUESTION
How can my frontend communicate with the backend server via a proxy when they are running on different ports but on the same server and domain?
Thanks for the answer.
Solution:
The problem was that I was using the proxy in production, but it is only suitable for development.
I added these lines to my express.js server:
app.use(express.static(`${process.cwd()}/build`));
app.use(express.static(`${process.cwd()}/public`));
I make a build and serve the JS and CSS files from my build folder. I also needed to serve static files (images, folders, etc.) from my public folder.
This problem can also cause the browser to cancel HTTP requests in production, meaning requests weren't able to reach the server.
To make your app publicly available, you will want to make a production build. You mentioned in a comment that you "run npm build and then serve this build as static file in express.js". This is a great way to make your react app publicly available. As it says in the create-react-app documentation:
npm start or yarn start
Runs the app in development mode.
When running yarn start or npm start, you are also given a notification that says "Note that the development build is not optimized." The best option is to run yarn build or npm run build and find a way to serve those static files as you are doing.
I have a Node Express app that runs with no problems on the OpenShift cloud. It accepts router.post and router.get just fine. But I want the OpenShift app to also post some data to a Raspberry Pi which is also running a Node and Express app. To accomplish this, on the OpenShift app, I am using the Node npm request module. When I run it in the Node dev space at http://tonicdev.com, it works just fine. It sends its JSON data to the RPi and the Pi accepts and processes it.
But when I run the exact same request code in the Node app in the OpenShift cloud, it crashes with an EACCES error. Note that I am using the request-debug module, which displays the request module's headers immediately before the crash. They are as expected.
What in the devil is going on in the OpenShift environment that changes the request module's execution and causes it to crash, and how can I address it?
OpenShift Online currently has a whitelist of acceptable outbound ports, and port 3000 is probably not in the list. We can add ports to the list, but typically we only do this for well-known service ports. Can you run the remote service on a different port (such as 80 or 8080)?