I have a Node application running on an EC2 instance. Node is running on port 5000, and I want to access the API remotely.
This is my nginx configuration file:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    client_max_body_size 20M;

    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }

    location /nginx_status {
        # Turn on stats
        stub_status on;
        access_log off;
    }
}
When I try to curl it using curl localhost/nginx_status, it returns:
Active connections: 1
server accepts handled requests
11 11 12
Reading: 0 Writing: 1 Waiting: 0
Also, when I try to access the IP in a browser, it shows:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
But if I try to access ip_address/nginx_status, it shows a 404 error. For example, if the IP address were 123.456.789.098, the browser shows the message above, while 123.456.789.098/nginx_status returns a 404 error. Even curl ip_address/nginx_status returns a 404 error.
My question is: how can I access the Node application running on port 5000 from the outside world?
Unfortunately I only see part of your config. Is there another server block that listens on port 80?
You don't use default_server on your listen directives either, and without a server_name it is hard to tell the blocks apart. So maybe another config whose server block is the default_server for port 80 is taking effect. Check in your /etc/nginx/ folder which server { ... } blocks exist.
The proxy_pass looks correct, provided the Node.js server is really listening there; also check again whether it really speaks http or https, so that proxy_pass uses the correct scheme.
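To make sure this block (and not the stock welcome-page site) answers requests on port 80, one option is to mark it as the default and give it a catch-all server_name. A minimal sketch, assuming the proxy target from your question; the default_server flag only works if no other server block for port 80 already claims it (the Debian/Ubuntu default site may need to be disabled first):
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;    # catch-all placeholder; use your real domain if you have one

    client_max_body_size 20M;

    location / {
        proxy_pass http://127.0.0.1:5000;    # the Node app from the question
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
After changing the config, nginx has to be reloaded for the new block to take effect.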
You should also add access control for stub_status, since that is information you may not want to hand out to everyone. In my setup only one internal application has access to it, on a separate listen address that is not exposed to the internet:
server {
    listen 127.0.0.1:10081 default_server;

    location /flyingfish_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
I'm curious what you find out! :)
Here is my nginx default config:
server {
    listen 80;
    listen [::]:80;
    server_name _;

    location /login-with-args.html {
        alias /opt/code-server/login-with-args.html;
    }
}
I do have login-with-args.html at /opt/code-server/login-with-args.html, and the curl command on Linux gives me 200, but in my browser it shows a 502 error.
This is the URL I am hitting from the UI:
https://url/login-with-args.html?password=1233&Id=12123&Code=sand-42&port=8127
Generally I would have advised you to enable error logging and check the corresponding log.
But as far as I can see, there are mismatches in your question. Your configuration contains listen 80;, which generally means plain HTTP, unless you add the ssl parameter (and I would not recommend enabling SSL/TLS on port 80 anyway). But the URL you try to request is:
https://url/login-with-args.html?password=1233&Id=12123&Code=sand-42&port=8127
which implies HTTPS on port 443 (by default, unless another port is specified).
At the same time, there is no reverse proxy defined in your configuration; you have only aliased a static file.
Since you got a 502 error, your nginx is either located behind some proxy (or CDN), or there is another server block somewhere in your configuration with a listen directive carrying the ssl parameter and a reverse proxy definition.
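For illustration only, such a block would look roughly like the sketch below. The certificate paths and the upstream port are pure assumptions on my part, not something taken from your setup:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name _;

    # Assumed certificate locations; adjust to your environment
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location /login-with-args.html {
        alias /opt/code-server/login-with-args.html;
    }

    # Hypothetical reverse proxy to the code-server backend; 8080 is only an assumed port
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
Finding where that block (or the external proxy/CDN) actually lives should also tell you which upstream is returning the 502.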
My site allows users to download big .zip files. A problem I'm dealing with right now is that whenever the user is currently downloading such a file, all other requests to the site wait until the download is finished or cancelled, making the site practically unusable. In the Chrome network tab, the request shows as pending. Why could this be?
The server itself is implemented in Node.js using Express and is proxied through NGINX and then through Cloudflare. When I connect to the Express server or the NGINX proxy directly, this problem doesn't come up, only when it's routed through Cloudflare from what I have observed.
This is my NGINX config, in case it helps:
server {
    listen 80;
    listen [::]:80;
    server_name marbleland.vani.ga;
    client_max_body_size 20m;

    location / {
        proxy_pass "http://localhost:20020/";
    }
}
Am I missing something obvious?
I have a small app written with NodeJS, hosted in Google Cloud. I reserved an IP and I can access the front of the app with that IP.
The problem is, I have an admin panel which is a different Node instance. It has its own port and I want to access it via a URL like: http://admin.11.111.11.11
I'm using NGINX on Ubuntu 20.04.
The config for the admin looks like:
server {
    listen 80;
    listen [::]:80;
    #server_name admin.11.111.11.111/ www.admin.11.111.11.111/;

    location / {
        #proxy_pass http://127.0.0.1:2222;
    }
}
and for the front:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name mydomain.com www.mydomain.com;

    location / {
        proxy_pass http://127.0.0.1:1111;
    }
}
At this moment I can't transfer the domain; I must wait for the client to finish writing his content. mydomain.com uses an old CMS, so we have to wait for that content transfer to finish, and the new app is only accessible through the new IP.
Thank you for any hint!
It has its own port and I want to access it via a URL like: http://admin.11.111.11.11
No, this isn't possible: admin.11.111.11.11 is not a valid hostname. You can't mix a hostname label with an IP address like that. The whole premise is flawed; this isn't an nginx problem.
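Just to show what the hostname-based approach looks like once a real domain (or subdomain) points at the instance: the block below is only an illustration, with admin.mydomain.com standing in for whatever subdomain eventually gets delegated, and the port taken from the commented-out line in the question:
server {
    listen 80;
    listen [::]:80;
    server_name admin.mydomain.com;    # requires a DNS record pointing at this instance

    location / {
        proxy_pass http://127.0.0.1:2222;    # the admin Node instance
    }
}
Until then, the admin instance can only be reached on its own port, or via a path-based location on the existing default server block.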
I'm currently running: nginx/1.12.2 with Phusion Passenger 5.1.12.
Whenever I try to use "passenger_log_file", "passenger_default_user" or "passenger_default_group", I get the following error:
nginx: [emerg] "passenger_log_file" directive is not allowed here in /etc/nginx/sites-enabled/test:11
My sites-enabled file looks like this:
server {
    listen 80;
    server_name example.com;
    root /home/test/api;
    passenger_app_root /home/test/api/dist;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file app.js;
    passenger_log_file /var/log/test/access.log;
}
If I remove the passenger_log_file everything works perfectly fine. What am I missing here?
I had the same issue. I think the correct option for your case is passenger_app_log_file, which allows you to define a log file for each app. Unfortunately this is an Enterprise-level option. On the other hand, if you move your passenger_log_file up a level, it will change the log file for all your passenger apps from the default one:
passenger_log_file /var/www/vhosts/system/xxx.pasi-consulting.com/logs/proxy_error_log;

server {
    listen 137.74.195.27:443 ssl http2;
    server_name www.pasi-consulting.com;
    server_name www.xxx.pasi-consulting.com;
}
Per the documentation, passenger_log_file is only available in the http context. In your provided example it is in the server context.
Both passenger_default_user and passenger_default_group are allowed in the server context, so that part is confusing.
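In practice that means placing the directive in the http block, typically in /etc/nginx/nginx.conf, rather than in the sites-enabled file. A minimal sketch of the placement only (a real http block contains many more directives, and the log path is just the one from the question):
http {
    # http-context directive: applies to all Passenger apps served by this nginx
    passenger_log_file /var/log/test/access.log;

    include /etc/nginx/sites-enabled/*;
}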
I am running two node.js server instances serving the same code on two different ports on the same machine. For example, one node.js process is running on port 8080 and the other on port 1337.
I need to put an nginx proxy in front of both these servers and route any incoming request to them.
The reason I want to do this: say I have only one server. I need to change the code and restart it, and the restart takes nearly a minute. During this time, any request coming to the server returns a 502 Bad Gateway error. I want to avoid this situation by running a replica of the same server.
Now I want to set up nginx in such a way that whenever either one of them is down (restarting while doing a git pull), requests are routed to the other one.
How can I accomplish this setup, and where should I start reading about this?
Requirement: suppose you have two applications, both running on different ports on the same machine. You have purchased only one domain and want to use it for multiple applications. Suppose the requirement is as follows:
purchased domain: example.com
app1_name/app1_port: app1/8081
app2_name/app2_port: app2/8082
You have two options here; you can choose whichever you like:
1st way: http://example.com/app1, http://example.com/app2
2nd way: http://app1.example.com, http://app2.example.com
Below are the final configurations for both types.
1st type, using the URL path and multiple location blocks:
server {
    listen 80;
    server_name example.com;

    location /app1 {
        proxy_pass http://localhost:8081;
    }

    location /app2 {
        proxy_pass http://localhost:8082;
    }
}
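One subtlety with the path-based variant: with proxy_pass http://localhost:8081; (no URI part), nginx forwards the original request path unchanged, so the backend receives /app1/... and must expect that prefix. If each app is served from / on its own port, adding a trailing slash makes nginx replace the matched prefix, as in this small sketch:
location /app1/ {
    # The trailing slash in proxy_pass replaces the matched /app1/ prefix,
    # so a request for /app1/users reaches the backend as /users
    proxy_pass http://localhost:8081/;
}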
2nd type, using subdomains and multiple server blocks:
server {
    listen 80;
    server_name example.com;
}

server {
    server_name app1.example.com;

    location / {
        proxy_pass http://localhost:8081;
    }
}

server {
    server_name app2.example.com;

    location / {
        proxy_pass http://localhost:8082;
    }
}
Note: ideally nginx should run on port 80 and be mapped to the main domain, so that you don't have to type a port in the browser, since 80 is the default port for HTTP requests. You might have to add additional configuration parameters; the above is just for demonstration purposes.
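Coming back to the failover part of the original question (two identical Node instances on ports 8080 and 1337): an upstream block lets nginx balance between them and retry on the other instance when one is down during a restart. A rough sketch, not tuned for production, with example.com and the ports taken from the question:
upstream node_backend {
    server 127.0.0.1:8080 max_fails=1 fail_timeout=5s;
    server 127.0.0.1:1337 max_fails=1 fail_timeout=5s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://node_backend;
        # If one instance is restarting, retry the request on the other one
        proxy_next_upstream error timeout http_502;
    }
}
The upstream module documentation (ngx_http_upstream_module) is the place to start reading about the load-balancing and failover options.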