Hi, local requests to / from the same IP return 200, but I get the error below when I tether to my mobile phone's hotspot (to change my IP) and make the same request:
HTTPConnectionPool(host={myIPAddress}, port=80): Max retries exceeded
..... Failed to establish a new connection: [WinError 10060] A
connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed
because connected host has failed to respond'
Program.cs file:
public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .UseUrls("http://0.0.0.0:5000/")
            .UseKestrel(serverOptions =>
            {
                // Set properties and call methods on serverOptions
                serverOptions.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(2);
                serverOptions.Limits.MaxConcurrentConnections = 100;
                serverOptions.Limits.MaxConcurrentUpgradedConnections = 100;
                serverOptions.Limits.MaxRequestBodySize = 10 * 1024;
                serverOptions.Limits.RequestHeadersTimeout = TimeSpan.FromMinutes(1);
            });
}
NGINX - sites-available/default
server {
    listen 80;
    listen [::]:80;
    server_name HFTest;

    location / {
        proxy_pass http://localhost:5000; # edited as per comments
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Port Status - sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 635/nginx: master p
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 628/sshd
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 2048/API/HFTest
Any ideas on what to troubleshoot next?
Thank you!
EDIT:
Request - in Python
import requests
# OAuth2Session comes from the OAuth client library in use (e.g. Authlib); import it accordingly

scopes = 'openid'
url = 'http://{IPAddress}:80/connect/token'

session = OAuth2Session(clientID, clientSecret, scope=scopes)
token = session.fetch_access_token(url, verify=False)

endpoint = "http://{IPAddress}/{endpoint}"
headers = {"Authorization": "Bearer %s" % token['access_token'], "Accept": "text/xml"}

session = requests.Session()
response = session.request("GET", endpoint, headers=headers, verify=False)
print(response.status_code)
print(response.text)
The upstream service IP in your nginx configuration needs to be an actual IP address. You can try the localhost IP as follows:
…
location / {
proxy_pass http://127.0.0.1:5000;
…
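Putting that together with the headers from the question, the full location block would look roughly like this (a sketch; everything apart from the 127.0.0.1 address is copied from your config above):

location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

Then test and reload nginx (sudo nginx -t && sudo systemctl reload nginx) so the change takes effect.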
Related
I'm running a Node.js app on an EC2 instance. The app runs node-rtsp-stream, which outputs a WebSocket that jsmpeg then uses to display the stream in the web browser.
NGINX config port 80 (this works fine)
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        proxy_pass http://localhost:3000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
NGINX websocket config (this doesn't)
http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream websocket {
        server ws://localhost:9999;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://websocket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
        }
    }
}
app.js
Stream = require('node-rtsp-stream')
stream = new Stream({
    name: 'stream',
    streamUrl: 'rtsp://demo:demo#ipvmdemo.dyndns.org:554/onvif-media/media.amp',
    wsPort: 9999,
    ffmpegOptions: { // options ffmpeg flags
        '-stats': '', // an option with no necessary value uses a blank string
        '-r': 30 // options with required values specify the value after the key
    }
})
Script tag on HTML
player = new JSMpeg.Player('ws://localhost:9999', {
    canvas: document.getElementById('canvas')
});
Should this be calling 'ws://localhost:9999' or something else? The browser says it cannot find 'ws://localhost:9999'.
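In other words, I'm wondering whether I need something like the sketch below (purely hypothetical on my part: a /ws path on the existing port-80 server, with the stream still on 9999) and then point JSMpeg at ws://<my-ec2-host>/ws instead of localhost:

# hypothetical sketch: expose the stream's WebSocket through nginx on port 80
location /ws {
    proxy_pass http://localhost:9999;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}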
Each nginx config file is stored in sites-available
Thanks for your time!!!
I have a server running Ubuntu 20.04, set up with an SSH Key, and I have installed Nano using Docker -
docker run --restart=unless-stopped -d -p 7075:7075/udp -p 7075:7075 -p 127.0.0.1:7076:7076 -p 127.0.0.1:7078:7078 -v /root/nano/:/root --name nano nanocurrency/nano:latest
On the VPS GUI ports 22, 443, and 7075 are open for inbound traffic.
I have a working WebSocket running on localhost:7078 which returns a stream of data to the terminal.
Nginx version 1.18.0 is installed and when I run systemctl status nginx it claims to be active with no errors, but when I try to connect from my laptop nothing happens.
Here's my Nginx config -
server {
    listen 443;

    # host name to respond to
    server_name 000.000.00.000;

    location / {
        # switch off logging
        access_log off;

        # redirect all HTTP traffic to localhost:7080
        proxy_pass http://localhost:7080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket support (nginx 1.4)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
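To make sure I'm aiming at the right thing, here is a sketch of what I think the end state should look like (the certificate paths are placeholders, and I'm assuming the target is the WebSocket the container publishes on 127.0.0.1:7078):

server {
    listen 443 ssl;
    server_name 000.000.00.000;

    # placeholders - point these at a real certificate/key pair
    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        access_log off;
        proxy_pass http://127.0.0.1:7078; # assumption: the node's WebSocket port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

With a TLS listener like that, the laptop would connect with wss:// rather than ws://.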
And the Node.js WebSocket client -
const WS = require('ws');
const ReconnectingWebSocket = require('reconnecting-websocket');

// Create a reconnecting WebSocket.
// In this example, we wait a maximum of 2 seconds before retrying.
const ws = new ReconnectingWebSocket('ws://localhost:7078', [], {
    WebSocket: WS,
    connectionTimeout: 1000,
    maxRetries: 100000,
    maxReconnectionDelay: 2000
});

// As soon as we connect, subscribe to block confirmations
ws.onopen = () => {
    const subscription = {
        "action": "subscribe",
        "topic": "confirmation"
    };
    ws.send(JSON.stringify(subscription));
};

// The node sent us a confirmation
ws.onmessage = msg => {
    console.log(msg.data);
    const data = JSON.parse(msg.data); // msg.data is a JSON string, so parse it before checking the topic
    if (data.topic === "confirmation") {
        console.log('Confirmed', data.message.hash);
    }
};
Impossible to increase buffer width to avoid dropping frames
OR
Unable to manage WS fragmentation correctly
Summary
My goal:
A very simple thing: a WebSocket tunnel that can transfer at least 2-3 MB of data per tunnel. I need to send a directory structure, so the data can be quite large.
The problem:
Sending WebSocket messages over 17 KB from A to B causes a "communication lost" or packet drop/loss; the connection/tunnel stays up, but no new messages can be sent over it from A to B, while B to A continues to work.
I must restart the tunnel to get functionality back.
Restarting the tunnel whenever the threshold is reached could be a workaround, but clearly I need to be able to send more than the threshold in a single transfer.
The "signal path":
GoLang app(Client) ---> :443 NGINX Proxy(Debian) ---> :8050 NodeJS WS Server
The tests:
Sending X messages/chunks of 1000 bytes each | messages are received up to the 17th chunk; the following ones are not received (see below)
The analyses:
Wireshark on Go app shows the flow of all packets
tcpdump, on Debian machine, set to listen on eth (public), shows the flow of all packets
tcpdump, on Debian machine, set to listen on lo interface (for rev proxy scanning), shows the flow of all packets
NodeJS/fastify-websocket ws.on('message', (msg)=>{console.log(msg)}) shows up to the 17th chunk
Code & Config:
GoLang app relevant part
websocket.DefaultDialer = &websocket.Dialer{
    Proxy:            http.ProxyFromEnvironment,
    HandshakeTimeout: 45 * time.Second,
    WriteBufferSize:  1000, // also tried with 2000, 5000, 10000, 11000
}

c, _, err := websocket.DefaultDialer.Dial(u.String(), nil)
if err != nil {
    panic(err) // the dial error was previously ignored
}
wsConn = c

bufferChunk := 1000
bufferSample := ""
for j := 7; j <= bufferChunk; j++ {
    bufferSample = bufferSample + "0"
}

i := 1
for {
    sendingBytes := i * bufferChunk
    fmt.Println(strconv.Itoa(sendingBytes) + " bytes sent")
    if err := wsConn.WriteMessage(websocket.TextMessage, []byte(bufferSample)); err != nil {
        fmt.Println("write error:", err) // surface write errors instead of discarding them
        break
    }
    i++
    time.Sleep(1000 * time.Millisecond)
}
NGINX conf:
upstream backend {
    server 127.0.0.1:8050;
}

server {
    server_name my.domain.com;

    large_client_header_buffers 8 32k;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 2m;
        proxy_buffer_size 10m;
        proxy_busy_buffers_size 10m;

        proxy_pass http://backend;
        proxy_redirect off;
        #proxy_buffering off; ### ON/OFF IT'S THE SAME

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade"; ### "upgrade" it's the same
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = my.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name my.domain.com;
    listen 80;
    return 404; # managed by Certbot
}
NodeJS code:
//index.js
const config = require("./config.js");
const fastify = require('fastify')();
const WsController = require("./controller");

fastify.register(require('fastify-websocket'), {
    /* these options are the same as the native nodeJS WS */
    options: {
        maxPayload: 10 * 1024 * 1024,
        maxReceivedFrameSize: 131072,
        maxReceivedMessageSize: 10 * 1024 * 1024,
        autoAcceptConnections: false
    }
});

fastify.ready(err => {
    if (err) throw err
    console.log("Server started")
    fastify.websocketServer
        .on("connection", WsController)
})
//controller.js
module.exports = (ws, req) => {
    ws.on("message", (msg) => {
        log("msg received"); // it is shown as long as the tunnel does not "fill" up to 17KB
    })
}
SOLVED
Updating fastify and fastify-websocket made the problem disappear. What a shame!
I came up with this solution by creating a new cloud instance and installing everything from scratch.
Just npm update.
Thank you all for your support
I have an application using socket.io with Node and Express. I'm also using AWS EC2 and Nginx.
I'm getting a timeout with socket.io.
The error is:
GET https://vusgroup.com/socket.io/?EIO=3&transport=polling&t=MnUHunS 504 (Gateway Time-out)
Express file:
var express = require('express');
var http = require('http');
var socketIo = require('socket.io');

var port = 8090;
var host = 'https://18.237.109.96';
var app = express(host);
var webServer = http.createServer(app);
...
// Start Socket.io so it attaches itself to Express server
var socketServer = socketIo.listen(webServer, {"log level": 1});

// listen on port
webServer.listen(port, function () {
    console.log('listening on http://localhost:' + port);
});
Nginx file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://18.237.109.96:8090/;
    }
}

server {
    server_name vusgroup.com www.vusgroup.com; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot

    ...
    ssl stuff
    ...

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://18.237.109.96:8090/;
    }

    location /socket.io/ {
        proxy_pass http://18.237.109.96:3000;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I've tried changing the proxy_pass for socket.io to http://18.237.109.96:8090; but that gave me a 400 error.
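For reference, the /socket.io/ block with that change looks like this (same headers as above; the commented-out line is the untried loopback variant I'm unsure about):

location /socket.io/ {
    proxy_pass http://18.237.109.96:8090;
    # proxy_pass http://127.0.0.1:8090; # untried variant: not sure whether nginx should target loopback here
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}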
I've been trying to get websockets to work on heroku over nginx and seem to be stuck. I'm using the nginx-buildpack which has worked great but I haven't had any success thus far getting an upgraded websocket connection.
Here is my nginx.conf.erb, which is just slightly modified from the buildpack example...
daemon off;
# Heroku dynos have 4 cores.
worker_processes 4;

events {
    use epoll;
    accept_mutex on;
    worker_connections 1024;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

    log_format l2met 'measure.nginx.service=$request_time request_id=$http_heroku_request_id';
    access_log logs/nginx/access.log l2met;
    error_log logs/nginx/error.log;
    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    # Must read the body in 5 seconds.
    client_body_timeout 5;
    proxy_read_timeout 950s;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen <%= ENV["PORT"] %>;
        server_name _;
        keepalive_timeout 5;

        location /test {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://127.0.0.1:3000;
        }

        location / {
            proxy_pass http://127.0.0.1:3001;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }
}
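(A side note on my own config: the map at the top of the http block defines $connection_upgrade, but the location / block hardcodes Connection "upgrade" instead of referencing it. A sketch of the variant that uses the map, with the same upstream and nothing else changed:)

location / {
    proxy_pass http://127.0.0.1:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
}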
To test this configuration, I've just been using the websocket.org Echo Test. Unfortunately it is not able to connect.
On port 3001 I've just got a simple socket.io server, and I'm logging any connections (none so far...)...
var app2 = require('http').createServer().listen(3001);
var io = require('socket.io').listen(app2);

io.on('connection', function(socketconnection){
    socketconnection.send("Connected to Server-1");
    console.log("connected to websocket server!");
    socketconnection.on('message', function(message){
        socketconnection.send(message);
    });
});
When I try connecting with the websockets tester this is the error that I get in my heroku logs...
*3 upstream prematurely closed connection while reading response header from upstream,
client: 10.140.231.210, server: _, request: "GET /?encoding=text HTTP/1.1",
upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"
Any ideas what I may be doing wrong here?
UPDATE 1:
OK, so I've found that if I alter the section of my config where it says:

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

to:

map $http_upgrade $connection_upgrade {
    default upgrade;
}
this seems to prevent that error from occurring. I suppose this is due to a blank response being returned on connect. However, now that I've done this I get a new error:
*5 connect() failed (111: Connection refused) while connecting to upstream,
client: 10.99.212.2, server: _, request: "GET /?encoding=text HTTP/1.1",
upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"
I suppose this may be an issue unrelated to the first, but I'm not sure!