I'm using Autobahn with asyncio to build a lightweight socket server separate from my Flask app. I have it all working, but in order to route the traffic accordingly I put the two servers behind HAProxy. Requests are reaching the server successfully, but then the connection closes and the server reports:
WebSocket connection closed: connection was closed uncleanly (port 9001 in HTTP Host header 'localhost:9001' does not match server listening port 4000)
So, the header does not match what the server is expecting. Is there any way to change this?
I am using Autobahn-python version 0.10.9 with Python 3.4. Here is my server code:
from autobahn.asyncio.websocket import WebSocketServerProtocol, \
    WebSocketServerFactory
import asyncio
import json


class SimpleServer(WebSocketServerProtocol):

    def onConnect(self, request):
        print("Client connecting: {0}".format(request.peer))

    def onOpen(self):
        print("WebSocket connection open.")

    @asyncio.coroutine
    def onMessage(self, payload, isBinary):
        # Echo the message back, text or binary alike
        if not isBinary:
            self.sendMessage(payload, isBinary)
        else:
            self.sendMessage(payload, isBinary)

    def onClose(self, wasClean, code, reason):
        print("WebSocket connection closed: {0}".format(reason))


if __name__ == '__main__':
    factory = WebSocketServerFactory(u"ws://127.0.0.1:4000", debug=False)
    factory.protocol = SimpleServer

    loop = asyncio.get_event_loop()
    coro = loop.create_server(factory, '127.0.0.1', 4000)
    server = loop.run_until_complete(coro)

    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        server.close()
        loop.close()
HAProxy is version 1.4.18 and the config is:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user root
    group sudo
    debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend public
    bind *:9001
    acl is_websocket hdr(Upgrade) -i WebSocket
    use_backend ws if is_websocket
    default_backend www

backend www
    timeout server 30s
    server www1 127.0.0.1:3000

backend ws
    timeout server 600s
    server ws1 127.0.0.1:4000
I am running Ubuntu 12.04. Thanks for the help.
OK, for anyone stuck on this: it was fixed by changing
factory = WebSocketServerFactory(u"ws://127.0.0.1:4000", debug=False)
to:
factory = WebSocketServerFactory()
Apparently, specifying the URL in the factory causes Autobahn to check the incoming HTTP Host header (including its port) against it, which fails behind the proxy.
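For reference, a minimal sketch of the corrected startup code (same SimpleServer protocol as above; only the factory line changes):

if __name__ == '__main__':
    # No URL passed, so Autobahn does not compare the HTTP Host header
    # sent by HAProxy against the port the server is listening on.
    factory = WebSocketServerFactory()
    factory.protocol = SimpleServer

    loop = asyncio.get_event_loop()
    coro = loop.create_server(factory, '127.0.0.1', 4000)
    server = loop.run_until_complete(coro)
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        server.close()
        loop.close()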
Related
I have a CherryPy website that is running well over HTTPS. I can run the same server on the HTTP port, without forwarding, like this:
from cherrypy._cpserver import Server
server2 = Server()
server2.socket_host = "123.123.123.123"
server2.socket_port = 80
server2.subscribe()
I can run another CherryPy instance on the HTTP port and forward it to HTTPS by raising cherrypy.HTTPRedirect in the class:
class HelloWorld(object):

    @cherrypy.expose
    def index(self):
        raise cherrypy.HTTPRedirect("https://example.com", status=301)
Is there a way to forward HTTP to HTTPS without running another server or using a 3rd-party service?
I am new to Vagrant, using Windows 10. I started a course on Udacity (Full Stack Foundations). I have created a simple web server script, but when I test it on localhost:8080 I get the error "site can't be reached". I've tried a lot but am unable to find a solution. netstat -aon shows 0.0.0.0:8080 listening on PID 1072, but that PID is not among the running processes (checked via Ctrl+Alt+Del).
Web server script (webserver.py):
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer


class WebServerHandler(BaseHTTPRequestHandler):

    def do_GET(self):
        if self.path.endswith("/hello"):
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            message = ""
            message += "<html><body>Hello!</body></html>"
            self.wfile.write(message)
            print message
            return
        if self.path.endswith("/hola"):
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            message = ""
            message += "<html><body>Hola!</body></html>"
            self.wfile.write(message)
            print message
            return
        else:
            self.send_error(404, 'File Not Found: %s' % self.path)


def main():
    try:
        port = 8080
        server = HTTPServer(('', port), WebServerHandler)
        print "Web Server running on port %s" % port
        server.serve_forever()
    except KeyboardInterrupt:
        print " ^C entered, stopping web server...."
        server.socket.close()


if __name__ == '__main__':
    main()
The code executes and terminates properly with no errors, but when I test it at http://localhost:8080/hello the site can't be reached.
Help me. Thanks in advance.
Try to access the link with curl from inside the VM: curl http://localhost:8080/hello.
If the response is OK, it means either port 8080 is not forwarded to the host, or it is blocked by the VM's firewall. You can open the port for testing purposes using ufw.
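For example (assuming ufw is the firewall in use inside the VM; adjust the port if yours differs):

# run these inside the Vagrant VM
curl -v http://localhost:8080/hello    # should return the "Hello!" page
sudo ufw allow 8080/tcp                # open the port for testing
sudo ufw status                        # confirm the rule is active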
I changed the host and port in the Python App from this:
run(host='localhost', port=8080, debug=True)
to this:
run(host='0.0.0.0', port=5002, debug=True)
and in the vagrantfile from this:
config.vm.network "forwarded_port", guest: 8080, host: 8080, host_ip: "127.0.0.1"
to this:
config.vm.network "forwarded_port", guest: 5002, host: 5002
Then test a request, and do NOT forget to add the resource (the URL endpoint), 'hello' in my example: http://0.0.0.0:5002/hello
This fixed the issue for me.
I started working on a simple server and client script. I tested the script on my local network and it worked great: the server would start and wait for a client connection, and as soon as a client connected it would let me proceed.
I then decided to test it over the internet, and this is where the problems started. I am running the server on Ubuntu and the client on a Windows machine.
Server Connection Code:
import socket
import sys


# Create a socket for the connection
def socket_create():
    try:
        global host
        global port
        global s
        host = ''
        port = 5698
        s = socket.socket()
        print("Socket created.")
    except socket.error as msg:
        print("Socket creation error: " + str(msg))


# Bind the created socket to a port and listen for a connection
def socket_bind():
    try:
        global host
        global port
        global s
        s.bind((host, port))
        print("Waiting for connection")
        s.listen(5)
    except socket.error as msg:
        print("Socket binding error: " + str(msg) + "\n Retrying...")
        socket_bind()


# Establish a connection with the client
def socket_accept():
    conn, address = s.accept()
    print("Connection has been established | " + "IP " + address[0] + " | Port " + str(address[1]))
    send_command(conn)
    conn.close()
Client Connection Code:
import os
import socket
import subprocess


# Create a socket
def socket_create():
    try:
        global host
        global port
        global s
        host = 'My Internet IP'
        port = 5987
        s = socket.socket()
    except socket.error as msg:
        print("Socket creation error: " + str(msg))


# Connect to a remote socket
def socket_connect():
    try:
        global host
        global port
        global s
        s.connect((host, port))
    except socket.error as msg:
        print("Socket connection error: " + str(msg))
And the error:
[Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
I never get the "Connection has been established" print from the server; instead I get that error.
Can anyone see anything in the code that might be causing the problem?
After setting up port forwarding on the router, do I have to do any other configuration on Ubuntu itself? The ports I should open are TCP, right?
After opening the port on the router, if I use a service like http://www.canyouseeme.org/, will it show instantly that the port is open, or only while I'm running the server and waiting for a connection?
I managed to fix the problem. Here is an in-depth guide on how I did it.
The problem: Even after opening a port on your Router configs, you still can't see the port open on your running service.
The solution: Port Mapper.
Things to note: I had to run Port Mapper on Ubuntu, because running it on Windows didn't seem to work for me. Also, if you let your computer sleep or shut down, you'll have to reopen the ports when you turn it on again (but don't worry, it is just the click of a button).
What you'll need: https://sourceforge.net/projects/upnp-portmapper/
First, simply run 'java' in the terminal to make sure you have Java installed, or in order to install it (directions will appear on screen).
From the given link, download the Portmapper.jar.
After downloading it, simply run 'java -jar Portmapper.jar' in the terminal to open up the GUI.
After opening the GUI, press Connect to automatically connect to the router.
All the currently open ports will now appear on screen. We now want to look at the port mapping presets.
In the port mapping presets, go ahead and press Create.
Here, give the preset a name. Then fill in the Remote Host if you want to allow connections from a specific IP only, or leave it empty for any IP. The internal client will be your server's network IP (in my case, because I'm running the server on the same machine as Port Mapper, I'll tick Use Local Host).
Now we'll go ahead and add a new port as a TCP connection. Here we can have the external and internal ports with equal or different values. Just remember that the internal port (your machine's port) is the one you'll use on your server, and the external port (your router's open port) is the one you'll use on your clients or whatever you are connecting to your server.
After this, simply save the preset, choose it and press Use. If you now click Update under the ports list, you'll see your new open port. Just to make sure, you can get your server running and awaiting connections, go to http://www.canyouseeme.org/, input the port, and there you go.
Do remember that after shutting down or putting the computer to sleep, you'll have to go back to Port Mapper and click Use on the preset you want again (depending on what port you want).
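If you prefer to verify from code instead of the website, a small sketch like this (run from a machine outside your own network; the IP and port below are placeholders for your public IP and the external port you mapped) tells you whether anything is accepting connections on that port:

import socket

HOST = "your.public.ip.here"   # placeholder: your router's public IP
PORT = 5698                    # placeholder: the external port you mapped

try:
    # A plain TCP connect succeeds only if the port is forwarded/mapped
    # and your server is running and listening behind it.
    with socket.create_connection((HOST, PORT), timeout=5):
        print("Port is reachable")
except OSError as exc:
    print("Port is not reachable:", exc)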
I've got a server setup in Node.js which looks like the picture below:
Now I want to do two things which seem to be possible with HAProxy:
Use only one port no matter which server a client wants to access. I want to use external port 8080 for all non-SSL traffic. (All SSL traffic should use port 443.)
Enable SSL on the SockJS server and the Express server.
Please note that all my servers are running on the same Amazon EC2 instance, so I want to route the traffic internally.
This is my haproxy.cfg so far:
defaults
    mode http
    # Set timeouts to your needs
    timeout client 10s
    timeout connect 10s
    timeout server 10s

frontend all 0.0.0.0:8080
    mode http
    timeout client 120s
    option forwardfor
    # Fake connection:close, required in this setup.
    option http-server-close
    option http-pretend-keepalive

    acl is_sockjs path_beg /echo /broadcast /close
    acl is_stats path_beg /stats

    use_backend sockjs if is_sockjs
    use_backend stats if is_stats
    default_backend express

backend sockjs
    # Load-balance according to a hash created from the first two
    # directories in the URL path. For example, requests going to /1/
    # should be handled by a single server (assuming the resource prefix
    # is one level deep, like "/echo").
    balance uri depth 2
    timeout server 120s
    server srv_sockjs1 127.0.0.1:8081

backend express
    balance roundrobin
    server srv_static 127.0.0.1:8008

backend stats
    stats uri /stats
    stats enable
I can't figure out how to route the SSL traffic and the traffic to the TCP server (internal port 8080).
Any ideas?
Your setup is kind of hard for me to understand. If I understand your goals correctly, you want to serve your web service through SSL, hence port 443, and from 443 connect to port 8080 internally. If that is the case, then the following configuration might be what you are looking for. It does not really use port 8080; instead it connects directly to your express backend. You don't really need to have port 8080 exposed (unless you have special reasons for doing so), because you can just use the backend servers directly in the frontend section.
Note that this only works for HAProxy 1.5+. If you are using an older version of HAProxy, you should put something in front of HAProxy to handle the SSL connection before it reaches HAProxy (but I strongly suggest 1.5, because it makes your setup less complex).
frontend ssl
    bind *:443 ssl crt /path/to/cert.pem ca-file /path/to/cert.pem
    timeout client 120s
    option forwardfor
    # Fake connection:close, required in this setup.
    option http-server-close
    option http-pretend-keepalive

    acl is_sockjs path_beg /echo /broadcast /close
    acl is_stats path_beg /stats

    use_backend sockjs if is_sockjs
    use_backend stats if is_stats
    default_backend express
I have Ubuntu 12.04 LTS running. My web server is Tomcat 7.0.42 and I use HAProxy as the proxy server. My application is a servlet application which uses WebSockets.
Sometimes when I request my page I get a "502 Bad Gateway" error on some resources (not on all, but on some). I think this has something to do with my HAProxy configuration, which is the following:
global
    maxconn 4096 # Total max connections. This is dependent on ulimit
    nbproc 1

defaults
    mode http
    option http-server-close
    option httpclose
    # option redispatch
    no option checkcache # test against 502 error

frontend all 0.0.0.0:80
    timeout client 86400000
    default_backend www_backend
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend socket_backend if is_websocket

backend www_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout server 30000
    timeout connect 4000
    server apiserver localhost:8080 weight 1 maxconn 1024 check

backend socket_backend
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000
    server apiserver localhost:8080 weight 1 maxconn 1024 check
What do I have to change to prevent the 502 error?
First, enable HAProxy logging. It will simply tell you why it is giving the 502s. My guess is that the backend "localhost:8080" is either not able to keep up, or not able to get a connection within 4000 ms ("timeout connect 4000").
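A minimal sketch of what enabling logging can look like (assuming a local syslog daemon accepts log messages on 127.0.0.1; the facility names here are just examples):

global
    log 127.0.0.1 local0

defaults
    log global
    mode http
    option httplog

The resulting HTTP log lines include the status code, the backend/server chosen and the session termination flags, which usually point directly at the cause of a 502.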
You may have exceeded some of the default limits in HAProxy. Try adding the following to the global section:
tune.maxrewrite 4096
tune.http.maxhdr 202
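For context, those lines go into the global section of the config above, for example (a sketch, reusing the existing global settings):

global
    maxconn 4096
    nbproc 1
    tune.maxrewrite 4096
    tune.http.maxhdr 202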
You should replace your defaults with these:
# Set balance mode
balance random
# Set http mode
mode http
# Set http keep alive mode (https://cbonte.github.io/haproxy-dconv/2.3/configuration.html#4)
option http-keep-alive
# Set http log format
option httplog
# Don't log empty lines
option dontlognull
# Dissociate client from dead server
option redispatch
# Insert X-Forwarded-For header
option forwardfor
Don't use http-server-close; it is likely the cause of your problems.
With keep-alive, HAProxy keeps a connection open to both the client and the server.
It works fine with WebSockets as well.
And if you enable the check on the server, you also need to configure it with something like this:
# Enable http check
option httpchk
# Use server configuration
http-check connect default
# Use HEAD on / with HTTP/1.1 protocol for Host example.com
http-check send meth HEAD uri / ver HTTP/1.1 hdr Host example.com
# Expect status 200 to 399
http-check expect status 200-399
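Put together with the www_backend from the config above, that might look like this (a sketch; the http-check connect/send directives require a recent HAProxy version):

backend www_backend
    balance roundrobin
    option forwardfor
    # Enable http check
    option httpchk
    # Use server configuration
    http-check connect default
    # Use HEAD on / with HTTP/1.1 protocol for Host example.com
    http-check send meth HEAD uri / ver HTTP/1.1 hdr Host example.com
    # Expect status 200 to 399
    http-check expect status 200-399
    server apiserver localhost:8080 weight 1 maxconn 1024 check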