why does my python wsgi server function run twice per request? - python-3.x

I wrote a simple web server with Python 3 and the wsgiref module:
#!/usr/bin/python3
from wsgiref.simple_server import make_server

port = 80
count = 0

def hello_app(environ, start_response):
    global count
    status = '200 OK'                            # HTTP status
    headers = [('Content-type', 'text/plain')]   # HTTP headers
    start_response(status, headers)
    response = "hello number {}".format(count)
    count += 1
    return [response.encode()]

httpd = make_server('', port, hello_app)
print("Serving HTTP on port {}...".format(port))
# Respond to requests until process is killed
httpd.serve_forever()
It works fine, but every time I make a request from a browser, the count increments by 2, not by 1. If I comment out the count += 1 line, it just stays at zero. Why?

The problem was a very silly one: the browser was automatically requesting the favicon file alongside every page load, so the handler ran twice per visit. Fixed by using the solution here.
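For reference, a minimal way to keep the counter honest without any extra files is to special-case the favicon path inside the app. This is just a sketch built on the code above, not the linked solution:

def hello_app(environ, start_response):
    global count
    if environ.get('PATH_INFO', '/') == '/favicon.ico':
        # Answer the favicon request without touching the counter.
        start_response('404 Not Found', [('Content-type', 'text/plain')])
        return [b'']
    start_response('200 OK', [('Content-type', 'text/plain')])
    response = "hello number {}".format(count)
    count += 1
    return [response.encode()]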

Related

Python Test Code for URL on IP-Address only

I'm a bit lost in Google results.
I have a web server running through a built-in Python module.
This works and provides an area to download automation configuration for devices.
However, I'd like to build an availability test to make the code more robust.
If the HTTP service is not available, it should fall back to a TFTP server.
I tested with urllib2, but the request to the IP address times out. If I try the same code with a named URI such as 'www.google.com', I get response code 200.
Any thoughts on a good solution?
In my code I already have a ping test.
Now I'd like to add a 'service available' test: if it passes, proceed with the GET.
It should work on an IP address only, e.g. 10.1.1.1:80.
Thank you for the help.
Code example:
import httplib2
h = httplib2.Http(".cache")
(resp_headers, content) = h.request("http://10.10.1.1:80/", "GET")
print(resp_headers)
print(content)
Try this. On a successful ping, the script will attempt to connect to an HTTP service. If the request times out or is not successful (e.g., status 404), it will call your TFTP function:
import shlex
import socket
import subprocess

import httplib2

ip_address = "10.1.1.1"

print("Attempting to ping...")
p = subprocess.Popen(shlex.split("ping -c 3 {0}".format(ip_address)),
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
output, _ = p.communicate()
rc = p.returncode
if rc != 0 or "100% packet loss" in output.decode():
    raise RuntimeError("Unable to connect to {0}.".format(ip_address))
print("Ping successful.")

print("Attempting to connect to an HTTP service...")
h = httplib2.Http(cache=".cache", timeout=10)
response = {}
content = ""
try:
    response, content = h.request("http://{0}:80".format(ip_address), method="GET")
    if 200 <= int(response["status"]) <= 299:
        print("Connected to HTTP service.")
        for key, value in response.items():
            print(key, ":", value)
        print(content.decode())
    else:
        print("Request not OK. Attempting to connect to a TFTP service...")
        # Call to TFTP function goes here
except socket.timeout:
    print("Request timed out. Attempting to connect to a TFTP service...")
    # Call to TFTP function goes here
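If the full httplib2 request is heavier than needed for a bare "is the service up?" test, a plain TCP connect check is a lighter-weight alternative. A minimal sketch (the 10.1.1.1:80 address comes from the question; the 5-second timeout is an arbitrary choice):

import socket

def http_service_available(ip, port=80, timeout=5):
    """Return True if a TCP connection to ip:port succeeds within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if http_service_available("10.1.1.1"):
    print("HTTP service reachable - proceed with the GET")
else:
    print("HTTP service unreachable - fall back to TFTP")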

My if & elif not working on sockets (python 3)

I tried to create a server that receives commands from the client, and to identify which command the client sent I used if & elif.
But when I run the program and send a command from the client, only the first command works (the one in the if). If I try any other command (from the elif or else branches),
the server just doesn't respond, as if it's waiting for something.
The Server Code:
import socket
import time
import random as rd

soc = socket.socket()
soc.bind(("127.0.0.1", 7777))
soc.listen(5)
(client_socket, address) = soc.accept()

if(client_socket.recv(4) == b"TIME"):
    client_socket.send(time.ctime().encode())
elif(client_socket.recv(4) == b"NAME"):
    client_socket.send(b"My name is Test Server!")
elif(client_socket.recv(4) == b"RAND"):
    client_socket.send(str(rd.randint(1, 10)).encode())
elif(client_socket.recv(4) == b"EXIT"):
    client_socket.close()
else:
    client_socket.send(b"I don't know what your command means")

soc.close()
The Client Code:
import socket
soc = socket.socket()
soc.connect(("127.0.0.1", 7777))
client_command_to_the_server = input("""
These are the options you can request from the server:
TIME --> Get the current time
NAME --> Get the server name
RAND --> Get a Random int
EXIT --> Stop the connection with the server
""").encode()
soc.send(client_command_to_the_server)
print(soc.recv(1024))
soc.close()
if(client_socket.recv(4) == b"TIME"):
    client_socket.send(time.ctime().encode())

This will check the first 4 bytes received from the client.

elif(client_socket.recv(4) == b"NAME"):
    client_socket.send(b"My name is Test Server!")

This will check the next 4 bytes received from the client. Contrary to what you assume, it will not check the first bytes again, since you called recv to read more bytes. If there are no more bytes (likely, since the first 4 bytes were already read) it will simply wait. Instead of calling recv for each comparison, you should call recv once and then compare the result against the various strings.
Apart from that: recv will only return up to the given number of bytes. It might also return less.
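In other words, call recv once and branch on the stored value. A minimal sketch of the reworked dispatch, assuming the socket setup and imports from the question above:

command = client_socket.recv(4)  # read the command exactly once

if command == b"TIME":
    client_socket.send(time.ctime().encode())
elif command == b"NAME":
    client_socket.send(b"My name is Test Server!")
elif command == b"RAND":
    client_socket.send(str(rd.randint(1, 10)).encode())
elif command == b"EXIT":
    client_socket.close()
else:
    client_socket.send(b"I don't know what your command means")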

Close an HTTP Proxy Server

I have a little problem with my code. I'm writing an HTTP proxy server; I send it an arbitrary number of HTTP requests, and I want the program to close when I stop sending.
I think the problem is in the accept, because the program keeps running forever.
I tried to put a recv after the accept to check whether it is empty, but the program never gets there.
My code is the following:
from socket import *
from _thread import *

MAX_DATA_RECV = 4096  # max number of bytes we receive at once

def start(port_5, my_port):
    s = socket(AF_INET, SOCK_STREAM)
    s.bind(('', my_port))
    s.listen(1)
    while 1:
        try:
            conn, client_addr = s.accept()
        except KeyboardInterrupt:
            print('\nProgram closed. Interrupted by the user')
            exit()
        proxy_thread(conn, client_addr)
    s.close()

def proxy_thread(conn, client_addr):
    # get the request from browser
    request = conn.recv(MAX_DATA_RECV).decode('utf-8')
    # parse the first line
    first_line = request.split('\n')[0]
    # get url
    url = first_line.split(' ')[1]
    # find the webserver and port
    http_pos = url.find("://")  # find pos of ://
    if http_pos == -1:
        temp = url
    else:
        temp = url[(http_pos + 3):]  # get the rest of url
    port_pos = temp.find(":")  # find the port pos (if any)
    # find end of web server
    webserver_pos = temp.find("/")
    if webserver_pos == -1:
        webserver_pos = len(temp)
    webserver = ""
    port = -1
    if port_pos == -1 or webserver_pos < port_pos:  # default port
        port = 80
        webserver = temp[:webserver_pos]
    else:  # specific port
        port = int((temp[(port_pos + 1):])[:webserver_pos - port_pos - 1])
        webserver = temp[:port_pos]
    print("Connect to:", webserver, port)
    # create a socket to connect to the web server
    s = socket(AF_INET, SOCK_STREAM)
    s.connect((webserver, port))
    s.send(request.encode())  # send request to webserver
    print(temp)
    while 1:
        # receive data from web server
        data = s.recv(MAX_DATA_RECV)
        if len(data) > 0:
            # send to browser
            conn.send(data)
        else:
            break
    s.close()
    conn.close()
If someone is able to help me, thanks in advance
"I want the program to close when I stop sending"
At the network level there is no implicit indicator that the other side will not send anymore. If all requests were sent through a single TCP connection, then the end of that connection could be treated as such an indicator. But you are using a new TCP connection for every request, so you need to define your own condition(s) for how the server should determine that the client will not send anymore.
This could, for example, be a timeout, i.e. if the client has not sent any more requests for 20 seconds then the client is treated as dead. Or it might be a special message from the client to signal the end, in which case your code needs to explicitly look for this message.
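As an illustration of the timeout option, the accept loop from the question could be given an idle timeout so it shuts down after a quiet period. A rough sketch (the 20-second value is only an example, and proxy_thread is the function from the question):

from socket import socket, timeout, AF_INET, SOCK_STREAM

def start(my_port, idle_timeout=20):
    s = socket(AF_INET, SOCK_STREAM)
    s.bind(('', my_port))
    s.listen(1)
    s.settimeout(idle_timeout)  # accept() raises socket.timeout after this many idle seconds
    while True:
        try:
            conn, client_addr = s.accept()
        except timeout:
            print("No request for {} seconds - shutting down.".format(idle_timeout))
            break
        proxy_thread(conn, client_addr)
    s.close()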

subprocess stdout into a variable for Ngrok

I'm trying to make a Python script which reports to me the port that gets opened on 0.tcp.ngrok.io. When I run this in the terminal (after moving the ngrok executable to /usr/local/bin):
ngrok tcp 22
I get this kind of output:
ngrok by @inconshreveable                          (Ctrl+C to quit)

Session Status    connecting
Version           2.2.4
Region            United States (us)
Web Interface     http://127.0.0.1:4041
Forwarding        tcp://0.tcp.ngrok.io:13014 -> localhost:22

Connections       ttl     opn     rt1     rt5     p50     p90
                  0       0       0.00    0.00    0.00    0.00
My first attempt was to capture the subprocess stdout into a variable, but since the output is a live, continuously updated display, stdout.read() never returns. This is the code:
import subprocess
ngrok = subprocess.Popen(['ngrok','tcp','22'],stdout = subprocess.PIPE)
output_text = ngrok.stdout.read() # script stops here forever
[**code for getting domain:port from output_text**]
How can I get a "snapshot" of stdout into a variable, without stopping ngrok?
Is there another way of doing this? (My next try would be a web scraper on localhost, but it would be nice to have this knowledge for other commands, such as top.)
Thanks in advance.
I had the same issue when I was working with ngrok http: all the alternatives failed, resulting in deadlocks, and I couldn't even print the output of the child process created for ngrok. Then, reading the ngrok docs, I noticed that there is a way to get the ngrok public URL from its local API with requests.
Adding the code below:
localhost_url = "http://localhost:4041/api/tunnels" #Url with tunnel details
tunnel_url = requests.get(localhost_url).text #Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl'] #Do the parsing of the get
So tunnel_url will contain what you need. Adding the imports, the full code would look like this:
import subprocess
import requests
import json
ngrok = subprocess.Popen(['ngrok','tcp','22'],stdout = subprocess.PIPE)
localhost_url = "http://localhost:4041/api/tunnels" #Url with tunnel details
tunnel_url = requests.get(localhost_url).text #Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl'] #Do the parsing of the get
Not enough reputation to comment, so feel free to update or comment on @Rodolfo's great answer and then delete this.
Perhaps they've changed the API slightly; this is what worked for me
(ngrok executable next to the script, serving HTTP on port 5000 and picking the HTTPS tunnelling URL):
import subprocess
import requests
import json
import time

if __name__ == '__main__':
    ngrok = subprocess.Popen(['./ngrok', 'http', '5000'],
                             stdout=subprocess.PIPE)
    time.sleep(3)  # allow ngrok to fetch the url from the server
    localhost_url = "http://localhost:4040/api/tunnels"  # url with tunnel details
    tunnel_url = requests.get(localhost_url).text  # get the tunnel information
    j = json.loads(tunnel_url)
    tunnel_url = j['tunnels'][1]['public_url']  # do the parsing of the get
    print(tunnel_url)
This is working for me:
import subprocess
from subprocess import PIPE

def open_tunnel():
    # PORT is the local port you are exposing, defined elsewhere in your script
    process = subprocess.Popen(f'/snap/bin/ngrok tcp {PORT} --log "stdout"',
                               shell=True, stdout=PIPE, stderr=PIPE)  # you can also use a list and put shell=False
    while True:
        output = process.stdout.readline()
        if not output and process.poll() is not None:
            break
        elif b'url=' in output:
            output = output.decode()
            output = output[output.index('url=tcp://') + 10:-1]
            return output.split(':')
I use /snap/bin/ngrok because my PyCharm does not recognize the PATH, sorry about that. You can replace it with just ngrok.

Python 3.4 - How to 'run' another Python script continuously, How to pass HTTP GET / POST to socket

This question is two-fold.
1. I need to run code for a socket server that's all defined and created in another .py file. Clicking Run in PyCharm works just fine, but if you exec() the file it just runs the bottom part of the code.
There are a few answers here but they are conflicting and for Python 2.
From what I can gather there are three ways:
- execfile(), which I think is Python 2 only.
- os.system() (but I've seen it said that handing this off to the OS is not the right approach)
- subprocess.Popen (unsure how to use this either)
I need this to run in the background; it creates threads for the sockets used by the recv portion of the overall program and listens on those ports so I can send commands to a router.
This is the complete code in question:
import sys
import socket
import threading
import time

QUIT = False

class ClientThread(threading.Thread):  # Class that implements the client threads in this server
    def __init__(self, client_sock):  # Initialize the object, save the socket that this thread will use.
        threading.Thread.__init__(self)
        self.client = client_sock

    def run(self):  # Thread's main loop. Once this function returns, the thread is finished and dies.
        global QUIT  # Need to declare QUIT as global, since the method can change it
        done = False
        cmd = self.readline()  # Read data from the socket and process it
        while not done:
            if 'quit' == cmd:
                self.writeline('Ok, bye. Server shut down')
                QUIT = True
                done = True
            elif 'bye' == cmd:
                self.writeline('Ok, bye. Thread closed')
                done = True
            else:
                self.writeline(self.name)
            cmd = self.readline()
        self.client.close()  # Make sure socket is closed when we're done with it
        return

    def readline(self):  # Helper function, reads up to 1024 chars from the socket and returns them as a string
        result = self.client.recv(1024)
        if result is not None:  # All letters in lower case and without end-of-line markers
            result = result.strip().lower().decode('ascii')
        return result

    def writeline(self, text):  # Helper func, writes the given string to the socket with an end-of-line marker at the end
        self.client.send(text.strip().encode("ascii") + b'\n')

class Server:  # Server class. Opens up a socket and listens for incoming connections.
    def __init__(self):  # Every time a new connection arrives, a new thread object is created and
        self.sock = None  # defers the processing of the connection to it
        self.thread_list = []

    def run(self):  # Server main loop: creates the server (incoming) socket, listens > creates thread to handle it
        all_good = False
        try_count = 0  # Attempt to open the socket
        while not all_good:
            if 3 < try_count:  # Tried more than 3 times without success, maybe port is in use by another program
                sys.exit(1)
            try:
                self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # Create the socket
                port = 80
                self.sock.bind(('127.0.0.1', port))  # Bind to the interface and port we want to listen on
                self.sock.listen(5)
                all_good = True
                break
            except socket.error:
                print('Socket connection error... Waiting 10 seconds to retry.')
                del self.sock
                time.sleep(10)
                try_count += 1

        print('Server is listening for incoming connections.')
        print('Try to connect through the command line with:')
        print('telnet localhost 80')
        print('and then type whatever you want.')
        print()
        print("typing 'bye' finishes the thread. but not the server",)
        print("eg. you can quit telnet, run it again and get a different ",)
        print("thread name")
        print("typing 'quit' finishes the server")

        try:
            while not QUIT:
                try:
                    self.sock.settimeout(0.500)
                    client = self.sock.accept()[0]
                except socket.timeout:
                    time.sleep(1)
                    if QUIT:
                        print('Received quit command. Shutting down...')
                        break
                    continue
                new_thread = ClientThread(client)
                print('Incoming Connection. Started thread ',)
                print(new_thread.getName())
                self.thread_list.append(new_thread)
                new_thread.start()
                for thread in self.thread_list:
                    if not thread.isAlive():
                        self.thread_list.remove(thread)
                        thread.join()
        except KeyboardInterrupt:
            print('Ctrl+C pressed... Shutting Down')
        except Exception as err:
            print('Exception caught: %s\nClosing...' % err)

        for thread in self.thread_list:
            thread.join(1.0)
        self.sock.close()

if "__main__" == __name__:
    server = Server()
    server.run()

print('Terminated')
Notes:
This is created in Python 3.4
I use Pycharm as my IDE.
One part of a whole.
2. So I'm creating a lightning detection system and this is how I expect it to be done:
- Listen to the port on the router forever
The above is done, but the issue with it is described in question 1.
- Pull numbers from a text file for sending text messages
Completed this also.
- Send HTTP GET / POST to the port on the router
The issue with this is that I'm unsure how the router will act if I send it in binary form. I suspect it won't matter; the input commands for sending over GSM are specific. Some clarification may be needed at some point.
- Receive the reply from the router and handle exceptions
- Listen for the relay trip for an alarm on a severe or close strike warning.
- If tripped, send messages to the phones stored in the text file
This would be the HTTP GET / POST that's sent.
- Wait for the reply from the router to indicate the messages have been sent; handle exceptions if that's not the case
- Go back to the start
There are a few issues I'd like some background knowledge on that are proving hard to find via good old Google and in the answers here on Stack.
How do I grab the received data from the router from another process running in another file? I guess I could write it to a text file and read that data back, but I'd rather not.
How to multi-process, and which method to use.
How to send an HTTP GET / POST to the socket on the router. According to the router manual, the request needed is of the form: "http://192.168.1.1/cgi-bin/sms_send?number=0037061212345&text=test" (see the sketch below).
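Based only on that example URL from the manual, the GET could be sent with the standard library along these lines. This is a hedged sketch: the IP, phone number, and text are the placeholder values from the question, and how the router actually answers needs to be checked against the RUT500 manual:

from urllib.parse import urlencode
from urllib.request import urlopen
from urllib.error import URLError

def send_sms(router_ip, number, text, timeout=10):
    """Issue the router's sms_send CGI call and return its raw reply (sketch)."""
    params = urlencode({'number': number, 'text': text})
    url = "http://{}/cgi-bin/sms_send?{}".format(router_ip, params)
    try:
        with urlopen(url, timeout=timeout) as reply:
            return reply.read().decode(errors='replace')
    except URLError as err:
        # Exception management goes here (retry, log, fall back, etc.)
        return "request failed: {}".format(err)

# Example using the values from the manual excerpt:
# print(send_sms("192.168.1.1", "0037061212345", "test"))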
Notes: Using Sockets, threading, sys and time on Python 3.4/Pycharm IDE.
Lightning detector used is LD-250 with RLO Relay attached.
RUT500 Teltonica router used.
Any direction/comments, errors spotted, or anything I'm drastically missing would be greatly appreciated! Thank you very much in advance :D Constructive criticism is greatly encouraged!
Okay, so for the first part, none of the approaches suggested in the OP turned out to be my answer. Running the script as-is from os.system() or exec(), without declaring a new server object, just ran the code below __name__, which essentially only printed out "Terminated". Getting around this was simple: as everything was already put into classes, all I had to do was create a new thread. This is how it was done:
import Socketthread2
new_thread = Socketthread2.Server() # Effectively declaring a new server class object.
new_thread.run()
This allowed the script to run from the beginning by initialising the code from the start in Socketthread2, which also contains the ClientThread class, so that was run too. Running this at the start of the parent program allowed it to run in the background and then continue with the new code in the parent while the rest of the script stayed active.
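One caveat: calling Server().run() directly blocks until the server exits, so if the server truly needs to keep working in the background while the parent continues, wrapping it in a threading.Thread is one option. A sketch, not part of the original answer, assuming the Socketthread2 module name used above:

import threading

import Socketthread2

server = Socketthread2.Server()
server_thread = threading.Thread(target=server.run, daemon=True)
server_thread.start()  # Server.run() now executes in a background thread

# The parent program continues here while the server keeps listening.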
