Python Threading with Selenium - multithreading

I'm attempting to use Splinter (a package for Selenium) in multiple instances. However, instances aren't starting until the first thread completely finishes. So, the browser instance opens up, loads the page, sleeps, and only then does the 2nd thread start.
I need to run instances at the same time.
import threading
import time
from splinter import Browser

def worker(proxy, port):
    proxy_settings = {"network.proxy.type": 1,
                      "network.proxy.ssl": proxy,
                      "network.proxy.ssl_port": port,
                      "network.proxy.socks": proxy,
                      "network.proxy.socks_port": port,
                      "network.proxy.socks_remote_dns": True,
                      "network.proxy.ftp": proxy,
                      "network.proxy.ftp_port": port
                      }
    browser = Browser('firefox',
                      profile_preferences=proxy_settings,
                      capabilities={'pageLoadStrategy': 'eager'})  # eager or normal
    print("Proxy: ", proxy, ":", proxy)
    browser.visit("https://mxtoolbox.com/whatismyip/?" + proxy)
    time.sleep(2)

ip1 = '22.22.222.222'
ip2 = '222.222.22.222'
p1 = int(2222)
p2 = int(2222)
p = []
p.append((ip1, p1))
p.append((ip2, p2))

x = 0
for pp in p:
    threading.Thread(target=worker(pp[0], pp[1])).start()
In my longer code (the code above is my attempt to figure out why I can't multi-thread), I'm also getting a warning in my editor:
Local variable 'browser' value is not used

That's because you're not actually starting a thread with the worker function: target=worker(pp[0], pp[1]) calls worker right away in the main thread and passes its return value to Thread. The last line should rather look like:
threading.Thread(target=worker, args=(pp[0], pp[1])).start()
As for your editor warning, I'd say it is editor-dependent and hard to diagnose without more information (for what it's worth, running pylint does not raise such a warning).
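A minimal sketch of the corrected loop (same variable names as in the question), joining the threads so the script waits for both browsers:
threads = []
for pp in p:
    t = threading.Thread(target=worker, args=(pp[0], pp[1]))
    t.start()              # both workers now start immediately and run concurrently
    threads.append(t)

for t in threads:
    t.join()               # wait for every browser to finish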

Related

How to update the value of pymodbus tcp server according to the message subscribed by zmq?

I am a newbie. In my current project, when the client end decides to start the modbus service, I create a separate process for the modbus service. The values are then read in the parent process and passed along via ZeroMQ PUB/SUB, and I now want to update the values of the modbus registers inside the modbus service process.
I tried the approach from pymodbus's updating_server.py example, using twisted.internet.task.LoopingCall() to update the register values, but then the client can no longer connect to my server, and I don't know why.
This is what the log shows when the client connects, with the server established via LoopingCall().
Then I tried to put both the updating and StartTcpServer in the async loop, but the update routine was only entered once after startup and never again.
Currently I'm using LoopingCall() to handle the updates, but I don't think this is a good approach.
This is the code where I initialize the PUB socket and the readers for all the tag addresses.
from loop import cycle
import asyncio
from multiprocessing import Process
from persistence import models as pmodels
from persistence import service as pservice
from persistence import basic as pbasic
import zmq
from zmq.asyncio import Context
from common import logging
from server.modbustcp import i3ot_tcp as sertcp
import common.config as cfg
import communication.admin as ca
import json
import os
import signal
from datetime import datetime
from server.opcuaserver import i3ot_opc as seropc

async def main():
    future = []
    task = []
    global readers, readers_old, task_flag
    logger.debug("connecting to database and create table.")
    pmodels.connect_create()
    logger.debug("init read all address to create loop task.")
    cycle.init_readers(readers)
    ctx = Context()
    publisher = ctx.socket(zmq.PUB)
    logger.debug("init publish [%s].", addrs)
    publisher.bind(addrs)
    readers_old = readers.copy()
    for reader in readers:
        task.append(asyncio.ensure_future(
            cycle.run_readers(readers[reader], publisher)))
    if not len(task):
        task_flag = True
    logger.debug("task length [%s - %s].", len(task), task)
    opcua_server = LocalServer(seropc.opc_server, "opcua")
    future = [
        start_get_all_address(),
        start_api(),
        create_address_loop(publisher, task),
        modbus_server(),
        opcua_server.run()
    ]
    logger.debug("run loop...")
    await asyncio.gather(*future)

asyncio.run(main(), debug=False)
This is to get the device tag value and publish it.
async def run_readers(reader, publisher):
    while True:
        await reader.run(publisher)

class DataReader:
    def __init__(self, freq, clients):
        self._addresses = []
        self._frequency = freq
        self._stop_signal = False
        self._clients = clients
        self.signature = sign_data_reader(self._addresses)

    async def run(self, publisher):
        while not self._stop_signal:
            for addr in self._addresses:
                await addr.read()
                data = {
                    "type": "value",
                    "data": addr._final_value
                }
                publisher.send_pyobj(data)
                if addr._status:
                    if addr.alarm_log:
                        return_alarm_log = pbasic.get_log_by_time(addr.alarm_log['date'])
                        if return_alarm_log:
                            data = {
                                "type": "alarm",
                                "data": return_alarm_log
                            }
                            publisher.send_pyobj(data)
                    self.data_send(addr)
            logger.debug("run send data")
            await asyncio.sleep(int(self._frequency))

    def stop(self):
        self._stop_signal = True
These are the modbus server imports and the server code:
from common import logging
from pymodbus.server.asynchronous import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
from persistence import service as pservice
from persistence import basic as pbasic
import zmq
import common.config as cfg
import struct
import os
import signal
from datetime import datetime
from twisted.internet.task import LoopingCall

def updating_writer(a):
    logger.info("in updates of modbus tcp server.")
    context = a[0]
    # while True:
    if check_pid(os.getppid()) is False:
        os.kill(os.getpid(), signal.SIGKILL)
    url = ("ipc://{}".format(cfg.get('ipc', 'pubsub')))
    logger.debug("connecting to [%s].", url)
    ctx = zmq.Context()
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(url)
    subscriber.setsockopt(zmq.SUBSCRIBE, b"")
    slave_id = 0x00
    msg = subscriber.recv_pyobj()
    logger.debug("updates.")
    if msg['data']['data_type'] in modbus_server_type and msg['type'] == 'value':
        addr = pservice.get_mbaddress_to_write_value(msg['data']['id'])
        if addr:
            logger.debug(
                "local address and length [%s - %s].",
                addr['local_address'], addr['length'])
            values = get_value_by_type(msg['data']['data_type'], msg['data']['final'])
            logger.debug("modbus server updates values [%s].", values)
            register = get_register(addr['type'])
            logger.debug(
                "register [%d] local address [%d] and value [%s].",
                register, addr['local_address'], values)
            context[slave_id].setValues(register, addr['local_address'], values)
    # time.sleep(1)

def tcp_server(pid):
    logger.info("Get server configure and device's tags.")
    st = datetime.now()
    data = get_servie_and_all_tags()
    if data:
        logger.debug("register address space.")
        register_address_space(data)
    else:
        logger.debug("no data to create address space.")
    length = register_number()
    store = ModbusSlaveContext(
        di=ModbusSequentialDataBlock(0, [0] * length),
        co=ModbusSequentialDataBlock(0, [0] * length),
        hr=ModbusSequentialDataBlock(0, [0] * length),
        ir=ModbusSequentialDataBlock(0, [0] * length)
    )
    context = ModbusServerContext(slaves=store, single=True)
    identity = ModbusDeviceIdentification()
    identity.VendorName = 'pymodbus'
    identity.ProductCode = 'PM'
    identity.VendorUrl = 'http://github.com/bashwork/pymodbus/'
    identity.ProductName = 'pymodbus Server'
    identity.ModelName = 'pymodbus Server'
    identity.MajorMinorRevision = '2.2.0'
    # ----------------------------------------------------------------------- #
    # set loop call and run server
    # ----------------------------------------------------------------------- #
    try:
        logger.debug("thread start.")
        loop = LoopingCall(updating_writer, (context, ))
        loop.start(1, now=False)
        # process = Process(target=updating_writer, args=(context, os.getpid(),))
        # process.start()
        address = (data['tcp_ip'], int(data['tcp_port']))
        nt = datetime.now() - st
        logger.info("modbus tcp server begin has used [%s] s.", nt.seconds)
        pservice.write_server_status_by_type('modbus', 'running')
        StartTcpServer(context, identity=identity, address=address)
    except Exception as e:
        logger.debug("modbus server start error [%s].", e)
        pservice.write_server_status_by_type('modbus', 'closed')
This is the code I created for the modbus process.
def process_stop(p_to_stop):
    global ptcp_flag
    pid = p_to_stop.pid
    os.kill(pid, signal.SIGKILL)
    logger.debug("process has closed.")
    ptcp_flag = False

def ptcp_create():
    global ptcp_flag
    pid = os.getpid()
    logger.debug("sentry pid [%s].", pid)
    ptcp = Process(target=sertcp.tcp_server, args=(pid,))
    ptcp_flag = True
    return ptcp

async def modbus_server():
    logger.debug("get mosbuc server's status.")
    global ptcp_flag
    name = 'modbus'
    while True:
        ser = pservice.get_server_status_by_name(name)
        if ser['enabled']:
            if ser['tcp_status'] == 'closed' or ser['tcp_status'] == 'running':
                tags = pbasic.get_tag_by_name(name)
                if len(tags):
                    if ptcp_flag is False:
                        logger.debug("[%s] status [%s].", ser['tcp_name'], ptcp_flag)
                        ptcp = ptcp_create()
                        ptcp.start()
                    else:
                        logger.debug("modbus server is running ...")
                else:
                    logger.debug("no address to create [%s] server.", ser['tcp_name'])
                    pservice.write_server_status_by_type(name, "closed")
            else:
                logger.debug("[%s] server is running ...", name)
        else:
            if ptcp_flag:
                process_stop(ptcp)
                logger.debug("[%s] has been closed.", ser['tcp_name'])
                pservice.write_server_status_by_type(name, "closed")
            logger.debug("[%s] server not allowed to running.", name)
        await asyncio.sleep(5)
This is the command that Docker runs.
/usr/bin/docker run --privileged --network host --name scout-sentry -v /etc/scout.cfg:/etc/scout.cfg -v /var/run:/var/run -v /sys:/sys -v /dev/mem:/dev/mem -v /var/lib/scout:/data --rm shulian/scout-sentry
This is the Docker configuration file /etc/scout.cfg.
[scout]
mode=product
[logging]
level=DEBUG
[db]
path=/data
[ipc]
cs=/var/run/scout-cs.sock
pubsub=/var/run/pubsub.sock
I want the modbus value update function to be triggered whenever a message arrives from ZeroMQ, and the value to be updated correctly.
Let's start from the inside out.
Q : ...this will make it impossible for me to connect to my server with the client. I don't know why?
ZeroMQ is a smart broker-less messaging / signaling middleware or better a platform for smart-messaging. In case one feels not so much familiar with the art of Zen-of-Zero as present in ZeroMQ Architecture, one may like to start with ZeroMQ Principles in less than Five Seconds before diving into further details.
The Basis :
The Scalable Formal Communication Archetype, borrowed from ZeroMQ PUB/SUB, does not come at zero-cost.
This means that each infrastructure setup ( both on the PUB-side and on the SUB-side ) takes some rather remarkable time, and no one can be sure of when the AccessNode configuration results in an RTO-state. So the SUB-side (as proposed above) ought to be either a permanent entity, or the user shall not expect to make it RTO in zero-time each time a twisted.internet.task.LoopingCall() gets reinstated.
Preferred way: instantiate your (semi-)persistent zmq.Context(), get it configured so as to serve the <aContextInstance>.socket( zmq.PUB ) as needed, a minimum safeguarding setup being the <aSocketInstance>.setsockopt( zmq.LINGER, 0 ) and all transport / queuing / security-handling details, that the exosystem exposes to your code ( whitelisting and secure sizing and resources protection being the most probable candidates - but details are related to your application domain and the risks that you are willing to face being prepared to handle them ).
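A minimal sketch of such a (semi-)persistent PUB-side setup, reusing the ipc endpoint from the question's /etc/scout.cfg ( the endpoint string is the only detail taken from there; everything else is a generic sketch ):
import zmq

ctx = zmq.Context()                              # one Context per process, kept alive and re-used
pub = ctx.socket(zmq.PUB)
pub.setsockopt(zmq.LINGER, 0)                    # minimum safeguarding setup, as noted above
pub.bind("ipc:///var/run/pubsub.sock")           # endpoint from the [ipc] pubsub entry in the config
# ... keep ctx / pub alive for the whole process lifetime, never re-create them per call ...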
ZeroMQ strongly discourages from sharing ( zero-sharing ) <aContextInstance>.socket()-instances, yet the zmq.Context()-instance can be shared / re-used (ref. ZeroMQ Principles... ) / passed to more than one threads ( if needed ).
All <aSocketInstance>{ .bind() | .connect() }-methods are expensive, so try to set up the infrastructure AccessPoint(s) and their due error-handling well before one tries to use their mediated communication services.
Each <aSocketInstance>.setsockopt( zmq.SUBSCRIBE, ... ) is expensive in that it may take ( depending on the (local/remote) version ) the form of a non-local, distributed behaviour - the local side "sets" the subscription, yet the remote side has to "be informed" about such a state-change and "implement" the operations in line with the actual (propagated) state. While in earlier versions all messages were dispatched from the PUB-side and all the SUB-side(s) were flooded with such data and left to do the "filtering" in a local-side internal Queue, the newer versions "implement" the Topic-Filter on the PUB-side, which further increases the latency of setting the new modus-operandi in action.
Next comes the modus-operandi: how <aSocketInstance>.recv() gets results:
In their default API-state, .recv()-methods are blocking, potentially infinitely blocking, if no messages arrive.
Solution: avoid blocking forms of calling the ZeroMQ <aSocket>.recv()-methods by always using the zmq.NOBLOCK-modes thereof, or rather test the presence or absence of any expected message(s) with the <aSocket>.poll( <timeout>, zmq.POLLIN )-methods available, with zero or controlled timeouts. This makes you the master who decides about the flow of code-execution. Not doing so, you knowingly let your code depend on an external sequence ( or absence ) of events, and your architecture is prone to awful problems with handling infinite blocking-states ( or potentially unsalvageable many-agents' distributed-behaviour live-locks or dead-locks ).
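A hedged sketch of a non-blocking SUB-side receive along those lines ( the endpoint matches the question's ipc config; the 100 ms timeout is purely illustrative ):
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.LINGER, 0)
sub.setsockopt(zmq.SUBSCRIBE, b"")
sub.connect("ipc:///var/run/pubsub.sock")

if sub.poll(100, zmq.POLLIN):                    # wait at most 100 ms for a message
    msg = sub.recv_pyobj(zmq.NOBLOCK)            # guaranteed not to block here
    # ... update the Modbus context from msg ...
else:
    pass                                         # nothing arrived: the loop stays responsive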
Avoid uncontrolled cross-breeding of event-loops - like passing ZeroMQ-driven loops into an external "callback"-alike handler or async-decorated code-blocks, where the stack of (non-)blocking logics may wreak havoc on the original idea just by throwing the system into an unresolvable state, where events miss the expected sequence of events and live-locks are unsalvageable, or just the first pass happens to go through.
Stacking asyncio-code with twisted-LoopingCall()-s and async/await-decorated code + ZeroMQ blocking .recv()-s is either a Piece-of-Filligrane-Precise-Art-of-Truly-a-Zen-Master, or a sure ticket to Hell - with all respect to the Art-of-Truly-Zen-Masters :o)
So, yes, complex thinking is needed -- welcome to the realms of distributed-computing!

How can I get a new IP from Tor for every request in threads?

I'm trying to use a Tor proxy for scraping, and everything works fine in one thread, but this is slow.
I'm trying to do something simple:
import time
import requests
from multiprocessing import Pool
from stem import Signal
from stem.control import Controller

def get_new_ip():
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password="password")
        controller.signal(Signal.NEWNYM)
        time.sleep(controller.get_newnym_wait())

def check_ip():
    get_new_ip()
    session = requests.session()
    session.proxies = {'http': 'socks5h://localhost:9050',
                       'https': 'socks5h://localhost:9050'}
    r = session.get('http://httpbin.org/ip')
    print(r.text)

with Pool(processes=3) as pool:
    for _ in range(9):
        pool.apply_async(check_ip)
    pool.close()
    pool.join()
When I run it, I see the output:
{"origin": "95.179.181.1, 95.179.181.1"}
{"origin": "95.179.181.1, 95.179.181.1"}
{"origin": "95.179.181.1, 95.179.181.1"}
{"origin": "151.80.53.232, 151.80.53.232"}
{"origin": "151.80.53.232, 151.80.53.232"}
{"origin": "151.80.53.232, 151.80.53.232"}
{"origin": "145.239.169.47, 145.239.169.47"}
{"origin": "145.239.169.47, 145.239.169.47"}
{"origin": "145.239.169.47, 145.239.169.47"}
Why is this happening and how do I give each thread its own IP?
By the way, I tried libraries like TorRequests and TorCtl; the result is the same.
I understand that Tor seems to have a delay before issuing a new IP, but why does the same IP end up in different processes?
If you want different IPs for each connection, you can also use Stream Isolation over SOCKS by specifying a different proxy username:password combination for each connection.
With this method, you only need one Tor instance and each requests client can use a different stream with a different exit node.
In order to set this up, add unique proxy credentials for each requests.session object like so: socks5h://username:password@localhost:9050
import random
from multiprocessing import Pool
import requests

def check_ip():
    session = requests.session()
    creds = str(random.randint(10000, 0x7fffffff)) + ":" + "foobar"
    session.proxies = {'http': 'socks5h://{}@localhost:9050'.format(creds),
                       'https': 'socks5h://{}@localhost:9050'.format(creds)}
    r = session.get('http://httpbin.org/ip')
    print(r.text)

with Pool(processes=8) as pool:
    for _ in range(9):
        pool.apply_async(check_ip)
    pool.close()
    pool.join()
Tor Browser isolates streams on a per-domain basis by setting the credentials to firstpartydomain:randompassword, where randompassword is a random nonce for each unique first party domain.
If you're crawling the same site and you want random IPs, then use a random username:password combination for each session. If you are crawling random domains and want to use the same circuit for requests to a domain, use Tor Browser's method of domain:randompassword for credentials.
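For illustration, a hedged sketch of that per-domain scheme ( the helper name and nonce range are made up; only the credentials-in-the-proxy-URL idea comes from the answer above ):
import random
import requests
from urllib.parse import urlparse

_domain_creds = {}

def session_for(url):
    # one random nonce per unique first-party domain, Tor-Browser style
    domain = urlparse(url).hostname
    if domain not in _domain_creds:
        _domain_creds[domain] = "{}:{}".format(domain, random.randint(10000, 0x7fffffff))
    session = requests.session()
    proxy = 'socks5h://{}@localhost:9050'.format(_domain_creds[domain])
    session.proxies = {'http': proxy, 'https': proxy}
    return session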
You only have one proxy, which is listening on the port 9050. All 3 processes are sending requests in parallel through that proxy so they share the same IP.
What is happening is:
All 3 processes ask the proxy to get a new IP
The proxy either requests a new IP 3 times, receives 3 responses and applies the last one, or it recognizes that it is already waiting for a new IP and disregards 2 of the requests, answering all 3 together. That depends on the proxy implementation.
The processes send their requests through the proxy, which results in the same IP.
The processes are completed and another 3 processes are initiated. Rinse and repeat.
That is why the IPs are the same for every block of 3 requests.
You'll need 3 independent proxies to have 3 different IPs at the same time.
EDIT:
Possible solution using locks and assuming 3 proxies running in the background:
import contextlib
import threading
import time
import requests
from multiprocessing import Pool
from stem import Signal
from stem.control import Controller

_controller_ports = [
    # (Controller Lock, connection port, management port)
    (threading.Lock(), 9050, 9051),
    (threading.Lock(), 9060, 9061),
    (threading.Lock(), 9070, 9071),
]

def get_new_ip_for(port):
    with Controller.from_port(port=port) as controller:
        controller.authenticate(password="password")
        controller.signal(Signal.NEWNYM)
        time.sleep(controller.get_newnym_wait())

@contextlib.contextmanager
def get_port_with_new_ip():
    while True:
        for lock, con_port, manage_port in _controller_ports:
            if lock.acquire(blocking=False):
                try:
                    get_new_ip_for(manage_port)
                    yield con_port
                finally:
                    lock.release()
                return
        time.sleep(1)

def check_ip():
    with get_port_with_new_ip() as port:
        session = requests.session()
        session.proxies = {'http': f'socks5h://localhost:{port}',
                           'https': f'socks5h://localhost:{port}'}
        r = session.get('http://httpbin.org/ip')
        print(r.text)

with Pool(processes=3) as pool:
    for _ in range(9):
        pool.apply_async(check_ip)
    pool.close()
    pool.join()

Mitmproxy how to launch from script, and save dumps to file

I am trying to figure out a way to launch mitmproxy from a Python script (which I have done) and save any traffic to a dump file (which I need help with).
By googling, looking at mitmproxy git issues and reading example code, this is what I have so far:
from mitmproxy import proxy, options
from mitmproxy.tools.dump import DumpMaster
from mitmproxy.addons import core

class AddHeader:
    def __init__(self):
        self.num = 0

    def response(self, flow):
        self.num = self.num + 1
        print(self.num)
        flow.response.headers["count"] = str(self.num)

addons = [
    AddHeader()
]

opts = options.Options(listen_host='127.0.0.1', listen_port=8080)
pconf = proxy.config.ProxyConfig(opts)

m = DumpMaster(None)
m.server = proxy.server.ProxyServer(pconf)
# print(m.addons)
m.addons.add(addons)
print(m.addons)
# m.addons.add(core.Core())

try:
    m.run()
except KeyboardInterrupt:
    m.shutdown()
The issue is that this raises AttributeError: No such option: body_size_limit, which seems to be mitigated with master.addons.add(core.Core), but that core addon already exists in DumpMaster, so that raises a different error.
Inspecting the addons that are currently loaded by DumpMaster, I do see that the save-to-file addon is loaded, but I am not clear how to access it so that any traffic going through the proxy, regardless of whether it is a request, response, ws or tcp, can be written to a dump file.
Thanks!
Here is a redacted list of the addons that are loaded:
<mitmproxy.addons.streambodies.StreamBodies object at 0x111542da0>
<mitmproxy.addons.save.Save object at 0x111542dd8>
<mitmproxy.addons.upstream_auth.UpstreamAuth object at 0x111542e10>
Just add these 2 lines after opts = options.Options(listen_host='127.0.0.1', listen_port=8080):
opts.add_option("body_size_limit", int, 0, "")
opts.add_option("keep_host_header", bool, True, "")
Your code snippet already runs a working proxy. However, the option to dump the recorded traffic into a file during runtime (save_stream_file) is part of the Save addon, which is loaded by default after the DumpMaster instance is created. Therefore, you need to set the save_stream_file option after creating the DumpMaster instance. It took me a while to figure out as well, but this worked for me, saving the output stream to a file named traffic_stream:
from mitmproxy import proxy, options
from mitmproxy.tools.dump import DumpMaster

opts = options.Options(listen_port=8081)
opts.add_option("body_size_limit", int, 0, "")
pconf = proxy.config.ProxyConfig(opts)

m = DumpMaster(None)
m.server = proxy.server.ProxyServer(pconf)
m.options.set('save_stream_file=traffic_stream')

try:
    m.run()
except KeyboardInterrupt:
    m.shutdown()
Hope it works for you as well!
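If you later want to read the saved flows back from Python rather than opening the file with mitmproxy -r traffic_stream, something along these lines should work ( a sketch; io.FlowReader is the flow-reading API in recent mitmproxy versions, so check it against the version you run ):
from mitmproxy import io

# stream the flows back out of the dump file written above
with open("traffic_stream", "rb") as f:
    for flow in io.FlowReader(f).stream():
        print(flow)  # each item is one recorded flow (request/response, websocket, tcp, ...)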

subprocess stdout into a variable for Ngrok

I'm trying to make a Python script which reports to me the port on 0.tcp.ngrok.io that is opened when I run the following in a terminal (after moving the ngrok executable to /usr/local/bin):
ngrok tcp 22
I get this kind of output:
ngrok by @inconshreveable (Ctrl+C to quit)
Session Status connecting
Version 2.2.4
Region United States (us)
Web Interface http://127.0.0.1:4041
Forwarding tcp://0.tcp.ngrok.io:13014 -> localhost:22
Connections ttl opn rt1 rt5 p50 p90
0 0 0.00 0.00 0.00 0.00
My first attempt was to log the subprocess stdout to a variable, but as the stdout is cyclic, stdout.read() never returns. This is the code:
import subprocess
ngrok = subprocess.Popen(['ngrok','tcp','22'],stdout = subprocess.PIPE)
output_text = ngrok.stdout.read() # script stops here forever
[**code for getting domain:port from output_text**]
How can I get a "snapshot" of stdout into a variable, without stopping ngrok?
Is there another way of doing this? (My next try would be a web scraper on localhost, but it would be nice to have this knowledge for other commands, such as "top".)
Thanks in advance.
I had the same issue when I was working with ngrok http: none of the alternatives worked, resulting in deadlocks, and I couldn't even print the child process's response created with ngrok. Then, reading the ngrok docs, I noticed that there is a way to get the ngrok public URL with requests.
Adding the code below:
localhost_url = "http://localhost:4041/api/tunnels" #Url with tunnel details
tunnel_url = requests.get(localhost_url).text #Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl'] #Do the parsing of the get
So, tunnel_url will return what you need. Adding the imports the full code would be like this:
import subprocess
import requests
import json
ngrok = subprocess.Popen(['ngrok','tcp','22'],stdout = subprocess.PIPE)
localhost_url = "http://localhost:4041/api/tunnels" #Url with tunnel details
tunnel_url = requests.get(localhost_url).text #Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl'] #Do the parsing of the get
Not enough reputation to comment, so feel free to update or comment on @Rodolfo's great answer and then delete this.
Perhaps they've changed the API slightly; this is what worked for me
(ngrok executable next to the script, serving HTTP on port 5000 and picking the https tunneling URL):
import subprocess
import requests
import json
import time

if __name__ == '__main__':
    ngrok = subprocess.Popen(['./ngrok', 'http', '5000'],
                             stdout=subprocess.PIPE)
    time.sleep(3)  # to allow ngrok to fetch the url from the server
    localhost_url = "http://localhost:4040/api/tunnels"  # Url with tunnel details
    tunnel_url = requests.get(localhost_url).text  # Get the tunnel information
    j = json.loads(tunnel_url)
    tunnel_url = j['tunnels'][1]['public_url']  # Do the parsing of the get
    print(tunnel_url)
This is working for me:
import subprocess
from subprocess import PIPE

PORT = 22  # port to tunnel; the question uses ngrok tcp 22

def open_tunnel():
    process = subprocess.Popen(f'/snap/bin/ngrok tcp {PORT} --log "stdout"',
                               shell=True, stdout=PIPE, stderr=PIPE)  # You can also use a list and put shell=False
    while True:
        output = process.stdout.readline()
        if not output and process.poll() is not None:
            break
        elif b'url=' in output:
            output = output.decode()
            output = output[output.index('url=tcp://') + 10 : -1]
            return output.split(':')
I use /snap/bin/ngrok because my PyCharm does not pick up the PATH, sorry for that. You can replace it with just ngrok.
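For completeness, a small usage sketch ( assuming PORT is defined as in the snippet above ):
host, port = open_tunnel()
print("ngrok is forwarding", host + ":" + port, "to localhost")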

Python 3.4 - How to 'run' another python script continuously, how to pass HTTP GET / POST to a socket

This question is two-fold.
1. So I need to run code for a socket server that's all defined and created in another .py file. Clicking Run in PyCharm works just fine, but if you exec() the file it just runs the bottom part of the code.
There are a few answers here, but they are conflicting and for Python 2.
From what I can gather there are three ways:
- execfile(), which I think is Python 2 only.
- os.system() (but I've seen it said that it's not correct to hand this off to the OS)
- And subprocess.Popen (unsure how to use this either)
I need this to run in the background; it is used to create threads for sockets for the recv portion of the overall program and listen on those ports so I can input commands to a router.
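For reference, a hedged sketch of the subprocess.Popen route from the list above ( the file name socket_server.py is only a placeholder for the other .py file ):
import subprocess
import sys

# launch the socket-server script as an independent background process
server_proc = subprocess.Popen([sys.executable, 'socket_server.py'])

# ... the main program continues here while the server keeps running ...
# server_proc.terminate()   # later, to stop it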
This is the complete code in question:
import sys
import socket
import threading
import time

QUIT = False

class ClientThread(threading.Thread):  # Class that implements the client threads in this server
    def __init__(self, client_sock):  # Initialize the object, save the socket that this thread will use.
        threading.Thread.__init__(self)
        self.client = client_sock

    def run(self):  # Thread's main loop. Once this function returns, the thread is finished and dies.
        global QUIT  # Need to declare QUIT as global, since the method can change it
        done = False
        cmd = self.readline()  # Read data from the socket and process it
        while not done:
            if 'quit' == cmd:
                self.writeline('Ok, bye. Server shut down')
                QUIT = True
                done = True
            elif 'bye' == cmd:
                self.writeline('Ok, bye. Thread closed')
                done = True
            else:
                self.writeline(self.name)
            cmd = self.readline()
        self.client.close()  # Make sure socket is closed when we're done with it
        return

    def readline(self):  # Helper function, read up to 1024 chars from the socket, and returns them as a string
        result = self.client.recv(1024)
        if result is not None:  # All letters in lower case and without and end of line markers
            result = result.strip().lower().decode('ascii')
        return result

    def writeline(self, text):  # Helper func, writes the given string to the socket with and end of line marker at end
        self.client.send(text.strip().encode("ascii") + b'\n')

class Server:  # Server class. Opens up a socket and listens for incoming connections.
    def __init__(self):  # Every time a new connection arrives, new thread object is created and
        self.sock = None  # defers the processing of the connection to it
        self.thread_list = []

    def run(self):  # Server main loop: Creates the server (incoming) socket, listens > creates thread to handle it
        all_good = False
        try_count = 0  # Attempt to open the socket
        while not all_good:
            if 3 < try_count:  # Tried more than 3 times without success, maybe port is in use by another program
                sys.exit(1)
            try:
                self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # Create the socket
                port = 80
                self.sock.bind(('127.0.0.1', port))  # Bind to the interface and port we want to listen on
                self.sock.listen(5)
                all_good = True
                break
            except socket.error:
                print('Socket connection error... Waiting 10 seconds to retry.')
                del self.sock
                time.sleep(10)
                try_count += 1
        print('Server is listening for incoming connections.')
        print('Try to connect through the command line with:')
        print('telnet localhost 80')
        print('and then type whatever you want.')
        print()
        print("typing 'bye' finishes the thread. but not the server",)
        print("eg. you can quit telnet, run it again and get a different ",)
        print("thread name")
        print("typing 'quit' finishes the server")
        try:
            while not QUIT:
                try:
                    self.sock.settimeout(0.500)
                    client = self.sock.accept()[0]
                except socket.timeout:
                    time.sleep(1)
                    if QUIT:
                        print('Received quit command. Shutting down...')
                        break
                    continue
                new_thread = ClientThread(client)
                print('Incoming Connection. Started thread ',)
                print(new_thread.getName())
                self.thread_list.append(new_thread)
                new_thread.start()
                for thread in self.thread_list:
                    if not thread.isAlive():
                        self.thread_list.remove(thread)
                        thread.join()
        except KeyboardInterrupt:
            print('Ctrl+C pressed... Shutting Down')
        except Exception as err:
            print('Exception caught: %s\nClosing...' % err)
        for thread in self.thread_list:
            thread.join(1.0)
        self.sock.close()

if "__main__" == __name__:
    server = Server()
    server.run()
    print('Terminated')
Notes:
This is created in Python 3.4
I use Pycharm as my IDE.
One part of a whole.
2. So I'm creating a lightning detection system and this is how I expect it to be done:
- Listen to the port on the router forever
The above is done, but the issue with this is described in question 1.
- Pull numbers from a text file for sending text message
Completed this also.
- Send http get / post to port on the router
The issue with this is that I'm unsure how the router will act if I send this in binary form. I suspect it won't matter; the input commands for sending over GSM are specific. Some clarification may be needed at some point.
- Receive reply from router and exception manage
- Listen for relay trip for alarm on severe or close strike warning.
- If tripped, send messages to phones in storage from text file
This would be the http get / post that's sent.
- Wait for reply from router to indicate messages have been sent, exception handle if it's not the case
- Go back to start
There are a few issues I'd like some background knowledge on, which is proving hard to find via Google and in the answers here on Stack Overflow.
How do I grab the received data from the router from another process running in another file? I guess I could write it into a text file and read that data, but I'd rather not.
How to multi-process, and which method to use.
How to send an HTTP GET / POST to the socket on the router; according to the router manual the request needed is, e.g.: "http://192.168.1.1/cgi-bin/sms_send?number=0037061212345&text=test" (a sketch follows below).
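As a sketch of that GET ( standard library only, with the number and text taken from the manual's example; whether the router wants GET or POST is still to be confirmed ):
import urllib.parse
import urllib.request

# build the sms_send request with a URL-encoded message
params = urllib.parse.urlencode({'number': '0037061212345', 'text': 'test'})
url = 'http://192.168.1.1/cgi-bin/sms_send?' + params
with urllib.request.urlopen(url, timeout=10) as reply:
    print(reply.read().decode())  # the router's reply, useful for the exception-handling step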
Notes: Using Sockets, threading, sys and time on Python 3.4/Pycharm IDE.
Lightning detector used is LD-250 with RLO Relay attached.
RUT500 Teltonica router used.
Any direction/comments, errors spotted, anything I'm drastically missing would be greatly appreciated! Thank you very much in advance :D Constructive criticism is greatly encouraged!
Okay, so for the first part, none of the options suggested in the OP were my answer. Running the script as-is from os.system() or exec(), without declaring a new socket object, just ran from __name__, which essentially just printed out "terminated". Getting around this was simple: as everything was put into classes already, all I had to do was create a new thread. This is how it was done:
import Socketthread2
new_thread = Socketthread2.Server() # Effectively declaring a new server class object.
new_thread.run()
This allowed the script to run from the beginning by initialising the code from the start in Socketthread2, which also contains the ClientThread class, so that was run too. Running this at the start of the parent program allowed it to run in the background, and the parent then continued with its new code while the rest of the script stayed active.
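If the parent program needs to keep executing past this call, one way to put the server on an actual background thread is ( a sketch, not part of the original answer ):
import threading
import Socketthread2

server = Socketthread2.Server()
# run the server's accept loop on a daemon thread so the parent script carries on
server_thread = threading.Thread(target=server.run, daemon=True)
server_thread.start()

# ... rest of the parent program runs here while the server listens ...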
