Unable to establish connection with Pymodbus TCP server - python-3.x

I am setting up a new TCP server that is connected to a client over Ethernet (Modbus TCP/IP) and is supposed to push certain values to a given Modbus register (6022) every few seconds. The script raises no exceptions or errors, but no data is received by the client. With the StartTcpServer call I expected to see some network traffic (at least the handshake), but I see none in Wireshark. What could be the next diagnostic step?
I have tried running a similar pair of scripts locally (without an external Ethernet connection), one acting as the client and the other as the server, and did see the values update in the client's register.
from pymodbus.server.sync import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
import time
import logging

FORMAT = ('%(asctime)-15s %(threadName)-15s'
          ' %(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s')
logging.basicConfig(format=FORMAT)
log = logging.getLogger()
log.setLevel(logging.DEBUG)

def run_server():
    store = ModbusSlaveContext(
        ir=ModbusSequentialDataBlock(6022, [152, 276]),
        zero_mode=True
    )
    context = ModbusServerContext(slaves=store, single=True)
    StartTcpServer(context, address=("192.168.10.2", 502))

if __name__ == "__main__":
    run_server()

The script never returns from run_server(), so the lines after it are never reached. The code connecting to the server lives in a separate script:
from pymodbus.client.sync import ModbusTcpClient as ModbusClient
cli = ModbusClient('192.168.10.2', port=502)
assert cli.connect()
res = cli.read_input_registers(6022, count=1, unit=1)
print(res.registers[0])
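One low-level diagnostic worth trying first (a sketch, not part of the original scripts): StartTcpServer can only produce traffic if the OS lets it bind the requested address, so probing the bind directly separates a network problem from a local configuration one. Binding to 192.168.10.2 fails if that IP is not assigned to a local interface, and port 502 usually needs elevated privileges:

import socket

# Values taken from the question; the check itself is only a suggested diagnostic.
ADDRESS, PORT = "192.168.10.2", 502

probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    probe.bind((ADDRESS, PORT))  # fails if the IP is not local or the port needs privileges
    print("bind OK - a server can listen on this address")
except OSError as exc:
    print("bind failed ({0}); try address='0.0.0.0' or a port >= 1024".format(exc))
finally:
    probe.close()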

Related

How to catch and send the 'return' values of a function executed remotely in Python

I am trying to develop a client/server script to execute some Python code on a remote computer (for the sake of controlling some devices remotely).
I am able to communicate and execute commands remotely (the dummy commands I listed on the server side).
I have been trying to send the output of the commands executed on the server back to the client, but it seems I am unable to catch it:
Test client sending packets to IP 131.169.33.198, via port 5004
b'Dummy.one(1,2)'
b'Dummy.two(1)'
b'Dummy.three(1,2)'
I would be grateful if someone could tell me how to catch the return values of the functions executed remotely so that I can send them back to the client. So far, when the server catches and executes the three commands I send, print(command) gives the following result:
Test server listening on port 5004
None
None
None
I am providing my MWE scripts below for both server and client; they should run without exceptions.
One last question would be how to cleanly close the ports before exiting the server, so that I do not get error messages about the port status.
Thanks in advance
Server script
# Server code
# -*- coding: utf-8 -*-
"""
Created on Fri May 20 11:40:03 2022
@author: strelok
"""
from socket import socket, gethostbyname, AF_INET, SOCK_DGRAM
import sys
import os
import time

class Dummy:
    def one(arg1, arg2):
        a = f"I got {arg1} and {arg2}"
        return a

    def two(arg1):
        a = f"I got {arg1}"
        # print(a)
        return a

    def three(arg1, arg2):
        a = arg1 + arg2
        # print(f"the answer is {a}")
        return a

PORT_NUMBER = 5004
SIZE = 1024
hostName = gethostbyname('0.0.0.0')
mySocket = socket(AF_INET, SOCK_DGRAM)
mySocket.bind((hostName, PORT_NUMBER))
print("Test server listening on port {0}\n".format(PORT_NUMBER))

while True:
    (data, addr) = mySocket.recvfrom(SIZE)
    command = exec(str(data, encoding='utf-8'))
    print(command)
    # mySocket.send(bytes(command))

sys.exit()
Client script
# Client code
# -*- coding: utf-8 -*-
"""
Created on Fri May 20 11:34:32 2022
@author: strelok
"""
import sys
from socket import socket, AF_INET, SOCK_DGRAM
import time

def send_command(text):
    message = bytes(text, encoding='utf-8')
    print(message)
    mySocket.send(message)
    time.sleep(.5)

SERVER_IP = '131.169.33.198'
PORT_NUMBER = 5004
SIZE = 1024

print("Test client sending packets to IP {0}, via port {1}\n".format(SERVER_IP, PORT_NUMBER))
mySocket = socket(AF_INET, SOCK_DGRAM)
mySocket.connect((SERVER_IP, PORT_NUMBER))

# Sending the commands to be executed remotely
send_command("Dummy.one(1,2)")
send_command("Dummy.two(1)")
send_command("Dummy.three(1,2)")

sys.exit()
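The core of the problem is visible in the server loop: exec() runs statements and always returns None, while eval() evaluates an expression and returns its value, and recvfrom() already supplies the client's address for the reply. A minimal sketch along those lines (an illustration, not the original scripts; it assumes the Dummy class above is defined in the same module, and eval() on untrusted input is dangerous outside a closed test setup):

from socket import socket, AF_INET, SOCK_DGRAM

mySocket = socket(AF_INET, SOCK_DGRAM)
mySocket.bind(("0.0.0.0", 5004))
try:
    while True:
        data, addr = mySocket.recvfrom(1024)
        result = eval(str(data, encoding="utf-8"))       # e.g. "Dummy.one(1,2)"; exec() would return None
        mySocket.sendto(bytes(str(result), "utf-8"), addr)  # reply to the sender's address
finally:
    mySocket.close()  # releases the port cleanly on exit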

Python Test Code for URL on IP-Address only

I'm a bit lost in Google results.
I have a webserver running through a Python built-in module.
This works and provides an area to download automation configuration for devices.
However, I would like to build an availability test to make the code more robust:
if the HTTP service is not available, proceed with the TFTP server.
I tested with urllib2, but the request to the website times out. If I try the same code with a named URI such as 'www.google.com', response code 200 is returned.
Any thoughts on a good solution?
The code already includes a ping test; now I would like to add a 'service available' test and, on TRUE, proceed with the GET.
It should work on a plain IP address, e.g. 10.1.1.1:80.
Thank you for the help.
Code example:
import httplib2
h = httplib2.Http(".cache")
(resp_headers, content) = h.request("http://10.10.1.1:80/", "GET")
print(resp_headers)
print(content)
Try this. On a successful ping, the script will attempt to connect to an HTTP service. If the request times out or is not successful (e.g., status 404), it will call your TFTP function:
import shlex
import socket
import subprocess

import httplib2

ip_address = "10.1.1.1"

print("Attempting to ping...")
p = subprocess.Popen(shlex.split("ping -c 3 {0}".format(ip_address)), stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, _ = p.communicate()
rc = p.returncode
if rc != 0 or "100% packet loss" in output.decode():
    raise RuntimeError("Unable to connect to {0}.".format(ip_address))
print("Ping successful.")

print("Attempting to connect to an HTTP service....")
h = httplib2.Http(cache=".cache", timeout=10)
response = {}
content = ""
try:
    response, content = h.request("http://{0}:80".format(ip_address), method="GET")
    if 200 <= int(response["status"]) <= 299:
        print("Connected to HTTP service.")
        for key, value in response.items():
            print(key, ":", value)
        print(content.decode())
    else:
        print("Request not OK. Attempting to connect to a TFTP service...")
        # Call to TFTP function goes here
except socket.timeout:
    print("Request timed out. Attempting to connect to a TFTP service...")
    # Call to TFTP function goes here
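For reference, the same availability check can be sketched with only the standard library, using urllib.request instead of httplib2; the IP address and timeout below mirror the question's values and are otherwise illustrative:

from urllib.request import urlopen
from urllib.error import URLError

def http_service_available(ip_address, timeout=10):
    """Return True if an HTTP service answers with a 2xx status."""
    try:
        with urlopen("http://{0}:80/".format(ip_address), timeout=timeout) as resp:
            return 200 <= resp.status <= 299
    except (URLError, OSError):
        # Covers connection refused, timeouts, and unreachable hosts
        return False

if not http_service_available("10.1.1.1"):
    print("HTTP not available - fall back to TFTP")
    # Call to TFTP function goes here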

Unable to create OpenFlow13 message with Scapy

I am writing code to capture OpenFlow 1.3 packets using tcpdump and Wireshark. I am running a Mininet topo and the Floodlight SDN controller. Once I get the SDN controller's IP and port details from the capture, I intend to create multiple OFPTHello messages and send them to the SDN controller [a sort of DDoS attack]. Although I am able to extract the controller's details, I am unable to create Scapy OFPTHello message packets.
Please help me identify and resolve the issue.
The Mininet topo I am running:
sudo mn --topo=linear,4 --mac --controller=remote,ip=192.168.56.102 --switch=ovsk,protocols=OpenFlow13
My code:
#!/usr/bin/env python3
try:
    import time
    import subprocess
    import json
    import struct
    import sys
    from scapy.all import *
    from scapy.contrib.openflow import _ofp_header
    from scapy.fields import (ByteEnumField, IntEnumField, IntField, LongField,
                              PacketField, PacketListField, ShortField, XShortField)
    from scapy.layers.l2 import Ether

    ofp_table = {0xfe: "MAX",
                 0xff: "ALL"}
    ofp_buffer = {0xffffffff: "NO_BUFFER"}
    ofp_version = {0x04: "OpenFlow 1.3"}
    ofp_type = {0: "OFPT_HELLO"}

    class OFPHET(_ofp_header):
        @classmethod
        def dispatch_hook(cls, _pkt=None, *args, **kargs):
            if _pkt and len(_pkt) >= 2:
                t = struct.unpack("!H", _pkt[:2])[0]
                return ofp_hello_elem_cls.get(t, Raw)
            return Raw

        def extract_padding(self, s):
            return b"", s

    class OFPTHello(_ofp_header):
        name = "OFPT_HELLO"
        fields_desc = [ByteEnumField("version", 0x04, ofp_version),
                       ByteEnumField("type", 0, ofp_type),
                       ShortField("len", None),
                       IntField("xid", 0),
                       PacketListField("elements", [], OFPHET,
                                       length_from=lambda pkt: pkt.len - 8)]

    # Capture controller's IP address and port
    Hello_Msg = []
    Switch_TCP_Port = []
    p = subprocess.Popen(['sudo', 'tcpdump', '-i', 'eth1', 'port', '6653', '-w', 'capture.pcap'],
                         stdin=subprocess.PIPE, stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT)
    time.sleep(45)
    p.terminate()

    captures = rdpcap('capture.pcap')
    for capture in captures:
        msg = (capture.summary()).split(" ")
        i = len(msg)
        if msg[i-1] == "OFPTFeaturesRequest":
            Features_Request = capture.summary()
            break
        elif msg[i-1] == "OFPTHello":
            Hello_Msg.append(capture.summary())

    for Hello in Hello_Msg:
        frame = Hello.split("/")[2]
        port = ((frame.split(" ")[2]).split(":"))[1]
        Switch_TCP_Port.append(port)

    Features_Request = Features_Request.split("/")[2]
    Source_Frame = (Features_Request.split(" ")[2]).split(":")
    Controller_IP = Source_Frame[0]
    Controller_Port = int(Source_Frame[1])
    print("\nController's IP Address: %s" % Controller_IP)
    print("Controller's Port: %s" % Controller_Port)

    # Generating OpenFlow Hello packets using Scapy
    for p in Switch_TCP_Port:
        p = int(p)
        packet = Ether(src='08:00:27:fa:75:e9', dst='08:00:27:f1:24:22') / \
            IP(src='192.168.56.101', dst=Controller_IP) / \
            TCP(sport=p, dport=Controller_Port) / OFPTHello()
        send(packet)
except ImportError as e:
    print("\n!!! ImportError !!!")
    print("{0}. Install it.\n".format(e))
Wireshark capture [only has 4 Hello packets; no Scapy packets are captured]
Question/Issue: I am able to receive the expected four Hello packets from the Mininet topology. However, the new Hello packets I am trying to create with Scapy are never sent or captured by Wireshark. I have attached my Scapy code for reference.
In your code, modify the line:
send(packet)
to:
send(packet, iface='eth1'), where eth1 is the egress interface of the attacking VM.
The reason is that even a malformed OpenFlow packet put on the wire will still be captured by Wireshark, assuming your attack VM has a route to the controller VM. This means your code is not putting the packet on the right wire; send(packet, iface='eth1') will put it on the right one.
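A related detail worth checking, offered as an assumption rather than a confirmed cause: the crafted packet starts at the Ether layer, and Scapy's send() operates at layer 3, while sendp() emits a ready-made layer-2 frame as-is. A sketch of the layer-2 variant:

from scapy.all import sendp

# Layer-2 send: the frame goes out exactly as built, Ether header included.
sendp(packet, iface='eth1')  # eth1 = egress interface of the attacking VM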

How to update the value of pymodbus tcp server according to the message subscribed by zmq?

I am a newbie. In my current project, when the front end decides to start the Modbus service, I create a process for it. The values are then obtained in the parent process and passed over ZeroMQ PUB/SUB; I now want to update the values of the Modbus registers inside the Modbus service process.
I tried the method from pymodbus's updating_server.py example, using twisted.internet.task.LoopingCall() to update the register values, but that makes it impossible for my client to connect to the server. I don't know why.
With the server established via LoopingCall(), this was the log when the client connected (not reproduced here).
Then I tried to put both the updating and StartTcpServer in the async loop, but the update only ran once after startup and was never entered again.
Currently I am using LoopingCall() to handle the updates, but I don't think this is a good way.
This is the code that initializes the PUB socket and all the tag readers:
from loop import cycle
import asyncio
from multiprocessing import Process
from persistence import models as pmodels
from persistence import service as pservice
from persistence import basic as pbasic
import zmq
from zmq.asyncio import Context
from common import logging
from server.modbustcp import i3ot_tcp as sertcp
import common.config as cfg
import communication.admin as ca
import json
import os
import signal
from datetime import datetime
from server.opcuaserver import i3ot_opc as seropc

async def main():
    future = []
    task = []
    global readers, readers_old, task_flag
    logger.debug("connecting to database and create table.")
    pmodels.connect_create()
    logger.debug("init read all address to create loop task.")
    cycle.init_readers(readers)
    ctx = Context()
    publisher = ctx.socket(zmq.PUB)
    logger.debug("init publish [%s].", addrs)
    publisher.bind(addrs)
    readers_old = readers.copy()
    for reader in readers:
        task.append(asyncio.ensure_future(
            cycle.run_readers(readers[reader], publisher)))
    if not len(task):
        task_flag = True
    logger.debug("task length [%s - %s].", len(task), task)
    opcua_server = LocalServer(seropc.opc_server, "opcua")
    future = [
        start_get_all_address(),
        start_api(),
        create_address_loop(publisher, task),
        modbus_server(),
        opcua_server.run()
    ]
    logger.debug("run loop...")
    await asyncio.gather(*future)

asyncio.run(main(), debug=False)
This is the code that gets the device tag values and publishes them:
async def run_readers(reader, publisher):
    while True:
        await reader.run(publisher)

class DataReader:
    def __init__(self, freq, clients):
        self._addresses = []
        self._frequency = freq
        self._stop_signal = False
        self._clients = clients
        self.signature = sign_data_reader(self._addresses)

    async def run(self, publisher):
        while not self._stop_signal:
            for addr in self._addresses:
                await addr.read()
                data = {
                    "type": "value",
                    "data": addr._final_value
                }
                publisher.send_pyobj(data)
                if addr._status:
                    if addr.alarm_log:
                        return_alarm_log = pbasic.get_log_by_time(addr.alarm_log['date'])
                        if return_alarm_log:
                            data = {
                                "type": "alarm",
                                "data": return_alarm_log
                            }
                            publisher.send_pyobj(data)
                    self.data_send(addr)
            logger.debug("run send data")
            await asyncio.sleep(int(self._frequency))

    def stop(self):
        self._stop_signal = True
Modbus server imports:
from common import logging
from pymodbus.server.asynchronous import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
from persistence import service as pservice
from persistence import basic as pbasic
import zmq
import common.config as cfg
import struct
import os
import signal
from datetime import datetime
from twisted.internet.task import LoopingCall
def updating_writer(a):
    logger.info("in updates of modbus tcp server.")
    context = a[0]
    # while True:
    if check_pid(os.getppid()) is False:
        os.kill(os.getpid(), signal.SIGKILL)
    url = "ipc://{}".format(cfg.get('ipc', 'pubsub'))
    logger.debug("connecting to [%s].", url)
    ctx = zmq.Context()
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(url)
    subscriber.setsockopt(zmq.SUBSCRIBE, b"")
    slave_id = 0x00
    msg = subscriber.recv_pyobj()
    logger.debug("updates.")
    if msg['data']['data_type'] in modbus_server_type and msg['type'] == 'value':
        addr = pservice.get_mbaddress_to_write_value(msg['data']['id'])
        if addr:
            logger.debug(
                "local address and length [%s - %s].",
                addr['local_address'], addr['length'])
            values = get_value_by_type(msg['data']['data_type'], msg['data']['final'])
            logger.debug("modbus server updates values [%s].", values)
            register = get_register(addr['type'])
            logger.debug(
                "register [%d] local address [%d] and value [%s].",
                register, addr['local_address'], values)
            context[slave_id].setValues(register, addr['local_address'], values)
    # time.sleep(1)

def tcp_server(pid):
    logger.info("Get server configure and device's tags.")
    st = datetime.now()
    data = get_servie_and_all_tags()
    if data:
        logger.debug("register address space.")
        register_address_space(data)
    else:
        logger.debug("no data to create address space.")
    length = register_number()
    store = ModbusSlaveContext(
        di=ModbusSequentialDataBlock(0, [0] * length),
        co=ModbusSequentialDataBlock(0, [0] * length),
        hr=ModbusSequentialDataBlock(0, [0] * length),
        ir=ModbusSequentialDataBlock(0, [0] * length)
    )
    context = ModbusServerContext(slaves=store, single=True)
    identity = ModbusDeviceIdentification()
    identity.VendorName = 'pymodbus'
    identity.ProductCode = 'PM'
    identity.VendorUrl = 'http://github.com/bashwork/pymodbus/'
    identity.ProductName = 'pymodbus Server'
    identity.ModelName = 'pymodbus Server'
    identity.MajorMinorRevision = '2.2.0'
    # ------------------------------------------------------------------- #
    # set loop call and run server
    # ------------------------------------------------------------------- #
    try:
        logger.debug("thread start.")
        loop = LoopingCall(updating_writer, (context, ))
        loop.start(1, now=False)
        # process = Process(target=updating_writer, args=(context, os.getpid(),))
        # process.start()
        address = (data['tcp_ip'], int(data['tcp_port']))
        nt = datetime.now() - st
        logger.info("modbus tcp server begin has used [%s] s.", nt.seconds)
        pservice.write_server_status_by_type('modbus', 'running')
        StartTcpServer(context, identity=identity, address=address)
    except Exception as e:
        logger.debug("modbus server start error [%s].", e)
        pservice.write_server_status_by_type('modbus', 'closed')
This is the code that creates the Modbus process:
def process_stop(p_to_stop):
    global ptcp_flag
    pid = p_to_stop.pid
    os.kill(pid, signal.SIGKILL)
    logger.debug("process has closed.")
    ptcp_flag = False

def ptcp_create():
    global ptcp_flag
    pid = os.getpid()
    logger.debug("sentry pid [%s].", pid)
    ptcp = Process(target=sertcp.tcp_server, args=(pid,))
    ptcp_flag = True
    return ptcp

async def modbus_server():
    logger.debug("get modbus server's status.")
    global ptcp_flag
    name = 'modbus'
    while True:
        ser = pservice.get_server_status_by_name(name)
        if ser['enabled']:
            if ser['tcp_status'] == 'closed' or ser['tcp_status'] == 'running':
                tags = pbasic.get_tag_by_name(name)
                if len(tags):
                    if ptcp_flag is False:
                        logger.debug("[%s] status [%s].", ser['tcp_name'], ptcp_flag)
                        ptcp = ptcp_create()
                        ptcp.start()
                    else:
                        logger.debug("modbus server is running ...")
                else:
                    logger.debug("no address to create [%s] server.", ser['tcp_name'])
                    pservice.write_server_status_by_type(name, "closed")
            else:
                logger.debug("[%s] server is running ...", name)
        else:
            if ptcp_flag:
                process_stop(ptcp)
                logger.debug("[%s] has been closed.", ser['tcp_name'])
                pservice.write_server_status_by_type(name, "closed")
            logger.debug("[%s] server not allowed to run.", name)
        await asyncio.sleep(5)
This is the command that Docker runs.
/usr/bin/docker run --privileged --network host --name scout-sentry -v /etc/scout.cfg:/etc/scout.cfg -v /var/run:/var/run -v /sys:/sys -v /dev/mem:/dev/mem -v /var/lib/scout:/data --rm shulian/scout-sentry
This is the Docker configuration file /etc/scout.cfg.
[scout]
mode=product
[logging]
level=DEBUG
[db]
path=/data
[ipc]
cs=/var/run/scout-cs.sock
pubsub=/var/run/pubsub.sock
I want the Modbus value-update function to be triggered whenever a message arrives from ZeroMQ, and the value to be updated correctly.
Let's start from the inside out.
Q : ...this will make it impossible for me to connect to my server with the client. I don't know why?
ZeroMQ is a smart, broker-less messaging / signaling middleware, or better, a platform for smart messaging. In case one feels not so much familiar with the art of the Zen-of-Zero as present in the ZeroMQ architecture, one may like to start with ZeroMQ Principles in less than Five Seconds before diving into further details.
The Basis:
The Scalable Formal Communication Archetype, borrowed from ZeroMQ PUB/SUB, does not come at zero cost.
This means that each infrastructure setup (both on the PUB side and on the SUB side) takes some rather remarkable time, and no one can be sure of when the AccessNode configuration results in an RTO state. So the SUB side (as proposed above) ought to be either a permanent entity, or the user shall not expect it to be RTO in zero time after a twisted.internet.task.LoopingCall() gets reinstated.
Preferred way: instantiate your (semi-)persistent zmq.Context(), get it configured so as to serve the <aContextInstance>.socket( zmq.PUB ) as needed, a minimum safeguarding setup being <aSocketInstance>.setsockopt( zmq.LINGER, 0 ) plus all the transport / queuing / security-handling details that the ecosystem exposes to your code (whitelisting, secure sizing and resources protection being the most probable candidates, but the details depend on your application domain and on the risks that you are willing to face and be prepared to handle).
ZeroMQ strongly discourages sharing (zero-sharing) <aContextInstance>.socket()-instances, yet the zmq.Context()-instance can be shared / re-used (ref. ZeroMQ Principles...) / passed to more than one thread (if needed).
All <aSocketInstance>{.bind()|.connect()}-methods are expensive, so try to set up the infrastructure AccessPoint(s) and their due error-handling well before one tries to use their mediated communication services.
Each <aSocketInstance>.setsockopt( zmq.SUBSCRIBE, ... ) is expensive in that it may take (depending on the (local/remote) version) the form of a non-local, distributed behaviour: the local side "sets" the subscription, yet the remote side has to "be informed" about such a state-change and "implement" the operations in line with the actual (propagated) state. While in earlier versions all messages were dispatched from the PUB-side and all the SUB-side(s) were flooded with such data and left to do the "filtering" in a local-side internal queue, the newer versions "implement" the Topic-Filter on the PUB-side, which further increases the latency of setting the new modus operandi in action.
Next comes the modus-operandi: how <aSocketInstance>.recv() gets results:
In their default API-state, .recv()-methods are blocking, potentially infinitely blocking, if no messages arrive.
Solution: avoid blocking-forms of calling ZeroMQ <aSocket>.recv()-methods by always using the zmq.NOBLOCK-modes thereof or rather test a presence or absence of any expected-message(s) with <aSocket>.poll( zmq.POLLIN, <timeout> )-methods available, with zero or controlled-timeouts. This makes you the master, who decides about the flow of code-execution. Not doing so, you knowingly let your code depend on external sequence ( or absence ) of events and your architecture is prone to awful problems with handling infinite blocking-states ( or potential unsalvageable many-agents' distributed behaviour live-locks or dead-locks )
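A minimal sketch of that non-blocking pattern (the ipc endpoint is taken from the question's config file, /var/run/pubsub.sock; the 100 ms timeout is illustrative):

import zmq

ctx = zmq.Context()
subscriber = ctx.socket(zmq.SUB)
subscriber.setsockopt(zmq.LINGER, 0)          # do not block on close
subscriber.setsockopt(zmq.SUBSCRIBE, b"")     # subscribe to everything
subscriber.connect("ipc:///var/run/pubsub.sock")

poller = zmq.Poller()
poller.register(subscriber, zmq.POLLIN)

while True:
    events = dict(poller.poll(timeout=100))   # milliseconds; returns on timeout
    if subscriber in events:
        msg = subscriber.recv_pyobj(flags=zmq.NOBLOCK)
        # ... update the Modbus context here, e.g. context[0x00].setValues(...)
    # the loop stays in control either way - no infinitely blocking .recv()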
Avoid uncontrolled cross-breeding of event-loops - like passing ZeroMQ-driven loops into an external "callback"-alike handler or async-decorated code-blocks, where the stack of (non-)blocking logics may wreak havoc on the original idea just by throwing the system into an unresolvable state, where events miss the expected sequence of events and live-locks are unsalvageable, or where just the first pass happens to go through.
Stacking asyncio-code with twisted-LoopingCall()-s and async/await-decorated code + ZeroMQ blocking .recv()-s is either a Piece-of-Filigree-Precise-Art-of-Truly-a-Zen-Master, or a sure ticket to Hell - with all respect to the Art-of-Truly-Zen-Masters :o)
So, yes, complex thinking is needed -- welcome to the realms of distributed-computing!

subprocess stdout into a variable for Ngrok

I'm trying to make a Python script which reports the port opened on 0.tcp.ngrok.io when I run the following in a terminal (after moving the ngrok executable to /usr/local/bin):
ngrok tcp 22
I get this kind of output:
ngrok by @inconshreveable                                  (Ctrl+C to quit)

Session Status     connecting
Version            2.2.4
Region             United States (us)
Web Interface      http://127.0.0.1:4041
Forwarding         tcp://0.tcp.ngrok.io:13014 -> localhost:22

Connections        ttl     opn     rt1     rt5     p50     p90
                   0       0       0.00    0.00    0.00    0.00
My first attempt was to log the subprocess stdout into a variable, but as the output is continuously redrawn, stdout.read() never returns. This is the code:
import subprocess

ngrok = subprocess.Popen(['ngrok', 'tcp', '22'], stdout=subprocess.PIPE)
output_text = ngrok.stdout.read()  # script stops here forever
# [**code for getting domain:port from output_text**]
How can I get a "snapshot" of stdout into a variable without stopping ngrok?
Is there another way of doing this? (My next try would be a web scraper on localhost, but it would be nice to have this knowledge for other commands, such as top.)
Thanks in advance.
I had the same issue when I was working with ngrok http, and none of the alternatives worked, resulting in deadlocks; I couldn't even print the child-process output created with ngrok. Reading the ngrok docs, I noticed there is a way to get the ngrok public URL with requests.
Adding the code below:
localhost_url = "http://localhost:4041/api/tunnels" #Url with tunnel details
tunnel_url = requests.get(localhost_url).text #Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl'] #Do the parsing of the get
So tunnel_url will return what you need. Adding the imports, the full code would be like this:
import subprocess
import requests
import json
ngrok = subprocess.Popen(['ngrok','tcp','22'],stdout = subprocess.PIPE)
localhost_url = "http://localhost:4041/api/tunnels" #Url with tunnel details
tunnel_url = requests.get(localhost_url).text #Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl'] #Do the parsing of the get
Not enough reputation to comment; feel free to update or comment on @Rodolfo's great answer and then delete this.
Perhaps they've changed the API slightly; this is what worked for me
(ngrok executable next to the script, serving HTTP on port 5000 and picking the HTTPS tunneling URL):
import subprocess
import requests
import json
import time

if __name__ == '__main__':
    ngrok = subprocess.Popen(['./ngrok', 'http', '5000'],
                             stdout=subprocess.PIPE)
    time.sleep(3)  # to allow ngrok to fetch the url from the server
    localhost_url = "http://localhost:4040/api/tunnels"  # Url with tunnel details
    tunnel_url = requests.get(localhost_url).text  # Get the tunnel information
    j = json.loads(tunnel_url)
    tunnel_url = j['tunnels'][1]['public_url']  # Do the parsing of the get
    print(tunnel_url)
This is working for me:
import subprocess
from subprocess import PIPE

def open_tunnel():
    # You can also pass a list of arguments and set shell=False
    process = subprocess.Popen(f'/snap/bin/ngrok tcp {PORT} --log "stdout"',
                               shell=True, stdout=PIPE, stderr=PIPE)
    while True:
        output = process.stdout.readline()
        if not output and process.poll() is not None:
            break
        elif b'url=' in output:
            output = output.decode()
            output = output[output.index('url=tcp://') + 10 : -1]
            return output.split(':')
I use /snap/bin/ngrok because my PyCharm does not recognize the PATH, sorry for that; you can replace it with just ngrok.
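A possible usage of the helper above (a sketch; PORT must be defined before open_tunnel() is called, and the names here are illustrative):

PORT = 22  # the local port being tunnelled
host, port = open_tunnel()
print(f"ngrok is forwarding {host}:{port} -> localhost:{PORT}")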
