I'm trying to make a Python script which reports the port opened on 0.tcp.ngrok.io. When I run this command in a terminal (after moving the ngrok executable to /usr/local/bin)
ngrok tcp 22
I get this kind of output:
ngrok by @inconshreveable                                   (Ctrl+C to quit)

Session Status                connecting
Version                       2.2.4
Region                        United States (us)
Web Interface                 http://127.0.0.1:4041
Forwarding                    tcp://0.tcp.ngrok.io:13014 -> localhost:22

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00
My first attempt was to log the subprocess stdout to a variable, but since ngrok keeps updating its output, stdout.read() never returns. This is the code:
import subprocess
ngrok = subprocess.Popen(['ngrok', 'tcp', '22'], stdout=subprocess.PIPE)
output_text = ngrok.stdout.read()  # the script blocks here forever
[**code for getting domain:port from output_text**]
How can I get a "snapshot" of stdout into a variable, without stopping ngrok?
Is there another way of doing this? (My next attempt would be a web scraper on localhost, but it would be nice to have this technique for other commands, such as top.)
Thanks in advance.
I had the same issue when working with ngrok http: all the alternatives ended in deadlocks, and I couldn't even print the response of the child process created with ngrok. Reading the ngrok docs, I noticed there is a way to get the ngrok public URL with requests.
Adding the code below:
localhost_url = "http://localhost:4041/api/tunnels"  # URL with tunnel details
tunnel_url = requests.get(localhost_url).text  # Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl']  # Parse the public URL out of the response
So tunnel_url will contain what you need. Adding the imports, the full code would look like this:
import subprocess
import requests
import json
ngrok = subprocess.Popen(['ngrok', 'tcp', '22'], stdout=subprocess.PIPE)
localhost_url = "http://localhost:4041/api/tunnels"  # URL with tunnel details
tunnel_url = requests.get(localhost_url).text  # Get the tunnel information
j = json.loads(tunnel_url)
tunnel_url = j['Tunnels'][0]['PublicUrl']  # Parse the public URL out of the response
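If the shape of the response is in doubt, the parsing step can be sketched against a canned response. The key names here ("Tunnels"/"PublicUrl") are taken from this answer and have changed between ngrok versions, so check your own /api/tunnels output first:

```python
import json

# Hypothetical helper: pull the first public URL out of the JSON text the
# local ngrok API returns. Key names follow this answer's assumed schema.
def first_public_url(api_json):
    data = json.loads(api_json)
    return data["Tunnels"][0]["PublicUrl"]

# Exercised on a response shaped like the one this answer assumes:
sample = '{"Tunnels": [{"PublicUrl": "tcp://0.tcp.ngrok.io:13014"}]}'
print(first_public_url(sample))  # tcp://0.tcp.ngrok.io:13014
```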
Not enough reputation to comment; feel free to update or comment on @Rodolfo's great answer and then delete this.
Perhaps they've changed the API slightly; this is what worked for me (ngrok executable next to the script, serving HTTP on port 5000 and picking the https tunneling URL):
import subprocess
import requests
import json
import time
if __name__ == '__main__':
    ngrok = subprocess.Popen(['./ngrok', 'http', '5000'],
                             stdout=subprocess.PIPE)
    time.sleep(3)  # give ngrok time to register the tunnel with the server
    localhost_url = "http://localhost:4040/api/tunnels"  # URL with tunnel details
    tunnel_url = requests.get(localhost_url).text  # Get the tunnel information
    j = json.loads(tunnel_url)
    tunnel_url = j['tunnels'][1]['public_url']  # Parse the https tunnel URL
    print(tunnel_url)
This is working for me:
import subprocess
from subprocess import PIPE

def open_tunnel():
    process = subprocess.Popen(f'/snap/bin/ngrok tcp {PORT} --log "stdout"',
                               shell=True, stdout=PIPE, stderr=PIPE)  # You can also use a list and shell=False
    while True:
        output = process.stdout.readline()
        if not output and process.poll() is not None:
            break
        elif b'url=' in output:
            output = output.decode()
            output = output[output.index('url=tcp://') + 10 : -1]
            return output.split(':')
I use /snap/bin/ngrok because my PyCharm does not pick up PATH, sorry for that. You can replace it with just ngrok.
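The string slicing this relies on is easy to check in isolation; here it is exercised on a made-up ngrok log line (the real log format may differ between versions):

```python
# A made-up log line in the shape the answer above expects:
line = b'lvl=info msg="started tunnel" url=tcp://0.tcp.ngrok.io:13014\n'
text = line.decode()
addr = text[text.index('url=tcp://') + 10 : -1]  # drop prefix and trailing newline
host, port = addr.split(':')
print(host, port)  # 0.tcp.ngrok.io 13014
```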
I am trying to develop a client/server script to execute some Python code on a remote computer (for the sake of controlling some devices remotely).
I am able to communicate and execute commands remotely (the dummy commands I listed on the server side).
I have been trying to send the output of the commands executed on the server back to the client, but it seems I am unable to catch it:
Test client sending packets to IP 131.169.33.198, via port 5004
b'Dummy.one(1,2)'
b'Dummy.two(1)'
b'Dummy.three(1,2)'
So I would be grateful if someone could tell me how to catch the return values of the functions executed remotely, in order to send them back to the client. So far, when the server catches the three commands I send and executes them, print(command) gives the following:
Test server listening on port 5004
None
None
None
I am providing my MWE scripts below for both server and client; they should execute without exceptions.
One last question: how do I cleanly close the ports before exiting the server, so that I do not get error messages about the port status?
Thanks in advance.
Server script
# Server code
# -*- coding: utf-8 -*-
"""
Created on Fri May 20 11:40:03 2022
@author: strelok
"""
from socket import socket, gethostbyname, AF_INET, SOCK_DGRAM
import sys
import os
import time
class Dummy:
    def one(arg1, arg2):
        a = f"I got {arg1} and {arg2}"
        return a

    def two(arg1):
        a = f"I got {arg1}"
        # print(a)
        return a

    def three(arg1, arg2):
        a = arg1 + arg2
        # print(f"the answer is {a}")
        return a
PORT_NUMBER = 5004
SIZE = 1024
hostName = gethostbyname( '0.0.0.0' )
mySocket = socket( AF_INET, SOCK_DGRAM )
mySocket.bind( (hostName, PORT_NUMBER) )
print ("Test server listening on port {0}\n".format(PORT_NUMBER))
while True:
    (data, addr) = mySocket.recvfrom(SIZE)
    command = exec(str(data, encoding='utf-8'))
    print(command)
    # mySocket.send(bytes(command))

sys.exit()
Client script
# Client code
# -*- coding: utf-8 -*-
"""
Created on Fri May 20 11:34:32 2022
@author: strelok
"""
import sys
from socket import socket, AF_INET, SOCK_DGRAM
import time
def send_command(text):
    message = bytes(text, encoding='utf-8')
    print(message)
    mySocket.send(message)
    time.sleep(.5)
SERVER_IP = '131.169.33.198'
PORT_NUMBER = 5004
SIZE = 1024
print ("Test client sending packets to IP {0}, via port {1}\n".format(SERVER_IP, PORT_NUMBER))
mySocket = socket( AF_INET, SOCK_DGRAM )
mySocket.connect((SERVER_IP,PORT_NUMBER))
# Sending the commands to be executed remotely
send_command("Dummy.one(1,2)")
send_command("Dummy.two(1)")
send_command("Dummy.three(1,2)")
sys.exit()
I'm a bit lost in Google results.
I have a web server running through a Python built-in module. This works and provides an area to download automation configuration for devices.
However, I'd like to build an availability test to make the code more robust: if the HTTP service is not available, fall back to a TFTP server.
I tested with urllib2, but the request to the website times out. If I try the same code with a named URI like 'www.google.com', response code 200 is returned.
Any thoughts on a good solution?
In the code I already have a ping test. Now I'd like to add a 'service available' test: on TRUE, proceed with the GET.
It should work on an IP address, e.g. 10.1.1.1:80.
Thank you for the help.
Code example:
import httplib2
h = httplib2.Http(".cache")
(resp_headers, content) = h.request("http://10.10.1.1:80/", "GET")
print(resp_headers)
print(content)
Try this. On a successful ping, the script will attempt to connect to an HTTP service. If the request times out or is not successful (e.g., status 404), it will call your TFTP function:
import shlex
import socket
import subprocess
import httplib2
ip_address = "10.1.1.1"
print("Attempting to ping...")
p = subprocess.Popen(shlex.split("ping -c 3 {0}".format(ip_address)), stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, _ = p.communicate()
rc = p.returncode

if rc != 0 or "100% packet loss" in output.decode():
    raise RuntimeError("Unable to connect to {0}.".format(ip_address))

print("Ping successful.")
print("Attempting to connect to an HTTP service....")
h = httplib2.Http(cache=".cache", timeout=10)
response = {}
content = ""
try:
    response, content = h.request("http://{0}:80".format(ip_address), method="GET")
    if 200 <= int(response["status"]) <= 299:
        print("Connected to HTTP service.")
        for key, value in response.items():
            print(key, ":", value)
        print(content.decode())
    else:
        print("Request not OK. Attempting to connect to a TFTP service...")
        # Call to TFTP function goes here
except socket.timeout:
    print("Request timed out. Attempting to connect to a TFTP service...")
    # Call to TFTP function goes here
I am setting up a new TCP server, connected to the client through an Ethernet TCP/IP Modbus connection, which is supposed to push certain values to a given Modbus register (hr = 6022) every few seconds. I do not see any exceptions/errors raised by the script, but no data is received by the client. With a StartTcpServer call I expected to see at least the handshake on the wire, but I do not see any traffic in Wireshark. What could be the next diagnostic step?
I have tried running a similar script locally (without an external Ethernet connection), one acting as a client and another as a server, and did see the values update on the client register.
from pymodbus.server.sync import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
import time
import logging
FORMAT = ('%(asctime)-15s %(threadName)-15s'
          ' %(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s')
logging.basicConfig(format=FORMAT)
log = logging.getLogger()
log.setLevel(logging.DEBUG)

def run_server():
    store = ModbusSlaveContext(
        ir=ModbusSequentialDataBlock(6022, [152, 276]),
        zero_mode=True
    )
    context = ModbusServerContext(slaves=store, single=True)
    StartTcpServer(context, address=("192.168.10.2", 502))

if __name__ == "__main__":
    run_server()
The lines after run_server() are never reached. Code connecting to the server can be placed in a different script:
from pymodbus.client.sync import ModbusTcpClient as ModbusClient
cli = ModbusClient('192.168.10.2', port=502)
assert cli.connect()
res = cli.read_input_registers(6022, count=1, unit=1)
print(res.registers[0])
I have a little problem with my code. I'm writing an HTTP proxy server; I send it a random number of HTTP requests, and I want my program to close when I stop sending.
I think the problem is in the accept(), because the program keeps running forever.
I tried to put a recv after the accept to check whether it is empty, but the program doesn't get there.
My code is the following:
from socket import *
from _thread import *
MAX_DATA_RECV = 4096 # max number of bytes we receive at once
def start(port_5, my_port):
    s = socket(AF_INET, SOCK_STREAM)
    s.bind(('', my_port))
    s.listen(1)
    while 1:
        try:
            conn, client_addr = s.accept()
        except KeyboardInterrupt:
            print('\nProgram closed. Interrupted by the user')
            exit()
        proxy_thread(conn, client_addr)
    s.close()
def proxy_thread(conn, client_addr):
    # get the request from browser
    request = conn.recv(MAX_DATA_RECV).decode('utf-8')

    # parse the first line
    first_line = request.split('\n')[0]

    # get url
    url = first_line.split(' ')[1]

    # find the webserver and port
    http_pos = url.find("://")  # find pos of ://
    if http_pos == -1:
        temp = url
    else:
        temp = url[(http_pos + 3):]  # get the rest of url

    port_pos = temp.find(":")  # find the port pos (if any)

    # find end of web server
    webserver_pos = temp.find("/")
    if webserver_pos == -1:
        webserver_pos = len(temp)

    webserver = ""
    port = -1
    if port_pos == -1 or webserver_pos < port_pos:  # default port
        port = 80
        webserver = temp[:webserver_pos]
    else:  # specific port
        port = int((temp[(port_pos + 1):])[:webserver_pos - port_pos - 1])
        webserver = temp[:port_pos]

    print("Connect to:", webserver, port)

    # create a socket to connect to the web server
    s = socket(AF_INET, SOCK_STREAM)
    s.connect((webserver, port))
    s.send(request.encode())  # send request to webserver
    print(temp)

    while 1:
        # receive data from web server
        data = s.recv(MAX_DATA_RECV)
        if len(data) > 0:
            # send to browser
            conn.send(data)
        else:
            break

    s.close()
    conn.close()
If someone is able to help me, thanks in advance.
I want that my program close when I stop of send
At the network level there is no implicit indicator that the other side will not send anymore. If all requests were made through a single TCP connection, then the end of that connection might be treated as such an indicator. But since you are using a new TCP connection for every request, you need to define your own condition(s) for how the server should determine that the client will not send anymore.
This could, for example, be a timeout: if the client has not sent any request for 20 seconds, it is treated as dead. Or it might be a special message from the client signalling the end, in which case your code needs to explicitly look for this message.
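The timeout variant can be sketched with settimeout() on the listening socket; accept() then raises socket.timeout once the server has been idle long enough. This is a minimal illustration, not your full proxy: it binds to an ephemeral port (0) where your start() would use my_port, and the handler is a placeholder:

```python
import socket

# If accept() sees no client for `idle_timeout` seconds, treat the
# sender as finished and shut the server down.
def serve_until_idle(idle_timeout=20.0, handler=None):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('', 0))             # ephemeral port; a real proxy uses its own
    s.listen(1)
    s.settimeout(idle_timeout)  # accept() raises socket.timeout when idle
    try:
        while True:
            conn, addr = s.accept()
            if handler:
                handler(conn, addr)  # e.g. your proxy_thread
            conn.close()
    except socket.timeout:
        return "idle"           # no client within idle_timeout: we are done
    finally:
        s.close()
```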
I'm attempting to use Splinter (a Python wrapper around Selenium) in multiple instances. However, each instance doesn't start until the first thread completely finishes: the browser instance opens, loads the page, sleeps, and only then does the second thread start.
I need the instances to run at the same time.
import threading
import time
from splinter import Browser
def worker(proxy, port):
    proxy_settings = {"network.proxy.type": 1,
                      "network.proxy.ssl": proxy,
                      "network.proxy.ssl_port": port,
                      "network.proxy.socks": proxy,
                      "network.proxy.socks_port": port,
                      "network.proxy.socks_remote_dns": True,
                      "network.proxy.ftp": proxy,
                      "network.proxy.ftp_port": port
                      }

    browser = Browser('firefox',
                      profile_preferences=proxy_settings,
                      capabilities={'pageLoadStrategy': 'eager'})  # eager or normal
    print("Proxy: ", proxy, ":", port)
    browser.visit("https://mxtoolbox.com/whatismyip/?" + proxy)
    time.sleep(2)
ip1 = '22.22.222.222'
ip2 = '222.222.22.222'
p1 = int(2222)
p2 = int(2222)
p = []
p.append((ip1,p1))
p.append((ip2,p2))
x = 0
for pp in p:
    threading.Thread(target=worker(pp[0], pp[1])).start()
In my longer code (the above is my attempt to figure out why I can't multi-thread), I'm also getting a warning in my editor:
Local variable 'browser' value is not used
That's because you're not starting a thread with the worker function; you're calling worker first and passing its return value as the target. The last line should instead look like:
threading.Thread(target=worker, args=(pp[0], pp[1])).start()
As for your editor issue, I'd say it is editor dependent, and without more information it's hard to say (I'd mention that running pylint does not report such a warning).
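The difference is easy to see in a stripped-down sketch, with a hypothetical worker that just sleeps and records its name:

```python
import threading
import time

# target=worker(...) would CALL worker in the main thread and pass its
# return value (None) as the target, so the threads run back to back.
# target=worker with args=(...) hands the function itself to the thread,
# so both sleeps overlap.
results = []

def worker(name, delay):
    time.sleep(delay)
    results.append(name)

threads = [threading.Thread(target=worker, args=(n, 0.1)) for n in ("a", "b")]
for t in threads:
    t.start()   # both workers are now running concurrently
for t in threads:
    t.join()    # wait for both to finish

print(sorted(results))  # ['a', 'b']
```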