v2ray (vmess|vless|trojan|ss) test in Python

Friends, how can I test (vmess|vless|trojan|ss) configurations with Python?
I need a function that tests the speed of given v2ray configs.

There is a project, vmessping by v2fly, that supports only vmess, but there is also LiteSpeedTest for trojan/ss.
Sample:
from subprocess import Popen, PIPE

def speedtest(vmesslink):
    # Run the external tester and capture its stdout.
    process = Popen(["./vmessspeed", vmesslink], stdout=PIPE)
    stdout = process.communicate()[0]
    return stdout
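Building on that pattern, here is a minimal sketch for testing a whole list of config links, with a timeout so a dead server does not hang the run (the tester binary path and the example links are assumptions; point it at vmessspeed, LiteSpeedTest, or whatever tester handles the protocol):

import subprocess

def speedtest(link, tester="./vmessspeed", timeout=30):
    # Run the external tester against one config link and
    # return its stdout, or None if the test timed out.
    try:
        result = subprocess.run(
            [tester, link],
            stdout=subprocess.PIPE,
            timeout=timeout,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return None

# Hypothetical usage with a list of config links:
links = ["vmess://...", "trojan://..."]
for link in links:
    print(link, "->", speedtest(link))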

Related

How to run Python subprocess consistently and in parallel?

I wrote a Python script that converts mp3 to wav with mpg123. Then ffmpeg takes the output of mpg123 and upsamples it. Finally, the ffmpeg output file is uploaded to the cloud. All these steps must run in sequence.
subprocess.run(['mpg123', ...])
subprocess.run(['ffmpeg', ...])
upload()
Suppose I have many mp3 files and I'd like to run 10 threads at the same time. I know that Python offers parallelism through subprocess.Popen and the threading, concurrent.futures and multiprocessing modules. What is the right way to parallelize this process?
You could use the MPipe library:
from mpipe import OrderedStage, Pipeline

def increment(value):
    return value + 1

def double(value):
    return value * 2

# Each stage runs its function in 3 worker processes.
stage1 = OrderedStage(increment, 3)
stage2 = OrderedStage(double, 3)
pipe = Pipeline(stage1.link(stage2))

for number in range(10):
    pipe.put(number)
pipe.put(None)  # signal end of input

for result in pipe.results():
    print(result)
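Alternatively, sticking to the standard library, here is a sketch of the mp3 -> wav -> upload chain from the question using concurrent.futures: each worker runs the steps for one file in order, while the pool caps how many files are processed at once (the file names, the 48 kHz rate, and the upload() helper are placeholders):

import subprocess
from concurrent.futures import ThreadPoolExecutor

def upload(path):
    # placeholder for the real cloud-upload step
    print("uploading", path)

def convert(mp3_path):
    wav_path = mp3_path + ".wav"
    out_path = mp3_path + ".out.wav"
    # step 1: decode the mp3 to wav with mpg123
    subprocess.run(['mpg123', '-w', wav_path, mp3_path], check=True)
    # step 2: resample the wav with ffmpeg
    subprocess.run(['ffmpeg', '-y', '-i', wav_path, '-ar', '48000', out_path],
                   check=True)
    # step 3: upload the result
    upload(out_path)

files = ['a.mp3', 'b.mp3', 'c.mp3']  # placeholder file list
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(convert, files))  # list() forces any worker error to surface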

How can I restrict the number of processes running in parallel?

I have this script, but I only want to start 3 perl processes at a time. Once these 3 are done, the script should start the next three.
At the moment all processes are started in parallel.
Unfortunately I don't know what to do. Can someone help me?
My script:
import json, os
import subprocess
from subprocess import Popen, PIPE

list = open('list.txt', 'r')
procs = []
for dirs in list:
    args = ['perl', 'test.pl', '-a', dirs]
    proc = subprocess.Popen(args)
    procs.append(proc)

for proc in procs:
    proc.wait()
list.txt :
dir1
dir2
dir3
dir4
dir5
dir6
dir7
dir8
dir9
dir10
dir11
test.pl
$com=$ARGV[0];
$dirs=$ARGV[1];
print "$com $dirs";
sleep(5);
Use Python's concurrent.futures module - it provides a 'process pool' (and a thread pool) that will automatically keep only that many worker processes alive, starting new tasks as the older ones complete.
As the target function, write a simple Python function that opens your external process and waits synchronously for the result - essentially the lines currently inside your for loop.
Using concurrent.futures, your code might look like this:
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def worker(dirs):
    args = ['perl', 'test.pl', '-a']
    # runs one perl process and blocks until it finishes
    subprocess.run(args + [dirs])

executor = ThreadPoolExecutor(3)  # 3 is max_workers.
# ProcessPoolExecutor could be an option, but you don't need
# it - the `perl` process will run in another process anyway.

procs = []
with open('list.txt') as mylist:
    for dirs in mylist:
        proc = executor.submit(worker, dirs.strip())  # strip the newline
        procs.append(proc)

for proc in as_completed(procs):
    try:
        result = proc.result()
    except Exception as exc:
        # handle any error that may have been raised in the worker
        pass
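As a small follow-up, the same submission loop reads a bit tighter with the executor as a context manager, which also waits for every pending task on exit (reusing the worker function above):

from concurrent.futures import ThreadPoolExecutor, as_completed

with ThreadPoolExecutor(3) as executor:
    procs = [executor.submit(worker, d.strip()) for d in open('list.txt')]
    for proc in as_completed(procs):
        proc.result()  # re-raises any exception from the worker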

Running a function or BASH command after exiting npyscreen

I'm trying to run a function after exiting npyscreen; I've tried a few things and am still stuck. It just exits npyscreen and returns to the bash prompt. The function is supposed to start a watchdog/rsync watch-folder waiting for files to back up.
#!/usr/bin/env python
# encoding: utf-8
import npyscreen as np
from nextscript import want_to_run_this_function

class WuTangClan(np.NPSAppManaged):
    def onStart(self):
        self.addForm('MAIN', FormMc, name="36 Chambers")

class FormMc(np.ActionFormExpandedV2):
    def create(self):
        self.rza_gfk = self.add(np.TitleSelectOne, max_height=4, name="Better MC:", value=[0], values=["RZA", "GhostFace Killah"], scroll_exit=True)

    def after_editing(self):
        if self.rza_gfk.value == [0]:
            want_to_run_this_function()
            self.parentApp.setNextForm(None)
        else:
            self.parentApp.setNextForm(None)

if __name__ == "__main__":
    App = WuTangClan()
    App.run()
I'm not sure if I understood correctly what you want.
For executing any kind of shell command I like to use the subprocess module; it has the Popen constructor, which you can use to run anything you could run from a shell.
E.g., on Windows:
import subprocess
process = subprocess.Popen(['ipconfig', '/all'])
On a Unix-like system:
import subprocess
process = subprocess.Popen(['ip', 'a'])
If you have a ".py" file you can pass the parameters as if you were running it from the terminal.
E.g.:
import subprocess
process = subprocess.Popen(['python3','sleeper.py'])
You can even retrieve the process pid and kill the process whenever you want; have a look at the subprocess module documentation.
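For instance, a short sketch of that pid/kill workflow (sleeper.py stands in for any long-running script):

import subprocess
import time

# Start a hypothetical long-running script.
process = subprocess.Popen(['python3', 'sleeper.py'])
print("child pid:", process.pid)

time.sleep(2)        # let it run for a moment
process.terminate()  # ask it to exit (SIGTERM on Unix)
process.wait()       # reap the child and collect its exit code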

Can't capture stdout with unittest

I have a python3.7 script, which takes a YAML file as input and processes it depending on the instructions within. The YAML file I am using for unit testing looks like this:
...
tasks:
- echo '1'
- echo '2'
- echo '3'
- echo '4'
- echo '5'
The script loops over tasks and runs each one using an os.system() call.
Manual testing indicates that the output is as expected:
1
2
3
4
5
But I can't make it work in my unit test. Here's how I'm trying to capture the output:
from application import application
from io import StringIO
import unittest
from unittest.mock import patch

class TestApplication(unittest.TestCase):
    def test_application_tasks(self):
        expected = ['1', '2', '3', '4', '5']
        with patch('sys.stdout', new=StringIO()) as fakeOutput:
            application.parse_event('some event')  # print() is called here within parse_event()
            self.assertEqual(fakeOutput.getvalue().strip().split(), expected)
When running python3 -m unittest discover -s tests, all I get is AssertionError: Lists differ: [] != ['1', '2', '3', '4', '5'].
I also tried using with patch('sys.stdout', new_callable=StringIO) as fakeOutput: instead, but to no avail.
Another thing I tried was self.assertEqual(fakeOutput.getvalue(), '1\n2\n3\n4\n5'), and here is what unittest outputs:
AssertionError: '' != '1\n2\n3\n4\n5'
+ 1
+ 2
+ 3
+ 4
+ 5
Obviously, the script works and outputs the right result, but fakeOutput does not capture it.
Using patch as a decorator does not work either:
from application import application
from io import StringIO
import unittest
from unittest.mock import patch

class TestApplication(unittest.TestCase):
    @patch('sys.stdout', new_callable=StringIO)
    def test_application_tasks(self, fakeOutput):
        expected = ['1', '2', '3', '4', '5']
        application.parse_event('some event')  # print() is called here within parse_event()
        self.assertEqual(fakeOutput.getvalue().strip().split(), expected)
This outputs exactly the same error: AssertionError: Lists differ: [] != ['1', '2', '3', '4', '5']
os.system runs a new process. If you monkey-patch sys.stdout this affects the current process but has no consequences for any new processes.
Consider:
import sys
from os import system
from io import BytesIO
capture = sys.stdout = BytesIO()
system("echo Hello")
sys.stdout = sys.__stdout__
print(capture.getvalue())
Nothing is captured because only the child process has written to its stdout. Nothing has written to the stdout of your Python process.
Generally, avoid os.system. Instead, use the subprocess module which will let you capture output from the process that is run.
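A minimal sketch of that approach, with echo standing in for one of the tasks:

import subprocess

# The child's stdout is piped back into this process instead of
# going straight to the terminal.
result = subprocess.run("echo Hello", shell=True, stdout=subprocess.PIPE)
print(result.stdout.decode().strip())  # -> Hello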
Thank you, Jean-Paul Calderone. Only after I posted the question did I realize that os.system() creates a completely different process, and that I therefore need to tackle the problem differently :)
To actually be able to test my code, I had to rewrite it using subprocess instead of os.system(). In the end, I went with subprocess_run_result = subprocess.run(task, shell=True, stdout=subprocess.PIPE) and then got the result using subprocess_run_result.stdout.strip().decode("utf-8").
In the tests I just create an instance of the class and call a method, which runs the tasks in a subprocess.
My whole refactored code and tests are here in this commit if anyone would like to take a look.
Your solution is fine; just use getvalue instead, like so:
from io import StringIO
from unittest.mock import patch

with patch("sys.stdout", new_callable=StringIO) as f:
    print("Foo")
r = f.getvalue()
print("r: {r!r} ;".format(r=r))
r: 'Foo\n' ;

get console output

How can I receive the console output of any Python file (errors, plus everything printed using the print() command)?
Example:
main.py starts test.py and gets its output.
You can use sys.stderr from the sys module. print() uses sys.stdout by default.
import sys
# Print to standard output stream
print("Normal output...")
# Print to standard error stream
print("Error output...", file=sys.stderr)
I don't know if this will help, but I'm sure it will give you an idea.
In VB.NET or C#.NET we capture the console stream using the process's StandardOutput, like this:
Dim psi As New ProcessStartInfo("cmd")
psi.RedirectStandardOutput = True
psi.UseShellExecute = False
Dim p As Process = Process.Start(psi)
Dim output As String = p.StandardOutput.ReadToEnd()
p.WaitForExit()
