How to subprocess a pymodbus tcp server in a python unit test and terminate it after all the tests have been executed? - python-3.x

I will try to give you some context before presenting the issue I am facing.
I have a component called Actuator which relies on the pymodbus module. When I tested this component, I did it in the easiest possible way, using a modbus TCP server based on pymodbus. Basically, I ran the server as a Python script (exactly this one: https://pymodbus.readthedocs.io/en/latest/source/example/synchronous_server.html) in one shell and my application, which includes the Actuator, in another shell. Everything worked like a charm.
What I want to do now is write a unit test for the Actuator, using Python's unittest and pymodbus, to automate what I was doing by hand. My idea was to run the modbus server as a subprocess (I do not need the output from the server) in the setUp method, use it in my test suite, and then terminate it in the tearDown method, like this:
import unittest
from subprocess import Popen, DEVNULL

class TestActuator(unittest.TestCase):
    """Class to test the actuator module"""

    def setUp(self):
        """
        Set up a real Actuator object with a modbus server running
        in the background for the tests
        """
        modbus_server_path = "path/to/modbus_server.py"
        cmd = "python3 {}".format(modbus_server_path)
        self.modbus_server = Popen(cmd.split(), shell=False,
                                   stdout=DEVNULL, stderr=DEVNULL)
        # other stuff

    def test1(self):
        ...

    def test2(self):
        ...

    def tearDown(self):
        """Clean up everything before exiting"""
        self.modbus_server.terminate()

if __name__ == '__main__':
    unittest.main()
Unfortunately, it turned out to be more challenging than I expected.
Everything I tried from other topics on Stack Overflow failed:
Use Popen.kill() instead of terminate(). I also tried to "del" the Popen object after kill() or terminate(), and to use os.kill(self.modbus_server.pid, SIGTERM) instead.
Add or change arguments of the Popen call, such as shell=True instead of shell=False, or close_fds=True.
Use the other subprocess helpers instead of Popen, like check_output, run, call, etc.
Use os.spawnl instead of subprocess.Popen.
None of these worked. What happens most of the time is that the server fails to start properly, so all the other tests fail and the modbus_server process is not terminated (I have to kill it manually).
Do you have any idea? Thank you.

I will share how I solved my issue and what I have learned so far.
To make the modbus TCP server run as a subprocess during the tests, you need to change a few things to make it work decently.
The first thing is to use setUpClass and tearDownClass instead of setUp and tearDown (you can also use both pairs, but in my case I only needed the first two), so that the server is set up once for the whole test suite rather than before and after each single test case. Unfortunately, the syntax for setUpClass and tearDownClass is not so straightforward. This answer helped me a lot: Run setUp only once for a set of automated tests
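A minimal sketch of that structure (the server path is the same placeholder as in the question, and the test bodies are elided):

import unittest
from subprocess import Popen, DEVNULL

class TestActuator(unittest.TestCase):
    """Same test class, but the server now lives for the whole suite."""

    @classmethod
    def setUpClass(cls):
        # Start the modbus server once, before any test runs.
        cmd = "python3 path/to/modbus_server.py"
        cls.modbus_server = Popen(cmd.split(), stdout=DEVNULL, stderr=DEVNULL)

    @classmethod
    def tearDownClass(cls):
        # Terminate the server once, after the whole suite has finished.
        cls.modbus_server.terminate()
        cls.modbus_server.wait()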
The second thing is about the TCP socket. I wanted to be sure my tests worked reliably, so I ran them a few times in a row: sometimes everything worked fine, other times they failed. After a while I found that if I ran the tests again without waiting at least one minute, they would fail.
This is due to the TIME_WAIT state (60 seconds by default) of the TCP socket bound by the modbus server, on the local address and port 5020 in my case. If you want to re-bind a socket to the same TCP address and port you used before, you have to wait those 60 seconds. This answer was useful for understanding why this happens: Setting TIME_WAIT TCP
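If you control the server code, one common way around this is to set SO_REUSEADDR on the listening socket before binding, which allows re-binding while the old socket is still in TIME_WAIT. A minimal sketch (the address and port mirror the ones above; whether you can reach the underlying socket depends on how the server is built):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow re-binding to this address while a previous socket is in TIME_WAIT.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("localhost", 5020))
sock.listen()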
After I figured out how to make all that work together, I decided to use mocks instead, because this solution seemed too hacky for my taste.
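For reference, the mock-based direction looks roughly like this. This is only a sketch: the actuator import, the client attribute, and the method names are hypothetical stand-ins for the real Actuator API.

import unittest
from unittest.mock import MagicMock

from actuator import Actuator  # hypothetical import

class TestActuatorMocked(unittest.TestCase):
    def test_reads_coil(self):
        actuator = Actuator()
        # Swap the real pymodbus client for a mock: no server subprocess needed.
        actuator.client = MagicMock()
        actuator.client.read_coils.return_value.bits = [True]

        actuator.do_something()  # hypothetical method that reads a coil

        actuator.client.read_coils.assert_called_once()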

Related

How to pass "yes/no" to prompt from python code?

I have a tiny little Python script which logs in to a remote machine, gets the output of a command, and prints it to a file.
For one particular command, the server asks for Y or N; how do I pass a Yes to it and get the desired output?
Here is the sample output from the server:
root#nnodee11cc40c:/home/usr/redsuren# controlport rst 0:3:3
WARNING: Port 0:3:3 may have active Smart SAN target driven zones that may be disrupted. If changing the port configuration, remove the Smart SAN zoning by using the removehost command. This must be done before changing the port configuration; otherwise, you will not be able to manage the zone on the switch associated with this port.
Are you sure you want to run controlport rst on port 0:3:3?
select q=quit y=yes n=no: ---------> Here I have to tell the program to enter y
How can I achieve this?
Thanks!!
If I understand your question correctly, you need to automate pressing 'y' & 'enter' with Python. You can easily do this with PyAutoGUI.
First, execute pip install pyautogui in the command prompt.
Then import it into your code with import pyautogui.
Now, to achieve this, put the following code where you want to press 'y' & 'enter':
pyautogui.press('y')
pyautogui.press('enter')
But this may not be timed to when the prompt actually appears, so you may need to time it yourself
by adding time.sleep(<numberOfSeconds>) after importing the time module with import time.
Now here is the full answer:
import pyautogui, time
# Your code here
time.sleep(3)
pyautogui.press('y')
pyautogui.press('enter')
But if my answer is not what you asked for, you will have to share your code so we can understand your question better and answer it according to your needs.
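A different route, if the remote command is launched from the Python script itself rather than typed into a focused terminal window, is to write the answer directly to the child process's stdin instead of simulating keystrokes. A minimal sketch, where the host and command are taken from the sample output above and the exact ssh invocation is an assumption:

import subprocess

# Launch the remote command and answer its y/n prompt via stdin.
proc = subprocess.Popen(
    ["ssh", "root@nnodee11cc40c", "controlport rst 0:3:3"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    universal_newlines=True,
)
out, err = proc.communicate(input="y\n")  # "y" followed by Enter
print(out)

Note that some commands read confirmations from the controlling terminal rather than from stdin; in that case a tool like pexpect is the usual fallback.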

Determine if Javascript (NodeJS) code is running in a REPL

I wish to create one NodeJS source file in a Jupyter notebook which is using the IJavascript kernel so that I can quickly debug my code. Once I have it working, I can then use the "Download As..." feature of Jupyter to save the notebook as a NodeJS script file.
I'd like to have the ability to selectively ignore / include code in the notebook source that will not execute when I run the generated NodeJS script file.
I have solved this problem for Python Jupyter notebooks, because there I can determine whether the code is running in an interactive session (the IPython REPL). I accomplished this by using this function in Python:
def is_interactive():
    import __main__ as main
    return not hasattr(main, '__file__')
(Thanks to Tell if Python is in interactive mode)
Is there a way to do a similar thing for NodeJS?
I don't know if this is the correct way, but I couldn't find anything else.
Basically, if you do
try {
    const repl = __dirname;
} catch (err) {
    // code run if in the REPL (__dirname is not defined there)
}
it feels a little hacky but works ¯\_(ツ)_/¯
This may not help the OP in all cases, but could help others googling for this question. Sometimes it's enough to know if the script is running interactively or not (REPL and any program that is run from a shell).
In that case, you can check for whether standard output is a TTY:
process.stdout.isTTY
The fastest and most reliable route is just to query the process arguments. From the NodeJS executable alone, there are two ways to launch the REPL. Either you invoke node without any script following the call:
node --experimental-modules ...
Or you force node into the REPL using interactive mode.
node -i ...
The option-ending parameter --, added in v6.11.0, will never append arguments to the process.argv array unless node is executing in script mode (via a FILE, -p, or -e). Any arguments meant for NodeJS itself are filtered into the accompanying process.execArgv array, so the only thing left in process.argv should be process.execPath. Under these circumstances, we can reduce the check to the solution below.
const isREPL = process.execArgv.includes("-i") || process.argv.length === 1;
console.log(isREPL ? "You're in the REPL" : "You're running a script m8");
This isn't the most robust method, since a user can also instantiate a REPL from an initiator script which your code could be run by. For that case, I'm fairly sure you could throw an artificial error and crawl the traceback looking for a REPL entry, although I haven't had the time to implement and verify that solution.

How to completely exit a running asyncio script in python3

I'm working on a server bot in python3 (using asyncio), and I would like to incorporate an update function for collaborators to instantly test their contributions. It is hosted on a VPS that I access via ssh. I run the process in tmux and it is often difficult for other contributors to relaunch the script once they have made a commit, etc. I'm very new to python, and I just use what I can find. So far I have used subprocess.Popen to run git pull, but I have no way for it to automatically restart the script.
Is there any way to terminate a running asyncio loop (ideally without errors) and restart it again?
You cannot restart an event loop that has been stopped by event_loop.stop().
And in order to incorporate the changes, you have to restart the script anyway (some methods might not exist on the objects you have, etc.).
I would recommend something like:
async def git_tracker():
    # Check for changes in version control, maybe wait for a sync point,
    # and then exit:
    sys.exit(0)

asyncio.ensure_future(git_tracker())
This raises SystemExit, but despite that exits the program cleanly.
And around the python $file.py invocation, a shell loop:
while true; do git pull && python $file.py; done
This is (as far as I know) the simplest approach to solve your problem.
For your use case, to stay on the safe side, you would probably need to kill the process and relaunch it.
See also: Restart process on file change in Linux
As a necromancer, I thought I'd give an up-to-date solution, which we use in our UNIX system.
Using the os.execl function you can tell python to replace the current process with a new one:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
In our case, we have a bash script which executes killall python3.7, sending the SIGTERM signal to our Python apps, which in turn listen for it via the signal module and gracefully shut down:
loop = asyncio.get_event_loop()
loop.call_soon_threadsafe(loop.stop)
sys.exit(0)
The script then starts the apps in the background and finishes.
Note that killall python3.7 will send SIGTERM signal to every python3.7 process!
When we need to restart, we just run the following command:
os.execl("./restart.sh", 'restart.sh')
The first parameter is the path to the file and the second is the name of the process.
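Putting the pieces together, a minimal sketch of the shutdown-and-restart wiring (restart.sh is the script described above; how the handler is registered is an assumption about how the pieces fit):

import asyncio
import os
import signal
import sys

def handle_sigterm(signum, frame):
    # Stop the event loop from whatever context the signal lands in,
    # then exit cleanly.
    loop = asyncio.get_event_loop()
    loop.call_soon_threadsafe(loop.stop)
    sys.exit(0)

# Listen for the SIGTERM sent by killall python3.7.
signal.signal(signal.SIGTERM, handle_sigterm)

def restart():
    # Replace the current process with the restart script; never returns.
    os.execl("./restart.sh", "restart.sh")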

python script gets killed by test for stdout

I'm writing a CGI script that is supposed to send data to a user until they disconnect, then run logging tasks afterwards.
THE PROBLEM: Instead of break executing and the logging being completed when the client disconnects (detected by the inability to write to the stdout buffer), the script ends or is killed (I cannot find any logs anywhere explaining how this exit occurs).
Here is a snippet of the code:
for block in r.iter_content(262144):
    if stopRecord == True:
        r.close()
    if not block:
        break
    if not sys.stdout.buffer.write(block):  # the code fails here after a client disconnects
        break
cacheTemp.close()
#### write data to other logs and exit gracefully ####
I have tried using "except:" as well as "except SystemExit:" but to no avail. Has anyone been able to solve this problem? (It is for a CGI script which is supposed to log when the client terminates their connection)
UPDATE: I have now tried using signal to interrupt the kill process in the script, which also didn't work. Where can I see an error log? I know exactly which line fails and under which conditions, but there is no error log or anything like I would get if I ran a script which failed in a terminal.
When you say it kills the program, do you mean the main Python process exits, and not via some thrown exception? That's kind of weird. A workaround might be to run the streaming task in a separate thread or process, and then monitor it until it dies and subsequently execute the second task, as sketched below.
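A minimal sketch of that workaround using a separate process, so that whatever kills the writer only kills the child (r and cacheTemp are the objects from the question and are assumed to be set up by the surrounding CGI code):

import multiprocessing
import sys

def stream_to_client():
    # The streaming loop from the question; if writing to the closed
    # connection kills anything, it only kills this child process.
    for block in r.iter_content(262144):
        if not block:
            break
        if not sys.stdout.buffer.write(block):
            break

worker = multiprocessing.Process(target=stream_to_client)
worker.start()
worker.join()  # returns once the child exits, however it exits

# The logging tasks then run safely in the parent process.
cacheTemp.close()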

Python - pySerials inWaiting() always return 0

I'm trying to make a small program that receives messages from the serial port periodically. Right now I'm not saving anything; I'm just trying to see if I get anything at all, so I tried this code:
def ReceiveRS():
    global ser
    while ser.inWaiting() > 0:
        print(ser.read(1))
ser is the serial port, which is correctly initialized, as it has worked before and I can send stuff. After trying some different things, I have found that inWaiting() never seems to return anything but 0. Does anyone have any ideas as to why, and how to fix it?
Oh, and I'm using Python 3.2.3, with pySerial, on a Raspberry Pi.
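(For context, a typical initialization of ser on a Raspberry Pi looks something like this; the device path and baud rate here are assumptions, not taken from the question:)

import serial

# Open the Pi's serial port; device path and baud rate are illustrative.
ser = serial.Serial('/dev/ttyAMA0', baudrate=9600, timeout=1)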
This was embarrassing. I had another, older version running in the background (on autostart, so I didn't remember it was running) that was taking all the received bytes and leaving none for the newer script. So, does anyone know how to remove questions?
Try this:
while True:
    if ser.inWaiting() > 0:
        print(ser.read(1))
It should now work as expected.
