Accessing list of registered factories/services in wired/pyramid_services - pyramid

I'm trying to debug my usage of wired and pyramid_services as well as migrate from using named services to registering services with interfaces and context classes.
Is there a way to see everything that is registered with the current container, both for debugging and for creating pytest fixtures during testing? Something like the get_registrations line in this pseudocode for injecting fixtures into conftest.py:
import pytest

def generate_service_fixture(reg):
    @pytest.fixture()
    def service_fixture(base_app_request):
        return base_app_request.find_service(iface=reg.iface, context=reg.context, name=reg.name)
    return service_fixture

def inject_service_fixture(reg):
    parts = [
        get_iface_name(reg.iface),
        get_context_name(reg.context),
        get_name(reg.name)]
    # Make up a name that tests can use to pull in the appropriate fixture.
    fixture_name = '__'.join(filter(None, parts)) + '_service'
    globals()[fixture_name] = generate_service_fixture(reg)

def get_iface_name(iface):
    return iface.__name__ if iface else None

def get_context_name(context):
    return context.__name__ if context else None

def get_name(name):
    return name if name else None

def register_fixtures(container):
    for reg in container.get_registrations():
        inject_service_fixture(reg)
Later on in tests I would do something like:
def test_service_factory(IRequest_service):
    assert IRequest_service, "Factory failed to construct request."

This sort of works for debugging after the services have been declared. I'm just posting this half-answer for now. I don't have a clean solution for dynamic pytest fixture creation.
def includeme(config):
    # ...
    config.commit()
    introspector = config.registry.introspector
    for intr in introspector.get_category('pyramid_services'):
        print(intr['introspectable'])
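If you want more than the raw repr, each introspectable is a dict-like object, so a small helper can dump its discriminator and contents. A minimal sketch (the helper name and the decision to print everything are my own, not part of pyramid_services):
def dump_service_registrations(config):
    # Works after config.commit(); prints whatever pyramid_services recorded,
    # which is useful for deciding how to name generated fixtures.
    config.commit()
    category = config.registry.introspector.get_category('pyramid_services') or []
    for entry in category:
        intr = entry['introspectable']
        print(intr.discriminator, dict(intr))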

Disable parallel build for a specific target

I need to disable parallel execution for a single target. The target is a test that verifies that the program doesn't create any random or incorrectly named files, so any other file that is built in the meantime makes this test fail.
I found this advice in the SCons FAQ:
Use the SideEffect() method and specify the same dummy file for each target that shouldn't be built in parallel. Even if the file doesn't exist, SCons will prevent the simultaneous execution of commands that affect the dummy file. See the linked method page for examples.
However, this is of no use here, as it would prevent the parallel build of any two targets that share the dummy file, not only of the test script.
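For reference, the FAQ approach would look roughly like the sketch below (my own illustration, not from the FAQ; the run_test command is made up). Every target that names the same dummy file gets serialized, which is exactly the problem:
# Both targets share one dummy side-effect file, so SCons never builds them at
# the same time -- and the same would happen to every other target attached
# to '.serialize_lock'.
env = Environment()
prog = env.Program('out', ['main.c'])
test = env.Command('test.out', prog, 'run_test $SOURCE > $TARGET')
env.SideEffect('.serialize_lock', prog)
env.SideEffect('.serialize_lock', test)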
Is there any way to prevent parallel build of one target while allowing it for all others?
We discussed this in the SCons Discord and came up with an example that sets up synchronous test runners, making sure no other tasks are running while a test runs.
This is the example SConstruct from the GitHub example repo:
import SCons

# A bound map of stream (as in stream of work) name to side-effect
# file. Since SCons will not allow tasks with a shared side-effect
# to execute concurrently, this gives us a way to limit link jobs
# independently of overall SCons concurrency.
node_map = dict()

# A list of nodes that have to be run synchronously.
# The sync nodes ensure the test runners are synchronous amongst
# themselves.
sync_nodes = list()

# This emitter will make a phony side effect per target.
# The test builders will share all the other side effects, making
# sure the tests only run when nothing else is running.
def sync_se_emitter(target, source, env):
    name = str(target[0])
    se_name = "#unique_node_" + str(hash(name))
    se_node = node_map.get(se_name, None)
    if not se_node:
        se_node = env.Entry(se_name)
        # This may not be necessary, but why chance it
        env.NoCache(se_node)
        node_map[se_name] = se_node
        for sync_node in sync_nodes:
            env.SideEffect(se_name, sync_node)
    env.SideEffect(se_node, target)
    return (target, source)

# Here we force all builders to use the emitter, so all
# targets will respect the shared side effect when being built.
# NOTE: the builders which should be synchronous must be listed
# by name, as SynchronousTestRunner is in this example.
original_create_nodes = SCons.Builder.BuilderBase._create_nodes
def always_emitter_create_nodes(self, env, target=None, source=None):
    if self.get_name(env) != "SynchronousTestRunner":
        if self.emitter:
            self.emitter = SCons.Builder.ListEmitter([self.emitter, sync_se_emitter])
        else:
            self.emitter = SCons.Builder.ListEmitter([sync_se_emitter])
    return original_create_nodes(self, env, target, source)
SCons.Builder.BuilderBase._create_nodes = always_emitter_create_nodes

env = Environment()
env.Tool('textfile')
nodes = []

# This is a fake test runner which acts like it's running a test.
env['BUILDERS']["SynchronousTestRunner"] = SCons.Builder.Builder(
    action=SCons.Action.Action([
        "sleep 1",
        "echo Starting test $TARGET",
        "sleep 5",
        "echo Finished test $TARGET",
        'echo done > $TARGET'],
        None))

# This emitter connects the test runners with the shared side effects.
def sync_test_emitter(target, source, env):
    for name in node_map:
        env.SideEffect(name, target)
    sync_nodes.append(target)
    return (target, source)
env['BUILDERS']["SynchronousTestRunner"].emitter = SCons.Builder.ListEmitter([sync_test_emitter])

# In this test we create two test runners and make them depend on various source files
# being generated. This is just to force the tests to be run in the middle of
# the build. This will allow the example to demonstrate that all other jobs
# have paused so the test can be performed.
env.SynchronousTestRunner("test.out", "source10.c")
env.SynchronousTestRunner("test2.out", "source62.c")

for i in range(50):
    nodes.append(env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}"))
for i in range(50, 76):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test.out")
    nodes.append(node)
for i in range(76, 100):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test2.out")
    nodes.append(node)
nodes.append(env.Textfile('main.c', 'int main(){return 0;}'))
env.Program('out', nodes)
This solution is based on dmoody256's answer.
The underlying concept is the same, but the code should be easier to use, and it is ready to be put into the site_scons directory so it doesn't clutter the SConstruct itself.
site_scons/site_init.py:
# Allows using functions `SyncBuilder` and `Environment.SyncCommand`.
from SyncBuild import SyncBuilder
site_scons/SyncBuild.py:
from SCons.Builder import Builder, BuilderBase, ListEmitter
from SCons.Environment import Base as BaseEnvironment

# This code allows building some targets synchronously, which means there won't
# be anything else built at the same time even if SCons is run with the flag `-j`.
#
# This is achieved by adding a different dummy value as a side effect of each
# target. (These files won't be created. They are only a way of enforcing
# constraints on SCons.)
# Then every dummy value from the entire configuration is added as a side
# effect of each file that needs to be built synchronously, which effectively
# prevents it from being built along with any other file.
#
# To create a synchronous target use `SyncBuilder`.

__processed_targets = set()
__lock_values = []
__synchronous_nodes = []

def __add_emiter_to_builder(builder, emitter):
    if builder.emitter:
        if isinstance(builder.emitter, ListEmitter):
            if not any(x is emitter for x in builder.emitter):
                builder.emitter.append(emitter)
        else:
            builder.emitter = ListEmitter([builder.emitter, emitter])
    else:
        builder.emitter = ListEmitter([emitter])

def __individual_sync_locks_emiter(target, source, env):
    if not target or target[0] not in __processed_targets:
        lock_value = env.Value(f'.#sync_lock_{len(__lock_values)}#')
        env.NoCache(lock_value)
        env.SideEffect(lock_value, target + __synchronous_nodes)
        __processed_targets.update(target)
        __lock_values.append(lock_value)
    return target, source

__original_create_nodes = BuilderBase._create_nodes
def __create_nodes_adding_emiter(self, *args, **kwargs):
    __add_emiter_to_builder(self, __individual_sync_locks_emiter)
    return __original_create_nodes(self, *args, **kwargs)
BuilderBase._create_nodes = __create_nodes_adding_emiter

def _all_sync_locks_emitter(target, source, env):
    env.SideEffect(__lock_values, target)
    __synchronous_nodes.append(target)
    return (target, source)

def SyncBuilder(*args, **kwargs):
    """It works like the normal `Builder` except it prevents the targets from
    being built at the same time as any other target."""
    target = Builder(*args, **kwargs)
    __add_emiter_to_builder(target, _all_sync_locks_emitter)
    return target

def __SyncBuilder(self, *args, **kwargs):
    """It works like the normal `Builder` except it prevents the targets from
    being built at the same time as any other target."""
    target = self.Builder(*args, **kwargs)
    __add_emiter_to_builder(target, _all_sync_locks_emitter)
    return target
BaseEnvironment.SyncBuilder = __SyncBuilder

def __SyncCommand(self, *args, **kwargs):
    """It works like the normal `Command` except it prevents the targets from
    being built at the same time as any other target."""
    target = self.Command(*args, **kwargs)
    _all_sync_locks_emitter(target, [], self)
    return target
BaseEnvironment.SyncCommand = __SyncCommand
SConstruct (this is dmoody256's test, adapted; it does the same thing as the original):
env = Environment()
env.Tool('textfile')
nodes = []

# This is a fake test runner which acts like it's running a test.
env['BUILDERS']["SynchronousTestRunner"] = SyncBuilder(
    action=Action([
        "sleep 1",
        "echo Starting test $TARGET",
        "sleep 5",
        "echo Finished test $TARGET",
        'echo done > $TARGET'],
        None))

# In this test we create two test runners and make them depend on various source files
# being generated. This is just to force the tests to be run in the middle of
# the build. This will allow the example to demonstrate that all other jobs
# have paused so the test can be performed.
env.SynchronousTestRunner("test.out", "source10.c")
env.SynchronousTestRunner("test2.out", "source62.c")

for i in range(50):
    nodes.append(env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}"))
for i in range(50, 76):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test.out")
    nodes.append(node)
for i in range(76, 100):
    node = env.Textfile(f"source{i}.c", f"int func{i}(){{return {i};}}")
    env.Depends(node, "test2.out")
    nodes.append(node)
nodes.append(env.Textfile('main.c', 'int main(){return 0;}'))
env.Program('out', nodes)
After creating site_scons/site_init.py and site_scons/SyncBuild.py, you can just use the SyncBuilder function or the Environment.SyncCommand method in any SConstruct or SConscript file in the project, without any additional configuration.
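For example, a minimal SConstruct using the helper might look like this (the target names and the run_test command are only illustrative):
env = Environment()
prog = env.Program('out', ['main.c'])
# Built with nothing else running; all other targets still build in parallel.
env.SyncCommand('test.out', prog, 'run_test $SOURCE > $TARGET')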

How to use google-cloud-os-config classes in python code?

In a Google Cloud Function (Python 3.7), I need to fetch the compliance state of all VMs in a given location in a project.
From the available Google documentation here I could see the REST API format:
https://cloud.google.com/compute/docs/os-configuration-management/view-compliance#view_compliance_state
On searching for the client library here, I found this:
class google.cloud.osconfig_v1alpha.types.ListInstanceOSPoliciesCompliancesRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)
    Bases: proto.message.Message
    A request message for listing OS policies compliance data for all Compute Engine VMs in the given location.
    parent (str)
        Required. The parent resource name.
        Format: projects/{project}/locations/{location}
        For {project}, either Compute Engine project-number or project-id can be provided.
    page_size (int)
        The maximum number of results to return.
    page_token (str)
        A pagination token returned from a previous call to ListInstanceOSPoliciesCompliances that indicates where this listing should continue from.
    filter (str)
        If provided, this field specifies the criteria that must be met by an InstanceOSPoliciesCompliance API resource to be included in the response.
And the response class as:
class google.cloud.osconfig_v1alpha.types.ListInstanceOSPoliciesCompliancesResponse(mapping=None, *, ignore_unknown_fields=False, **kwargs)
    Bases: proto.message.Message
    A response message for listing OS policies compliance data for all Compute Engine VMs in the given location.
    instance_os_policies_compliances (Sequence[google.cloud.osconfig_v1alpha.types.InstanceOSPoliciesCompliance])
        List of instance OS policies compliance objects.
    next_page_token (str)
        The pagination token to retrieve the next page of instance OS policies compliance objects.
    property raw_page
But I am not sure how to use this information in Python code.
I have written this, but I am not sure whether it is correct:
from google.cloud.osconfig_v1alpha.services.os_config_zonal_service import client
from google.cloud.osconfig_v1alpha.types import ListInstanceOSPoliciesCompliancesRequest
import logging
logger = logging.getLogger(__name__)
import os

def handler():
    try:
        project_id = os.environ["PROJECT_ID"]
        location = os.environ["ZONE"]
        # list compliance state
        request = ListInstanceOSPoliciesCompliancesRequest(
            parent=f"projects/{project_id}/locations/{location}")
        response = client.instance_os_policies_compliance(request)
        return response
    except Exception as e:
        logger.error("Unable to get compliance - %s " % str(e))
I could not find any usage example for the client library methods anywhere.
Could someone please help me here?
EDIT:
This is what I am using now:
from googleapiclient.discovery import build

def list_policy_compliance():
    projectId = "my_project"
    zone = "my_zone"
    try:
        service = build('osconfig', 'v1alpha', cache_discovery=False)
        compliance_response = service.projects().locations(
            ).instanceOsPoliciesCompliances().list(
                parent='projects/%s/locations/%s' % (
                    projectId, zone)).execute()
        return compliance_response
    except Exception as e:
        raise Exception()
Something like this should work:
from google.cloud import os_config_v1alpha as osc

def handler():
    client = osc.OsConfigZonalService()
    project_id = "my_project"
    location = "my_gcp_zone"
    parent = f"projects/{project_id}/locations/{location}"
    response = client.list_instance_os_policies_compliances(
        parent=parent
    )
    # response is an iterable yielding
    # InstanceOSPoliciesCompliance objects
    for result in response:
        # do something with result
        ...
You can also construct the request like this:
response = client.list_instance_os_policies_compliances(
    request={
        "parent": parent
    }
)
Answering my own question here, this is what I used:
from googleapiclient.discovery import build

def list_policy_compliance():
    projectId = "my_project"
    zone = "my_zone"
    try:
        service = build('osconfig', 'v1alpha', cache_discovery=False)
        compliance_response = service.projects().locations(
            ).instanceOsPoliciesCompliances().list(
                parent='projects/%s/locations/%s' % (
                    projectId, zone)).execute()
        return compliance_response
    except Exception as e:
        raise Exception()
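Note that the list call is paginated (see page_token/next_page_token in the types above), so for projects with many VMs something like the following sketch may be needed. The camelCase field names (pageToken, nextPageToken, instanceOsPoliciesCompliances) are assumed from the REST representation of the request and response, not taken from an official sample:
def list_all_compliances(service, parent):
    # Follow nextPageToken until the listing is exhausted.
    results, page_token = [], None
    while True:
        resp = service.projects().locations().instanceOsPoliciesCompliances().list(
            parent=parent, pageToken=page_token).execute()
        results.extend(resp.get('instanceOsPoliciesCompliances', []))
        page_token = resp.get('nextPageToken')
        if not page_token:
            return results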

Caching values on a module level and unit testing

Below is a module for querying and caching AWS STS tokens; the intention is to avoid querying STS when there is still a valid token.
import os
import uuid
import logging
from datetime import datetime, timedelta

import boto3

LOGGER = logging.getLogger(__name__)

class Credentials:
    def __init__(self):
        self.sts_credentials = None
        self.token_expiry_time = None

    def is_token_expired(self):
        current_time_with_buffer = datetime.now() + timedelta(minutes=2)
        return not self.token_expiry_time or self.token_expiry_time < current_time_with_buffer

CREDENTIALS_ = Credentials()

def get_credentials():
    if CREDENTIALS_.is_token_expired():
        sts_client = boto3.client('sts')
        LOGGER.info("The credentials are either empty or expiring, refreshing")
        try:
            sts_token = sts_client.assume_role(
                RoleArn=os.environ["KINESIS_ASSUME_ROLE"],
                RoleSessionName=str(uuid.uuid4()))
        except Exception as e:
            LOGGER.error(f"Error occurred while trying to assume role with {os.environ['KINESIS_ASSUME_ROLE']}", e)
            raise e
        CREDENTIALS_.sts_credentials = {
            "aws_access_key_id": sts_token['Credentials']['AccessKeyId'],
            "aws_secret_access_key": sts_token['Credentials']['SecretAccessKey'],
            "aws_session_token": sts_token['Credentials']['SessionToken']
        }
        CREDENTIALS_.token_expiry_time = sts_token["Credentials"]["Expiration"]
    return CREDENTIALS_.sts_credentials
One of the unit tests is below. It passes in isolation but fails when run alongside other tests, because the CREDENTIALS_ variable is modified by the other tests. I could set this value to None between tests, but I want to know what the cleaner way of clearing the cached value is.
from unittest import mock
from unittest.mock import ANY, call

def test_get_credentials_refreshes_token_if_about_to_expire(sts_response, credentials):
    with mock.patch("boto3.client") as mock_boto_client:
        mock_assume_role = mock_boto_client.return_value.assume_role
        mock_assume_role.return_value = sts_response
        get_credentials()
        actual_credentials = get_credentials()
        calls = [call('sts'),
                 call().assume_role(RoleArn='arn:aws:iam::000000000000:role/dummyarn', RoleSessionName=ANY),
                 call('sts'),
                 call().assume_role(RoleArn='arn:aws:iam::000000000000:role/dummyarn', RoleSessionName=ANY)]
        assert credentials == actual_credentials
        mock_boto_client.assert_has_calls(calls)
The cleaner way would be to make sure that your unit tests really are unit tests: for every unit there should be no interaction with other units. Since you are using a global variable CREDENTIALS_, this is going to be nearly impossible.
1) Easy fix
An easy fix would be to pass CREDENTIALS_ as an input argument. Then you can create a fake CREDENTIALS_ object during each of the tests, tailored to your test conditions.
2) Better fix
A better solution would be, besides using the credentials input argument, to break up the logic inside get_credentials. By splitting it into smaller functions, you can separate the server logic from the credential updating, making it easier to mock and test. A possible division of the whole function would be:
get_sts_token
update_credentials
get_credentials
Now get_sts_token talks to the server, but update_credentials and get_credentials do not have to interact with it directly.
Code
Example 1)
def update_credentials(credentials):
    if credentials.is_token_expired():
        sts_client = boto3.client('sts')
        LOGGER.info("The credentials are either empty or expiring, refreshing")
        try:
            sts_token = sts_client.assume_role(
                RoleArn=os.environ["KINESIS_ASSUME_ROLE"],
                RoleSessionName=str(uuid.uuid4()))
        except Exception as e:
            LOGGER.error(f"Error occurred while trying to assume role with {os.environ['KINESIS_ASSUME_ROLE']}", e)
            raise e
        credentials.sts_credentials = {
            "aws_access_key_id": sts_token['Credentials']['AccessKeyId'],
            "aws_secret_access_key": sts_token['Credentials']['SecretAccessKey'],
            "aws_session_token": sts_token['Credentials']['SessionToken']
        }
        credentials.token_expiry_time = sts_token["Credentials"]["Expiration"]
    return credentials

# Where you need the credentials
CREDENTIALS_ = update_credentials(CREDENTIALS_)
CREDENTIALS_.sts_credentials
Now you can insert your own CREDENTIALS_ object in the test.
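A hedged sketch of such a test, assuming sts_response is an existing fixture (as in the question) and reusing Example 1's update_credentials; the role ARN value is only illustrative:
import pytest
from unittest import mock

@pytest.fixture
def fresh_credentials():
    # A brand-new object per test, so no state leaks between tests.
    return Credentials()

def test_update_credentials_refreshes_expired_token(monkeypatch, sts_response, fresh_credentials):
    monkeypatch.setenv("KINESIS_ASSUME_ROLE", "arn:aws:iam::000000000000:role/dummyarn")
    with mock.patch("boto3.client") as mock_boto_client:
        mock_boto_client.return_value.assume_role.return_value = sts_response
        result = update_credentials(fresh_credentials)
    assert result.sts_credentials["aws_session_token"] == \
        sts_response["Credentials"]["SessionToken"]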
Example 2)
def get_sts_token():
    sts_client = boto3.client('sts')
    LOGGER.info("The credentials are either empty or expiring, refreshing")
    try:
        sts_token = sts_client.assume_role(
            RoleArn=os.environ["KINESIS_ASSUME_ROLE"],
            RoleSessionName=str(uuid.uuid4()))
    except Exception as e:
        LOGGER.error(f"Error occurred while trying to assume role with {os.environ['KINESIS_ASSUME_ROLE']}", e)
        raise e
    return sts_token

def update_credentials(credentials, sts_token):
    credentials.sts_credentials = {
        "aws_access_key_id": sts_token['Credentials']['AccessKeyId'],
        "aws_secret_access_key": sts_token['Credentials']['SecretAccessKey'],
        "aws_session_token": sts_token['Credentials']['SessionToken']
    }
    # Keep the expiry as well, so is_token_expired() keeps working.
    credentials.token_expiry_time = sts_token["Credentials"]["Expiration"]
    return credentials

def get_credentials(credentials: Credentials):
    if credentials.is_token_expired():
        sts_token = get_sts_token()
        credentials = update_credentials(credentials, sts_token)
    return credentials.sts_credentials

How to update the value of pymodbus tcp server according to the message subscribed by zmq?

I am a newbie. In my current project, when the front end decides to start the Modbus service, I create a separate process for the Modbus service. The values are obtained in the parent process and passed along via ZeroMQ PUB/SUB, and I now want to update the values of the Modbus registers in the Modbus service process.
I tried the approach from the updating_server.py example provided by pymodbus, using twisted.internet.task.LoopingCall() to update the values of the registers, but this makes it impossible for me to connect to my server with the client, and I don't know why.
This is the log when the client connects, with the server established via LoopingCall().
Then I tried to put both the updating and StartTcpServer in the async loop, but the update was only entered once right after startup and never again.
Currently I'm using LoopingCall() to handle the updates, but I don't think this is a good way.
This is the code where I initialize the PUB socket and all the tags that can be read:
from loop import cycle
import asyncio
from multiprocessing import Process
from persistence import models as pmodels
from persistence import service as pservice
from persistence import basic as pbasic
import zmq
from zmq.asyncio import Context
from common import logging
from server.modbustcp import i3ot_tcp as sertcp
import common.config as cfg
import communication.admin as ca
import json
import os
import signal
from datetime import datetime
from server.opcuaserver import i3ot_opc as seropc

async def main():
    future = []
    task = []
    global readers, readers_old, task_flag
    logger.debug("connecting to database and create table.")
    pmodels.connect_create()
    logger.debug("init read all address to create loop task.")
    cycle.init_readers(readers)
    ctx = Context()
    publisher = ctx.socket(zmq.PUB)
    logger.debug("init publish [%s].", addrs)
    publisher.bind(addrs)
    readers_old = readers.copy()
    for reader in readers:
        task.append(asyncio.ensure_future(
            cycle.run_readers(readers[reader], publisher)))
    if not len(task):
        task_flag = True
    logger.debug("task length [%s - %s].", len(task), task)
    opcua_server = LocalServer(seropc.opc_server, "opcua")
    future = [
        start_get_all_address(),
        start_api(),
        create_address_loop(publisher, task),
        modbus_server(),
        opcua_server.run()
    ]
    logger.debug("run loop...")
    await asyncio.gather(*future)

asyncio.run(main(), debug=False)
This is the code that gets the device tag values and publishes them:
async def run_readers(reader, publisher):
    while True:
        await reader.run(publisher)

class DataReader:
    def __init__(self, freq, clients):
        self._addresses = []
        self._frequency = freq
        self._stop_signal = False
        self._clients = clients
        self.signature = sign_data_reader(self._addresses)

    async def run(self, publisher):
        while not self._stop_signal:
            for addr in self._addresses:
                await addr.read()
                data = {
                    "type": "value",
                    "data": addr._final_value
                }
                publisher.send_pyobj(data)
                if addr._status:
                    if addr.alarm_log:
                        return_alarm_log = pbasic.get_log_by_time(addr.alarm_log['date'])
                        if return_alarm_log:
                            data = {
                                "type": "alarm",
                                "data": return_alarm_log
                            }
                            publisher.send_pyobj(data)
                    self.data_send(addr)
                    logger.debug("run send data")
            await asyncio.sleep(int(self._frequency))

    def stop(self):
        self._stop_signal = True
These are the Modbus server imports:
from common import logging
from pymodbus.server.asynchronous import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusSlaveContext, ModbusServerContext
from persistence import service as pservice
from persistence import basic as pbasic
import zmq
import common.config as cfg
import struct
import os
import signal
from datetime import datetime
from twisted.internet.task import LoopingCall
def updating_writer(a):
    logger.info("in updates of modbus tcp server.")
    context = a[0]
    # while True:
    if check_pid(os.getppid()) is False:
        os.kill(os.getpid(), signal.SIGKILL)
    url = ("ipc://{}".format(cfg.get('ipc', 'pubsub')))
    logger.debug("connecting to [%s].", url)
    ctx = zmq.Context()
    subscriber = ctx.socket(zmq.SUB)
    subscriber.connect(url)
    subscriber.setsockopt(zmq.SUBSCRIBE, b"")
    slave_id = 0x00
    msg = subscriber.recv_pyobj()
    logger.debug("updates.")
    if msg['data']['data_type'] in modbus_server_type and msg['type'] == 'value':
        addr = pservice.get_mbaddress_to_write_value(msg['data']['id'])
        if addr:
            logger.debug(
                "local address and length [%s - %s].",
                addr['local_address'], addr['length'])
            values = get_value_by_type(msg['data']['data_type'], msg['data']['final'])
            logger.debug("modbus server updates values [%s].", values)
            register = get_register(addr['type'])
            logger.debug(
                "register [%d] local address [%d] and value [%s].",
                register, addr['local_address'], values)
            context[slave_id].setValues(register, addr['local_address'], values)
    # time.sleep(1)
def tcp_server(pid):
    logger.info("Get server configure and device's tags.")
    st = datetime.now()
    data = get_servie_and_all_tags()
    if data:
        logger.debug("register address space.")
        register_address_space(data)
    else:
        logger.debug("no data to create address space.")
    length = register_number()
    store = ModbusSlaveContext(
        di=ModbusSequentialDataBlock(0, [0] * length),
        co=ModbusSequentialDataBlock(0, [0] * length),
        hr=ModbusSequentialDataBlock(0, [0] * length),
        ir=ModbusSequentialDataBlock(0, [0] * length)
    )
    context = ModbusServerContext(slaves=store, single=True)
    identity = ModbusDeviceIdentification()
    identity.VendorName = 'pymodbus'
    identity.ProductCode = 'PM'
    identity.VendorUrl = 'http://github.com/bashwork/pymodbus/'
    identity.ProductName = 'pymodbus Server'
    identity.ModelName = 'pymodbus Server'
    identity.MajorMinorRevision = '2.2.0'
    # ----------------------------------------------------------------------- #
    # set loop call and run server
    # ----------------------------------------------------------------------- #
    try:
        logger.debug("thread start.")
        loop = LoopingCall(updating_writer, (context, ))
        loop.start(1, now=False)
        # process = Process(target=updating_writer, args=(context, os.getpid(),))
        # process.start()
        address = (data['tcp_ip'], int(data['tcp_port']))
        nt = datetime.now() - st
        logger.info("modbus tcp server begin has used [%s] s.", nt.seconds)
        pservice.write_server_status_by_type('modbus', 'running')
        StartTcpServer(context, identity=identity, address=address)
    except Exception as e:
        logger.debug("modbus server start error [%s].", e)
        pservice.write_server_status_by_type('modbus', 'closed')
This is the code I created for the modbus process.
def process_stop(p_to_stop):
    global ptcp_flag
    pid = p_to_stop.pid
    os.kill(pid, signal.SIGKILL)
    logger.debug("process has closed.")
    ptcp_flag = False

def ptcp_create():
    global ptcp_flag
    pid = os.getpid()
    logger.debug("sentry pid [%s].", pid)
    ptcp = Process(target=sertcp.tcp_server, args=(pid,))
    ptcp_flag = True
    return ptcp

async def modbus_server():
    logger.debug("get modbus server's status.")
    global ptcp_flag
    name = 'modbus'
    while True:
        ser = pservice.get_server_status_by_name(name)
        if ser['enabled']:
            if ser['tcp_status'] == 'closed' or ser['tcp_status'] == 'running':
                tags = pbasic.get_tag_by_name(name)
                if len(tags):
                    if ptcp_flag is False:
                        logger.debug("[%s] status [%s].", ser['tcp_name'], ptcp_flag)
                        ptcp = ptcp_create()
                        ptcp.start()
                    else:
                        logger.debug("modbus server is running ...")
                else:
                    logger.debug("no address to create [%s] server.", ser['tcp_name'])
                    pservice.write_server_status_by_type(name, "closed")
            else:
                logger.debug("[%s] server is running ...", name)
        else:
            if ptcp_flag:
                process_stop(ptcp)
                logger.debug("[%s] has been closed.", ser['tcp_name'])
                pservice.write_server_status_by_type(name, "closed")
            logger.debug("[%s] server not allowed to running.", name)
        await asyncio.sleep(5)
This is the command that Docker runs.
/usr/bin/docker run --privileged --network host --name scout-sentry -v /etc/scout.cfg:/etc/scout.cfg -v /var/run:/var/run -v /sys:/sys -v /dev/mem:/dev/mem -v /var/lib/scout:/data --rm shulian/scout-sentry
This is the Docker configuration file /etc/scout.cfg.
[scout]
mode=product
[logging]
level=DEBUG
[db]
path=/data
[ipc]
cs=/var/run/scout-cs.sock
pubsub=/var/run/pubsub.sock
I want the Modbus value-update function to be triggered whenever a message arrives from ZeroMQ, and I want the value to be updated correctly.
Let's start from inside out.
Q : ...this will make it impossible for me to connect to my server with the client. I don't know why?
ZeroMQ is a smart broker-less messaging / signaling middleware or better a platform for smart-messaging. In case one feels not so much familiar with the art of Zen-of-Zero as present in ZeroMQ Architecture, one may like to start with ZeroMQ Principles in less than Five Seconds before diving into further details.
The Basis :
The Scalable Formal Communication Archetype, borrowed from ZeroMQ PUB/SUB, does not come at zero-cost.
This means that each infrastructure setup ( both on the PUB-side and on the SUB-side ) takes some, rather remarkable time and no one can be sure of when the AccessNode configuration results in an RTO-state. So the SUB-side (as proposed above) ought to be either a permanent entity, or the user shall not expect to make it RTO in zero-time, after a twisted.internet.task.LoopingCall() gets reinstated.
Preferred way: instantiate your (semi-)persistent zmq.Context(), get it configured so as to serve the <aContextInstance>.socket( zmq.PUB ) as needed, a minimum safeguarding setup being the <aSocketInstance>.setsockopt( zmq.LINGER, 0 ) and all transport / queuing / security-handling details, that the exosystem exposes to your code ( whitelisting and secure sizing and resources protection being the most probable candidates - but details are related to your application domain and the risks that you are willing to face being prepared to handle them ).
ZeroMQ strongly discourages from sharing ( zero-sharing ) <aContextInstance>.socket()-instances, yet the zmq.Context()-instance can be shared / re-used (ref. ZeroMQ Principles... ) / passed to more than one threads ( if needed ).
All <aSocketInstance>{.bind()|.connect()}-methods are expensive, so try to set up the infrastructure AccessPoint(s) and their due error-handling well before one tries to use their mediated communication services.
Each <aSocketInstance>.setsockopt( zmq.SUBSCRIBE, ... ) is expensive in that it may take ( depending on (local/remote) version ) a form of a non-local, distributed-behaviour - local side "sets" the subscription, yet the remote side has to "be informed" about such state-change and "implements" the operations in line with the actual (propagated) state. While in earlier versions, all messages were dispatched from the PUB-side and all the SUB-side(s) were flooded with such data and were left for "filtering" which will be moved into a local-side internal-Queue, the newer versions "implement" the Topic-Filter on the PUB-side, which further increases the latency of setting the new modus-operandi in action.
Next comes the modus-operandi: how <aSocketInstance>.recv() gets results:
In their default API-state, .recv()-methods are blocking, potentially infinitely blocking, if no messages arrive.
Solution: avoid blocking-forms of calling ZeroMQ <aSocket>.recv()-methods by always using the zmq.NOBLOCK-modes thereof or rather test a presence or absence of any expected-message(s) with <aSocket>.poll( zmq.POLLIN, <timeout> )-methods available, with zero or controlled-timeouts. This makes you the master, who decides about the flow of code-execution. Not doing so, you knowingly let your code depend on external sequence ( or absence ) of events and your architecture is prone to awful problems with handling infinite blocking-states ( or potential unsalvageable many-agents' distributed behaviour live-locks or dead-locks )
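A hedged sketch of that advice applied to this question (this is not the original updating_writer(); the ipc path comes from the question's /etc/scout.cfg, everything else is illustrative):
import zmq

ctx = zmq.Context()                        # a (semi-)persistent Context, created once
subscriber = ctx.socket(zmq.SUB)
subscriber.setsockopt(zmq.LINGER, 0)       # minimum safeguarding setup
subscriber.setsockopt(zmq.SUBSCRIBE, b"")
subscriber.connect("ipc:///var/run/pubsub.sock")

def poll_updates(context, slave_id=0x00, timeout_ms=100):
    # Drain whatever has arrived, but never block longer than timeout_ms,
    # so the caller stays the master of the flow of code-execution.
    while subscriber.poll(timeout_ms, zmq.POLLIN):
        msg = subscriber.recv_pyobj(zmq.NOBLOCK)
        if msg.get('type') == 'value':
            # map the published value onto a register, as updating_writer() does
            ...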
Avoid uncontrolled cross-breeding of event-loops - like passing ZeroMQ-driven-loops into an external "callback"-alike handler or async-decorated code-blocks, where the stack of (non-)blocking logics may wreak havoc on the original idea just by throwing the system into an unresolvable state, where events miss the expected sequence of events and live-locks are unsalvageable, or just the first pass happens to go through.
Stacking asyncio-code with twisted-LoopingCall()-s and async/await-decorated code + ZeroMQ blocking .recv()-s is either a Piece-of-Filligrane-Precise-Art-of-Truly-a-Zen-Master, or a sure ticket to Hell - with all respect to the Art-of-Truly-Zen-Masters :o)
So, yes, complex thinking is needed -- welcome to the realms of distributed-computing!

Soapui Groovy - No Signature of Method error on ProWsdlTestSuitePanelBuilder.buildDesktopPanel

I'm working on a script to automate the running of several TestSuites across multiple projects concurrently in SoapUI 4.5.1:
import com.eviware.soapui.impl.wsdl.panels.testsuite.*;
def properties = new com.eviware.soapui.support.types.StringToObjectMap();
def currentProject = testRunner.getTestCase().testSuite.getProject();
def workspace = currentProject.getWorkspace();
def otherProject = workspace.getProjectByName('Project 1');
def otherTestSuite = otherProject.getTestSuiteByName('TestSuite 1');
otherTestSuite.run(properties, true);
However, I'm also attempting to open the TestSuite Panel for each of the TestSuites that are run by the script to allow visual tracking of the Suites' progress. That's where I run into trouble:
ProWsdlTestSuitePanelBuilder.buildDesktopPanel(otherTestSuite);
This particular line throws the error:
groovy.lang.MissingMethodException: No signature of method:
static com.eviware.soapui.impl.wsdl.panels.testsuite.
ProWsdlTestSuitePanelBuilder.buildDesktopPanel() is
applicable for argument types:
(com.eviware.soapui.impl.wsdl.WsdlTestSuitePro) values:
[com.eviware.soapui.impl.wsdl.WsdlTestSuitePro#1d0b2bc6]
Possible solutions:
buildDesktopPanel(com.eviware.soapui.impl.wsdl.WsdlTestSuitePro),
buildDesktopPanel(com.eviware.soapui.model.ModelItem),
buildDesktopPanel(com.eviware.soapui.impl.wsdl.WsdlTestSuite),
buildDesktopPanel(com.eviware.soapui.model.ModelItem),
buildDesktopPanel(com.eviware.soapui.impl.wsdl.WsdlTestSuite),
buildDesktopPanel(com.eviware.soapui.model.ModelItem)
error at line: 12
Which I take to mean that the WsdlTestSuitePro instance I'm passing to ProWsdlTestSuitePanelBuilder.buildDesktopPanel() isn't being accepted for some reason, but I've no idea why.
At this point I'm also not sure whether ProWsdlTestSuitePanelBuilder.buildDesktopPanel() is really what I want anyway, but it's the only UI builder that will take a WsdlTestSuitePro, as that is apparently what all my TestSuites are.
Okay, so this falls under the newbie category. I wasn't paying attention to the fact that buildDesktopPanel was static.
However, I managed to work around that and create the final product:
// Create a UISupport container for all the panels we'll be showing
def UIDesktop = new com.eviware.soapui.support.UISupport();
// Basic environment information
def properties = new com.eviware.soapui.support.types.StringToObjectMap();
def currentProject = testRunner.getTestCase().testSuite.getProject();
def workspace = currentProject.getWorkspace();
// Get the various Projects we'll be using
def OtherProject = workspace.getProjectByName('Other Project');
// Get the various TestSuites we'll be running
def OtherTestSuite = OtherProject.getTestSuiteByName('Other Test Suite');
// Generate the Panels for the Testsuites
def TestSuitePanel = new com.eviware.soapui.impl.wsdl.panels.testsuite.ProWsdlTestSuiteDesktopPanel(OtherTestSuite);
// Show TestSuite Panels
UIDesktop.showDesktopPanel(TestSuitePanel);
// Run the Testsuites
OtherTestSuite.run(properties, true);
