I am trying to run code that interfaces with a Vector CANcaseXL to send and receive signals from an ECU. The library I am using is python-can.
But I am getting an error that says:
Could not import vxlapi: function 'xlCanReceive' not found
raise ImportError("The Vector API has not been loaded")
ImportError: The Vector API has not been loaded
I tried updating the CANcaseXL drivers from the Vector website, thinking it might be a driver issue, but the error still shows up.
def Can_receive_all(self):
    bus = can.interface.Bus(bustype='vector', channel=self.ch_list, bitrate=500000, app_name=self.can_app_name)
    try:
        while True:
            recv_msg = bus.recv()
            print(recv_msg)
            # if recv_msg is not None:
            #     self.print_can_data(recv_msg)
    except Exception as ex:
        print(ex)
I expected to receive the Rx signals from the ECU, but instead I get the error above.
When reading the python-can documentation, it appears that you must use the Vector Hardware Config program in order to use the CANcase with python-can. The documentation states the following:
Vector
This interface adds support for CAN controllers by Vector.
By default this library uses the channel configuration for CANalyzer. To use a
different application, open Vector Hardware Config program and create a new
application and assign the channels you may want to use. Specify the application name
as app_name='Your app name' when constructing the bus or in a config file.
Channel should be given as a list of channels starting at 0.
Here is an example configuration file connecting to CAN 1 and CAN 2 for an
application named “python-can”:
[default]
interface = vector
channel = 0, 1
app_name = python-can
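Based on that configuration, here is a minimal sketch of constructing the bus directly in code; it assumes an application named "python-can" with channels 0 and 1 assigned in Vector Hardware Config:

import can

# Minimal sketch, assuming an application named "python-can" with
# channels 0 and 1 assigned in Vector Hardware Config.
bus = can.interface.Bus(
    bustype='vector',
    channel=[0, 1],      # channels are given as a list starting at 0
    bitrate=500000,
    app_name='python-can',
)
msg = bus.recv(timeout=1.0)  # returns None if no frame arrives in time
print(msg)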
Hi, I'm new to the opcua-asyncio package, so please be patient if I missed something really obvious. I'm trying to figure out how to connect to a Siemens S7-1200 CPU 1212C (firmware version 14) PLC from Python (3.11) and read some variable values.
I'm sorry I can't support my case with a reproducible example, because it's intrinsically tied to the test PLC I'm working with; if you can suggest a way to make it reproducible, I will amend the question accordingly.
The structure on my test PLC is detailed in the picture; I can access it through different clients. The variables t1 through t19 are my goal.
[PLC Structure][1]
I went through the documentation (especially the minimal client example) and some Stack Overflow answers, and I wrote the following code:
# modules
import asyncio
from asyncua import Client

# url and namespace
url = foo bar  # the plc location on the network
namespace = "OPC-UA:PLC_1"
t2 = []

async def main():
    # I can find the objects inside the node
    async with Client(url=url) as client:
        # Find the namespace index
        nsidx = await client.get_namespace_index("urn:OPC-UA:PLC_1")
        print(nsidx)
        root = client.nodes.root
        print("Root node is: ", root)
        objects = client.nodes.objects
        print("Objects node is: ", objects)

        # I can't find my variables
        var = await client.nodes.root.get_child(
            ["0:Objects", "3:ServerInterfaces", "0:Face"])
        print("Var is: ", var)
        t2 = await var.get_children_descriptions()
        print(t2)

asyncio.run(main())
By my understanding of the structure, in the OPC-UA:PLC_1 namespace I should find an "Objects" node listing a "ServerInterfaces" node, which in turn lists a "Face" node holding the variables t1 to t19. However, if I ask "Objects" to describe its children through get_children_descriptions(), I can see the ServerInterfaces node, yet this ServerInterfaces appears to be named "Face".
If I ask for the children of "Face", I receive a BadNoMatch error.
Any help pointing me in the right direction would be much appreciated. Thank you!
[1]: https://i.stack.imgur.com/QyQCJ.png
I think the BrowseName namespace index of Face is wrong.
The index 0 is used for the base nodeset.
You can check the correct index with another UA client like UaExpert (there you need to look at the attributes window, not the address-space window, which does not show the namespace index).
If you defined Face in the namespace "OPC-UA:PLC_1", the line should be:
var = await client.nodes.root.get_child(
    ["0:Objects", "3:ServerInterfaces", "3:Face"])
I have an XGBoost model currently in production on AWS SageMaker, making real-time inferences. After a while, I would like to update the model with a newer one trained on more data and keep everything else as is (e.g. same endpoint, same inference procedure, so really no changes aside from the model itself).
The current deployment procedure is the following:
from sagemaker.xgboost.model import XGBoostModel
from sagemaker.xgboost.model import XGBoostPredictor

xgboost_model = XGBoostModel(
    model_data=<S3 url>,
    role=<sagemaker role>,
    entry_point='inference.py',
    source_dir='src',
    code_location=<S3 url of other dependencies>,
    framework_version='1.5-1',
    name=model_name)

xgboost_model.deploy(
    instance_type='ml.c5.large',
    initial_instance_count=1,
    endpoint_name=model_name)
Now that I have updated the model a few weeks later, I would like to re-deploy it. I am aware that the .deploy() method creates an endpoint and an endpoint configuration, so it does it all. I cannot simply re-run my script, since I would encounter an error.
In previous versions of sagemaker I could have updated the model with an extra argument passed to the .deploy() method, update_endpoint=True. In sagemaker >= 2.0 this is a no-op; instead, I need to use the predictor object, as stated in the documentation. So I try the following:
predictor = XGBoostPredictor(model_name)
predictor.update_endpoint(model_name=model_name)
This actually updates the endpoint according to a new endpoint configuration. However, I do not know what it is updating... nowhere in the above two lines do I specify that the endpoint should use the new xgboost_model trained on more data, so where do I tell the update to take the more recent model?
Thank you!
Update
I believe I need to be looking at production variants, as stated in their documentation here. However, that whole tutorial is based on the Amazon SDK for Python (boto3), whose artifacts are hard to manage when I have different entry points for each model variant (e.g. different inference.py scripts).
Since I found an answer to my own question, I will post it here for those who encounter the same problem.
I ended up re-coding my whole deployment script using the boto3 SDK rather than the sagemaker SDK (or a mix of both, as some documentation suggests).
Here's the whole script, which shows how to create a SageMaker model object, an endpoint configuration, and an endpoint to deploy the model on for the first time. In addition, it shows how to update the endpoint with a newer model (which was my main question).
Here's the code to do all three, in case you want to bring your own model and update it safely in production using SageMaker:
import boto3
import time
from datetime import datetime

import sagemaker  # needed below for sagemaker.Session()
from sagemaker import image_uris
from fileManager import *  # this is a local script for helper functions

# name of zipped model and zipped inference code
CODE_TAR = 'your_inference_code_and_other_artifacts.tar.gz'
MODEL_TAR = 'your_saved_xgboost_model.tar.gz'

# sagemaker params
smClient = boto3.client('sagemaker')
smRole = <your_sagemaker_role>
bucket = sagemaker.Session().default_bucket()
region = boto3.Session().region_name  # region used to retrieve the container image
INSTANCE_TYPE = 'ml.c5.large'  # instance type for the production variant

# deploy algorithm
class Deployer:

    def __init__(self, modelName, deployRetrained=False):
        self.modelName = modelName
        self.deployRetrained = deployRetrained
        self.prefix = <S3_model_path_prefix>

    def deploy(self):
        '''
        Main method to create a sagemaker model, create an endpoint configuration and deploy
        the model. If the deployRetrained param is set to True, this method will update an
        already existing endpoint.
        '''
        # define model name and endpoint name to be used for model deployment/update
        model_name = self.modelName + <any_suffix>
        endpoint_config_name = self.modelName + '-%s' % datetime.now().strftime('%Y-%m-%d-%HH%M')
        endpoint_name = self.modelName

        # deploy model for the first time
        if not self.deployRetrained:
            print('Deploying for the first time')

            # here you should copy and zip the model dependencies that you may have
            # (such as preprocessors, inference code, config code...)
            # mine were zipped into the file called CODE_TAR

            # upload model and model artifacts needed for inference to S3
            uploadFile(list_files=[MODEL_TAR, CODE_TAR], prefix=self.prefix)

            # create sagemaker model and endpoint configuration
            self.createSagemakerModel(model_name)
            self.createEndpointConfig(endpoint_config_name, model_name)

            # deploy model and wait while endpoint is being created
            self.createEndpoint(endpoint_name, endpoint_config_name)
            self.waitWhileCreating(endpoint_name)

        # update model
        else:
            print('Updating existing model')

            # upload model and model artifacts needed for inference (here the old ones are replaced)
            # make sure to make a backup in S3 if you would like to keep the older models
            # we replace the old ones and keep the same names to avoid having to recreate
            # a sagemaker model with a different name for the update!
            uploadFile(list_files=[MODEL_TAR, CODE_TAR], prefix=self.prefix)

            # create a new endpoint config that takes the new model
            self.createEndpointConfig(endpoint_config_name, model_name)

            # update endpoint
            self.updateEndpoint(endpoint_name, endpoint_config_name)

            # wait while endpoint updates, then delete outdated endpoint config once it is InService
            self.waitWhileCreating(endpoint_name)
            self.deleteOutdatedEndpointConfig(model_name, endpoint_config_name)

    def createSagemakerModel(self, model_name):
        '''
        Create a new sagemaker Model object with an xgboost container and an entry point
        for inference using the boto3 API.
        '''
        # Retrieve the inference image (container)
        docker_container = image_uris.retrieve(region=region, framework='xgboost', version='1.5-1')

        # Relative S3 path to pre-trained model to create S3 model URI
        model_s3_key = f'{self.prefix}/' + MODEL_TAR

        # Combine bucket name, model file name, and relative S3 path to create S3 model URI
        model_url = f's3://{bucket}/{model_s3_key}'

        # S3 path to the necessary inference code
        code_url = f's3://{bucket}/{self.prefix}/{CODE_TAR}'

        # Create a sagemaker Model object with all its artifacts
        smClient.create_model(
            ModelName=model_name,
            ExecutionRoleArn=smRole,
            PrimaryContainer={
                'Image': docker_container,
                'ModelDataUrl': model_url,
                'Environment': {
                    'SAGEMAKER_PROGRAM': 'inference.py',  # inference.py is at the root of my zipped CODE_TAR
                    'SAGEMAKER_SUBMIT_DIRECTORY': code_url,
                }
            }
        )

    def createEndpointConfig(self, endpoint_config_name, model_name):
        '''
        Create an endpoint configuration (only for the boto3 sdk procedure) and set
        production variant parameters. Each retraining procedure will induce a new
        variant name based on the endpoint configuration name.
        '''
        smClient.create_endpoint_config(
            EndpointConfigName=endpoint_config_name,
            ProductionVariants=[
                {
                    'VariantName': endpoint_config_name,
                    'ModelName': model_name,
                    'InstanceType': INSTANCE_TYPE,
                    'InitialInstanceCount': 1
                }
            ]
        )

    def createEndpoint(self, endpoint_name, endpoint_config_name):
        '''
        Deploy the model to an endpoint.
        '''
        smClient.create_endpoint(
            EndpointName=endpoint_name,
            EndpointConfigName=endpoint_config_name)

    def deleteOutdatedEndpointConfig(self, name_check, current_endpoint_config):
        '''
        Automatically detect and delete endpoint configurations that contain the string
        'name_check'. This method can be used after a retrain procedure to delete all
        previous endpoint configurations but keep the current one named 'current_endpoint_config'.
        '''
        # get a list of all available endpoint configurations
        all_configs = smClient.list_endpoint_configs()['EndpointConfigs']

        # loop over the names of endpoint configs
        names_list = []
        for config_dict in all_configs:
            endpoint_config_name = config_dict['EndpointConfigName']
            # keep only endpoint configs that contain name_check in them
            if name_check in endpoint_config_name:
                names_list.append(endpoint_config_name)

        # remove the current endpoint configuration from the list (we do not want to delete this one since it is live)
        names_list.remove(current_endpoint_config)

        for name in names_list:
            try:
                smClient.delete_endpoint_config(EndpointConfigName=name)
                print('Deleted endpoint configuration for %s' % name)
            except Exception:
                print('INFO : No endpoint configuration was found for %s' % name)

    def updateEndpoint(self, endpoint_name, endpoint_config_name):
        '''
        Update the existing endpoint with a new retrained model.
        '''
        smClient.update_endpoint(
            EndpointName=endpoint_name,
            EndpointConfigName=endpoint_config_name,
            RetainAllVariantProperties=True)

    def waitWhileCreating(self, endpoint_name):
        '''
        While the endpoint is being created or updated, sleep for 60 seconds at a time.
        '''
        # wait while creating or updating endpoint
        status = smClient.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
        print('Status: %s' % status)
        while status != 'InService' and status != 'Failed':
            time.sleep(60)
            status = smClient.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
            print('Status: %s' % status)

        # in case of a deployment failure raise an error
        if status == 'Failed':
            raise ValueError('Endpoint failed to deploy')


if __name__ == "__main__":
    deployer = Deployer('MyDeployedModel', deployRetrained=True)
    deployer.deploy()
Final comments:
The SageMaker documentation mentions all of this but fails to state that you can provide the equivalent of an 'entry_point' and a 'source_dir' for inference dependencies (e.g. normalization artifacts) to the create_model method. It can be done as seen in the PrimaryContainer argument, through the SAGEMAKER_PROGRAM and SAGEMAKER_SUBMIT_DIRECTORY environment keys.
My fileManager.py script just contains basic functions to make tar files and to upload to and download from my S3 paths. To keep the class simple, I have not included them.
The deleteOutdatedEndpointConfig method may seem like overkill, with unnecessary loops and checks; I do this because I have multiple endpoint configurations to handle and wanted to remove the ones that aren't live AND contain the string name_check (I do not know the exact name of the configuration, since there is a datetime suffix). Feel free to simplify it or remove it altogether.
Hope it helps.
In model_name you specify the name of a SageMaker Model object, in which you can specify the image_uri, model_data, etc.
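A hedged sketch of what that can look like with the sagemaker SDK: register the retrained model under a new, distinct name, then point the existing endpoint at that name ('my-model-v2' and the S3 placeholders are illustrative):

from sagemaker.xgboost.model import XGBoostModel
from sagemaker.xgboost.model import XGBoostPredictor

# Register the retrained model under a new name ('my-model-v2' is illustrative)
new_model = XGBoostModel(
    model_data=<S3 url of the retrained model>,
    role=<sagemaker role>,
    entry_point='inference.py',
    source_dir='src',
    framework_version='1.5-1',
    name='my-model-v2')
new_model.create(instance_type='ml.c5.large')  # creates the Model object without deploying

# Point the existing endpoint at the newly registered model
predictor = XGBoostPredictor(<endpoint name>)
predictor.update_endpoint(
    model_name='my-model-v2',
    initial_instance_count=1,
    instance_type='ml.c5.large')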
I'm using OpenTelemetry to trace my service running on Azure.
Azure Application Map is showing an incorrect name for the components sending the trace. It is also sending an incorrect Cloud RoleName, which is probably why this is happening: the Cloud RoleName that App Insights displays is the Function App name, not the Function name.
In my Azure Function (called FirewallCreate), I start a trace using the following util method:
import os

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_NAMESPACE
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

def get_otel_new_tracer(name=__name__):
    # Set up a TracerProvider(). For more details read:
    # https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-enable?tabs=python#set-the-cloud-role-name-and-the-cloud-role-instance
    trace.set_tracer_provider(
        TracerProvider(
            resource=Resource.create(
                {
                    SERVICE_NAME: name,
                    SERVICE_NAMESPACE: name,
                    # SERVICE_INSTANCE_ID: "my-instance-id"
                }
            )
        )
    )

    # Send messages to the exporter in batches
    span_processor = BatchSpanProcessor(
        AzureMonitorTraceExporter.from_connection_string(
            os.environ["TRACE_APPINSIGHTS_CONNECTION_STRING"]
        )
    )
    trace.get_tracer_provider().add_span_processor(span_processor)
    return trace.get_tracer(name)

def firewall_create():
    tracer = get_otel_new_tracer("FirewallCreate")
    with tracer.start_as_current_span("span-firewall-create") as span:
        # do some stuff
        ...
The traces show another function name in the same function app (picture attached).
What mistake could I be making?
The component shows 0 ms and 0 calls. What does that mean, and how should I interpret it?
Hey, thanks for trying out OpenTelemetry with Application Insights! First of all, are you using the azure-monitor-opentelemetry-exporter, and if so, what version are you using? I've also answered your questions inline:
What mistake could I be making?
What name are you trying to set your component to? By default, the exporter will set your cloudRoleName to <service_namespace>.<service_name>, using the values that you set in your Resource. If <service_namespace> is not populated, the cloudRoleName simply takes the value of <service_name>.
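For example, a minimal sketch of a Resource that yields the cloudRoleName "my-namespace.FirewallCreate" (both attribute values here are illustrative):

from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_NAMESPACE

# cloudRoleName is derived as "<service_namespace>.<service_name>",
# i.e. "my-namespace.FirewallCreate" for this resource
resource = Resource.create(
    {
        SERVICE_NAME: "FirewallCreate",
        SERVICE_NAMESPACE: "my-namespace",
    }
)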
The component shows 0 ms and 0 calls. What does that mean, and how should I interpret it?
Is this the only node in your application map? We first need to find out whether this node is created by the exporter or by the Functions runtime itself.
I have the following at the top of my Keras file. I expect execution to be faster with this optimisation; is my assumption correct? I am running on an 8-core machine.
import os
import tensorflow as tf

NUM_PARALLEL_EXEC_UNITS = 8

config = tf.compat.v1.ConfigProto(
    intra_op_parallelism_threads=NUM_PARALLEL_EXEC_UNITS,
    inter_op_parallelism_threads=2,
    allow_soft_placement=True,
    device_count={'CPU': NUM_PARALLEL_EXEC_UNITS})
session = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(session)

os.environ["OMP_NUM_THREADS"] = str(NUM_PARALLEL_EXEC_UNITS)
os.environ["KMP_BLOCKTIME"] = "30"
os.environ["KMP_SETTINGS"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"
However, when I run with this configuration, I still get the following message:
WARNING:tensorflow:From rfbyolov3withextralayers.py:43: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.
2020-04-14 18:18:25.656076: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Can you please let me know why the warning still appears, and also why the config values are ignored?
Each iteration takes the same amount of time with and without this optimization code.
I guess none of these v1 compat methods work.
I achieved what I wanted by doing this:
tf.config.threading.set_inter_op_parallelism_threads(4)
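For completeness, a minimal sketch of the TF2-native equivalent of the original intra/inter-op configuration (the thread counts are illustrative and must be set before TensorFlow executes any operations):

import tensorflow as tf

# TF2-native threading controls; call these before building any ops
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)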
I am trying to combine multiple logic adapters in Python ChatterBot. I cannot seem to get it right. I tried this:
english_bot = ChatBot("English Bot",
                      storage_adapter="chatterbot.storage.SQLStorageAdapter",
                      multi_logic_adapter=[
                          "chatterbot.logic.MathematicalEvaluation",
                          "chatterbot.logic.TimeLogicAdapter",
                          "chatterbot.logic.BestMatch"]
                      )
Only BestMatch seems to be active.
And I tried this:
english_bot = ChatBot("English Bot",
                      storage_adapter="chatterbot.storage.SQLStorageAdapter",
                      logic_adapter=[
                          "chatterbot.logic.multi_adapter.MultiLogicAdapter",
                          "chatterbot.logic.MathematicalEvaluation",
                          "chatterbot.logic.TimeLogicAdapter",
                          "chatterbot.logic.BestMatch"]
                      )
But I get this error: AttributeError: 'NoneType' object has no attribute 'confidence', and none of the logic adapters seems to be active.
Thanks,
Herb
BestMatch is the default adapter for ChatterBot; you don't need to explicitly specify it. More information: http://chatterbot.readthedocs.io/en/stable/logic/index.html#best-match-adapter
And your code should look like this:
# -*- coding: utf-8 -*-
from chatterbot import ChatBot

bot = ChatBot(
    "English Bot",
    logic_adapters=[
        "chatterbot.logic.MathematicalEvaluation",
        "chatterbot.logic.TimeLogicAdapter"
    ]
)

# Print an example of getting one math based response
response = bot.get_response("What is 4 + 9?")
print(response)

# Print an example of getting one time based response
response = bot.get_response("What time is it?")
print(response)
Every logic adapter in logic_adapters=[] is automatically processed by the MultiLogicAdapter. You might need to tweak the confidence levels, though.
More info about the MultiLogicAdapter here:
http://chatterbot.readthedocs.io/en/stable/logic/multi-logic-adapter.html
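If you need to tune those confidence levels, here is a hedged sketch using the dict-style adapter configuration (the exact parameter names, such as maximum_similarity_threshold, depend on the installed ChatterBot version):

from chatterbot import ChatBot

# Dict-style entries let you tune per-adapter parameters; string entries
# keep their defaults. Threshold and response values are illustrative.
bot = ChatBot(
    "English Bot",
    logic_adapters=[
        "chatterbot.logic.MathematicalEvaluation",
        "chatterbot.logic.TimeLogicAdapter",
        {
            "import_path": "chatterbot.logic.BestMatch",
            "default_response": "I am sorry, but I do not understand.",
            "maximum_similarity_threshold": 0.90,
        },
    ],
)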
The MultiLogicAdapter is a built-in class and is not explicitly specified in the code. You can see this statement in the introduction: "ChatterBot internally uses a special logic adapter that allows it to choose the best response generated by any number of other logic adapters." Here is the link: http://chatterbot.readthedocs.io/en/stable/logic/multi-logic-adapter.html
Also, a similar question is already available on Stack Overflow. Refer to this as well:
Error while using chatterbot
The MultiLogicAdapter typically doesn't get used directly in this way.
Each logic adapter that you add to logic_adapters=[] gets processed by the MultiLogicAdapter internally by ChatterBot; there is no need to explicitly specify it.