I have the following at the top of my Keras file. I expect execution to be faster with this optimisation; is my assumption correct? I am running on an 8-core machine.
import os
import tensorflow as tf

NUM_PARALLEL_EXEC_UNITS = 8

# Configure the TF v1 session thread pools
config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=NUM_PARALLEL_EXEC_UNITS,
                                  inter_op_parallelism_threads=2,
                                  allow_soft_placement=True,
                                  device_count={'CPU': NUM_PARALLEL_EXEC_UNITS})
session = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(session)

# OpenMP / MKL threading environment variables
os.environ["OMP_NUM_THREADS"] = str(NUM_PARALLEL_EXEC_UNITS)
os.environ["KMP_BLOCKTIME"] = "30"
os.environ["KMP_SETTINGS"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"
However, when I run with this configuration, I still get the following messages:
WARNING:tensorflow:From rfbyolov3withextralayers.py:43: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.
2020-04-14 18:18:25.656076: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Can you please let me know why the warning still appears and also why the config values are ignored?
Each iteration takes the same amount of time with and without this optimization code.
I guess none of these tf.compat.v1 methods work.
I have achieved what I wanted by doing this:
tf.config.threading.set_inter_op_parallelism_threads(4)
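For reference, a minimal sketch of the TF2-native way to configure both thread pools; the call must happen before any other TensorFlow operation initializes the runtime, and the thread counts below are only illustrative:

import tensorflow as tf

# Must run before TensorFlow creates its thread pools, i.e. before any
# op, tensor or model is constructed.
tf.config.threading.set_intra_op_parallelism_threads(8)  # threads within a single op
tf.config.threading.set_inter_op_parallelism_threads(2)  # independent ops run in parallel

print(tf.config.threading.get_intra_op_parallelism_threads())
print(tf.config.threading.get_inter_op_parallelism_threads())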
We are currently struggling with a Batch Query,
which seems to ignore the filter expressions on the S4 side because of wrong URL encoding.
/sap/opu/odata/sap/ZP2M_A_CONTRACT_SEARCH_HDR_CDS/ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json
When executing the query using FluentHelperRead.execute(HttpClient),
the returned list of entities contains the expected result with exactly one entity.
When executing the query as a Batch Query, the following request is logged in the console:
GET ZP2M_A_CONTRACT_SEARCH_HDR?%24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson HTTP/1.1
The collected list from all batch result parts contains all entities.
It seems that the query URL is encoded in the wrong way
and that S4 ignores the filter expressions when they are encoded in this way,
e.g. $filter is encoded to %24filter, which is ignored by S4.
This seems to be a bug in the BatchRequestImpl.getRequest(ODataQueryImpl) method,
where URL encoding is applied a second time to already encoded URL parts:
if (systemQuery.indexOf("$format=json&$count=true") != -1)
{
    systemQuery = systemQuery.substring(0, systemQuery.indexOf("$format=json&$count=true") - 1);
    keysUrl.append("/$count");
}
systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); // this line encodes the query a second time
keysUrl.append("?");
The code line systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); located in
BatchRequestImpl(1.38.0) - line 295
BatchRequestImpl(1.42.2) - line 307
encodes the systemQuery string again (including the already encoded parts of FilterExpression as well).
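Just to illustrate the effect of this double encoding outside the SDK, a small sketch (Python here, purely for demonstration; the query string is the one from above):

from urllib.parse import quote_plus

# Query string as produced by the OData helpers: the quotes are already
# percent-encoded, while '$', '=', '&' and spaces are still literal.
system_query = "$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json"

# Encoding the whole string again (as the batch code effectively does) also escapes
# '$', '=', '&' and the '%' of the already encoded parts, so '$filter=' becomes
# '%24filter%3D', which S4 no longer recognizes as a filter.
print(quote_plus(system_query))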
When undoing the changes of this code line in the debugger and replacing the spaces by %20 or '+', the Batch Query looks like this
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID%20eq%20%274600002020%27&$select=*&$format=json HTTP/1.1
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID+eq+%274600002020%27&$select=*&$format=json HTTP/1.1
and it returns the expected result (exactly 1 entity).
This wrong encoding appears when using these library versions:
sdk-bom: 3.16.1
connectivity: 1.38.0
This issue appears in the newest SDK versions as well:
sdk-bom: 3.21.0
connectivity: 1.39.0
This issue also appears with the connectivity JAR in the newest version:
sdk-bom: 3.21.0
connectivity: 1.40.2
Debugging together with an ABAP/S4 colleague showed
that S4 only applies filter expressions if the keyword $filter is found in the request;
%24filter%3D is ignored (which is why we get all entities when running the Batch Query).
My suggestion to solve it would be:
// decode the query first (to decode the filter expression)
systemQuery = URLDecoder.decode(systemQuery, "UTF-8");
// encode the query
systemQuery = org.apache.commons.httpclient.util.URIUtil.encodeQuery(systemQuery, "UTF-8");
My code, showing how I am calling the batch request:
FluentHelperRead<?, MyEntity, ?> queryApi = myService.getAll... // with adding some filter expression
BatchRequestBuilder batchRequestBuilder = BatchRequestBuilder.withService(MyService.DEFAULT_SERVICE_PATH);
ODataQuery query = queryApi.toQuery();
batchRequestBuilder.addQueryRequest(query);
HttpClient httpClient = HttpClientAccessor
.getHttpClient(DefaultErpHttpDestinationAccessor.get());
BatchRequest request = batchRequestBuilder.build();
BatchResult result = request.execute(httpClient);
// ... evaluate response
I think this is a general issue in the Cloud SDK.
Would it be possible to get this fixed in the next Cloud SDK release?
Can you share your code for the Batch request? Do you use BatchRequestImpl directly?
The thing is, the SAP Cloud SDK relies on some dependencies, one of which provides BatchRequestImpl, and if it is called directly, the bug is on the dependency side. I have already asked them to investigate this double-encoding issue. Unfortunately, we can't directly influence how fast it is resolved, and sometimes it takes longer than we'd like.
The good news is that we're working on replacing this dependency with our own implementation to solve exactly this kind of problem. Batch support is work in progress and should be available in Beta around the end of next month for OData V4, and hopefully around the same time for OData V2 (this is not a hard commitment and depends on other priorities).
From here we have to wait for whichever happens first:
The bug is fixed on the dependency side
Internal OData client implementation is ready together with Batch
I hope this helps and explains the current solution path. If you share a bit about your deadlines and the potential impact, we'll be happy to take that into consideration.
This has been fixed within the dependency and as of version 3.25.0 the SAP Cloud SDK includes the fix.
When I look up a transform like below,
(trans, rot) = self.listener.lookupTransform(
    self.robot_link, target_link, rospy.Time(0))
an error pops up: Lookup would require extrapolation into the past.
However, when I change this line to
(trans, rot) = self.listener.lookupTransform(
    self.robot_link, target_link, rospy.Time.now())
I get: Error processing request: Lookup would require extrapolation into the future.
How can I solve this problem?
You can wait for the transformation to become available for a specific duration as described in the tf and Time ROS tutorial:
now = rospy.Time.now()
listener.waitForTransform(self.robot_link, target_link, now, rospy.Duration(4.0))
(trans, rot) = listener.lookupTransform(self.robot_link, target_link, now)
This blocks and waits for your specified duration until the transform you want to query is available.
You might want to add some handling for the case where the transform does not get published within the wait duration, for example as sketched below.
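A minimal sketch of such handling, assuming the exception classes raised by the tf listener (the warning message is just an example):

import rospy
import tf

try:
    now = rospy.Time.now()
    listener.waitForTransform(self.robot_link, target_link, now, rospy.Duration(4.0))
    (trans, rot) = listener.lookupTransform(self.robot_link, target_link, now)
except (tf.Exception, tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException) as e:
    # waitForTransform raises tf.Exception when the timeout expires;
    # lookupTransform raises the more specific exceptions on failure.
    rospy.logwarn("Transform %s -> %s not available: %s", self.robot_link, target_link, e)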
I was trying to run a script that interfaces with a CANcaseXL to send and receive signals from an ECU. The library I am using is python-can.
But I am getting an error which says:
Could not import vxlapi: function 'xlCanReceive' not found
raise ImportError("The Vector API has not been loaded")
ImportError: The Vector API has not been loaded
I tried updating the CANcaseXL drivers from the Vector website, thinking it might be a driver issue, but the error still shows up.
def Can_receive_all(self):
    bus = can.interface.Bus(bustype='vector', channel=self.ch_list, bitrate=500000, app_name=self.can_app_name)
    try:
        while True:
            recv_msg = bus.recv()
            print(recv_msg)
            # if recv_msg is not None:
            #     self.print_can_data(recv_msg)
    except Exception as ex:
        print(ex)
I expected to receive the Rx signals from the ECU but I am getting an error instead.
Reading the python-can documentation, it appears that you must use the Vector Hardware Config program in order to use the CANcase with python-can. The documentation states the following:
Vector
This interface adds support for CAN controllers by Vector.
By default this library uses the channel configuration for CANalyzer. To use a
different application, open Vector Hardware Config program and create a new
application and assign the channels you may want to use. Specify the application name
as app_name='Your app name' when constructing the bus or in a config file.
Channel should be given as a list of channels starting at 0.
Here is an example configuration file connecting to CAN 1 and CAN 2 for an
application named “python-can”:
[default]
interface = vector
channel = 0, 1
app_name = python-can
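For completeness, a minimal sketch of opening the bus once such an application has been registered in Vector Hardware Config; the application name and channel numbers here are just examples:

import can

# Assumes an application named "python-can" exists in Vector Hardware Config
# with channels 0 and 1 assigned to it.
bus = can.interface.Bus(bustype='vector', channel=[0, 1], bitrate=500000, app_name='python-can')

msg = bus.recv(timeout=1.0)  # returns None if no frame arrives within the timeout
print(msg)
bus.shutdown()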
I am using Hazelcast 3.6.3 with Scala 2.11.8.
I have written this code:
val config = new Config("mycluster")
config.getNetworkConfig.getJoin.getMulticastConfig.setEnabled(false)
config.getNetworkConfig.getJoin.getAwsConfig.setEnabled(false)
config.getNetworkConfig.getJoin.getTcpIpConfig.setMembers(...)
config.getNetworkConfig.getJoin.getTcpIpConfig.setEnabled(true)

val hc = Hazelcast.newHazelcastInstance(config)

hc.getConfig.addMapConfig(new MapConfig()
  .setName("foo")
  .setBackupCount(1)
  .setTimeToLiveSeconds(3600)
  .setAsyncBackupCount(1)
  .setInMemoryFormat(InMemoryFormat.BINARY)
  .setMaxSizeConfig(new MaxSizeConfig(1, MaxSizePolicy.USED_HEAP_SIZE))
)

hc.putValue[(String, Int)]("foo", "1", ("foo", 10))
I notice that when the 1 hour is over, Hazelcast does not remove the items from the cache. The items seem to live forever in the cache.
I don't want sliding expiration; I want absolute expiration, which means that after 1 hour the item has to be evicted no matter how many times it was accessed during that hour.
I have done the required googling and I think my code above is correct, but when I look at my server logs, I am pretty sure that nothing is removed from the cache.
Sorry, I am not a Scala guy, but can you explain what hc.addTimeToLiveMapConfig does?
Normally you need to add the TTL config to the Config object before starting Hazelcast.
I believe in your case you are starting Hazelcast and only then updating the config with the TTL. Please try reversing the order.
If you prefer not to add this to the configuration, there is an overloaded map.put method that takes a TTL as an input. This way you can specify a TTL per entry.
I have a python3 script that attempts to reindex certain documents in an existing ElasticSearch index. I can't update the documents because I'm changing from an autogenerated id to an explicitly assigned id.
I'm currently attempting to do this by deleting existing documents using delete_by_query and then indexing once the delete is complete:
self.elasticsearch.delete_by_query(
    index='%s_*' % base_index_name,
    doc_type='type_a',
    conflicts='proceed',
    wait_for_completion=True,
    refresh=True,
    body={}
)
However, the index is massive, and so the delete can take several hours to finish. I'm currently getting a ReadTimeoutError, which is causing the script to crash:
WARNING:elasticsearch:Connection <Urllib3HttpConnection: X> has failed for 2 times in a row, putting on 120 second timeout.
WARNING:elasticsearch:POST X:9200/base_index_name_*/type_a/_delete_by_query?conflicts=proceed&wait_for_completion=true&refresh=true [status:N/A request:140.117s]
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='X', port=9200): Read timed out. (read timeout=140)
Is my approach correct? If so, how can I make my script wait long enough for the delete_by_query to complete? There are two timeout parameters that can be passed to delete_by_query: search_timeout and timeout. search_timeout defaults to no timeout (which I think is what I want), and timeout doesn't seem to do what I want. Is there some other parameter I can pass to delete_by_query to make it wait as long as it takes for the delete to finish? Or do I need to make my script wait some other way?
Or is there some better way to do this using the ElasticSearch API?
You should set wait_for_completion to False. In that case you'll get the task details and will be able to track the task's progress using the corresponding API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-task-api
Just to expand on the approach explained by Random with some code, for newbies in ES/Python like me:
from elasticsearch import Elasticsearch

ES = Elasticsearch(['http://localhost:9200'])
query = {'query': {'match_all': {}}}
# With wait_for_completion=False the call returns immediately with a task handle
response = ES.delete_by_query(index='index_name', doc_type='sample_doc', wait_for_completion=False, body=query, ignore=[400, 404])
task_id = response['task']
response_task = ES.tasks.get(task_id=task_id)  # check if the task is completed
isCompleted = response_task["completed"]  # True once the task has finished
One can write a custom routine that checks in a while loop at some interval whether the task is completed, for example as sketched below.
I have used Python 3.x and Elasticsearch 6.x.
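A minimal polling sketch (the helper name and the polling interval are just illustrative):

import time

def wait_for_task(es_client, task_id, poll_seconds=30):
    # Poll the Tasks API until the delete-by-query task reports completion.
    while True:
        status = es_client.tasks.get(task_id=task_id)
        if status.get("completed"):
            return status
        time.sleep(poll_seconds)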
You can use the request_timeout global param. This will override the connection's timeout settings, as mentioned here.
For example:
es.delete_by_query(index=<index_name>, body=<query>, request_timeout=300)
Or set it at the connection level, for example:
es = Elasticsearch(**(get_es_connection_parms()), timeout=60)