Tf package extrapolation to the past - transform

When I look up a transform like below:
(trans, rot) = self.listener.lookupTransform(
    self.robot_link, target_link, rospy.Time(0))
this error pops up: Lookup would require extrapolation into the past.
However, when I change this line to
(trans, rot) = self.listener.lookupTransform(
    self.robot_link, target_link, rospy.Time.now())
the error becomes: Error processing request: Lookup would require extrapolation into the future.
How can I solve this problem?

You can wait up to a specified duration for the transform to become available, as described in the tf and Time ROS tutorial:
now = rospy.Time.now()
listener.waitForTransform(self.robot_link, target_link, now, rospy.Duration(4.0))
(trans,rot) = listener.lookupTransform(self.robot_link, target_link, now)
This blocks for up to the specified duration, waiting until the transform you want to query becomes available.
You might want to add some handling for the case where the transform is not published within the wait duration.
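As a rough sketch of that handling (assuming the same class context as in the question, and catching tf's base exception type):
import rospy
import tf

try:
    now = rospy.Time.now()
    # block for up to 4 seconds until the transform becomes available
    self.listener.waitForTransform(self.robot_link, target_link, now, rospy.Duration(4.0))
    (trans, rot) = self.listener.lookupTransform(self.robot_link, target_link, now)
except tf.Exception as e:
    # the transform was not published within the wait duration
    rospy.logwarn("Transform %s -> %s unavailable: %s", self.robot_link, target_link, e)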

Related

Update a parameter value in Brightway

It seems like a simple question, but I am having a hard time finding an answer to it. I already have a project with several parameters (project and database parameters). I would like to obtain the LCA results for several scenarios, with my parameters having different values each time. I was thinking of the following simple procedure:
change the parameters' value,
update the exchanges in my project,
calculate the LCA results.
I know that the answer should be in the documentation somewhere, but I have a hard time understanding how I should apply it to my ProjectParameters, DatabaseParameters and ActivityParameters.
Thanks in advance!
EDIT: Thanks to #Nabla, I was able to come up with this:
For ProjectParameter:
for pjparam in ProjectParameter.select():
    if pjparam.name == 'my_param_name':
        break
pjparam.amount = 3
pjparam.save()
bw.parameters.recalculate()
For DatabaseParameter:
for dbparam in DatabaseParameter.select():
    if dbparam.name == 'my_param_name':
        break
dbparam.amount = 3
dbparam.save()
bw.parameters.recalculate()
For ActivityParameter:
for param in ActivityParameter.select():
    if param.name == 'my_param_name':
        break
param.amount = 3
param.save()
param.recalculate_exchanges(param.group)
You could import DatabaseParameter and ActivityParameter, iterate until you find the parameter you want to change, update the value, save it, and recalculate the exchanges. I think you need to do it in tiers: first update the project parameters (if any), then the database parameters that may depend on project parameters, and then the activity parameters that depend on them.
A simplified case without project parameters:
from bw2data.parameters import ActivityParameter, DatabaseParameter

# find the database parameter to be updated
for dbparam in DatabaseParameter.select():
    if (dbparam.database == uncertain_db.name) and (dbparam.name == 'foo'):
        break
dbparam.amount = 3
dbparam.save()
# there is also this method if formulas depend on something else
# dbparam.recalculate(uncertain_db.name)

# here, updating the exchanges of a particular activity (act)
for param in ActivityParameter.select():
    if param.group == ":".join(act.key):
        param.recalculate_exchanges(param.group)
You may want to update all the activities in the project instead of a single one as in the example; you just need to change the condition when looping through the activity parameters, as in the sketch below.
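A rough sketch of that variant, reusing only the calls already shown above (the parameter name and value are illustrative):
import brightway2 as bw
from bw2data.parameters import ActivityParameter, DatabaseParameter

# tier 1: update the database parameter, then propagate project/database parameters
for dbparam in DatabaseParameter.select():
    if dbparam.name == 'my_param_name':
        dbparam.amount = 3
        dbparam.save()
bw.parameters.recalculate()

# tier 2: recalculate the exchanges of every parameterized activity group
seen = set()
for param in ActivityParameter.select():
    if param.group not in seen:
        seen.add(param.group)
        param.recalculate_exchanges(param.group)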

How to get SAP CloudSdk BatchRequest not to ignore filter parameter on Batch Query?

We are currently struggling with the Batch Query,
which seems to ignore the filter expressions on the S4 side, caused by a wrong URL encoding.
/sap/opu/odata/sap/ZP2M_A_CONTRACT_SEARCH_HDR_CDS/ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json
When executing the query using FluentHelperRead.execute(HttpClient),
the returned list of entities contains the expected result with exactly one entity.
When executing the query as a Batch Query, the following request is logged in the console:
GET ZP2M_A_CONTRACT_SEARCH_HDR?%24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson HTTP/1.1
The collected list from all batch result parts contains all entities.
It seems that the query URL is encoded in the wrong way
and that S4 ignores the filter expressions when encoded like this,
e.g. $filter is encoded to %24filter, which is ignored by S4.
This seems to be a bug in the BatchRequestImpl.getRequest(ODataQueryImpl) method,
where URL encoding is applied a second time to already encoded URL parts.
if (systemQuery.indexOf("$format=json&$count=true") != -1)
{
    systemQuery = systemQuery.substring(0, systemQuery.indexOf("$format=json&$count=true") - 1);
    keysUrl.append("/$count");
}
systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); // this is the line that encodes the query a 2nd time
keysUrl.append("?");
The code line systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); located in
  BatchRequestImpl(1.38.0) - line 295
  BatchRequestImpl(1.42.2) - line 307
encodes the systemQuery string again (including the already encoded parts of the FilterExpression as well).
When undoing the changes of this code line in the debugger and replacing the spaces with %20 or '+', the Batch Query looks like this:
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID%20eq%20%274600002020%27&$select=*&$format=json HTTP/1.1
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID+eq+%274600002020%27&$select=*&$format=json HTTP/1.1
and it returns the expected result (exactly 1 entity).
This wrong encoding appears when using these library versions:
sdk-bom: 3.16.1
connectivity: 1.38.0
This issue also appears in the newest SDK versions:
sdk-bom: 3.21.0
connectivity: 1.39.0
This issue also appears with the connectivity JAR in the newest version:
sdk-bom: 3.21.0
connectivity: 1.40.2
Debugging together with an ABAP/S4 colleague showed
that S4 only applies filter expressions if the keyword $filter is found in the request;
%24filter%3D is ignored (which is why we get all entities when running the Batch Query).
My suggestion to solve it would be:
// decode the query first (to decode the filter expression)
systemQuery = URLDecoder.decode(systemQuery, "UTF-8");
// encode the query
systemQuery = org.apache.commons.httpclient.util.URIUtil.encodeQuery(systemQuery, "UTF-8");
My code, showing how I am calling the batch request:
FluentHelperRead<?, MyEntity, ?> queryApi = myService.getAll... // with adding some filter expression
BatchRequestBuilder batchRequestBuilder = BatchRequestBuilder.withService(MyService.DEFAULT_SERVICE_PATH);
ODataQuery query = queryApi.toQuery();
batchRequestBuilder.addQueryRequest(query);
HttpClient httpClient = HttpClientAccessor
        .getHttpClient(DefaultErpHttpDestinationAccessor.get());
BatchRequest request = batchRequestBuilder.build();
BatchResult result = request.execute(httpClient);
// ... evaluate response
I think this is a general issue in the Cloud SDK.
Would it be possible to get this fixed in the next Cloud SDK release?
Can you share your code for Batch request? Do you use BatchRequestImpl directly?
The thing is that the SAP Cloud SDK relies on some dependencies, one of which introduces the BatchRequestImpl, and if it's called directly the bug is on the dependency side. I have already informed them so they can investigate this double-encoding issue. Unfortunately, we can't directly influence how fast it is resolved, and sometimes it takes longer than we'd like.
The good news is that we're working on replacing this dependency with our own implementation to solve exactly this kind of problem. The batch support is work in progress and should be available in beta around the end of next month for OData V4, and hopefully around the same time for OData V2 (it's not a hard commitment and depends on other priorities).
From here we have to wait for whatever happens first:
The bug is fixed on the dependency side
Internal OData client implementation is ready together with Batch
I hope this helps and explains the current solution path. If you share a bit about your deadlines and the potential impact, we'll be happy to take that into account.
This has been fixed within the dependency, and as of version 3.25.0 the SAP Cloud SDK includes the fix.

Hazelcast Jet: How to prevent 'event dropped'?

I am getting 'Event dropped, late by 5051 ms.'
How should I build my pipeline so that all events are processed, regardless of their late arrival?
I have tried several approaches. Basically, what I tried was:
Without windowing, I didn't get late events, but this is not applicable for me: due to parallel execution, values in the sink get overridden instead of merged.
Therefore I used windowing, which solved my overriding problem but caused late events.
Next, I tried to use windowing without a timestamp, which threw an exception that a timestamp must be defined.
Basically I have two problems here: 1) how to merge new events with existing ones in the sink, 2) without dropping or overriding events.
Code:
WindowDefinition customWindow = WindowDefinition.sliding(60000, 30000);
customWindow.setEarlyResultsPeriod(1000);

StreamStage<Map.Entry<...>> updatedState = p
    .drawFrom(<source>)
    .withIngestionTimestamps()
    .groupingKey(...)
    .window(customWindow)
    .aggregate(AggregateOperations.toCollection(ArrayList::new))
    .mapUsingIMap(...)
    .sink(...)

Right way to delete and then reindex ES documents

I have a python3 script that attempts to reindex certain documents in an existing ElasticSearch index. I can't update the documents because I'm changing from an autogenerated id to an explicitly assigned id.
I'm currently attempting to do this by deleting existing documents using delete_by_query and then indexing once the delete is complete:
self.elasticsearch.delete_by_query(
    index='%s_*' % base_index_name,
    doc_type='type_a',
    conflicts='proceed',
    wait_for_completion=True,
    refresh=True,
    body={}
)
However, the index is massive, and so the delete can take several hours to finish. I'm currently getting a ReadTimeoutError, which is causing the script to crash:
WARNING:elasticsearch:Connection <Urllib3HttpConnection: X> has failed for 2 times in a row, putting on 120 second timeout.
WARNING:elasticsearch:POST X:9200/base_index_name_*/type_a/_delete_by_query?conflicts=proceed&wait_for_completion=true&refresh=true [status:N/A request:140.117s]
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='X', port=9200): Read timed out. (read timeout=140)
Is my approach correct? If so, how can I make my script wait long enough for the delete_by_query to complete? There are two timeout parameters that can be passed to delete_by_query - search_timeout and timeout - but search_timeout defaults to no timeout (which is, I think, what I want), and timeout doesn't seem to do what I want. Is there some other parameter I can pass to delete_by_query to make it wait as long as it takes for the delete to finish? Or do I need to make my script wait some other way?
Or is there some better way to do this using the ElasticSearch API?
You should set wait_for_completion to False. In this case you'll get the task details and will be able to track the task's progress using the corresponding API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-task-api
Just to explain more, in the form of code, what Random explained, for newbies in ES/Python like me:
from elasticsearch import Elasticsearch

ES = Elasticsearch(['http://localhost:9200'])
query = {'query': {'match_all': {}}}
response = ES.delete_by_query(index='index_name', doc_type='sample_doc',
                              wait_for_completion=False, body=query, ignore=[400, 404])
task_id = response['task']
response_task = ES.tasks.get(task_id=task_id)  # check if the task is completed
is_completed = response_task["completed"]  # if the "completed" key is true, the task is done
One can write a custom function that polls in a loop at some interval until the task is completed, for example:
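A minimal polling sketch (the index name, query, and interval here are illustrative, not from the original post):
import time
from elasticsearch import Elasticsearch

es = Elasticsearch(['http://localhost:9200'])

# start the delete-by-query as a background task
response = es.delete_by_query(index='index_name', body={'query': {'match_all': {}}},
                              conflicts='proceed', wait_for_completion=False)
task_id = response['task']

# poll the Tasks API every 30 seconds until the task reports completion
while not es.tasks.get(task_id=task_id)['completed']:
    time.sleep(30)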
I have used python 3.x and ElasticSearch 6.x
You can use the 'request_timeout' global param. This will override the connection's timeout setting, as mentioned here.
For example -
es.delete_by_query(index=<index_name>, body=<query>,request_timeout=300)
Or set it at connection level, for example
es = Elasticsearch(**(get_es_connection_parms()),timeout=60)

Determine if a cucumber scenario has pending steps

I would like to retrieve the scenario state in the "After" scenario hook. I noticed that the .failed? method does not consider pending steps as failed steps.
So how can I determine that a scenario did not execute completely, either because it failed or because some steps were not implemented/defined?
You can use the status method. The default value of status is :skipped, a failed step is :failed, and a passed step is :passed. So you can write something like this:
do sth if step.status != :passed
Also, if you use !step.passed? it does the same thing because it only checks for the :passed status.
http://cukes.info/api/cucumber/ruby/yardoc/Cucumber/Ast/Scenario.html#failed%3F-instance_method
On that subject, you can also take a look at this post about demoing your feature specs to your customers: http://multifaceted.io/2013/demo-feature-tests/
LiohAu, you can use the 'status' method on a scenario itself rather than on individual steps. Try this: in hooks, add
After do |scenario|
  p scenario.status
end
This will give the statuses as follows:
If any step is not implemented / defined, it'll give you :undefined
If the scenario fails (when all steps are defined): :failed
If the scenario passes: :passed
Using the same hook, it'll give you the status for a scenario outline, but for each example row (since each example row is an individual scenario). So if you want the result of an entire outline, you'll need to capture the results for all example rows and compute the final result accordingly.
Hope this helps.
