How to translate error messages in Colander - python-3.x

How can I translate the error messages from the colander validators? The documentation just says that it's possible.
import colander
from colander import Invalid

def valid_text(node, value):
    raise Invalid(node, u"Some error message")

class form(colander.MappingSchema):
    name = colander.SchemaNode(colander.String(), validator=valid_text)
I know that deform already does this, but I need to use colander on its own.

According to the API documentation, the msg argument to Invalid can be a translation string instance. Information on working with translation strings is here.
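For example, a minimal sketch using the translationstring package (which colander itself uses); the "myapp" translation domain is a made-up name:
import colander
from translationstring import TranslationStringFactory

_ = TranslationStringFactory("myapp")  # hypothetical domain; use the one your gettext catalogs are registered under

def valid_text(node, value):
    # The message is a TranslationString, so it can be translated later
    # (e.g. by a Pyramid/gettext translator) when the Invalid error is rendered.
    raise colander.Invalid(node, _("Some error message"))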

Looks like this issue has already been addressed and fixed, but the fix will only be part of the next release. I've just applied the changes from commit f6be836 and it works like a charm.

Related

dot notation and accessing module functions using strings

Pretty new to Python, so I tried googling around for answers, but I was probably googling the wrong terminology.
So here's my problem.
I have this:
objectName = "Account"
sf.bulk.objectName.upsert(dataToSalesforce, externalIdField, batch_size=10000)
The above command should send an upsert request to Salesforce, upserting on the Account object, but it gives me the error {'exceptionCode': 'InvalidJob', 'exceptionMessage': 'Unable to find object: objectName'}
The problem is that it tried to query the object objectName and not Account.
Everything works fine when I use sf.bulk.Account.upsert(dataToSalesforce, externalIdField, batch_size=10000), but in the current use case the object being upserted to may change.
When looking at the errors I noticed they used the term attribute, and googling with that term I found I could do this:
getattr(sf.bulk, objectName).upsert(dataToSalesforce, externalIdField,batch_size=10000, use_serial=True)
which solves the issue.
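For illustration, here is a self-contained sketch of the same getattr pattern outside of simple-salesforce (the Bulk class and its data are made up):
class Bulk:
    class Account:
        @staticmethod
        def upsert(records, batch_size=10000):
            print(f"upserting {len(records)} Account records in batches of {batch_size}")

object_name = "Account"
# getattr looks the attribute up by its string name at runtime,
# so this call is equivalent to Bulk.Account.upsert(...)
getattr(Bulk, object_name).upsert([{"Name": "Acme"}])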

How Can I Check For The Existence of a UUID

I am trying to check for the existence of a UUID as a primary key in my Django environment. When it exists, my code works fine, but if it's not present I get a "" is not a valid UUID error.
Here's my code:
uuid_exists = Book.objects.filter(id=self.object.author_pk,is_active="True").first()
I've tried other variations of this with .exists() or .all(), but I keep getting the ['“” is not a valid UUID.'] error.
I did come up with a workaround:
if self.object.author_pk != '':
    book_exists = Book.objects.filter(id=self.object.author_pk, is_active="True").first()
    context['book_exists'] = book_exists
Is this the best way to do this? I was hoping to be able to use a straight filter without the extra guarding logic, but I've worked all afternoon and can't come up with anything better. Thanks in advance for any feedback or comments.
I've had the same issue and this is what I have:
Wrapping it into try/except (in my case it's a View, so it's supposed to return a Response object):
try:
    object = Object.objects.get(id=object_id)
except Exception as e:
    return Response(data={...}, status=status.HTTP_40...
It gets to the exception (4th line) but somehow sends the '~your_id~' is not a valid UUID. text instead of the proper data, which might be enough in some cases.
This seems like an oversight, so it might get a fix soon. I don't have enough time to investigate deeper, unfortunately.
So the solution I came up with is not ideal either, but hopefully it is a bit cleaner and faster than what you're using right now.
# Generate a list of possible object IDs (make use of filters in order to reduce the DB load)
possible_ids = [str(id) for id in Object.objects.filter(~ filters here ~).values_list('id', flat=True)]
# Return an error if the ID is not valid
if ~your_id~ not in possible_ids:
    return Response(data={"error": "Database destruction sequence initialized!"}, status=status.HTTP_401_UNAUTHORIZED)
# Keep working with the object
object = Object.objects.get(id=object_id)
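Another hedged sketch of an alternative: validate the value as a UUID before it reaches the ORM, so the filter itself never raises (the is_valid_uuid helper is made up for illustration):
import uuid

def is_valid_uuid(value):
    # True only if the string parses as a UUID; guards against '' and other junk values.
    try:
        uuid.UUID(str(value))
        return True
    except (ValueError, TypeError):
        return False

if is_valid_uuid(self.object.author_pk):
    book_exists = Book.objects.filter(id=self.object.author_pk, is_active="True").first()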

Strapi debug: Error on attribute departure in model

Today when I tried to run my Strapi project for some exercises, there was an error saying inversedBy attribute flight not found target api::airport.airport. However, the command shows Admin UI was built successfully, but I cannot access the admin panel or do anything with it. It seems that the error belongs to one of the content types, but the entire API is not working. What should I do? Does anyone know how to fix this bug?
Thank you.
Firstly, I tried to run the start command (npm run develop) several times; it kept reporting the same error.
Secondly, I tried to access the administration panel directly, but that failed as well.
I hope someone can help me figure out how to solve this bug/error.
I had a similar error.
The issue for me was that the 'key' (i.e. the attribute key in the schema JSON) didn't match what was referenced by the model in mappedBy & inversedBy.
e.g. mappedBy: "f_light" should point to an attribute "f_light": { "type": "relation", ... }
At least that was the problem for me.
Strapi Docs on how the schema is supposed to look
My issue: Error on attribute a_token in model a-request(api::a-request.a-request): inversedBy attribute a-requests not found target api::a-token.a-token
This occurred because I had inversedBy: 'a-token' while the attribute key was 'a_token'. Changing them so they matched solved my issue ('a-token' -> 'a_token').
The names used in mappedBy, inversedBy, and the attribute keys MUST use '_' instead of '-' as the word separator, otherwise they will fail Strapi's naming convention checks.
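To make the key-matching concrete, here is a hedged sketch of the two sides of such a relation, written as Python dicts standing in for the attributes sections of the two schema.json files (the api names and attribute keys are made up):
# airport schema.json, attributes section (sketch)
airport_attributes = {
    "f_light": {  # attribute key uses '_', not '-'
        "type": "relation",
        "relation": "oneToMany",
        "target": "api::flight.flight",
        "mappedBy": "airport",
    },
}

# flight schema.json, attributes section (sketch)
flight_attributes = {
    "airport": {
        "type": "relation",
        "relation": "manyToOne",
        "target": "api::airport.airport",
        "inversedBy": "f_light",  # must exactly match the attribute key above
    },
}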

How to get SAP CloudSdk BatchRequest not to ignore filter parameter on Batch Query?

We are currently struggling with a Batch Query that seems to ignore the filter expressions on the S4 side because of wrong URL encoding.
/sap/opu/odata/sap/ZP2M_A_CONTRACT_SEARCH_HDR_CDS/ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID eq %274600002020%27&$select=*&$format=json
Executing the query using FluentHelperRead.execute(HttpClient), the returned list of entities contains the expected result with exactly one entity.
Executing the query as Batch Query the following request is logged in console:
GET ZP2M_A_CONTRACT_SEARCH_HDR?%24filter%3DPurchaseContractID+eq+%25274600002020%2527%26%24select%3D*%26%24format%3Djson HTTP/1.1
The collected list from all batch result parts contains all entities.
It seems that the query URL is encoded in the wrong way and that S4 ignores the filter expressions when they are encoded like this,
e.g. $filter is encoded to %24filter, which is ignored by S4.
This seems to be a bug in BatchRequestImpl.getRequest(ODataQueryImpl) method,
where URL encoding is done a 2nd time on already encoded URL parts.
if(systemQuery.indexOf("$format=json&$count=true") != -1)
{
    systemQuery = systemQuery.substring(0, systemQuery.indexOf("$format=json&$count=true") -1);
    keysUrl.append("/$count");
}
systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); // this code line which encodes the query 2nd time
keysUrl.append("?");
The code line systemQuery = URLEncoder.encode(systemQuery, "UTF-8"); located in
  BatchRequestImpl(1.38.0) - line 295
  BatchRequestImpl(1.42.2) - line 307
encodes the systemQuery string again (including the already encoded parts of FilterExpression as well).
When undoing the changes of this code line in the debugger and replacing the spaces with %20 or '+', the Batch Query looks like this:
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID%20eq%20%274600002020%27&$select=*&$format=json HTTP/1.1
GET ZP2M_A_CONTRACT_SEARCH_HDR?$filter=PurchaseContractID+eq+%274600002020%27&$select=*&$format=json HTTP/1.1
and it returns the expected result (exactly 1 entity).
This wrong encoding appears when using these library versions:
sdk-bom: 3.16.1
connectivity: 1.38.0
This issue appears in the newest SDK versions as well:
sdk-bom: 3.21.0
connectivity: 1.39.0
This issue also appears with the connectivity JAR in its newest version:
sdk-bom: 3.21.0
connectivity: 1.40.2
Debugging together with an ABAP/S4 colleague showed
that S4 only applies filter expressions if the keyword $filter is found in the request;
%24filter%3D is ignored (which is why we get all entities when running the Batch Query).
My suggestion to solve it would be:
// decode query first (to decode the filter expression)
systemQuery = URLDecoder.decode(systemQuery, "UTF-8");
// encode query
systemQuery = org.apache.commons.httpclient.util.URIUtil.encodeQuery(systemQuery, "UTF-8");
My code, showing how I call the batch request:
FluentHelperRead<?, MyEntity, ?> queryApi = myService.getAll... // with adding some filter expression
BatchRequestBuilder batchRequestBuilder = BatchRequestBuilder.withService(MyService.DEFAULT_SERVICE_PATH);
ODataQuery query = queryApi.toQuery();
batchRequestBuilder.addQueryRequest(query);
HttpClient httpClient = HttpClientAccessor
.getHttpClient(DefaultErpHttpDestinationAccessor.get());
BatchRequest request = batchRequestBuilder.build();
BatchResult result = request.execute(httpClient);
// ... evaluate response
I think this is a general issue in the Cloud SDK.
Would it be possible to get this fixed in the next Cloud SDK release?
Can you share your code for Batch request? Do you use BatchRequestImpl directly?
The thing is, the SAP Cloud SDK relies on some dependencies, one of which introduces BatchRequestImpl, and if it's called directly the bug is on the dependency side. I have already informed them so they can investigate this double-encoding issue. Unfortunately, we can't directly influence how fast it is resolved, and sometimes it takes longer than we'd like.
The good news is that we're working on replacing this dependency with our own implementation to solve exactly this kind of problem. Batch support is work in progress and should be available in Beta around the end of next month for OData V4, and hopefully around the same time for OData V2 (it's not a hard commitment and depends on other priorities).
From here we have to wait for whatever happens first:
The bug is fixed on the dependency side
Internal OData client implementation is ready together with Batch
I hope this helps and explains the current solution path. If you share a bit about your deadlines and the potential impact, we'll be happy to consider that.
This has been fixed within the dependency and as of version 3.25.0 the SAP Cloud SDK includes the fix.

OpenLayers 3: "Unable to get property 'leaf' of undefined or null reference."

Using Open layers 3.7.0.
I have a layer with features. I remove one, build a new similar one, add the new one, and I get the error message
"Unable to get property 'leaf' of undefined or null reference."
I have searched for what could cause this, but the search doesn't turn up any results.
Some more from the same error (I used v3.8.2 here but got exactly the same):
at rbush.prototype._chooseSubtree (http://openlayers.org/en/v3.8.2/build/ol-debug.js:70778:13)
at rbush.prototype._insert (http://openlayers.org/en/v3.8.2/build/ol-debug.js:70815:9)
at rbush.prototype.insert (http://openlayers.org/en/v3.8.2/build/ol-debug.js:70623:19)
at ol.structs.RBush.prototype.insert (http://openlayers.org/en/v3.8.2/build/ol-debug.js:71178:3)
at ol.source.Vector.prototype.addFeatureInternal (http://openlayers.org/en/v3.8.2/build/ol-debug.js:71589:7)
at ol.source.Vector.prototype.addFeature (http://openlayers.org/en/v3.8.2/build/ol-debug.js:71566:3)
Progress
Where we build the feature we have a projection.
var lineString = new ol.geom.LineString(coordinates);
lineString.transform("EPSG:4326", "EPSG:3857");
var feature = new ol.Feature(lineString);
(...)
If we comment/remove
//lineString.transform("EPSG:4326", "EPSG:3857");
Then there is no bug. By the way, that's a hint, not a solution, as the features are then not located where they should be.
Solution found
I do not think this is a perfect solution, but I solved the problem for our application by keeping the LineStrings in memory instead of re-creating them.
I still recreate the Feature from the existing LineString.
I got the same error in the console and solved it by testing whether the features had a valid geometry (coordinates), so I discard invalid features instead of adding them to the layer source.
