Cannot use "==" and "contains" in the same line of a scenario using conditional logic in Karate DSL

This is a follow-up to a question noted here.
Let's say our server v1 and v2 responses look as follows:
v1Response = { id: "1", name: "awesome" }
v2Response = { id: "2", name: "awesome", value: "karate" }
Similarly, we define the client schemas for v1 and v2 as follows:
v1Schema = { id: "#string", name: "#string" }
v2Schema = { id: "#string", name: "#string", value: "#string" }
We implement schema validation in our generic scenario as follows. We can easily set "response" to either v1Response/v2Response AND "schema" to either v1Schema/v2Schema depending on the environment.
* match response == schema
The above generic script works perfectly fine as long as we are testing the v1 server against the v1 client, or the v2 server against the v2 client. However, we cannot re-use the same scenario when we want to test backward compatibility, for example the v2 server against the v1 client. In this case
* match response (actually v2Response) == schema (actually v1Schema) <--- will fail
So in order to make it work and do backward-compatibility testing, I also wanted to use the Karate "contains" feature:
* match response (actually v2Response) contains schema (actually v1Schema) <--- will pass
However, in the quest to keep my scenarios generic, it is currently not possible to either:
Use both ==/contains in the same line of script, as follows
serverVersion == clientVersion ? (match response == schema) : (match response contains schema)
OR
Use some flag, as follows
match response SOMEFLAG schema
where SOMEFLAG can be set to either "==" or "contains" in karate-config.js depending on the environment we are testing.
EDIT
From the above example, all I want is to test the following cases, which should pass:
* match v1Response == v1Schema
* match v2Response == v2Schema
* match v2Response contains v1Schema
using a generic line like the following:
* match response == schema <--- can it possibly be solved using the above suggested solutions?

For some reason you feel that hacking the match clause is the only way to solve this problem. Please keep an open mind. Here you go:
* def schemas =
"""
{
  v1: { id: "#string", name: "#string" },
  v2: { id: "#string", name: "#string", value: "#string" }
}
"""
* def env = 'v1'
* def response = { id: "1", name: "awesome" }
* match response == schemas[env]
* def env = 'v2'
* def response = { id: "2", name: "awesome", value: "karate" }
* match response == schemas[env]
* def response = { id: "1", name: "awesome" }
* match response == karate.filterKeys(schemas[env], response)
The last line is as generic as you can get.
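To make the mechanics explicit, here is a small sketch of what filterKeys returns in the backward-compatibility case (the v2 schema filtered by the keys of a v1-shaped response is just the v1 subset, which is why the == match passes):
* def filtered = karate.filterKeys(schemas['v2'], { id: "1", name: "awesome" })
# only the schema keys that actually appear in the second argument survive
* match filtered == { id: "#string", name: "#string" }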

Related

dynamodb query: ValidationException: The number of conditions on the keys is invalid

I have the following schema where I am basically just trying to have a table with id as the primary key, and both code and secondCode as global secondary indexes to use to query the table.
resource "aws_dynamodb_table" "myDb" {
name = "myTable"
billing_mode = "PAY_PER_REQUEST"
hash_key = "id"
attribute {
name = "id"
type = "S"
}
attribute {
name = "code"
type = "S"
}
attribute {
name = "secondCode"
type = "S"
}
global_secondary_index {
name = "code-index"
hash_key = "code"
projection_type = "ALL"
}
global_secondary_index {
name = "second_code-index"
hash_key = "secondCode"
projection_type = "ALL"
}
}
When I try to look up one item by code,
const toGet = Object.assign(new Item(), {
  code: 'code_456',
});
item = await dataMapper.get<Item>(toGet);
locally I get
ValidationException: The number of conditions on the keys is invalid
and on the deployed instance of the DB I get
The provided key element does not match the schema
I can see from the logs that the key is not being populated:
Serverless: [AWS dynamodb 400 0.082s 0 retries] getItem({ TableName: 'myTable', Key: {} })
Here is the class configuration for Item
@table(getEnv('MY_TABLE'))
export class Item {
  @hashKey({ type: 'String' })
  id: string;

  @attribute({
    indexKeyConfigurations: { 'code-index': 'HASH' },
    type: 'String',
  })
  code: string;

  @attribute({
    indexKeyConfigurations: { 'second_code-index': 'HASH' },
    type: 'String',
  })
  secondCode: string;

  @attribute({ memberType: embed(NestedItem) })
  nestedItems?: Array<NestedItem>;
}

class NestedItem {
  @attribute()
  name: string;

  @attribute()
  price: number;
}
I am using https://github.com/awslabs/dynamodb-data-mapper-js
I looked at the repo you linked for the package; I think you need to use the .query(...) method with the indexName parameter to tell DynamoDB you want to use that secondary index. Usually in DynamoDB, get operations use the default keys (in your case, you'd use get for queries on id, and query for queries on indices).
Checking the docs, it's not very clear: if you look at the GetItem reference, you'll see there's nowhere to supply an index name to actually use the index, whereas the Query operation allows you to supply one. As for why you need to query this way, you can read this: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html
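A minimal sketch of what that could look like, assuming the query(valueConstructor, keyCondition, options) signature from @aws/dynamodb-data-mapper and the index names from the Terraform above:
// Query the 'code-index' GSI instead of calling get on the base table.
// query() returns an async iterable of zero or more matching items.
for await (const item of dataMapper.query(
  Item,
  { code: 'code_456' },          // hash key condition on the index
  { indexName: 'code-index' }    // route the query to the GSI
)) {
  console.log(item);
}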
The issue you are facing is due to calling GetItem on an index, which is not possible. A GetItem must target a single item, and an index can contain multiple items with the same key (unlike the base table). For this reason, you can only use the multi-item APIs on an index, namely Query and Scan.

Karate - Ability to dynamically decide the type of match in karate for verification

Let's say we scripted the scenarios the following way for our evolving servers.
Actual server v1 response
response = { id: "1", name: "karate" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response == schema
Actual server v2 response
response = { id: "2", name: "karate", value: "is" }
Mocking client v2 schema
schema = { id: "#string", name: "#string", value: "#string" }
* match response == schema
Actual server v3 response
response = { id: "3", name: "karate", value: "is", description: "easy" }
Mocking client v3 schema
schema = { id: "#string", name: "#string", value: "#string", description: "#string" }
* match response == schema
Similarly, for backward-compatibility testing of our evolving servers, we script the scenarios the following way.
Actual server v3 response
response = { id: "3", name: "karate", value: "is", description: "easy" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response contains schema
Actual server v2 response
response = { id: "2", name: "karate", value: "is" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response contains schema
Actual server v1 response
response = { id: "1", name: "karate" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response contains schema
The proposal is to be able to use some kind of flag in the match statement that dynamically decides the kind of match we do during testing.
Let's say the flag is named SOMEFLAG and we provide the kind of match we want to do during testing (set in karate-config.js for global effect):
var SOMEFLAG = "contains";
OR
var SOMEFLAG = "==";
Now in the scenario we do the following:
# Depending on what is set in karate-config.js, it will use either contains or == for verification.
* match response SOMEFLAG schema
Is it possible to do this in Karate?
Also note that the success of this idea really depends on https://github.com/intuit/karate/issues/826, due to the ability to match nested objects using a contains match.
Personally, I am strongly against this idea because it will make your tests less readable. It is a slippery slope once you start this. For an example of what happens when you attempt too much re-use (yes, re-use can be a bad thing in test automation, and I really don't care if you disagree :) - see this: https://stackoverflow.com/a/54126724/143475
What I would do is something like this:
* def lookup =
"""
{
  dev: { id: "#string", name: "#string" },
  stage: { id: "#string", name: "#string", value: "#string" },
  preprod: { id: "#string", name: "#string", value: "#string", description: "#string" }
}
"""
* def expected = lookup[karate.env]
* match response == expected
EDIT - I have a feeling that the change we made after this discussion will also solve your problem - or at least give you some new ideas: https://github.com/intuit/karate/issues/810
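That change introduced karate.filterKeys(), which suggests one more sketch (an assumption here: the newest schema is a superset of all the older ones, as in the versions above):
* def fullSchema = { id: "#string", name: "#string", value: "#string", description: "#string" }
# filterKeys keeps only the schema keys present in the response,
# so v1, v2 and v3 responses can all pass with a plain == match
* match response == karate.filterKeys(fullSchema, response)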

karate.filterKeys() API for complex JSON object

This question came to my mind after asking another question here.
Let's say my response is a complex array of JSON objects; currently I test a complex object like this:
* def response = [{id: 1}, {id: 2}, {id: 3}.....]
* def schema = { id: "#number" }
* match response == '#[] schema'
I want to replace the above match statement with the filterKeys() API, possibly as follows:
* match response == karate.filterKeys([]schema, response)
Basically, the first parameter of the karate.filterKeys() API should dynamically accept every JSON object from the response array and filter it against the second parameter, response, for a successful match.
I think you are trying to dynamically alter your schema for each JSON value in a JSON array.
You can create an equivalent JSON array schema and do a match ==:
* def response = [{id: 1},{id: 2}, {id: 3}]
* def schema = { id: '#number' }
* def fun = function(x){ return karate.filterKeys(schema,x) }
* match response == karate.map(response, fun)
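To spell out what the right-hand side evaluates to: karate.map applies fun to each element, producing one filtered schema per item, so the == match holds element by element. For the three-element response above, this is equivalent to:
* match response == [{ id: '#number' }, { id: '#number' }, { id: '#number' }]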

How do you exclude a top level EmbeddedEntity property index in Google Datastore with NodeJS?

I need to exclude a top-level property from being indexed by Datastore (payload in the example below). The value of payload can really vary, and its keys will easily exceed the 1500-byte limit Datastore places on indexed properties inside embedded entities.
payload does not seem to be excluded from being indexed: Datastore throws an error that content is longer than 1500 bytes.
How do I exclude payload from being indexed? Thanks.
const transformedEvent = {
  id: "someString",
  name: "Some Name",
  payload: {
    content: "a very long string",
    foo: "bar"
  }
};

const entity = {
  key: datastore.key('Event'),
  excludeFromIndexes: ['payload'],
  data: transformedEvent
};

await datastore.save(entity);
In your example, content and foo would also need to be added to the excludeFromIndexes array in order to be excluded. There is currently an open issue regarding this on GitHub.
Example:
const transformedEvent = {
  id: "someString",
  name: "Some Name",
  payload: {
    content: "a very long string",
    foo: "bar"
  }
};

const entity = {
  key: datastore.key('Event'),
  excludeFromIndexes: ['payload', 'payload.content', 'payload.foo'],
  data: transformedEvent
};

Data validation in AVRO

I am new to AVRO, so please excuse me if this is a simple question.
I have a use case where I am using an AVRO schema for record calls.
Let's say I have the avro schema
{
  "name": "abc",
  "namespace": "xyz",
  "type": "record",
  "fields": [
    {"name": "CustId", "type": "string"},
    {"name": "SessionId", "type": "string"}
  ]
}
Now if the input is like
{
  "CustId": "abc1234",
  "sessionID": "000-0000-00000"
}
I want to use some regex validations for these fields, and I want to accept this input only if it comes in the particular format shown above. Is there any way to specify a regex expression in the avro schema?
Are there any other data serialization formats that support something like this?
You should be able to use a custom logical type for this. You would then include the regular expressions directly in the schema.
For example, here's how you would implement one in JavaScript:
var avro = require('avsc'),
    util = require('util');

/**
 * Sample logical type that validates strings using a regular expression.
 */
function ValidatedString(attrs, opts) {
  avro.types.LogicalType.call(this, attrs, opts);
  this._pattern = new RegExp(attrs.pattern);
}
util.inherits(ValidatedString, avro.types.LogicalType);

ValidatedString.prototype._fromValue = function (val) {
  if (!this._pattern.test(val)) {
    throw new Error('invalid string: ' + val);
  }
  return val;
};
ValidatedString.prototype._toValue = ValidatedString.prototype._fromValue;
And how you would use it:
var type = avro.parse({
  name: 'Example',
  type: 'record',
  fields: [
    {
      name: 'custId',
      type: 'string' // Normal (free-form) string.
    },
    {
      name: 'sessionId',
      type: {
        type: 'string',
        logicalType: 'validated-string',
        pattern: '^\\d{3}-\\d{4}-\\d{5}$' // Validation pattern.
      }
    },
  ]
}, {logicalTypes: {'validated-string': ValidatedString}});
type.isValid({custId: 'abc', sessionId: '123-1234-12345'}); // true
type.isValid({custId: 'abc', sessionId: 'foobar'}); // false
You can read more about implementing and using logical types here.
Edit: For the Java implementation, I believe you will want to look at the following classes:
LogicalType, the base you'll need to extend.
Conversion, to perform the conversion (or validation in your case) of the data.
LogicalTypes and Conversions, a few examples of existing implementations.
TestGenericLogicalTypes, relevant tests which could provide a helpful starting point.
