Karate - ability to dynamically decide the type of match used for verification - dsl

Let's say we script the scenarios the following way for our evolving servers:
Actual server v1 response
response = { id: "1", name: "karate" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response == schema
Actual server v2 response
response = { id: "2", name: "karate", value: "is" }
Mocking client v2 schema
schema = { id: "#string", name: "#string", value: "#string" }
* match response == schema
Actual server v3 response
response = { id: "3", name: "karate", value: "is", description: "easy" }
Mocking client v3 schema
schema = { id: "#string", name: "#string", value: "#string", description: "#string" }
* match response == schema
Similarly, for backward-compatibility testing of our evolving servers, we script the scenarios the following way:
Actual server v3 response
response = { id: "3", name: "karate", value: "is", description: "easy" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response contains schema
Actual server v2 response
response = { id: "2", name: "karate", value: "is" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response contains schema
Actual server v1 response
response = { id: "1", name: "karate" }
Mocking client v1 schema
schema = { id: "#string", name: "#string" }
* match response contains schema
The proposal is to be able to use some kind of flag in the match statement that dynamically decides the kind of match we do during testing.
Let's say the flag is named SOMEFLAG and we provide the kind of match we want to do during testing (set in karate-config.js for global effect):
var SOMEFLAG = "contains";
OR
var SOMEFLAG = "==";
Now in the scenario we do the following:
# Depending on what is set in karate-config.js, it will use either contains or == for verification.
* match response SOMEFLAG schema
Is it possible to do this in Karate?
Also note that the success of this idea really depends on https://github.com/intuit/karate/issues/826, due to the ability to match nested objects using a contains match.

Personally, I am strongly against this idea because it will make your tests less readable. It is a slippery slope once you start this. For an example of what happens when you attempt too much re-use (yes, re-use can be a bad thing in test automation, and I really don't care if you disagree :) - see this: https://stackoverflow.com/a/54126724/143475
What I would do is something like this:
* def lookup =
"""
{
  dev: { id: "#string", name: "#string" },
  stage: { id: "#string", name: "#string", value: "#string" },
  preprod: { id: "#string", name: "#string", value: "#string", description: "#string" }
}
"""
* def expected = lookup[karate.env]
* match response == expected
EDIT - I have a feeling that the change we made after this discussion will also solve your problem - or at least give you some new ideas: https://github.com/intuit/karate/issues/810
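If you want this environment switch to apply globally instead of repeating the lookup in each feature, here is a minimal sketch of moving it into karate-config.js (the dev/stage/preprod names come from the example above; the variable name schema is illustrative):
function fn() {
  var env = karate.env; // e.g. 'dev', 'stage' or 'preprod'
  var lookup = {
    dev: { id: '#string', name: '#string' },
    stage: { id: '#string', name: '#string', value: '#string' },
    preprod: { id: '#string', name: '#string', value: '#string', description: '#string' }
  };
  // whatever is returned here becomes a global variable in every feature
  return { schema: lookup[env] };
}
Every scenario can then keep the single generic line * match response == schema, with only the environment deciding which shape is enforced.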

Related

SNS SDK for NodeJS won't create FIFO topic

When I create a topic using sns.createTopic (like the code below), it won't accept the booleans and says 'InvalidParameterType: Expected params.Attributes['FifoTopic'] to be a string', even though the docs say to provide a boolean value. And when I provide it with a string of 'true', it still doesn't set the topic type to FIFO. Does anyone know why?
Here's the code:
const TOPIC = {
  Name: 'test.fifo',
  Attributes: {
    FifoTopic: true,
    ContentBasedDeduplication: true
  },
  Tags: [{
    Key: 'test-key',
    Value: 'test-value'
  }]
};
sns.createTopic(TOPIC).promise().then(console.log);
I used aws-sdk v2 and sent FifoTopic and ContentBasedDeduplication as strings. The code below works fine for me:
const AWS = require('aws-sdk'); // aws-sdk v2

const TOPIC = {
  Name: 'test.fifo',
  Attributes: {
    FifoTopic: "true",
    ContentBasedDeduplication: "true"
  },
  Tags: [{
    Key: 'test-key',
    Value: 'test-value'
  }]
};
let sns = new AWS.SNS();
let response3 = await sns.createTopic(TOPIC).promise(); // inside an async handler
console.log(response3);
Note: Make sure your lambda has correct permissions.
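For reference, a minimal sketch of the kind of policy the Lambda's execution role would need for this snippet (the account ID is a placeholder, matching the ARN used below):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sns:CreateTopic", "sns:TagResource", "sns:GetTopicAttributes"],
      "Resource": "arn:aws:sns:us-east-1:XXXXXXX:test.fifo"
    }
  ]
}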
You will get attributes like FifoTopic and ContentBasedDeduplication back when calling getTopicAttributes:
let respo = await sns.getTopicAttributes({
  TopicArn: "arn:aws:sns:us-east-1:XXXXXXX:test.fifo"
}).promise();

GraphQL Resolver Problems

I've spent quite a bit of time reading through the GraphQL tutorials but unfortunately they don't seem to cover things in quite enough depth for me to get my head around. I'd really appreciate some help with this real world example.
In the examples the queries are placed at the root of the resolver object; I can get this to work fine for single-level queries. However, when I attempt to resolve a nested query, the nested resolver never gets called. What I'm massively confused by is that every tutorial I find that isn't on the graphql website puts its queries inside a Query object and nests them underneath that, not at the root level.
Consider the following Schema:
type Product {
  id: String!
  retailerId: String!
  title: String!
  description: String
  price: String!
  currency: String!
}
type OrderLine {
  product: Product!
  quantity: Int!
}
type Order {
  id: String!
  retailerId: String!
  orderDate: Date!
  orderLines: [OrderLine!]!
}
type Query {
  product(id: String!): Product
  order(id: String!): Order
}
schema {
  query: Query
}
And the following query:
query {
  order(id: "1") {
    id
    orderLines {
      quantity
    }
  }
}
I have tried multiple versions of implementing the resolvers (just test data for now) and none seem to return what I expect. This is my current resolver implementation:
const resolvers = {
  OrderLine: {
    quantity: () => 1,
  },
  Order: {
    orderLines: (parent: any, args: any) => { console.log("Calling order lines"); return []; },
  },
  Query: {
    product(parent, args, ctx, other) {
      return { id: args.id.toString(), test: true };
    },
    order: ({ id }) => { console.log("Calling order 1"); return { id: id.toString(), testOrder: true, orderLines: [] }; },
  },
  order: ({ id }) => { console.log("Calling order 2"); return { id: id.toString(), testOrder: true, orderLines: [] }; },
};
In the console I can observe the "Calling order 2" log message, there are no logs for "Calling order lines", and the order lines array is empty.
So two part question:
1) Why does it hit "Calling order 2" and not "Calling order 1" in the above example?
2) Why won't the above work for the nested query Order.OrderLines?
Thanks in advance!
In the query type:
type Query {
  product(id: String!): Product
  order(id: String!): Order
  users: User
}
schema {
  query: Query
}
In the resolvers:
const resolvers = {
  order: ({ id }) => { /* resolver function */ },
  product: ({ id }) => { /* resolver function */ }
};
GraphQL works on the query-resolver concept: for any query you want to run (for example users), you must have a corresponding resolver (i.e. users) that returns data matching the User type definition. GraphQL queries are interactive and case sensitive.
The next step is to implement the resolver function for the order/product query. In fact, one thing we haven't mentioned yet is that not only root fields, but virtually all fields on the types in a GraphQL schema have resolver functions.
1) Why does it hit "Calling order 2" and not "Calling order 1" in the above example?
In this query:
query {
  order(id: "1") {
    id
    orderLines {
      quantity
    }
  }
}
the request goes to the order resolver, which returns an Order as defined by the Order type.
2) Why won't the above work for the nested query Order.OrderLines?
As per your schema, you can only use two queries: order and product. Please check the docs on nested queries for this requirement.
If you use buildSchema to generate your schema, the only way to provide resolvers for your fields is through the root object. But this is more of a hack -- you're not actually overriding the default resolvers for the fields and as such, you're basically limited to just working with the root-level fields (as you are learning the hard way). This is why only the Query.order function is called -- this is a root-level field. Why passing functions through the root (kind of) works is explained in detail here.
The bottom line is you shouldn't be using buildSchema. If you want to use SDL to define your schema, migrate to using Apollo Server.
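To make that concrete, here is a minimal sketch of the same schema wired up with apollo-server; the returned data is placeholder only, and orderDate / the Date scalar are omitted for brevity:
const { ApolloServer, gql } = require('apollo-server');

const typeDefs = gql`
  type Product {
    id: String!
    retailerId: String!
    title: String!
    description: String
    price: String!
    currency: String!
  }
  type OrderLine {
    product: Product!
    quantity: Int!
  }
  type Order {
    id: String!
    retailerId: String!
    orderLines: [OrderLine!]!
  }
  type Query {
    product(id: String!): Product
    order(id: String!): Order
  }
`;

const resolvers = {
  Query: {
    // root-level resolvers receive (parent, args, context, info)
    order: (parent, { id }) => ({ id, retailerId: 'r1' }),
    product: (parent, { id }) => ({ id, retailerId: 'r1', title: 't', price: '1', currency: 'GBP' }),
  },
  Order: {
    // type-level resolver: runs for every Order returned by Query.order
    orderLines: () => [{ quantity: 1 }],
  },
  OrderLine: {
    product: () => ({ id: 'p1', retailerId: 'r1', title: 't', price: '1', currency: 'GBP' }),
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen()
  .then(({ url }) => console.log(`Server ready at ${url}`));
With the resolvers registered per type like this, the query from the question hits Query.order first and then Order.orderLines for the nested selection, which is what the root-object approach with buildSchema cannot do.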

Cannot use "==" and "contains" in same line of scenario using conditional logic in Karate

This is a follow-up to a question noted here.
Let's say our implemented server v1 and v2 responses look as follows:
v1Response = { id: "1", name: "awesome" }
v2Response = { id: "2", name: "awesome", value: "karate" }
Similarly, we define the client schemas for v1 and v2 as follows:
v1Schema = { id: "#string", name: "#string" }
v2Schema = { id: "#string", name: "#string", value: "#string" }
We implement schema validation in our generic scenario as follows. We can easily set "response" to either v1Response/v2Response AND "schema" to either v1Schema/v2Schema depending on our environment.
* match response == schema
The above generic script works perfectly fine as long as we are testing the v1 server against the v1 client / the v2 server against the v2 client. However, we cannot re-use the same scenario when we want to test backward compatibility, for example the v2 server against the v1 client. In this case:
* match response (actually v2Response) == schema (actually v1Schema) <--- will fail
So, in order to make it work and do backward-compatibility testing, I also wanted to use Karate's "contains" feature, like:
* match response (actually v2Response) contains schema (actually v1Schema) <--- will pass
However, in my quest to keep the scenarios generic, it is currently not possible to do either of the following:
Use both == / contains in the same line of script, as follows:
serverVersion == clientVersion ? (match response == schema) : (match response contains schema)
OR
Use some flag, as follows:
match response SOMEFLAG schema
where SOMEFLAG can be set to either "==" or "contains" in karate-config.js, depending on the environment we are testing.
EDIT
From the above example, all I want is to test the following cases, which should all pass:
* match v1Response == v1Schema
* match v2Response == v2Schema
* match v2Response contains v1Schema
using a generic line like the following:
* match response == schema <--- can it possibly be solved using the above suggested solutions?
For some reason you feel that hacking the match clause is the only way to solve this problem. Please keep an open mind, and here you go:
* def schemas =
"""
{
  v1: { id: "#string", name: "#string" },
  v2: { id: "#string", name: "#string", value: "#string" }
}
"""
* def env = 'v1'
* def response = { id: "1", name: "awesome" }
* match response == schemas[env]
* def env = 'v2'
* def response = { id: "2", name: "awesome", value: "karate" }
* match response == schemas[env]
* def response = { id: "1", name: "awesome" }
* match response == karate.filterKeys(schemas[env], response)
The last line is as generic as you can get.
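Note that the line above covers the case where the expected schema has more keys than the response. For the backward-compatibility case from the EDIT (a v2 response validated against the v1 schema), one possible variation (a sketch, not part of the original answer) is to filter the response by the schema instead, since filterKeys also accepts a map as the key filter:
* def env = 'v1'
* def response = { id: "2", name: "awesome", value: "karate" }
# drop the keys the older client schema does not know about, then match exactly
* match karate.filterKeys(response, schemas[env]) == schemas[env]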

Mutation with list of strings Variable "$_v0_data" got invalid value Graphql Node.js

I have this simple mutation that works fine
type Mutation {
  addJob(
    url: String!
    description: String!
    position: String!
    company: String!
    date: DateTime!
    tags: [String!]!
  ): Job
}
Mutation Resolver
function addJob(parent, args, context, info) {
  console.log('Tags => ', args.tags)
  // const userId = getUserId(context)
  return context.db.mutation.createJob(
    {
      data: {
        position: args.position,
        componay: args.company,
        date: args.date,
        url: args.url,
        description: args.description,
        tags: args.tags
      }
    },
    info
  )
}
However, once I tried to pass an array of strings (tags) as you see above, I can't get it to work and I get this error:
Error: Variable "$_v0_data" got invalid value { ... , tags: ["devops", "aws"] }; Field "0" is not defined by type JobCreatetagsInput at value.tags.
If I assign an empty array to tags in the mutation there is no problem; however, if I put in a single string value, ["DevOps"] for example, I get the error.
The issue was in the resolver function. Apparently, as I found here, I have to wrap the tags list in an object under the set key, like the code below; the generated JobCreatetagsInput type expects a scalar list as { set: [...] } rather than a plain array.
function addJob(parent, args, context, info) {
  return context.db.mutation.createJob(
    {
      data: {
        position: args.position,
        componay: args.company,
        date: args.date,
        url: args.url,
        description: args.description,
        tags: { set: args.tags }
      }
    },
    info
  )
}

How do you exclude a top level EmbeddedEntity property index in Google Datastore with NodeJS?

I need to exclude a top-level property from being indexed by Datastore (payload in the example below). The value of payload can really vary, and its fields will easily exceed the 1500-byte limit Datastore enforces on indexed properties inside embedded entities.
payload does not seem to be excluded from being indexed. Datastore throws the error that content is longer than 1500 bytes.
How do I exclude payload from being indexed? Thanks.
const transformedEvent = {
  id: "someString",
  name: "Some Name",
  payload: {
    content: "a very long string",
    foo: "bar"
  }
};
const entity = {
  key: datastore.key('Event'),
  excludeFromIndexes: ['payload'],
  data: transformedEvent
};
await datastore.save(entity);
In your example, content and foo would also need to be added to the excludeFromIndexes array in order to be excluded. There is currently an open issue regarding this on GitHub.
Example:
const transformedEvent = {
  id: "someString",
  name: "Some Name",
  payload: {
    content: "a very long string",
    foo: "bar"
  }
};
const entity = {
  key: datastore.key('Event'),
  excludeFromIndexes: ['payload', 'payload.content', 'payload.foo'],
  data: transformedEvent
};
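Since the keys inside payload vary, one option (a sketch, not from the original answer) is to build the exclusion list from the object itself instead of hardcoding each path:
// derive one 'payload.<key>' entry per top-level key of the embedded entity
const payloadPaths = Object.keys(transformedEvent.payload).map((key) => `payload.${key}`);

const entity = {
  key: datastore.key('Event'),
  excludeFromIndexes: ['payload', ...payloadPaths],
  data: transformedEvent
};
This only covers one level of nesting; deeper objects would need the same treatment applied recursively.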
