Escaping a keyword within a macro from graphql-client - rust

I am trying to use the graphql-client crate to make requests against a GraphQL schema that looks similar to this:
enum AttributeType {
  // ...
}

type Attribute {
  name: String!
  type: AttributeType!
}
Using this
#[derive(GraphQLQuery)]
#[graphql(
    schema_path = "src/graphql/schema.graphql",
    query_path = "src/graphql/create_something.graphql"
)]
pub struct MutateSomethingModule;
When I try to use the graphql-client I get an error:
error: expected identifier, found keyword `type`
--> src/x/mod.rs:14:10
|
14 | #[derive(GraphQLQuery)]
| ^^^^^^^^^^^^ expected identifier, found keyword
help: you can escape reserved keywords to use them as identifiers
|
14 | #[derive(r#type)]
| ^^^^^^
error: proc-macro derive produced unparseable tokens
--> src/x/mod.rs:14:10
|
14 | #[derive(GraphQLQuery)]
| ^^^^^^^^^^^^
I am guessing this error message is complaining that I have the word type as a name in my schema and that I should escape it somehow. Based on the error message I tried replacing type: with r#type:, r#"type"#, and some other similar variations.
What is the correct way to do this?

Based on the code, keywords have an underscore appended to them:
// List of keywords based on https://doc.rust-lang.org/grammar.html#keywords
let reserved = &[
    "abstract", "alignof", "as", "become", "box", "break", "const", "continue", "crate", "do",
    "else", "enum", "extern", "false", "final", "fn", "for", "if", "impl", "in", "let", "loop",
    "macro", "match", "mod", "move", "mut", "offsetof", "override", "priv", "proc", "pub",
    "pure", "ref", "return", "Self", "self", "sizeof", "static", "struct", "super", "trait",
    "true", "type", "typeof", "unsafe", "unsized", "use", "virtual", "where", "while", "yield",
];
if reserved.contains(&field_name) {
    let name_ident = Ident::new(&format!("{}_", field_name), Span::call_site());
    return quote! {
        #description
        #deprecation
        #[serde(rename = #field_name)]
        pub #name_ident: #field_type
    };
}
This means that type should be accessible as type_.
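For reference, here is a minimal plain-Rust sketch (no graphql-client involved) of the two spellings: a raw identifier lets you name a field after a keyword directly, while the generated code uses the type_ underscore convention together with a serde rename back to "type" on the wire.

```rust
// Sketch only: hand-written struct, not graphql-client's generated output.
struct Attribute {
    name: String,
    r#type: String, // raw identifier: the field is literally named `type`
}

fn main() {
    let attr = Attribute {
        name: "color".to_string(),
        r#type: "ENUM".to_string(),
    };
    // Accessed the same way the generated code would expose `type_`:
    assert_eq!(attr.r#type, "ENUM");
    println!("{} has type {}", attr.name, attr.r#type);
}
```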
See also:
Handle all Rust keywords as field names in codegen (#94)
Handle all keywords as field names in codegen (#96)

Related

How do you configure a mandos scCall step for a VarArgs MultiArg endpoint argument with a struct as argument?

I'm trying to create an Elrond smart contract that would allow multiple elements to be sent at once, to reduce the number of transactions needed to send the initial information to the contract.
To do so, I'm using an endpoint that takes as an argument a VarArgs of MultiArg3:
#[allow(clippy::too_many_arguments)]
#[only_owner]
#[endpoint(createMultipleNft)]
fn create_multipl_nft(
    &self,
    #[var_args] args: VarArgs<MultiArg3<ManagedBuffer, ManagedBuffer, AttributesStruct<Self::Api>>>,
) -> SCResult<u64> {
    ...
    Ok(0u64)
}
And here is my AttributesStruct
#[derive(TypeAbi, NestedEncode, NestedDecode, TopEncode, TopDecode)]
pub struct AttributesStruct<M: ManagedTypeApi> {
    pub value1: ManagedBuffer<M>,
    pub value2: ManagedBuffer<M>,
}
And here is my Mandos step (the rest of the steps work fine; they were all working with my previous implementation for a single-element endpoint).
{
    "step": "scCall",
    "txId": "create-multiple-NFT-1",
    "tx": {
        "from": "address:owner",
        "to": "sc:minter",
        "function": "createMultipleNft",
        "arguments": [
            ["str:NFT 1"],
            ["str:www.mycoolnft.com/nft1.jpg"],
            [
                ["str:test1", "str:test2"]
            ]
        ],
        "gasLimit": "20,000,000",
        "gasPrice": "0"
    },
    "expect": {
        "out": [
            "1", "1", "1"
        ],
        "status": "0",
        "message": "",
        "gas": "*",
        "refund": "*"
    }
}
I have also tried this for the arguments:
"arguments": [
["str:NFT 1",
"str:www.mycoolnft.com/nft1.jpg",
["str:test1", "str:test2"]
]
And this:
"arguments": [
["str:NFT 1",
"str:www.mycoolnft.com/nft1.jpg",
"str:test1", "str:test2"
]
And this:
"arguments": [
["str:NFT 1",
"str:www.mycoolnft.com/nft1.jpg",
{
"0-value1":"str:test1",
"1-value2":"str:test2"
}
]
Here is the error message:
FAIL: result code mismatch. Tx create-multiple-NFT-1. Want: 0. Have: 4 (user error). Message: argument decode error (args): input too short
At the same time, I'm having some problems with the argument input of the struct with the ManagedBuffer. Am I doing something wrong with that? I'm trying to have an argument struct for an NFT that contains multiple string entries that I can send to the smart contract.
Since you are using a struct, the ManagedBuffers inside the struct are nested-encoded, which means you need to add the length of each ManagedBuffer before it.
Luckily, there is a shortcut for that: the nested: prefix.
So your arguments would look like this:
"arguments": [
["str:NFT 1"],
["str:www.mycoolnft.com/nft1.jpg"],
[
["nested:str:test1", "nested:str:test2"]
]
]
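To illustrate what the nested: prefix stands for: nested encoding prepends the buffer's byte length, as a 4-byte big-endian integer, to its contents. The following is a sketch of that idea only, not the actual elrond-wasm codec:

```rust
// Illustration of nested encoding: a ManagedBuffer nested inside a struct is
// serialized as a 4-byte big-endian length followed by the raw bytes.
fn nested_encode_buffer(bytes: &[u8]) -> Vec<u8> {
    let mut out = (bytes.len() as u32).to_be_bytes().to_vec(); // length prefix
    out.extend_from_slice(bytes);                              // payload
    out
}

fn main() {
    let encoded = nested_encode_buffer(b"test1");
    // 5 payload bytes preceded by the length 0x00000005
    assert_eq!(encoded, vec![0, 0, 0, 5, b't', b'e', b's', b't', b'1']);
    println!("{:?}", encoded);
}
```

This is exactly the length information that was missing from the plain str: arguments, which is why the contract reported "input too short".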

Replacing a value for a given key in Kusto

I am trying to use the .set-or-replace command to amend the "subject" entry below from sample/consumption/backups to sample/consumption/backup, but I am not having much luck in the world of Kusto.
I can't seem to reference the sub-headings within Records (e.g. data).
"source_": CustomEventRawRecords,
"Records": [
{
"metadataVersion": "1",
"dataVersion": "",
"eventType": "consumptionRecorded",
"eventTime": "1970-01-01T00:00:00.0000000Z",
"subject": "sample/consumption/backups",
"topic": "/subscriptions/1234567890id/resourceGroups/rg/providers/Microsoft.EventGrid/topics/webhook",
"data": {
"resourceId": "/subscriptions/1234567890id/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm"
},
"id": "1234567890id"
}
],
Here is the command I've tried to get to work:
.set-or-replace [async] CustomEventRawRecords [with (subject = sample/consumption/backup [, ...])] <| QueryOrCommand
If you're already manipulating the data, why not turn it into a columnar representation? That way you can easily make the corrections you want and also get the full richness of the tabular operators, plus an IntelliSense experience that will help you formulate queries easily.
Here's an example query that does that:
datatable (x: dynamic)[
    dynamic({
        "source_": "CustomEventRawRecords",
        "Records": [
            {
                "metadataVersion": "1",
                "dataVersion": "",
                "eventType": "consumptionRecorded",
                "eventTime": "1970-01-01T00:00:00.0000000Z",
                "subject": "sample/consumption/backups",
                "topic": "/subscriptions/1234567890id/resourceGroups/rg/providers/Microsoft.EventGrid/topics/webhook",
                "data": {
                    "resourceId": "/subscriptions/1234567890id/resourceGroups/RG/providers/Microsoft.Compute/virtualMachines/vm"
                },
                "id": "1234567890id"
            }
        ]
    })
]
| extend records = x.Records
| mv-expand record=records
| extend subject = tostring(record.subject)
| extend subject = iff(subject == "sample/consumption/backups", "sample/consumption/backup", subject)
| extend metadataVersion = tostring(record.metadataVersion)
| extend dataVersion = tostring(record.dataVersion)
| extend eventType = tostring(record.eventType)
| extend topic= tostring(record.topic)
| extend data = record.data
| extend id = tostring(record.id)
| project-away x, records, record

Inject matchesJsonPath from Groovy into Spring Cloud Contract

When writing a Spring Cloud Contract in Groovy,
I want to specify an explicit JSON path expression.
The expression:
"$.['variants'][*][?(#.['name'] == 'product_0004' && #.['selected'] == true)]"
shall appear in the generated json, like so:
{
    "request": {
        "bodyPatterns": [ {
            "matchesJsonPath": "$.['variants'][*][?(@.['name'] == 'product_0004' && @.['selected'] == true)]"
        } ]
    }
}
in order to match e.g.:
{ "variants": [
{ "name": "product_0003", "selected": false },
{ "name": "product_0004", "selected": true },
{ "name": "product_0005", "selected": false } ]
}
and to not match e.g.:
{ "variants": [
{ "name": "product_0003", "selected": false },
{ "name": "product_0004", "selected": false },
{ "name": "product_0005", "selected": true } ]
}
Is this possible using consumers, bodyMatchers, or some other facility of the Groovy DSL?
There are some possibilities with matching on JSON path, though you wouldn't typically use it to match on explicit values; rather, you'd use it to make a flexible stub for the consumer via a regex. Still, there are some options.
So the body section is your static request body with hardcoded values, while the bodyMatchers section provides you the ability to make the stub matching from the consumer side more flexible.
Contract.make {
    request {
        method 'POST'
        url '/some-url'
        body([
            id: id,
            items: [
                [
                    foo: foo,
                    bar: bar
                ],
                [
                    foo: foo,
                    bar: foo
                ]
            ]
        ])
        bodyMatchers {
            jsonPath('$.id', byEquality()) //1
            jsonPath('$.items[*].foo', byRegex('(?:^|\\W)foo(?:$|\\W)')) //2
            jsonPath('$.items[*].bar', byRegex(nonBlank())) //3
        }
        headers {
            contentType(applicationJson())
        }
    }
    response {
        status 200
    }
}
I referenced some lines with comments:
1: "byEquality()" in the bodyMatchers section means: the input from the consumer must be equal to the value provided in the body for this contract/stub to match, in other words must be "id".
2: I'm not sure how nicely the //1 solution will work when the property is in a list and you want the stub to be flexible with the number of items provided. Therefore I also included this byRegex, which basically means: for any item in the list, the property foo must have exactly the value "foo". However, I don't really know why you would want to do this.
3: This is where bodyMatchers are actually most useful. This line means: match to this contract if every property bar in the list of items is a non blank string. This allows you to have a dynamic stub with a flexible size of lists/arrays.
All the conditions in bodyMatchers need to be met for the stub to match.
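As an aside, the predicate that the original JSON path filter expresses can be restated in ordinary code; the tuples below are hypothetical stand-ins for the variant objects, not part of the contract DSL:

```rust
// (name, selected) pairs standing in for the variant objects in the body.
// The JSON path filter keeps variants whose name is "product_0004" AND whose
// selected flag is true; the body matches if any variant satisfies that.
fn any_selected_product_0004(variants: &[(&str, bool)]) -> bool {
    variants
        .iter()
        .any(|(name, selected)| *name == "product_0004" && *selected)
}

fn main() {
    let matching = [("product_0003", false), ("product_0004", true), ("product_0005", false)];
    let non_matching = [("product_0003", false), ("product_0004", false), ("product_0005", true)];
    assert!(any_selected_product_0004(&matching));      // first sample body
    assert!(!any_selected_product_0004(&non_matching)); // second sample body
}
```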

Getting null while trying to traverse to depth 2 of graphql type using neo4j pattern comprehension in nodejs environment

I am using Neo4j (3.1), GraphQL and Node.js. For now, I have 3 GraphQL types, namely Country, State and City, and these types have the following relation in the Neo4j DB:
(City)-[:PRESENT_IN]->(State)-[:PRESENT_IN]->(Country)
My GraphQL types are like below:
type Country {
    name: String,
    states: [State]
}

type State {
    name: String,
    cities: [City]
}

type City {
    name: String,
    id: Int
}
And my Query schema is:
type Query {
    countries(limit: Int): [Country]
    states(limit: Int): [State]
}
So, when I ran the query "countries" in GraphiQL, I was hoping to also get the cities available in each state of a country, i.e. traversing to depth 2, but I am getting null for the "cities" field,
although the data comes through for the "states" field.
Here is the query execution done in GraphiQL:
countries(limit: 1) {
    name
    states {
        name
        cities {
            name
            id
        }
    }
}
And here is the execution result:
"countries": {
"name": "Country 1",
"states": [
{
"name": "State A",
"cities": null ---> Here it is coming null
},
]
}
What I wanted to return was:
"countries": {
"name": "Country 1",
"states": [
{
"name": "State A",
"cities": [
{
name: City 1
id: 1
},
]
},
]
}
For my cypher query, I have used pattern comprehension as in this link: https://neo4j.com/blog/cypher-graphql-neo4j-3-1-preview/
My cypher query is:
// For Query "countries"
MATCH (c:Country)
RETURN c{
.name,
states: [(s:State)-[:PRESENT_IN]->(c) | s{.*}]
} LIMIT $limit
//For Query "states"
MATCH (s:State)
RETURN s{
.name,
cities: [(ct:City)-[:PRESENT_IN]->(s) | ct{.*}]
} LIMIT $limit
So, could anyone tell me what I am doing wrong here? I looked into the Neo4j docs and other blogs, but I am unable to figure it out. Any help or insight would be really helpful!
AFAIK, you have to query for those cities in the countries query. Something like this:
MATCH (c:Country)
RETURN c {
    .name,
    states: [(s:State)-[:PRESENT_IN]->(c) | s {
        .*,
        cities: [(ct:City)-[:PRESENT_IN]->(s) | ct {.*}]
    }]
} LIMIT $limit

Mongoose query altering object order

I have a query that is generated in my Node backend. If I log it out and run it in the Mongo shell then all is fine; however, if I use Mongoose to do Model.find(query), then some strange property re-ordering takes place and the query breaks.
The query in question is:
{
    "attributes": {
        "$all": [
            {
                "attribute": "an id",
                "value": "a value",
                "deletedOn": null
            },
            {
                "attribute": "an id again",
                "value": "a value",
                "deletedOn": null
            }
        ]
    }
}
However, the output from mongoose debug is:
users.find({
    attributes: {
        '$all': [
            {
                deletedOn: null,
                attribute: 'an id',
                value: 'a value'
            },
            {
                deletedOn: null,
                attribute: 'an id again',
                value: 'a value'
            }
        ]
    }
},
{ fields: {} }
)
The only change is the shifting of the deletedOn field from last position to first position in the object. This means the query returns no results.
Are there any solutions to this issue?
Object properties in JavaScript are not ordered. You cannot ensure the order of properties on a JavaScript object and different implementations may order them differently. See this answer on a related question for some other info.
The essential key is that from the spec (ECMAScript) we get: "An object is a member of the type Object. It is an unordered collection of properties each of which contains a primitive value, object, or function. A function stored in a property of an object is called a method."
There is no "solution", because this is expected behavior. So the real question is, why does the order matter to you? What are you trying to do?
Adding on to the previous answer: if order is important to you, you should use arrays instead of objects.
For example:
"$all": [
[
{"attribute": "an id"},
{"value": "a value"},
{"deletedOn": null},
],
...etc.
]
