Jest toEqual is failing and I don't know why

I'm running the tests for a package I'd like to contribute to. However, when running the tests, which I'd assume should pass, I get a myriad of failures with this kind of output:
● PagingType › should create the correct filter graphql schema
expect(received).toEqual(expected) // deep equality
- Expected - 20
+ Received + 20
- type Query {
+ type Query {
- test(input: Paging!): Int!
+ test(input: Paging!): Int!
- }
-
+ }
+
- input Paging {
+ input Paging {
- """Paginate before opaque cursor"""
+ """Paginate before opaque cursor"""
- before: ConnectionCursor
-
+ before: ConnectionCursor
+
- """Paginate after opaque cursor"""
+ """Paginate after opaque cursor"""
- after: ConnectionCursor
-
+ after: ConnectionCursor
+
- """Paginate first"""
+ """Paginate first"""
- first: Int
-
+ first: Int
+
- """Paginate last"""
+ """Paginate last"""
- last: Int
+ last: Int
- }
-
+ }
+
- """Cursor for paging through collections"""
+ """Cursor for paging through collections"""
- scalar ConnectionCursor
+ scalar ConnectionCursor
22 | const sf = await getOrCreateSchemaFactory();
23 | const schema = await sf.create(resolvers);
> 24 | return expect(printSchema(schema)).toEqual(sdl);
| ^
25 | };
26 |
27 | export const aggregateArgsTypeSDL = readGraphql(resolve(__dirname, './aggregate-args-type.graphql'));
at Object.<anonymous>.exports.expectSDL (packages/query-graphql/__tests__/__fixtures__/index.ts:24:38)
It seems like "toEqual" just isn't working correctly, because there appears to be no actual difference between the two text outputs.
Here is another example pair, from a console.log of the strings being compared. The failures are also happening with objects.
type Query {
updateTest(input: UpdateOne!): Int!
}
input UpdateOne {
"""The id of the record to update"""
id: ID!
"""The update to apply."""
update: FakeUpdateOneType!
}
input FakeUpdateOneType {
name: String!
}
type Query {
updateTest(input: UpdateOne!): Int!
}
input UpdateOne {
"""The id of the record to update"""
id: ID!
"""The update to apply."""
update: FakeUpdateOneType!
}
input FakeUpdateOneType {
name: String!
}
Does anyone have an idea what might be wrong, or what is happening?

OK. As it turns out, it's a Windows/Git issue.
This is the text with JSON.stringify:
console.log
"type Query {\n updateTest(input: UpdateOne!): Int!\n}\n\ninput UpdateOne {\n \"\"\"The id of the record to update\"\"\"\n id: ID!\n\n \"\"\"The update to apply.\"\"\"\n update: FakeUpdateOneType!\n}\n\ninput FakeUpdateOneType {\n name: String!\n}\n"
at Object.<anonymous>.exports.expectSDL (packages/query-graphql/__tests__/__fixtures__/index.ts:22:11)
console.log
"type Query {\r\n updateTest(input: UpdateOne!): Int!\r\n}\r\n\r\ninput UpdateOne {\r\n \"\"\"The id of the record to update\"\"\"\r\n id: ID!\r\n\r\n \"\"\"The update to apply.\"\"\"\r\n update: FakeUpdateOneType!\r\n}\r\n\r\ninput FakeUpdateOneType {\r\n name: String!\r\n}\r\n"
If Git for Windows isn't set up correctly, it will check files out with \r\n line breaks when cloning a repo. If the project was developed on anything other than Windows, text comparisons with toEqual that contain line breaks will then fail. Too bad Jest doesn't say something like "mismatched line breaks" or show the escaped characters.
To fix this, I installed Git for Windows (I needed to upgrade anyway) with the installer option that sets core.autocrlf to "input".
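The same setting can be applied to an existing installation with git config --global core.autocrlf input, and a repo can pin its line endings with a .gitattributes rule such as * text=auto eol=lf. Alternatively, the comparison itself can be made line-ending agnostic. A minimal sketch of that idea, reusing the expectSDL helper from the stack trace above (the normalize helper is made up for illustration):

// Hypothetical tweak to the expectSDL fixture helper: normalize CRLF to LF
// on both sides, so the assertion passes however Git checked out the
// .graphql fixture files.
const normalize = (s: string): string => s.replace(/\r\n/g, '\n');

export const expectSDL = async (resolvers: any, sdl: string) => {
  const sf = await getOrCreateSchemaFactory();
  const schema = await sf.create(resolvers);
  return expect(normalize(printSchema(schema))).toEqual(normalize(sdl));
};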

Related

How can I extract a full sentence using Apache NLPCraft?

In my model file I am using a macro with a regex that extracts any space-separated alphanumeric words, to capture a user-input sentence, i.e.
macros:
  - name: "<GENERIC_INPUT>"
    macro: "{//[a-zA-Z0-9 ]+//}"
Then I am trying to capture it as follows in the element:
elements:
  - id: "prop:title"
    description: Set title
    synonyms:
      - "{set|add} title <GENERIC_INPUT>"
The intent term is as follows:
intents:
  - "intent=myIntent term(createStory)~{tok_id() == 'prop:createStory'} term(title)~{tok_id() == 'prop:title'}?"
In the Java Model I am correctly capturing the title property:
public NCResult onMatch(
    NCIntentMatch ctx,
    @NCIntentTerm("createStory") NCToken createStory,
    @NCIntentTerm("title") Optional<NCToken> titleList) {
...
When I run a query against the REST API service the probe is deployed in, I only get the first word matched by the last element <GENERIC_INPUT> (the regular expression) of the synonym defined as {set|add} title <GENERIC_INPUT>, i.e.
HTTP 200 [235ms]
{
  "status": "API_OK",
  "state": {
    "resType": "json",
    "mdlId": "Create Story",
    "txt": "set title this is my story",
    "resMeta": {},
    "srvReqId": "GKDY-QLBM-B6TQ-7KYO-KMR8",
    "status": "QRY_READY",
    "resBody": {
      "title": "set title this",
      "createStory": true
    },
    "usrId": 1,
    "intentId": "myIntent"
  }
}
In resBody.title I get "set title this" rather than the whole string that the regex should allow, i.e. "set title this is my story".
Any idea why? How can I get it to extract the whole title?
Many thanks
The regex element <GENERIC_INPUT> can catch an individual token, but not a group of tokens.
Please try it this way:
elements:
  - id: "prop:title"
    description: "Set title"
    synonyms:
      - "{set|add} title"
  - id: "prop:any"
    description: "Set any"
    synonyms:
      - "//[a-zA-Z0-9 ]+//"
intents:
  - "intent=test term(title)={# == 'prop:title'} term(any)={# == 'prop:any'}*"
Callback:
@NCIntentRef("test")
@NCIntentSample({
    "Set title 1 2",
    "Set title a b c"
})
NCResult x(
    NCIntentMatch ctx,
    @NCIntentTerm("title") NCToken title,
    @NCIntentTerm("any") List<NCToken> any) {
    System.out.println("title=" + title.getNormalizedText());
    System.out.println("any=" + any.stream().map(NCToken::getNormalizedText).collect(Collectors.joining("|")));
    return NCResult.text("OK");
}
It should work.
But also, please try to drop the regex here: it can be slow, and you will get many garbage variants.
You can use one element in the intent and extract the words that follow it in the callback.
Model:
elements:
  - id: "prop:title"
    description: "Set title"
    synonyms:
      - "{set|add} title"
intents:
  - "intent=test term(title)={# == 'prop:title'}"
Callback:
@NCIntentRef("test")
@NCIntentSample({
    "Set title 1 2",
    "Set title a b c"
})
NCResult x(
    NCIntentMatch ctx,
    @NCIntentTerm("title") NCToken title) {
    System.out.println("title=" + title.getNormalizedText());
    System.out.println("any after=" +
        Stream.concat(
            ctx.getVariant().getFreeTokens().stream(),
            ctx.getVariant().getStopWordTokens().stream()
        ).sorted(Comparator.comparingInt(NCToken::getStartCharIndex)).
        filter(p -> p.getStartCharIndex() > title.getStartCharIndex()).
        map(NCToken::getNormalizedText).
        collect(Collectors.joining("|"))
    );
    return NCResult.text("OK");
}
Same result, but without the regex.
Do you know if Apache NLPCraft also provides a built-in method to extract quoted sentences, i.e. 'some sentence like this one'?
There are a few workarounds for such a request; some of them seem like hacks.
I guess the most straightforward solution is the following:
Make an NCCustomParser:
public class QuotedSentenceParser implements NCCustomParser {
    @Override
    public List<NCCustomElement> parse(NCRequest req, NCModelView mdl, List<NCCustomWord> words, List<NCCustomElement> elements) {
        String txt = req.getNormalizedText();

        if (
            txt.charAt(0) == '\'' &&
            txt.charAt(txt.length() - 1) == '\'' &&
            !txt.substring(1, txt.length() - 1).contains("'")
        )
            return words.stream().map(
                w -> new NCCustomElement() {
                    @Override
                    public String getElementId() {
                        return "qElem";
                    }

                    @Override
                    public List<NCCustomWord> getWords() {
                        return Collections.singletonList(w);
                    }

                    @Override
                    public Map<String, Object> getMetadata() {
                        return Collections.emptyMap();
                    }
                }
            ).collect(Collectors.toList());

        return null;
    }
}
Add the configuration. (Note that you have to add the qElem dummy element here. It seems like a bug or an unclear feature; I am pretty sure that dynamically defining this element ID in QuotedSentenceParser should be enough.)
elements:
  - id: "qElem"
    description: "Set title"
    synonyms:
      - "-"
intents:
  - "intent=test term(qElem)={# == 'qElem'}*"
parsers:
  - "org.apache.nlpcraft.examples.lightswitch.QuotedSentenceParser"
Usage:
@NCIntentRef("test")
@NCIntentSample({
    "'Set title a b c'"
})
NCResult x(NCIntentMatch ctx, @NCIntentTerm("qElem") List<NCToken> qElems) {
    System.out.println(qElems.stream().map(p -> p.getNormalizedText()).collect(Collectors.joining("|")));
    return NCResult.text("OK");
}

Thread creation on function return in Rust WASM

I am working with Polars in a wasm environment.
I have noticed an inconsistency with the LazyFrame.collect operation where it sometimes creates threads when working with certain datasets.
Here is the code that relates to the issue:
#[wasm_bindgen]
pub fn start(buff: &[u8],
             item_id: &str,
             order_id: &str,
             item_name: &str) -> JsValue {
    let cursor = Cursor::new(buff);
    let lf = CsvReader::new(cursor).with_ignore_parser_errors(true).finish().unwrap().lazy();
    let df = lf.groupby([col(order_id)]);
    let df = df.agg([col(item_id), col(item_name)]);
    // Error occurs here
    let df = df.collect().unwrap();
    // ... conversion of df to a JsValue omitted
}
Working with a particular dataset provides me with the error:
panicked at 'failed to spawn thread: Error { kind: Unsupported, message: "operation not supported on this platform" }'
because it is attempting to spawn threads in a WASM environment.
However, with other datasets this process executes flawlessly and does not try to create threads. The issue does not seem to be file size, based on testing with various datasets.
I would like to know what part of the LazyFrame.collect operation creates this inconsistency and how to avoid it.
working.csv
Order ID,Product ID,Product Name
InvoiceNo0,Product ID0,Product Name0
InvoiceNo0,Product ID1,Product Name1
InvoiceNo0,Product ID2,Product Name2
InvoiceNo0,Product ID3,Product Name3
InvoiceNo0,Product ID4,Product Name4
InvoiceNo0,Product ID5,Product Name5
notworking.csv
Order ID,Product ID,Product Name
B0000001,P0001,Product - 0001
B0000001,P0002,Product - 0002
B0000001,P0003,Product - 0003
B0000001,P0004,Product - 0004
B0000001,P0005,Product - 0005
B0000002,P0006,Product - 0006
The Polars fork that allows wasm is provided by
https://github.com/universalmind303/polars/tree/wasm
You can see the full project here, as well as both CSV files:
https://github.com/KivalM/lazyframe-min-test
EDIT: Output of describe_plan()
working dataset
[col("Product ID"), col("Product Name")] BY [col("Order ID")] FROM DATAFRAME(in-memory): ["Order ID", "Product ID", "Product Name"];
project */3 columns | details: None;
selection: "None"
not working dataset
[col("Product ID"), col("Product Name")] BY [col("Order ID")] FROM DATAFRAME(in-memory): ["Order ID", "Product ID", "Product Name"];
project */3 columns | details: None;
selection: "None"
Output of schema()
working dataset
name: Order ID, data type: Utf8
name: Product ID, data type: Utf8
name: Product Name, data type: Utf8
not working dataset
name: Order ID, data type: Utf8
name: Product ID, data type: Utf8
name: Product Name, data type: Utf8
Output of describe_optimized_plan():
[col("Product ID"), col("Product Name")] BY [col("Order ID")] FROM DATAFRAME(in-memory): ["Product ID", "Product Name", "Order ID"];
project 3/3 columns | details: Some([col("Product ID"), col("Product Name"), col("Order ID")]);
selection: "None"
EDIT:
After a closer look at the source code, the problem doesn't seem to come directly from any Polars code.
I have tracked the issue down to polars-lazy/src/physical_plan/executors/groupby.rs, in the function
impl Executor for GroupByExec {
    fn execute
which returns a value from
groupby_helper(df, keys, &self.aggs, self.apply.as_ref(), state, self.maintain_order, self.slice)
However, the groupby_helper function runs to completion, and the DataFrame is successfully created. The error appears while the DataFrame is being returned from groupby_helper to fn execute. It is odd that a thread is only spawned when this function returns. Does there exist something in Rust WASM that could cause behaviour like this?
So it looks like there is a std::thread operation happening with the groupbys that I missed when creating the branch.
impl Drop for GroupsIdx {
    fn drop(&mut self) {
        let v = std::mem::take(&mut self.all);
        // ~65k took approximately 1ms on local machine, so from that point we drop on other thread
        // to stop query from being blocked
        if v.len() > 1 << 16 {
            std::thread::spawn(move || drop(v));
        } else {
            drop(v);
        }
    }
}
The dataset size is what determines the thread spawn: any group greater than 1 << 16 (~65k) will spawn a thread.
Feature-flagging that impl so it only compiles on non-wasm targets should fix your issue, as sketched below.
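A minimal sketch of that idea (not the actual patch in the fork): gate the threaded drop behind a target check so wasm32 builds always drop on the current thread.

impl Drop for GroupsIdx {
    fn drop(&mut self) {
        let v = std::mem::take(&mut self.all);
        // On native targets, large groups are dropped on a helper thread so the
        // query is not blocked; the cfg attribute compiles this branch out on wasm32.
        #[cfg(not(target_arch = "wasm32"))]
        if v.len() > 1 << 16 {
            std::thread::spawn(move || drop(v));
            return;
        }
        drop(v);
    }
}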

Creating mapped object in TypeScript

I have an interface that looks like that:
interface Person {
  name: string;
  age: number;
  group: Options;
}
and an enum that looks like this:
export enum Options {
  ONE = 'one',
  TWO = 'two',
  THREE = 'three',
}
I want to create a new mapped type where each key is one of the Options and the value is a Person.
I tried something like this:
type PersonMapped = {
  [key in Options]: Person; // error on the in keyword
}
But I get an error that says: parsing error: Unexpected token, expected "]".
What's wrong with this code?
Edit 1:
Here's the code example that I'm trying to use:
export enum Group {
  ONE = 1,
  TWO = 2,
  THREE = 3
}
export type Mappable = {
  [key in Group]: string
}
Here's the error from the IDE:
Edit 2:
It seems that the problem is with my editor settings. I'm using VS Code version 1.59.1, and I also have the ESLint extension installed, version 2.1.23.
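For reference, the mapped-type syntax itself is valid TypeScript, so a current tsc compiles it; the parsing error comes from the editor's ESLint/parser configuration. A minimal sketch, assuming the Person interface and Options enum from the question are in scope (the byGroup value and its contents are made up for illustration):

// The mapped type from the question, plus the equivalent built-in Record form.
type PersonMapped = { [key in Options]: Person };
type PersonMappedRecord = Record<Options, Person>;

// Every enum member must appear as a key, and each value must be a Person.
const byGroup: PersonMapped = {
  [Options.ONE]: { name: 'Ann', age: 30, group: Options.ONE },
  [Options.TWO]: { name: 'Bob', age: 25, group: Options.TWO },
  [Options.THREE]: { name: 'Eve', age: 41, group: Options.THREE },
};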

Expect positive number parameter passed

The test is linked to this question, which I raised (and which was resolved) a few days ago. My current test is:
// Helpers
function getObjectStructure(runners) {
const backStake = runners.back.stake || expect.any(Number).toBeGreaterThan(0)
const layStake = runners.lay.stake || expect.any(Number).toBeGreaterThan(0)
return {
netProfits: {
back: expect.any(Number).toBeGreaterThan(0),
lay: expect.any(Number).toBeGreaterThan(0)
},
grossProfits: {
back: (runners.back.price - 1) * backStake,
lay: layStake
},
stakes: {
back: backStake,
lay: layStake
}
}
}
// Mock
const funcB = jest.fn(pairs => {
return pairs[0]
})
// Test
test('Should call `funcB` with correct object structure', () => {
const params = JSON.parse(fs.readFileSync(paramsPath, 'utf8'))
const { arb } = params
const result = funcA(75)
expect(result).toBeInstanceOf(Object)
expect(funcB).toHaveBeenCalledWith(
Array(3910).fill(
expect.objectContaining(
getObjectStructure(arb.runners)
)
)
)
})
The object structure of arb.runners is this:
{
  "back": {
    "stake": 123,
    "price": 1.23
  },
  "lay": {
    "stake": 456,
    "price": 4.56
  }
}
There are many different tests around this function, mainly dependent upon the argument passed into funcA; for this example, it's 75. The length of the array passed to funcB depends on this parameter. However, it is now also dependent on whether the runners (back and/or lay) have existing stake properties. I have a beforeAll in each test that manipulates the arb in the file where I hold the params, which is why the input for the runners is different every time. An outline of what I'm trying to achieve is:
Measure that the array passed into funcB is of the correct length
Measure that the objects within the array have the correct structure:
2.1 If there are stakes with the runners, that's fine and the test is straightforward
2.2 If no stakes are with the runners, I need to test that the netProfits, grossProfits, and stakes properties all have positive Numbers
2.2 is the one I'm struggling with. With my attempt above, the test fails with the following error:
TypeError: expect.any(...).toBeGreaterThan is not a function
As with the previous question, the problem is that expect.any(Number).toBeGreaterThan(0) is incorrect, because expect.any(...) is not an assertion and doesn't have matcher methods. The result of expect.any(...) is just a special value that is recognized by Jest equality matchers; it also cannot be used in an expression like (runners.back.price - 1) * backStake.
If the intention is to extend the equality matcher with custom behaviour, this is a case for a custom matcher. Since spy matchers use the built-in equality matcher anyway, spy arguments need to be asserted explicitly with the custom matcher.
Otherwise, the additional restrictions should be asserted manually. It should be:
function getObjectStructure() {
  return {
    netProfits: {
      back: expect.any(Number),
      lay: expect.any(Number)
    },
    grossProfits: {
      back: expect.any(Number),
      lay: expect.any(Number)
    },
    stakes: {
      back: expect.any(Number),
      lay: expect.any(Number)
    }
  }
}
and
expect(result).toBeInstanceOf(Object)
expect(funcB).toHaveBeenCalledTimes(1);
expect(funcB).toHaveBeenCalledWith(
  Array(3910).fill(
    expect.objectContaining(
      getObjectStructure()
    )
  )
)
const funcBArg = funcB.mock.calls[0][0];
const nonPositiveNetProfitsBack = funcBArg
  .map(({ netProfits: { back } }, i) => [i, back])
  .filter(([, val]) => !(val > 0))
  .map(([i, val]) => `netProfits:back:${i}:${val}`);
expect(nonPositiveNetProfitsBack).toEqual([]);
const nonPositiveNetProfitsLay = ...
!(val > 0) rather than val <= 0 is used so that NaN is also caught. Without a custom matcher, a failed assertion won't produce a meaningful message, but the index and the nonPositiveNetProfitsBack temporary variable name give enough feedback to spot the problem. The array can additionally be remapped to contain meaningful values, like a string, so it occupies less space in error output.
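For completeness, a minimal sketch of the custom-matcher route mentioned above; the matcher name toBePositiveNumber is made up for illustration:

// Register a custom matcher; matchers added via expect.extend are also
// available as asymmetric matchers (expect.toBePositiveNumber()), so they
// can stand in for expect.any(Number) inside expect.objectContaining.
expect.extend({
  toBePositiveNumber(received) {
    const pass = typeof received === 'number' && received > 0;
    return {
      pass,
      message: () => `expected ${received} to be a positive number`
    };
  }
});

// Hypothetical usage inside the structure helper:
// netProfits: {
//   back: expect.toBePositiveNumber(),
//   lay: expect.toBePositiveNumber()
// }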

Azure autoscale metricname values

I need to define a scale rule for my virtual machine. I have read the following:
The MetricName and MetricNamespace are not values I just made up. These have to be precise. You can get these values from the MetricsClient API, and there is some sample code in this link to show how to get the values.
http://rickrainey.com/2013/12/15/auto-scaling-cloud-services-on-cpu-percentage-with-the-windows-azure-monitoring-services-management-library/
But it's still not clear how I can get the list of possible MetricName values, as I didn't find any sample code for it.
Here is the code I used to get the available MetricNames for the cloud service. It was part of a unit test project, hence the [TestMethod] attribute.
[TestMethod]
public async Task GetMetricDefinitions()
{
    // Build the resource ID string.
    string resourceId = ResourceIdBuilder.BuildCloudServiceResourceId(
        cloudServiceName, deploymentName, roleName);
    Console.WriteLine("Resource Id: {0}", resourceId);

    // Get the metric definitions.
    var retrieveMetricsTask =
        metricsClient.MetricDefinitions.ListAsync(resourceId, null, null, CancellationToken.None);
    var metricListResponse = await retrieveMetricsTask;
    MetricDefinitionCollection metricDefinitions = metricListResponse.MetricDefinitionCollection;

    // Make sure something was returned.
    Assert.IsTrue(metricDefinitions.Value.Count > 0);

    // Display the metric definitions.
    int count = 0;
    foreach (MetricDefinition metricDefinition in metricDefinitions.Value)
    {
        Console.WriteLine("MetricDefinition: " + count++);
        Console.WriteLine("Display Name: " + metricDefinition.DisplayName);
        Console.WriteLine("Metric Name: " + metricDefinition.Name);
        Console.WriteLine("Metric Namespace: " + metricDefinition.Namespace);
        Console.WriteLine("Is Alertable: " + metricDefinition.IsAlertable);
        Console.WriteLine("Min. Alertable Time Window: " + metricDefinition.MinimumAlertableTimeWindow);
        Console.WriteLine();
    }
}
Here is the output of the test for my cloud service:
