I have an analytics tracker that only fires after 1 second, and it is called with an object whose intervalInMilliseconds (duration) value is not deterministic.
How can I use jest.toHaveBeenCalledWith to test the object?
test('pageStats - publicationPage (will wait 1000ms)', done => {
  const track = jest.fn()
  const expected = new PayloadTiming({
    category: 'PublicationPage',
    action: 'PublicationPage',
    name: 'n/a',
    label: '7',
    intervalInMilliseconds: 1000 // or around
  })
  mockInstance.viewState.layoutMode = PSPDFKit.LayoutMode.SINGLE
  const sendPageStats = pageStats({
    instance: mockInstance,
    track,
    remoteId: nappConfig.remoteId
  })
  mockInstance.addEventListener('viewState.currentPageIndex.change', sendPageStats)
  setTimeout(() => {
    mockInstance.fire('viewState.currentPageIndex.change', 2)
    expect(track).toHaveBeenCalled()
    expect(track).toHaveBeenCalledWith(expected)
    done()
  }, 1000)
  expect(track).not.toHaveBeenCalled()
})
expect(track).toHaveBeenCalledWith(expected) fails with:
Expected mock function to have been called with:
{"action": "PublicationPage", "category": "PublicationPage", "intervalInMilliseconds": 1000, "label": "7", "name": "n/a"}
as argument 1, but it was called with
{"action": "PublicationPage", "category": "PublicationPage", "intervalInMilliseconds": 1001, "label": "7", "name": "n/a"}
I have looked at jest-extended, but I do not see anything useful for my use case.
EDIT: I want to highlight that all of the answers here are very useful and you can pick whichever suits your use case. Thank you all - these answers are great!
This can be done with asymmetric matchers (introduced in Jest 18):
expect(track).toHaveBeenCalledWith(
  expect.objectContaining({
    "action": "PublicationPage",
    "category": "PublicationPage",
    "label": "7",
    "name": "n/a"
  })
)
If you use jest-extended, you can do something like:
expect(track).toHaveBeenCalledWith(
  expect.objectContaining({
    "action": "PublicationPage",
    "category": "PublicationPage",
    "label": "7",
    "name": "n/a",
    "intervalInMilliseconds": expect.toBeWithin(999, 1002)
  })
)
You can also access the object the mock was actually called with, for a tighter assertion, using track.mock.calls[0][0] (the first [0] is the call index, and the second [0] is the argument index). Then you can use toMatchObject to partially match the object, ignoring dynamic values such as intervalInMilliseconds.
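For instance, a minimal sketch of that approach, reusing the payload fields from the question:
const payload = track.mock.calls[0][0] // first call, first argument
expect(payload).toMatchObject({
  category: 'PublicationPage',
  action: 'PublicationPage',
  name: 'n/a',
  label: '7'
})
// assert the dynamic field separately
expect(typeof payload.intervalInMilliseconds).toBe('number')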
To reiterate the comment by cl0udw4lk3r, as I found this the most useful in my scenario:
If you have a method that accepts multiple parameters (not an object) and you only want to match some of those parameters, you can use the expect object.
Example
The method I want to test:
client.setex(key, ttl, JSON.stringify(obj));
I want to ensure the correct values are passed in for the key and ttl, but I'm not concerned with what object is passed in. So I set up a spy:
const setexSpy = jest.spyOn(mockClient, "setex");
and I can then expect this scenario thus:
expect(setexSpy).toHaveBeenCalledWith('test', 99, expect.anything());
You can also make the calls more strongly typed using expect.any (e.g. expect.any(Number)), etc.
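For example, a variant of the assertion above that pins only the argument types (the exact matchers chosen here are illustrative):
expect(setexSpy).toHaveBeenCalledWith(expect.any(String), expect.any(Number), expect.anything());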
Of course I'm biased, but I think this is the best and cleanest way. You can use the spread operator ... to expand the object you are checking, then overwrite (or add) one or more values.
Here is an example showing how to overwrite the intervalInMilliseconds expected value with any Number:
const track = jest.fn()
const expected = new PayloadTiming({
  category: 'PublicationPage',
  action: 'PublicationPage',
  name: 'n/a',
  label: '7',
  intervalInMilliseconds: 1000 // or around
})
expect(track).toHaveBeenCalledWith({
  ...expected,
  intervalInMilliseconds: expect.any(Number)
})
And another example showing how to overwrite two values:
expect(track).toHaveBeenCalledWith({
  ...expected,
  intervalInMilliseconds: expect.any(Number),
  category: expect.any(String)
})
Related
I have a document that could be written to by many different concurrent requests. The same section of the document isn't altered, but it could see concurrent writes (from a Node.js app).
example:
{
  name: "testing",
  results: {
    a: { ... },
    b: { ... },
  }
}
I could update the document with "c", and so on.
If I don't await each transaction in turn (in a test, for example), I get partial writes and the error "transaction was aborted due to detection of concurrent modification". What's the best way to go about this? I feel like dealing with issues like this is one of Fauna's main selling points, but I don't have enough knowledge to work my way around it.
Anyone have any queue strategies/ideas/suggestions?
index:
CreateIndex({
  "name": "byName",
  "unique": true,
  "source": Collection("Testing"),
  "serialized": true,
  "terms": [
    { "field": ["data", "name"] }
  ]
})
A JS AWS Lambda function is what is doing the writing.
Currently the unit of transaction in Fauna is the document. So in this case I'd recommend something like the following:
CreateCollection({ name: "result" })
CreateCollection({ name: "sub-result" })
CreateIndex({
  name: "result-agg",
  source: Collection("sub-result"),
  terms: [{ field: ["data", "parent"] }]
})
Assuming parent contains the ref of the main result, then, given $ref as a result ref:
Let(
  {
    subs: Select("data", Map(Paginate(Match(Index("result-agg"), $ref)), Lambda("x", Get(Var("x"))))),
    main: Select("data", Get($ref))
  },
  Merge(Var("main"), { results: Var("subs") })
)
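On the write side, the idea is that each concurrent request creates its own sub-result document instead of updating the shared result document, so writes never contend. A minimal sketch with the Fauna JS driver (client, resultRef, and the payload shape are illustrative assumptions):
const faunadb = require('faunadb')
const q = faunadb.query

// Each writer appends an independent document; no two writers touch the
// same document, so no "concurrent modification" aborts.
client.query(
  q.Create(q.Collection('sub-result'), {
    data: { parent: resultRef, value: { /* e.g. the "c" results */ } }
  })
)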
I'm having a hard time understanding why I keep getting 0 results back from a query I am trying to perform. Basically, I am trying to return only results within a date range. A given table has a createdAt field, which is a DateTime scalar that gets filled in automatically (by Prisma or GraphQL, I'm not sure which). So on any table, createdAt is a DateTime string representing when the row was created.
Here is my schema for this given table:
type Audit {
  id: ID! @unique
  user: User!
  code: AuditCode!
  createdAt: DateTime!
  updatedAt: DateTime!
  message: String
}
I queried this table and got back some results, I'll share them here:
"getAuditLogsForUser": [
{
"id": "cjrgleyvtorqi0b67jnhod8ee",
"code": {
"action": "login"
},
"createdAt": "2019-01-28T17:14:30.047Z"
},
{
"id": "cjrgn99m9osjz0b67568u9415",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T18:06:03.254Z"
},
{
"id": "cjrgnhoddosnv0b67kqefm0sb",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T18:12:35.631Z"
},
{
"id": "cjrgnn6ufosqo0b67r2tlo1e2",
"code": {
"action": "login"
},
"createdAt": "2019-01-28T18:16:52.850Z"
},
{
"id": "cjrgq8wwdotwy0b67ydi6bg01",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T19:29:45.616Z"
},
{
"id": "cjrgqaoreoty50b67ksd04s2h",
"code": {
"action": "adminLogin"
},
"createdAt": "2019-01-28T19:31:08.382Z"
}]
Here is my getAuditLogsForUser schema definition
getAuditLogsForUser(userId: String!, before: DateTime, after: DateTime): [Audit!]!
So as a test, I want to get all the results between the first and the last:
2019-01-28T19:31:08.382Z is the last,
2019-01-28T17:14:30.047Z is the first.
Here is my code that builds the where filter for the query:
if (args.after && args.before) {
  where['createdAt_lte'] = args.after;
  where['createdAt_gte'] = args.before;
}
console.log(where)
return await context.db.query.audits({ where }, info);
In the playground I execute this statement:
getAuditLogsForUser(before: "2019-01-28T19:31:08.382Z" after: "2019-01-28T17:14:30.047Z") { id code { action } createdAt }
So I want anything with createdAt_lte (less than or equal) set to 2019-01-28T17:14:30.047Z and createdAt_gte (greater than or equal) set to 2019-01-28T19:31:08.382Z.
However, I get literally no results back, even though we KNOW there are results.
I tried to look up documentation on the DateTime scalar on the GraphQL website, but I literally couldn't find anything on it. I do see it in my generated Prisma schema, where it's just defined as a scalar, with nothing else special about it. I don't think I'm defining it anywhere else either. I am using graphql-yoga, if that makes any difference.
(generated prisma file)
scalar DateTime
I'm wondering if it's truly even handling this as a real datetime? It must be, though, because it gets generated as a DateTime ISO string in UTC.
I'm just having a hard time grasping what my issue could possibly be at this moment; maybe I need to define it in some other way? Any help is appreciated.
Sorry, I misread your example in my first reply. This is what you tried in the playground, correct?
getAuditLogsForUser(
  before: "2019-01-28T19:31:08.382Z",
  after: "2019-01-28T17:14:30.047Z"
) {
  id
  code { action }
  createdAt
}
This will not work, since before and after do not refer to time here; they are cursors used for pagination and expect an id. Since ids are also strings, this query does not throw an error, but it will not find anything. Here is how pagination is used: https://www.prisma.io/docs/prisma-graphql-api/reference/queries-qwe1/#pagination
What I think you want to do is use a filter in the query. For this you can use the where argument. The query would look like this:
getAuditLogsForUser(
  where: {
    AND: [
      { createdAt_lte: "2019-01-28T19:31:08.382Z" },
      { createdAt_gte: "2019-01-28T17:14:30.047Z" }
    ]
  }
) {
  id
  code { action }
  createdAt
}
Here are the docs for filtering: https://www.prisma.io/docs/prisma-graphql-api/reference/queries-qwe1/#filtering
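On the resolver side, the equivalent fix is to build the where filter with the range operators, using argument names that do not collide with Prisma's before/after cursors (the names from/to here are illustrative, reusing the question's context.db.query.audits):
if (args.from && args.to) {
  where['createdAt_gte'] = args.from; // range start
  where['createdAt_lte'] = args.to;   // range end
}
return context.db.query.audits({ where }, info);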
OK, so I figured out it had to do with the fact that I used "after" and "before" as argument names. I have no clue why this completely screws everything up, but it just won't return ANY results if you use these as arguments. Very strange; they must be shadowing some other variables somehow, probably a bug on GraphQL's end.
As soon as I tried a new variable name, voilà, it works.
This is also possible:
const fileData = await prismaClient.fileCuratedData.findFirst({
  where: {
    fileId: fileId,
    createdAt: {
      gte: fromdate,
    },
  },
});
I have a query that is generated in my Node backend - if I log it out and run it in the Mongo shell then all is fine. However, if I use Mongoose to do Model.find(query), some strange property re-ordering takes place and the query breaks.
The query in question is:
{
  "attributes": {
    "$all": [
      {
        "attribute": "an id",
        "value": "a value",
        "deletedOn": null
      },
      {
        "attribute": "an id again",
        "value": "a value",
        "deletedOn": null
      }
    ]
  }
}
However, the output from mongoose debug is:
users.find({
  attributes: {
    '$all': [
      {
        deletedOn: null,
        attribute: 'an id',
        value: 'a value'
      },
      {
        deletedOn: null,
        attribute: 'an id again',
        value: 'a value'
      }
    ]
  }
}, { fields: {} })
The only change is the shifting of the deletedOn field from last position to first position in the object. This means the query returns no results.
Are there any solutions to this issue?
Object properties in JavaScript are not ordered. You cannot ensure the order of properties on a JavaScript object and different implementations may order them differently. See this answer on a related question for some other info.
The essential key is that from the spec (ECMAScript) we get: "An object is a member of the type Object. It is an unordered collection of properties each of which contains a primitive value, object, or function. A function stored in a property of an object is called a method."
There is no "solution", because this is expected behavior. So the real question is, why does the order matter to you? What are you trying to do?
Adding on to the previous answer: if order is important to you, you should use arrays instead of objects.
for example:
"$all": [
[
{"attribute": "an id"},
{"value": "a value"},
{"deletedOn": null},
],
...etc.
]
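Alternatively, if the goal is just to match those subdocuments regardless of key order, MongoDB's $elemMatch compares fields individually rather than doing an order-sensitive exact match on the whole subdocument. A sketch against the query from the question:
users.find({
  attributes: {
    $all: [
      // Each $elemMatch matches an array element field-by-field,
      // so property order no longer matters.
      { $elemMatch: { attribute: 'an id', value: 'a value', deletedOn: null } },
      { $elemMatch: { attribute: 'an id again', value: 'a value', deletedOn: null } }
    ]
  }
})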
Bool seems to work as expected, as does the json type (Postgres), but all of my id columns populate as strings (breaking the front-end code).
Is there a way to (a) fix it, or (b) tell Bookshelf that the field is an integer?
Update
By request, here are some code snippets. I'm just testing the waters with Node/Bookshelf, so this isn't complicated code; it's mostly right out of the getting-started guide. The database is an existing one we've been using for about two years, and the id columns are definitely int for all tables.
One good example: Calendars and Appointments.
var Appointment = bs.Model.extend({
  tableName: 'ec__appointments',
});

var Calendar = bs.Model.extend({
  tableName: 'ec__calendars',
  appointments: function() {
    return this.hasMany(Appointment, 'calendar_id');
  },
});
For this one, the calendar ids come down as int, but when I fetch({ withRelated: ['appointments'] }), the appointment.id is a string:
{
  "calendars": [
    {
      "id": 2,
      "name": "Default Calendar",
      "created_at": "2015-03-06T09:35:58.000Z",
      "updated_at": "2016-03-23T03:28:07.000Z",
      "appointments": [
        {
          "id": "107",
          "calendar_id": "2",
          "name": "Test",
          "starts_at": null,
          "ends_at": null,
          "created_at": "2015-05-29T23:13:20.000Z",
          "updated_at": "2015-05-29T23:13:20.000Z",
        },
You can fix this problem with the following code:
var pg = require('pg');
// 20 is the Postgres OID for BIGINT (int8), which node-postgres returns as a string by default
pg.types.setTypeParser(20, 'text', parseInt);
More details here: https://github.com/tgriesser/knex/issues/387
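Worth noting: setTypeParser registers the parser globally for the pg driver. A variant of the same fix that pins the radix explicitly (a sketch, behaviorally equivalent for plain integer columns):
var pg = require('pg');

// Parse int8 (OID 20) values with an explicit base-10 radix
// instead of relying on parseInt's default.
pg.types.setTypeParser(20, function (val) {
  return parseInt(val, 10);
});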
I'm trying to understand whether it would actually be more efficient to read an entire document from Azure DocumentDB than to read a single property that may contain multiple objects.
Let's use this basketball team object as an example:
{
  id: 123,
  name: "Los Angeles Lakers",
  coach: "Byron Scott",
  players: [
    { id: 24, name: "Kobe Bryant" },
    { id: 3, name: "Anthony Brown" },
    { id: 4, name: "Ryan Kelly" }
  ]
}
If I want to get only a list of players, is it more efficient/faster to read the entire team document and extract the players from it, OR is it better to send a SQL statement and read only the players from the document?
Returning only the players will be more efficient on the network, as you're returning less data. You should also be able to look at the Request Units burned for your query.
For example, I put your document into one of my collections and ran two queries in the portal (if you do the same and look at the bottom of the portal, you'll see the resulting Request Unit cost). I slightly modified your document with a unique ID and quotes around everything, so I could load it via the portal:
{
  "id": "basketball123",
  "name": "Los Angeles Lakers",
  "coach": "Byron Scott",
  "players": [
    { "id": 24, "name": "Kobe Bryant" },
    { "id": 3, "name": "Anthony Brown" },
    { "id": 4, "name": "Ryan Kelly" }
  ]
}
I first selected just player data:
SELECT c.players FROM c where c.id="basketball123"
with an RU cost of 2.2.
I then asked for the entire document:
SELECT * FROM c where c.id="basketball123"
with an RU cost of 2.24.
Note: Your document size is very small, so there's really not much difference here. But at least you can see that returning a subset costs less than returning the entire document.
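If you'd rather read the charge from code than from the portal, the newer JavaScript SDK exposes it on the query response. A minimal sketch, assuming an @azure/cosmos container already pointed at the collection above:
const querySpec = { query: 'SELECT c.players FROM c WHERE c.id = "basketball123"' };

// fetchAll() returns the matching documents plus the RU charge for the query
const { resources, requestCharge } = await container.items.query(querySpec).fetchAll();
console.log(resources[0].players, 'cost:', requestCharge, 'RUs');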