Getstream.io chat message search syntax

I am trying to implement a message search with getstream.io's chat product.
The docs have this example:
const filters = { members: { $in: ['john'] } };
const search = await client.search(
filters,
'supercalifragilisticexpialidocious',
{ limit: 2, offset: 0 },
);
When I implement this, I get back messages by user john that exactly match supercalifragilisticexpialidocious, but based on testing in my app, I do not get John's messages that merely contain the string, for example: "supercalifragilisticexpialidocious is a fun thing to say."
Is message search only set up for exact matches? Or is there a different syntax for that?

Search works fine. This question is moot.

Related

Search string value inside an array of objects inside an object of a jsonb column - TypeORM and Nest.js

The problem I am facing is as follows:
Search value: 'cooking'
JSON object:
data: {
skills: {
items: [ { name: 'cooking' }, ... ]
}
}
Expected result: Should find all the "skill items" that contain 'cooking' inside their name, using TypeORM and Nest.js.
The current code does not support search on the backend, and I should implement this. I want to use TypeORM features, rather than handling it with JavaScript.
Current code: (returns data based on the userId)
const allItems = this.dataRepository.find({ where: [{ user: { id: userId } }] })
I investigated the PostgreSQL documentation on its JSON functions, and even though I understand how to write a raw SQL query, I am struggling to convert it to the TypeORM equivalent.
Note: I researched many StackOverflow issues before creating this question, but do inform me if I missed the right one. I will be glad to investigate.
Can you help me figure out the way to query this with TypeORM?
UPDATE
Let's consider the simple raw query:
SELECT *
FROM table1 t
WHERE t.data->'skills' @> '{"items":[{ "name": "cooking"}]}';
This query will return rows where any item within the items array exactly matches the name - in this case, "cooking".
That's totally fine, and it can be executed as a raw request, but it is certainly not easy to maintain in the future, nor does it support pattern matching and wildcards (I couldn't find a solution for that; if you know how to do it, please share!). Still, this solution is good enough when you only have to work with exact matches. I'll keep this question updated with new findings.
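For the wildcard case, one option is to keep the jsonb predicate raw but wrap it in TypeORM's query builder, unnesting the array with PostgreSQL's jsonb_array_elements so ILIKE can match each item's name. A sketch, assuming the repository and data column shown above (the alias t, the parameter pat, and the searchValue variable are illustrative):

```javascript
// Find rows where any skills item name pattern-matches the search term.
const items = await this.dataRepository
  .createQueryBuilder('t')
  .where(
    `EXISTS (
       SELECT 1
       FROM jsonb_array_elements(t.data->'skills'->'items') AS el
       WHERE el->>'name' ILIKE :pat
     )`,
    { pat: `%${searchValue}%` }, // searchValue, e.g. 'cooking'
  )
  .getMany();
```

The search term is passed as a bound parameter, so it is not interpolated into the SQL string itself.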
Use Like in the where clause:
servicePoint = await this.servicePointAddressRepository.find({
  where: [
    { ...isActive, name: Like("%" + key + "%"), serviceExecutive: { id: userId } },
    { ...isActive, servicePointId: Like("%" + key + "%") },
    { ...isActive, branchCode: Like("%" + key + "%") },
  ],
  skip: (page - 1) * limit,
  take: limit,
  order: { updatedAt: "DESC" },
  relations: ["serviceExecutive", "address"],
});
This may help you! I'm matching with key here.

Loopback 4: Filter option

Loopback 4 allows you to use an automatic/default filter that is really helpful. My problem is that I want to use it in a customized way and I am not able to.
Usual:
return this.customerRepository.findById(idCustomer, filter);
My Case:
I donĀ“t want to attack the "id" of the model, I want to attack another field. I have tried serveral things, but as a resume an example:
return this.customerRepository.findOne({where: { idUser: currentUserProfile.id }}, filter));
If I do that, the filter stop working. Any idea of how to mix a field in the model different than the id and the filter of loopback 4?
Thanks in advance
Best regards
@Jota, what do you mean the filter stopped working? You have the correct idea about the approach. To search by a specific field, just put it in the where clause as such:
this.customerRepository.findOne({ where: { <field_name>: <value_to_search> } })
e.g.
const filter = {
where: {
email: 'hello@world.com'
}
};
const result = await this.customerRepository.findOne(filter);
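To keep a caller-supplied filter working alongside your own condition (which seems to be what the question is after), one option is to merge the incoming filter's where with the fixed where before passing it on. A minimal sketch in plain JS; the and combination is LoopBack's standard where operator, and all names here are illustrative:

```javascript
// Merge a caller-supplied LoopBack filter with a mandatory where condition.
// If the caller already has a where clause, combine both with `and`.
function mergeFilter(filter, extraWhere) {
  const where = filter && filter.where
    ? { and: [filter.where, extraWhere] }
    : extraWhere;
  return { ...filter, where };
}

// e.g. mergeFilter({ limit: 5, where: { active: true } }, { idUser: 42 })
// → { limit: 5, where: { and: [{ active: true }, { idUser: 42 }] } }
```

The merged filter can then be passed to findOne/find as a single argument, so pagination, ordering, etc. from the original filter keep working.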

How can I only call my data source one time if I have two resolvers that rely on it with Apollo?

I have a restaurant query that returns info on my restaurant. The general information that most of my consumers want comes back from restaurant-general-info.com. There are additional fields, though, that my consumer might want to know about, served by restaurant-isopen.com, which provides whether or not the restaurant is currently open and the hours that it is open.
I have written two property-specific resolvers to handle isOpen and hours, as shown below:
type Query {
restaurant(name: String): Restaurant
}
type Restaurant {
name: String!,
address: String!,
ownerName: String!,
isOpen: Boolean!,
hours: String!
}
Query: {
  restaurant: async (parent, { name }) => {
    const response = await axios.get(`https://restaurant-general-info.com/${name}`);
    return {
      name,
      address: response.address,
      ownerName: response.owner
    };
  }
}
Restaurant: {
  isOpen: async (parent) => {
    const response = await axios.get(`https://restaurant-isopen.com/${parent.name}`);
    return response.openNow;
  },
  hours: async (parent) => {
    const response = await axios.get(`https://restaurant-isopen.com/${parent.name}`);
    return response.hoursOfOperation;
  }
}
The problem is that isOpen and hours share the same data source. So I don't want to make the same call twice. I know I could make a property like "open-info" that contains two properties, isOpen and hours, but I don't want the consumers of my graph to need to know / think about how that info is separated differently.
Is there anyway I can have a resolver that could handle multiple fields?
ex.
isOpen && hours: async (parent) => {
  const response = await axios.get(`https://restaurant-isopen.com/${parent.name}`);
  return {
    isOpen: response.openNow,
    hours: response.hoursOfOperation
  };
},
or is there some smarter way of handling this?
Note: The APIs are not real
This is a classic situation where using DataLoader is going to help you out a lot.
Here is the JS library, along with some explanations: https://github.com/graphql/dataloader
In short, implementing the DataLoader pattern allows you to batch requests, helping you to avoid the classic N+1 problem in GraphQL and mitigate overfetching, whether that involves your database, querying other services/APIs (as you are here), etc.
Setting up DataLoader and batching the keys you're requesting (in this case, openNow and hoursOfOperation) will ensure that the axios GET request will only fire once.
Here is a great Medium article to help you visualize how this works behind the scenes: https://medium.com/@__xuorig__/the-graphql-dataloader-pattern-visualized-3064a00f319f
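For this particular two-field case there is also a lighter-weight variant of the same idea that needs no library: cache the in-flight promise per restaurant name, so both field resolvers await the same single request. A sketch, where fetchFn stands in for the axios call and all names are illustrative:

```javascript
// Returns a loader that fires fetchFn at most once per key, caching the
// *promise* (not the result) so concurrent resolvers share one request.
function makeOpenInfoLoader(fetchFn) {
  const cache = new Map();
  return function load(name) {
    if (!cache.has(name)) {
      cache.set(name, fetchFn(name)); // store the pending promise immediately
    }
    return cache.get(name);
  };
}

// In the resolvers (loader created per request, e.g. in the Apollo context):
//   isOpen: async (parent, args, context) =>
//     (await context.loadOpenInfo(parent.name)).openNow,
//   hours:  async (parent, args, context) =>
//     (await context.loadOpenInfo(parent.name)).hoursOfOperation,
```

Creating the loader per request (in the context function) keeps the cache from serving stale data across requests; DataLoader gives you this same per-request caching plus batching of keys.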

Fuse.js: Exact name match

I am trying to implement an exact name match on a database.
Is there a way to get only "Smith", and not "Smithee", "Smithers", "Smithe", etc.? Setting the distance and threshold to 0 does not do it. I can of course go through the results once they have appeared and filter out the unwanted values, but it would be more efficient to do it in one pass.
(Hopefully you're on the latest version of Fuse.js)
If your data looks something like this:
const list = [{ name: 'Smith' } /*, etc...*/]
You could use extended search:
const fuse = new Fuse(list, {
keys: ['name'],
useExtendedSearch: true
})
// Search for items that exactly match "smith"
fuse.search('=smith')

Creating and pushing to an array with MongoDB

I'm trying to make a messaging system that writes each message to a mongo entry. I'd like the message entry to reflect the user that sends the message, and the actual message content. This is the message schema:
const MessageSchema = new Schema({
id: {
type: String,
required: true
},
messages: {
type: Array,
required: true
},
date: {
type: Date,
default: Date.now
}
});
And this is where I either create a new entry, or append to an existing one:
Message.findOne({ id: chatId }).then(message => {
if(message){
Message.update.push({ messages: { 'name': user.name, 'message': user.message } })
} else {
const newMessage = new Message(
{ id: chatId },
{ push: { messages: { 'name': user.name, 'message': user.message } } }
)
newMessage
.save()
.catch(err => console.log(err))
}
})
I'd like the end result to look something like this:
id: '12345'
messages: [
{name: 'David', message: 'message from David'},
{name: 'Jason', message: 'message from Jason'},
etc.
]
Is something like this possible, and if so, any suggestions on how to get this to work?
This question contains lots of topics (in my mind, at least). Let me try to break it down to its core components:
Design
As David noted (first comment), there is a design problem here - an ever-growing array as a subdocument is not ideal (please refer to this blog post for more details).
On the other hand, if we imagine what a separate collection of messages would look like, it would be something like this:
_id: ObjectId('...') // how do I identify the message
channel_id: 'cn247f9' // the message belong to a private chat or a group
user_id: 1234 // which user posted this message
message: 'hello or something' // the message itself
This is also not that great, because we are repeating the channel and user ids as a function of time. This is why the bucket pattern is used.
So... what is the "best" approach here?
Concept
The most relevant question right now is: "which features and loads is this chat supposed to support?" I mean, many chats only support displaying messages, without any further complexity (like searching inside a message). Keeping that in mind, there is a chance that we would store information in our database that is practically irrelevant.
This is (almost) like storing binary data (such as an image) inside our db. We can do it, but with no actual good reason. So, if we are not going to support full-text search inside our messages, there is no point in storing the messages inside our db at all.
But what if we do want to support full-text search? Well, who said that we need to give this task to our database? We can easily download messages (using pagination) and run the search on the client side itself (while the keyword is not found, download the previous page and search it), taking the load off our database!
So it seems that messages are not ideal for database storage in terms of size, functionality and load (you may consider this conclusion a shocking one).
ReDesign
Use a hybrid approach where messages are stored in a separate collection with pagination (the bucket pattern supports this, as described here)
Store messages outside your database (since you are using Node.js you may consider chunk-store), keeping only a reference to them in the database itself
Set your page size to suit your application's needs, and add calculated fields (for instance: the number of messages currently in the page) to ease database load as much as possible
Schema
channels:
_id: ObjectId
pageIndex: Int32
isLastPage: Boolean
// The number of items here should not exceed page size
// when it does - a new document will be created with incremental pageIndex value
// suggestion: update previous page isLastPage field to ease querying of next page
messages:
[
{ userId: ObjectID, link: string, timestamp: Date }
]
messagesCount: Int32
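With that schema, appending a message becomes the classic bucket-pattern upsert: push into a page that still has room, or let the upsert create a fresh page. A sketch of how the update document could be built; field names follow the schema above, while the page-size value and the driver call in the comment are assumptions, not a full implementation (in particular, incrementing pageIndex and flipping the previous page's isLastPage would need an extra step, as the schema comment suggests):

```javascript
// Build the filter/update pair for a bucket-pattern append:
// push into a page with room left, or upsert a new page.
function buildBucketAppend(channelId, message, pageSize) {
  return {
    filter: { channelId, messagesCount: { $lt: pageSize } },
    update: {
      $push: { messages: message },
      $inc: { messagesCount: 1 },
      $setOnInsert: { isLastPage: true },
    },
    options: { upsert: true },
  };
}

// With the official MongoDB Node driver this would be used roughly as:
//   const { filter, update, options } = buildBucketAppend(
//     'cn247f9', { userId, link, timestamp: new Date() }, 100);
//   await db.collection('channels').updateOne(filter, update, options);
```

When the current page is full, the filter matches no document and the upsert creates the next bucket, so writes stay a single updateOne call.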
Final Conclusion
I know it seems like complete overkill for such a "simple" question, but Dawid Esterhuizen convinced me that designing your database to support your future loads from the very beginning is crucial, and always better than oversimplifying the db design.
The bottom line is that the question "which features and loads is this chat supposed to support?" still needs to be answered if you intend to design your db efficiently (i.e. to find the Goldilocks zone where your design suits your application's needs in the most optimal way).