How to implement a selector in easy:search for Meteor, using React instead of Blaze

I'm trying to follow the documentation and examples to add a server-side selector to a search function in my Meteor app, implemented using the Easy Search plugin. The end goal is to ensure that only documents the user has permission to see are returned by searching.
I can see a selector working in the Leaderboard example, but I can't get it to work in my code.
Versions:
Meteor 1.7.0.1
easy:search#2.2.1
easysearch:components#2.2.2
easysearch:core#2.2.2
I modified the Meteor 'todos' example app to demonstrate the problem, and my demo code is in a repo.
NOTE: to demonstrate the problem, you need to create an account in the demo app, then create a list and make it private. This adds the 'userId' field to the list.
Then you can search for the name of the list, by typing in the search box near the top of the main section; search results are written to the browser console.
The first problem is that if I copy the code from the example in the documentation, I see a server error 'searchObject is not defined':
copied from docs, causes an error: imports/api/lists/lists.js
export const MyIndex = new Index({
'collection': Lists,
'fields': ['name'],
engine: new MongoDBEngine({
selector(searchDefinition, options, aggregation) {
// retrieve the default selector
const selector = this.defaultConfiguration()
.selector(searchObject, options, aggregation) // note: the docs use `searchObject` here although the parameter above is named `searchDefinition`, hence the error
// options.search.userId contains the userId of the logged in user
selector.userId = options.search.userId
return selector
},
}),
});
It seems there is an error in the docs.
Working instead from the leaderboard example, the code below runs but intermittently returns no results. For example if I have a list called "My list", and I type the search term 's', sometimes the list is returned from the search and sometimes it is not. If I use the MiniMongo engine it all works perfectly.
index selector {"$or":[{"name":{"$regex":".*my.*","$options":"i"}}],"userId":"Wtrr5FRHhkKuAcrLZ"}
client and server: imports/api/lists/lists.js
export const MyIndex = new Index({
'collection': Lists,
'fields': ['name'],
'engine': new MongoDBEngine({
selector: function (searchObject, options, aggregation) {
let selector = this.defaultConfiguration().selector(searchObject, options, aggregation);
selector.userId = options.search.userId;
console.log('index selector', JSON.stringify(selector));
return selector;
}
}),
permission: () => {
return true;
},
});
client: imports/ui/components/lists-show.js
Template.Lists_show.events({
'keyup #search'(event) {
console.log('search for ', event.target.value);
const cursor = MyIndex.search(event.target.value);
console.log('count',cursor.count());
console.log('results', cursor.fetch());
},
});
client: imports/ui/components/lists-show.html
<input id="search" type="text" placeholder="search..." />
Edit: I think the problem is that while the Minimongo engine runs on the client, the MongoDBEngine runs on the server and there are timing issues with the results. The docs show using Tracker.autorun, but that's not a natural fit with my React / Redux app. I'll post an answer if I manage to figure something out - I can't be the only person trying to do something like this.

I got it working in my React / Redux / Meteor app. Things to note:
The cursor returned by MyIndex.search(searchTerm) is a reactive data source - you can't just read it once and use the return value. When searching on the client with MiniMongo this isn't an issue, but it matters when you use MongoDBEngine to search on the server, because the results arrive asynchronously. In React you can wrap the cursor in withTracker to pass data to the component reactively; in Blaze you would use Tracker.autorun (there's a short sketch after this list). This is shown in the docs but not explained, and it took me a while to understand what was happening.
The docs have an error in the selector example, easily corrected but it's confusing if you have other problems in your code.
With MongoDBEngine, 'permission' must be specified - it does not default to 'true'. Without it, you will see no results.
Writing out the default selector object to the console let me see how it's constructed, and then create a new selector that returns MyDocs that are either public or created by the user.
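For reference, the Blaze-style Tracker.autorun pattern mentioned in the first note looks roughly like this (a sketch only; holding the search term in a Session variable is my assumption, not part of the app's actual code):
import { Tracker } from 'meteor/tracker';
import { Session } from 'meteor/session';
import { MyIndex } from '../../modules/collection';

Tracker.autorun(() => {
  const searchTerm = Session.get('searchTerm');
  if (!searchTerm) return;
  const cursor = MyIndex.search(searchTerm);
  // this computation re-runs when the server-side MongoDBEngine results arrive,
  // so fetch() eventually returns the matching documents
  console.log('results', cursor.fetch());
});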
My code is below. In case it helps anybody else, I've shown how to search on tags also, which are objects with a name property stored in a collection Tags. Each MyDoc has a 'tags' property which is an array of tag ids. The selector first searches the Tags collection to find tags whose name matches the search term, then selects docs in MyDocs with the ids of those tags in their doc.tags array.
There may be a better way to find the search term, or to structure the Tags search, but this is what I could get working.
On server and client:
import { Index, MongoDBEngine } from 'meteor/easy:search';
export const MyDocs = new Mongo.Collection('mydocs');
export const Tags = new Mongo.Collection('tags');
export const MyIndex = new Index({
'collection': MyDocs,
'fields': ['name'],
'engine': new MongoDBEngine({
'selector': function (searchObject, options, aggregation) {
const selector = this.defaultConfiguration().selector(searchObject, options, aggregation);
console.log('default selector', selector); // this searches on name only
// find docs by tag as well as by name
const searchTerm = searchObject.name;
const matchingTags = Tags.find({ 'name': { '$regex': searchTerm } }).fetch();
const matchingTagIds = matchingTags.map((tag) => tag._id);
selector.$or.push({ 'tags': { '$in': matchingTagIds } });
const newSelector = {
'$and': [
{
'$or': [
{ 'isPublic': { '$eq': true } },
{ 'createdBy': options.search.userId },
],
},
{
'$or': selector.$or,
},
],
};
return newSelector;
},
'fields': (searchObject, options) => ({
'_id': 1,
'createdBy': 1,
'name': 1,
}),
'sort': () => ({ 'name': 1 }),
}),
'permission': () => true,
});
React component in client only code:
import React from 'react';
import { connect } from 'react-redux';
import { withTracker } from 'meteor/react-meteor-data';
import PropTypes from 'prop-types';
import store from '../modules/store';
import {
getSearchTerm,
searchStart,
} from '../modules/search'; // contains Redux actions and partial store for search
import { MyIndex } from '../../modules/collection';
function Search(props) {
// functional React component that contains the search box
...
const onChange = (value) => {
clearTimeout(global.searchTimeout);
if (value.length >= 2) {
// user has entered a search term
// of at least 2 characters
// wait until they stop typing
global.searchTimeout = setTimeout(() => {
dispatch(searchStart(value)); // Redux action which sets the searchTerm in Redux state
}, 500);
}
};
...
// the component returns html which calls onChange when the user types in the search input
// and a list which displays the search results, accessed in props.searchResults
}
const Tracker = withTracker(({ dispatch }) => {
// searchTerm is saved in Redux state.
const state = store.getState();
const searchTerm = getSearchTerm(state); // Redux function to get searchTerm out of Redux state
let results = [];
if (searchTerm) {
const cursor = MyIndex.search(searchTerm); // search is a reactive data source
results = cursor.fetch();
console.log('*** cursor count', cursor.count());
}
return {
'searchResults': results,
};
})(Search);
export default connect()(Tracker);

Related

How to test a function with a Fuse.js dependency

I want to implement a .test file for the following function:
import Fuse from 'fuse.js';
import data from '../dal/data';
async function search(searchedData) {
// In case searched name is null
if (!searchedData) {
return [];
}
// load data
const dataload = await data();
// set options to filter data
const options = {
includeMatches: true,
// Search in `name`
keys: ['name'],
};
const fuse = new Fuse(dataload, options);
const matchedList = fuse.search(searchedData)
.map((element) => element.item);
return { matchedList };
}
export default search;
How do I do this for Fuse.js? Should I implement mock data?
Fuse.js is a library that provides fuzzy search based on the dynamic configuration passed to it. There are other libraries available as well, such as fuzzy-search.
Now, the question is how you can mock a fuzzy-search library like this, as per the comment. This is a good example of how you can start building your own fuzzy search engine, but to do so you need to understand what fuzzy search is and how it works.
If you want to see how Fuse.js works with mock data, here is some code which might help you integrate it into your existing project.
Mock Data
// it could be of any type, I am using Array of objects
const mockData = [{
name : 'Jason Durham',
age : 28,
address : 'Newyork city, New York, USA'
}, {
name : 'Jameson Durham',
age : 28,
address : 'Cleveland, Ohio, USA'
},
{
name : 'Tintan Groofs',
age : 28,
address : 'Ohio city, Ohio, USA'
}]
const options = {
includeScore: true, // the output will include a score based on the weights
keys: [
{
name: 'name',
weight: 0.3 // what weightage you want this key in your search result
},
{
name: 'address',
weight: 0.7 // what weightage you want this key in your search result
}
]
}
// Create a new instance of Fuse
const fuse = new Fuse(mockData, options)
// Now search for a misspelled term, e.g. 'nwyork'
const result = fuse.search('nwyork');
console.log(result);
You can check the Fuse.js documentation to see how to make your search more advanced and configurable.
You can test the fuzzy search results using a test framework provided that you know the expected values, but the output will vary widely depending on your configuration. As of now Fuse.js doesn't provide a direct way in its documentation to check for this, but you can write a method based on the score each result gets and define a threshold value for it. This is the way we test our search results with Fuse.js: all results below the threshold are discarded.
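To answer the mocking question directly: yes, mocking the data module is a reasonable approach. Below is a minimal Jest-style sketch; Jest itself, the '../search' path and the exact matches are my assumptions (Fuse's default threshold decides what counts as a hit), so treat the assertions as illustrative:
import search from '../search'; // path to the function under test is an assumption

// replace the real data-access layer with a fixed list of records
jest.mock('../dal/data', () => ({
  __esModule: true,
  default: jest.fn(async () => [
    { name: 'Jason Durham' },
    { name: 'Jameson Durham' },
    { name: 'Tintan Groofs' },
  ]),
}));

describe('search', () => {
  it('returns an empty array when no term is given', async () => {
    expect(await search('')).toEqual([]);
  });

  it('returns fuzzy matches on name', async () => {
    const { matchedList } = await search('Jason');
    expect(matchedList.map((item) => item.name)).toContain('Jason Durham');
  });
});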

Node.js testing with Mongoose: unique gets ignored

I'm having a little trouble with an integration test for my Mongoose application. The problem is that my unique setting gets constantly ignored. The schema looks more or less like this (so no fancy stuff in there):
const RealmSchema:Schema = new mongoose.Schema({
Title : {
type : String,
required : true,
unique : true
},
SchemaVersion : {
type : String,
default : SchemaVersion,
enum: [ SchemaVersion ]
}
}, {
timestamps : {
createdAt : "Created",
updatedAt : "Updated"
}
});
It looks like basically all the rules set in the schema are being ignored. I can pass in a Number/Boolean where a String was required. The only thing that is working is that fields which have not been declared in the schema won't be saved to the db.
First probable cause:
I have the feeling that it might have to do with the way I test. I have multiple integration tests. After each one my database gets dropped (so I have the same conditions for every test and precondition the database in that test).
Is it possible that the reason is my indices being dropped with the database and not being recreated when the next test creates the database and collection again? And if this is the case, how could I make sure that after every test I get an empty database that still respects all my schema settings?
Second probable cause:
I'm using TypeScript in this project. Maybe there is something wrong in how I define the Schema and the Model. This is what I do:
1. Create the Schema (code from above)
2. Create an interface for the model (where IRealmM extends the interface for use with Mongoose)
import { SpecificAttributeSelect } from "../classes/class.specificAttribute.Select";
import { SpecificAttributeText } from "../classes/class.specificAttribute.Text";
import { Document } from "mongoose";
interface IRealm{
Title : String;
Attributes : (SpecificAttributeSelect | SpecificAttributeText)[];
}
interface IRealmM extends IRealm, Document {
}
export { IRealm, IRealmM }
3. Create the model
import { RealmSchema } from '../schemas/schema.Realm';
import { Model } from 'mongoose';
import { IRealmM } from '../interfaces/interface.realm';
// Apply Authentication Plugin and create Model
const RealmModel:Model<IRealmM> = mongoose.model('realm', RealmSchema);
// Export the Model
export { RealmModel }
The unique option is not a validator. Check out this link from the Mongoose docs.
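To make that concrete, here is a small sketch (my own illustration; it assumes a Mongoose version that has Model.init(), and the import path is made up): a duplicate Title is rejected with MongoDB's E11000 duplicate-key error rather than a Mongoose ValidationError, and only once the unique index actually exists.
import { RealmModel } from './model.realm'; // hypothetical path to the model shown below

async function demo() {
  await RealmModel.init(); // resolves once the index builds have finished
  await RealmModel.create({ Title: 'Alpha' });
  try {
    await RealmModel.create({ Title: 'Alpha' });
  } catch (err) {
    console.log(err.code); // 11000: duplicate key error from MongoDB, not a ValidationError
  }
}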
OK, I finally figured it out. The key issue is described here:
Mongoose Unique index not working!
Solstice333 states in his answer that ensureIndex is deprecated (a warning I have been getting for some time now, I thought it was still working though)
After adding .createIndexes() to the model, leaving me with the following code, it works (at least as long as I'm not testing; more on that after the code):
// Apply Authentication Plugin and create Model
const RealmModel:Model<IRealmM> = mongoose.model('realm', RealmSchema);
RealmModel.createIndexes();
Now the problem with this is that the indexes are being set when your connection is first established, but not if you drop the database in your process (which at least for me occurs after every integration test).
So in my tests the resetDatabase function will look like this to make sure all the indexes are set
const resetDatabase = done => {
if(mongoose.connection.readyState === 1){
mongoose.connection.db.dropDatabase( async () => {
await resetIndexes(mongoose.models);
done();
});
} else {
mongoose.connection.once('open', () => {
mongoose.connection.db.dropDatabase( async () => {
await resetIndexes(mongoose.models);
done();
});
});
}
};
const resetIndexes = async (Models:Object) => {
let indexesReset: any[] = [];
for(let key in Models){
indexesReset.push(Models[key].createIndexes());
}
// await the index builds, otherwise the `await resetIndexes(...)` above resolves too early
await Promise.all(indexesReset);
return true;
}
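For completeness, this is how the helper above might be wired into a test suite (assuming a Mocha-style done callback, which the resetDatabase signature suggests):
beforeEach(function (done) {
  // start every integration test with an empty database that still has its indexes
  resetDatabase(done);
});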

Cursor Pagination with Apollo/GraphQL keeps giving me error

I'm trying to implement cursor pagination and followed the examples in the doc but I keep getting an error saying 'Cannot query field "cursor" on type "Query"'.
I'm aware that the "cursor" field doesn't actually exist on the Accounts schema, but from what I'm reading in the docs you have to include it somewhere in the gql`` query. Furthermore, I'm not sure if I'm missing anything, but I'm a bit confused about how to structure my query to allow cursor pagination.
Original Query: (running this gives me no error)
const AccountsQuery = gql`
query {
accounts {
accountName
accountId
}
}
`;
New Query: (this gives "cannot find cursor field on accounts" error)
const AccountsQuery = gql`
query Accounts($cursor: String){
accounts(cursor: $cursor) {
cursor
accountName
accountId
}
}
`;
GraphQL wrapper:
export default graphql(AccountsQuery, {
props: ({ data: { loading, cursor, accounts, fetchmore } }) => ({
loading,
accounts,
loadMoreEntries() {
return fetchmore({
query: AccountsQuery,
variables: {
cursor: cursor,
},
updateQuery: (previousResult, { fetchMoreResult }) => {
const previousEntry = previousResult.entry;
const newAccounts = fetchMoreResult.accounts;
return {
cursor: fetchMoreResult.cursor,
entry: {
accounts: [...newAccounts, ...previousEntry]
},
};
},
})
}
})
})(QuickViewContainer);
Any help would be appreciated to getting cursor pagination working!
Sounds like the cursor field isn't getting implemented on the server. Your Account type needs to have that field like so:
Account {
cursor
accountName
accountId
}
For a convention on how to do cursor pagination, you should follow the standard Relay spec. You can read more about how it's implemented here in a Relay-compliant GraphQL API.
This would make your query look like this:
query {
viewer {
allAccounts {
edges {
cursor
node {
accountName
accountId
}
}
}
}
}
Each edge has a cursor that corresponds to a node, and it will be auto-populated with a globally-unique opaque cursor ID from the server.
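If your server follows that Relay shape, the client side might look roughly like the sketch below; the allAccounts, after, and pageInfo names are assumptions about your schema, not fields your current server necessarily exposes:
import gql from 'graphql-tag';

const MoreAccountsQuery = gql`
  query Accounts($cursor: String) {
    viewer {
      allAccounts(after: $cursor) {
        edges {
          cursor
          node {
            accountName
            accountId
          }
        }
        pageInfo {
          endCursor
          hasNextPage
        }
      }
    }
  }
`;

function loadMore(fetchMore, endCursor) {
  return fetchMore({
    variables: { cursor: endCursor },
    updateQuery: (previous, { fetchMoreResult }) => {
      if (!fetchMoreResult) return previous;
      return {
        viewer: {
          ...previous.viewer,
          allAccounts: {
            ...fetchMoreResult.viewer.allAccounts,
            // append the new page of edges after the ones already loaded
            edges: [
              ...previous.viewer.allAccounts.edges,
              ...fetchMoreResult.viewer.allAccounts.edges,
            ],
          },
        },
      };
    },
  });
}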
Hope this helps!

Securing access to collection from the client's side

I have a meteor app prototype that works well, but is very insecure as of now: I needed to display a list of matching users to the currently logged-in user. For starters, I decided to publish all users, limiting the fields to what I would need to filter the user list on the client side.
Meteor.publish('users', function () {
return Meteor.users.find({}, {
fields: {
'profile.picture': 1,
'profile.likes': 1,
'profile.friends': 1,
'profile.type': 1
}
});
});
Then in my router, I would do a request to only show what I wanted on the client side:
Router.map(function() {
this.route('usersList', {
path: '/users',
waitOn: function () {
return Meteor.subscribe('users');
},
data: function () {
var user = Meteor.user();
return {
users: Meteor.users.find({ $and: [
{_id: {$ne : user._id}},
{'profile.type': user.profile.interest}
]})
};
}
});
});
In the code above, I query all users who are not the current user and whose type correspond the current user's interest. I also display a certain border on the photos of users who have my user in their "profile.friends" array, using this client helper:
Template.userItem.helpers({
getClass: function(userId) {
var user = Meteor.user();
var lookedup = Meteor.users.findOne({_id: userId});
if ($.inArray(user._id, lookedup.profile.friends) !== -1)
return "yes";
return "no";
}
});
Now this all worked great, but with this setup every client can query every user and get their type, picture, list of friends and number of likes. If I were using an MVC framework, this info would only be accessible on the server side. So I decided my next iteration would be a security one: I would move my query from the router file to the publications file. That's where the trouble began...
Meteor.publish('users', function () {
var user = Meteor.users.findOne({_id: this.userId});
var interest = user.profile.interest;
// retrieve all users, with their friends for now
allUsers = Meteor.users.find({ $and: [
{'_id': {$ne: user._id}},
{'profile.type':interest}
]},
{ fields: {'profile.picture': 1, 'profile.friends': 1}}
);
return allUsers;
});
And in the router:
Router.map(function() {
this.route('usersList', {
path: '/users',
waitOn: function () {
return Meteor.subscribe('users');
},
data: function () {
var user = Meteor.user();
return {users: Meteor.users.find({_id: {$ne : user._id}})};
}
});
});
(note that I still need to exclude the current user from the router query since the current user is always fully published)
This works, but:
the user list does not get updated when I change the user's interest and then do a Router.go('usersList'). Only when I refresh the browser is my list updated according to the user's new interest. No idea why.
this solution still publishes the users' friends in order to display my matching borders. I wish to add a temporary field in my publish query, setting it to "yes" if the current user is in the other user's friends and "no" otherwise, but... no success so far. I read I could maybe use aggregate to achieve that but haven't managed to so far. Also, aggregate doesn't return a cursor, which is what a publication expects.
This problem makes me doubt the praise for Meteor being suitable for secure apps... This would be so easy to achieve in Rails or other frameworks!
EDIT: As requested, here is the code I have so far for the transition of my "matching" check to the server:
Meteor.publish('users', function () {
var user = Meteor.users.findOne({_id: this.userId});
var interest = user.profile.interest;
// retrieve all users, with their friends for now
allUsers = Meteor.users.find({ $and: [
{'_id': {$ne: user._id}},
{'profile.type':interest}
]},
{ fields: {'profile.picture': 1, 'profile.friends': 1}}
);
// ------------- ADDED ---------------
allUsers.forEach(function (lookedup) {
if (_.contains(lookedup.profile.friends, user._id))
lookedup.profile.relation = "yes";
else
lookedup.profile.relation = "no";
lookedup.profile.friends = undefined;
return lookedup;
});
// ------------- END ---------------
return allUsers;
});
Obviously this code doesn't work at all, since I cannot modify cursor values in a forEach loop. But it gives an idea of what I want to achieve: give the client a way to know whether a friend is matched or not, without giving the client access to the friend lists of all users (and also avoiding one request per displayed user to ask the server whether this specific user matches).
You can add a transform function to modify the cursor's documents on the fly; see Collection.find in the Meteor docs.
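A minimal sketch of that idea (the relation field name is my own; note that a publication sends raw documents over the wire, so the transform applies where the cursor is fetched, and to keep profile.friends off the client entirely the same reshaping would have to happen inside the publish function, e.g. with the low-level this.added API):
const currentUserId = this.userId; // inside the publish function
const matchedUsers = Meteor.users.find(
  { _id: { $ne: currentUserId }, 'profile.type': interest },
  {
    fields: { 'profile.picture': 1, 'profile.friends': 1 },
    transform(doc) {
      const friends = (doc.profile && doc.profile.friends) || [];
      doc.profile.relation = friends.indexOf(currentUserId) !== -1 ? 'yes' : 'no';
      delete doc.profile.friends; // keep the raw friends list out of the transformed doc
      return doc;
    },
  }
);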

Fetch Backbone collection with search parameters

I'd like to implement a search page using Backbone.js. The search parameters are taken from a simple form, and the server knows to parse the query parameters and return a json array of the results. My model looks like this, more or less:
App.Models.SearchResult = Backbone.Model.extend({
urlRoot: '/search'
});
App.Collections.SearchResults = Backbone.Collection.extend({
model: App.Models.SearchResult
});
var results = new App.Collections.SearchResults();
I'd like the contents of the search form to be serialized with the GET request every time I call results.fetch(). Is there a simple way to add this, or am I doing it the wrong way, and should I probably be hand-coding the request and creating the collection from the returned results:
$.getJSON('/search', { /* search params */ }, function(resp){
// resp is a list of JSON data [ { id: .., name: .. }, { id: .., name: .. }, .... ]
var results = new App.Collections.SearchResults(resp);
// update views, etc.
});
Thoughts?
Backbone.js fetch with parameters answers most of your questions, but I put some here as well.
Add the data parameter to your fetch call, example:
var search_params = {
'key1': 'value1',
'key2': 'value2',
'key3': 'value3',
...
'keyN': 'valueN',
};
results.fetch({ data: $.param(search_params) }); // call fetch on the collection instance from the question
Now your call url has added parameters which you can parse on the server side.
Attention: code simplified and not tested
I think you should split the functionality:
The Search Model
It is a proper resource on your server side. The only action allowed is CREATE.
var Search = Backbone.Model.extend({
url: "/search",
// parse runs when the server answers the save(), so the "search:ready" listener
// attached in the view below is already in place when the event fires
parse: function( response ){
this.results = new Results( response.results );
this.trigger( "search:ready", this );
return response;
}
});
The Results Collection
It is just in charge of collecting the list of Result models
var Results = Backbone.Collection.extend({
model: Result
});
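(The collection references a Result model the answer doesn't show; a minimal placeholder, purely as an assumption about its shape, would be:)
var Result = Backbone.Model.extend({}); // each Result just wraps the attributes the server returns per hit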
The Search Form
You see that this View is doing the intelligent job: it listens to the form submit, creates a new Search object and sends it to the server to be created. Being created doesn't mean the Search has to be stored in the database; that is the normal creation behavior, but it does not always need to be that way. In our case, creating a Search means searching the DB for the matching records.
var SearchView = Backbone.View.extend({
events: {
"submit form" : "createSearch"
},
createSearch: function(){
// You can use things like this
// http://stackoverflow.com/questions/1184624/convert-form-data-to-js-object-with-jquery
// to automate this process
var search = new Search({
field_1: this.$el.find( "input.field_1" ).val(),
field_2: this.$el.find( "input.field_2" ).val(),
});
// You can listen to the "search:ready" event
search.on( "search:ready", this.renderResults, this )
// this is when a POST request is sent to the server
// to the URL `/search` with all the search information packaged
search.save();
},
renderResults: function( search ){
// use search.results to render the results in your own way
}
});
I think this kind of solution is very clean, elegant, intuitive and very extensible.
Found a very simple solution - override the url() function in the collection:
App.Collections.SearchResults = Backbone.Collection.extend({
urlRoot: '/search',
url: function() {
// send the url along with the serialized query params
return this.urlRoot + "?" + $("#search-form").formSerialize();
}
});
Hopefully this doesn't horrify anyone who has a bit more Backbone / Javascript skills than myself.
It seems the current version of Backbone (or maybe jQuery) automatically stringifies the data value, so there is no need to call $.param anymore.
The following lines produce the same result:
collection.fetch({data: {filter:'abc', page:1}});
collection.fetch({data: $.param({filter:'abc', page:1})});
The querystring will be filter=abc&page=1.
EDIT: This should have been a comment, rather than answer.
