Adding functions to a Cloudant search index

I have a JSON document in Cloudant:
{
  "_id": "3-f812228f45b5f4e4962505561953ew245",
  "_rev": "3-f812228f45b5f4e496250556195372b2",
  "wiki_page": "http://en.wikipedia.org/wiki/African_lion",
  "name": "african lion",
  "class": "mammal",
  "diet": "herbivore"
}
I want to build a search index that finds this document when I enter queries such as "african lion" or "lion african", etc.
I wrote a function that returns all word-order permutations of doc.name for indexing (the function works correctly and has also been checked in a plain JS environment). However, it doesn't work in Cloudant: the output returns null when I enter a query.
This is the code I put in the search index:
function(doc) {
  var list = [];
  function permute(ss, used, res, level, list) {
    if (level == ss.length && res !== "") {
      list.push(res);
      return;
    }
    for (var i = 0; i < ss.length; i++) {
      console.log("loops");
      if (used[i] === true) {
        continue;
      }
      if (level >= 0) {
        if (res != "" && list.indexOf(res) < 0) {
          list.push(res.trim());
        }
        used[i] = true;
        permute(ss, used, res + " " + ss[i], level + 1, list);
        used[i] = false;
      }
    }
  }
  function permuteword(s) {
    var ss = s.split(" ");
    var used = [];
    var res = "";
    list = [];
    permute(ss, used, res, 0, list);
    console.log(list);
  }
  var contentIndex = [];
  contentIndex = permuteword("african lion");
  for (var i = 0; i < contentIndex.length; i++) {
    index("default", contentIndex[i]);
  }
}
How can I solve this problem?

Update
Your update looks good, but there is still one issue: you are not returning the list from the permuteword function. I believe you also need to remove calls to console.log. Once I did these two things I was able to get it to work with Cloudant using the following search queries (I also changed your hard-coded call with "african lion" back to doc.name):
default:"african"
default:"african lion"
default:"lion"
default:"lion african"
Here is the final script:
function(doc) {
  var list = [];
  function permute(ss, used, res, level, list) {
    if (level == ss.length && res !== "") {
      list.push(res);
      return;
    }
    for (var i = 0; i < ss.length; i++) {
      if (used[i] === true) {
        continue;
      }
      if (level >= 0) {
        if (res != "" && list.indexOf(res) < 0) {
          list.push(res.trim());
        }
        used[i] = true;
        permute(ss, used, res + " " + ss[i], level + 1, list);
        used[i] = false;
      }
    }
  }
  function permuteword(s) {
    var ss = s.split(" ");
    var used = [];
    var res = "";
    list = [];
    permute(ss, used, res, 0, list);
    return list;
  }
  if (doc.name) {
    var contentIndex = permuteword(doc.name);
    for (var i = 0; i < contentIndex.length; i++) {
      index("default", contentIndex[i]);
    }
  }
}
Updated JSFiddle:
https://jsfiddle.net/14e7L3gw/1/
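Once the index is saved in a design document, you can also query it over HTTP. Here is a hedged sketch using Node's built-in https module (the account, database, design-document and index names are assumptions, not values from the question):
var https = require('https');

var query = encodeURIComponent('default:"lion african"');
// hypothetical account/database/design-document/index names
var path = '/animaldb/_design/searchDdoc/_search/animals?q=' + query;

https.get({
  host: 'myaccount.cloudant.com',
  path: path,
  auth: 'myaccount:mypassword'
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // each row of the response contains the id of a matching document
    console.log(JSON.parse(body).rows);
  });
});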
Original Answer
I believe there are issues with your JavaScript. The permuteword function is not returning any results. See this JSFiddle:
https://jsfiddle.net/14e7L3gw/
Note: I added some logging and commented out the call to index. Run with your browser debugger to see the output.
Here is what is happening:
The first call to permuteword calls permute(["african","lion"], [], "", 0, []);
The first if in permute fails because level (0) != ss.length (2) and res == "".
Then the function loops through ss, but never does anything because level = 0.
Ultimately permuteword returns an empty array, so nothing gets indexed.

Related

Sequelize.js afterFind argument explanation

I'm trying to implement an afterFind hook on a model and can't quite figure out what the semantics are. I've pulled the following together from trial and error, using the docs and other StackOverflow questions as guidelines.
My goal is to massage the result (by applying get({plain: true})) and pass the transformed value on as the result of the promise. For instance, I'd expect/want this to return an empty result set:
hooks: {
  afterFind: function(result, options, fn) {
    result = [];
  }
},
but it just causes the request to hang. The documentation says the arguments are passed by reference and doesn't mention a return value. Other samples imply something like:
hooks: {
  afterFind: function(result, options, fn) {
    result = [];
    return fn(null, result);
  }
},
which doesn't hang, but doesn't change my result set either. Not to mention, I have no idea what the magical fn argument is or does.
I had a similar problem. It happens because when you do a findAll, the argument passed to the hook is an array of values instead of a single object. I did this as a workaround:
hooks: {
  afterFind: function(result) {
    if (result.constructor === Array) {
      var arrayLength = result.length;
      for (var i = 0; i < arrayLength; i++) {
        result[i].logo = "works";
      }
    } else {
      result.logo = "works";
    }
    return result;
  }
}
In the above code I change the logo attribute of the record(s) after finding it.
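For context, here is a minimal sketch of where that hooks object lives when defining a model (the connection string and the model/attribute names are hypothetical):
var Sequelize = require('sequelize');
var sequelize = new Sequelize('postgres://user:pass@localhost:5432/mydb'); // hypothetical connection string

var Company = sequelize.define('company', {
  name: Sequelize.STRING,
  logo: Sequelize.STRING
}, {
  hooks: {
    afterFind: function(result) {
      // same array-vs-single check as above
      if (result && result.constructor === Array) {
        for (var i = 0; i < result.length; i++) {
          result[i].logo = "works";
        }
      } else if (result) {
        result.logo = "works";
      }
      return result;
    }
  }
});
Both Company.findAll() and Company.findOne() should then pass their results through this hook.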

Get records page-wise in NetSuite using a RESTlet

I want to get all the records of a particular record type, but I get only 1000.
This is the code I used.
function getRecords() {
  return nlapiSearchRecord('contact', null, null, null);
}
I need two pieces of code:
1) Get all the records at once.
2) Get the records page by page, passing a pageIndex argument to getRecords [1st => 0-1000, 2nd => 1000-2000, ...].
function getRecords(pageIndex) {
.........
}
Thanks in advance
You can't get all the records at once. However, you can sort the results by internal id, remember the last internal id from the first search result, and use it as an additional filter in your next search.
var totalResults = [];
var lastId;
// copyAndPushToArray is assumed to be a helper that appends each entry of res onto totalResults
var res = nlapiSearchRecord('contact', null, null, new nlobjSearchColumn('internalid').setSort()) || [];
lastId = res[res.length - 1].getId();
copyAndPushToArray(totalResults, res);
while (res.length === 1000) {
  res = nlapiSearchRecord('contact', null, ['internalidnumber', 'greaterthan', lastId], new nlobjSearchColumn('internalid').setSort()) || [];
  if (res.length === 0) {
    break;
  }
  copyAndPushToArray(totalResults, res);
  lastId = res[res.length - 1].getId();
}
Beware: if the number of records is high you may exceed governance limits in terms of time and usage points.
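As a hedged sketch of one way to guard against that, SuiteScript 1.0 exposes the remaining usage units through nlapiGetContext().getRemainingUsage(); the 200-unit threshold below is an arbitrary safety margin, not a NetSuite rule:
var context = nlapiGetContext();
while (res.length === 1000) {
  if (context.getRemainingUsage() < 200) {
    break; // stop paging before the script runs out of governance units
  }
  // ... run the next search and accumulate results as above ...
}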
If you remember the lastId you can write logic in the RESTlet to take that id as a param and use it as an additional filter to return the next page, as sketched below.
You can also write logic to get the nth page of results, but you might have to run the search uselessly n-1 times.
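A hedged sketch of that lastId-based approach as a RESTlet GET handler (the parameter name and the returned shape are hypothetical):
function getRecords(dataIn) {
  var filters = null;
  if (dataIn && dataIn.lastId) {
    // only return records that come after the id the client saw last
    filters = ['internalidnumber', 'greaterthan', dataIn.lastId];
  }
  var res = nlapiSearchRecord('contact', null, filters,
      new nlobjSearchColumn('internalid').setSort()) || [];
  var page = [];
  for (var i = 0; i < res.length; i++) {
    page.push({ id: res[i].getId() });
  }
  // the client passes lastId back on the next call to fetch the following page
  return { records: page, lastId: page.length ? page[page.length - 1].id : null };
}
The client keeps calling the RESTlet with the lastId it received until fewer than 1000 records come back.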
Also, I would suggest using nlapiCreateSearch().runSearch(), as it can return up to 4000 records.
Here is another way to get more than 1000 results on a search:
function getItems() {
  var columns = ['internalid', 'itemid', 'salesdescription', 'baseprice', 'lastpurchaseprice', 'upccode', 'quantityonhand', 'vendorcode'];
  var searchcolumns = [];
  for (var col in columns) {
    searchcolumns.push(new nlobjSearchColumn(columns[col]));
  }
  var search = nlapiCreateSearch('item', null, searchcolumns);
  var results = search.runSearch();
  var items = [], slice = [], i = 0;
  do {
    slice = results.getResults(i, i + 1000);
    for (var itm in slice) {
      var item = {};
      for (var col in columns) { item[columns[col]] = slice[itm].getValue(columns[col]); } // convert nlobjSearchResult into a simple js object
      items.push(item);
      i++;
    }
  } while (slice.length >= 1000);
  return items;
}

Mocking the populate method using Mockgoose for Mongoose (MongoDB library for Node.js) returns null

I'm having trouble debugging an issue Mockgoose has with populating a property when fields are specified. Yads' Mockgoose fork (http://github.com/yads/Mockgoose) fixed the bug that kept the populate option from working, but if you specify fields it returns null for the populated property. I tried looking through the source code and stepping through with the debugger, but I'm not sure where to look. I can see in the debugger that the populate option triggers a call to get the child element, and that the call returns the right child result with the correct fields, but when the parent element finally comes back, its property for the child element is set to null.
The query:
Posts.findById(foo).populate('createdBy', {fname:1, lname:1});
This incorrectly returns a post with post.createdBy = null. Omitting the fields parameter of fname, lname somehow makes it work again, with post.createdBy returning the full object.
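Based on that observation, a stop-gap sketch is to populate without a field selection and pick out the needed fields afterwards (handleError is a placeholder):
Posts.findById(foo).populate('createdBy').exec(function (err, post) {
  if (err) { return handleError(err); } // handleError is a placeholder
  var createdBy = post && post.createdBy
    ? { fname: post.createdBy.fname, lname: post.createdBy.lname }
    : null;
  // use post and createdBy here
});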
Following are some excerpts from the code, though I'm not sure these are the right places to look.
collections.js
this.find = function (conditions, options, callback) {
  var results;
  var models = db[name];
  if (!_.isEmpty(conditions)) {
    results = utils.findModelQuery(models, conditions);
  } else {
    results = utils.objectToArray(utils.cloneItems(models));
  }
  results = filter.applyOptions(options, results);
  if (results.name === 'MongoError') {
    callback(results);
  } else {
    var result = {
      toArray: function (callback) {
        callback(null, results);
      }
    };
    callback(null, result);
  }
};
util.js
function cloneItems(items) {
  var clones = {};
  for (var item in items) {
    clones[item] = cloneItem(items[item]);
  }
  return clones;
}
function cloneItem(item) {
  return _.cloneDeep(item, function(value) {
    // Do not clone items that are ObjectId objects as _.clone mangles them
    if (value instanceof ObjectId) {
      return new ObjectId(value.toString());
    }
  });
}
And here's a conversation about the issue
https://github.com/mccormicka/Mockgoose/pull/90

NodeJS V8 Pass additional argument to callback

I'm writing an application using NodeJS, ExpressJS, and MongoDB (with Mongoose)...
Everything works perfectly, but when I have a loop that fetches records and does something with the results, like this:
for (var i = 0; i < 10; i++) {
  records.findOne({number: i}, function(err, doc) {
    ...
  });
}
The variable "i" in the scope of callback function is passed by reference and the result is not the desired.
When the callback is called the loop has already run and the variable has changed.
If I try to pass argument as anonymous function, does not work, because it replace the needed arguments:
for (var i = 0; i < 10; i++) {
  records.findOne({number: i}, (function(err, doc) {
    ...
  })(i));
}
In this way I lose the err and doc arguments. What can I do to solve this problem?
You can bind i to your callback to create a partial function whose first argument is already set to i:
for (var i = 0; i < 10; i++) {
  records.findOne({number: i}, function(i, err, doc) {
    ...
  }.bind(records, i));
}
You're applying an anonymous function in the wrong place. It should be applied outside of the function that uses i, not to the callback function.
for (var i = 0; i < 10; i++) {
  (function(i) {
    records.findOne({number: i}, function(err, doc) {
      ...
    });
  }(i));
}
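On an ES6-capable runtime (newer Node.js versions) you could also rely on block scoping instead of a wrapper function; this sketch uses let, which gives each iteration its own binding of i:
for (let i = 0; i < 10; i++) {
  records.findOne({number: i}, function(err, doc) {
    // i here refers to this iteration's value
  });
}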
While a few simple fix-ups to capture the value of i in a closure work, as shown in the other answers, you might also consider using Mongoose in a different and likely more efficient way:
var numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
records.find({ number: { $in: numbers } }, function(err, allDocs) {
  if (err) { throw err; }
  // allDocs are now available in an array (they aren't ordered)
  // allDocs.length
  // allDocs[0].number ...
});
Using the $in operator (reference) makes just one call to the Database, and finds all matching documents, rather than doing individual calls.

Map/Reduce differences between Couchbase & Cloudant

I've been playing around with Couchbase Server and have now tried replicating my local db to Cloudant, but I'm getting conflicting results from my map/reduce function pair that builds a set of unique tags with their associated projects...
// map.js
function(doc) {
  if (doc.tags) {
    for (var t in doc.tags) {
      emit(doc.tags[t], doc._id);
    }
  }
}
// reduce.js
function(key, values, rereduce) {
  if (!rereduce) {
    var res = [];
    for (var v in values) {
      res.push(values[v]);
    }
    return res;
  } else {
    return values.length;
  }
}
In Couchbase Server this returns JSON like:
{"rows":[
{"key":"3d","value":["project1","project3","project8","project10"]},
{"key":"agents","value":["project2"]},
{"key":"fabrication","value":["project3","project5"]}
]}
That's exactly what I wanted and expected. However, the same query on the Cloudant replica returns this:
{"rows":[
{"key":"3d","value":4},
{"key":"agents","value":1},
{"key":"fabrication","value":2}
]}
So it somehow only returns the length of the value array... Highly confusing, and I'm grateful for any insights from some M&R ninjas... ;)
It looks like this is exactly the behavior you would expect given your reduce function. The key part is this:
else {
  return values.length;
}
In Cloudant, rereduce is always called (since the reduce needs to span multiple shards). In that case your rereduce branch returns values.length, which is only the length of the array.
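If you really want the array of ids back from Cloudant, the reduce has to handle the rereduce pass as well. A hedged sketch that concatenates the partial arrays (keeping in mind the caveat in the next answer about building wide lists in a reduce):
function(key, values, rereduce) {
  if (rereduce) {
    // values is an array of arrays returned by earlier reduce calls
    return [].concat.apply([], values);
  }
  // first pass: values are the emitted doc._id strings
  return values;
}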
I prefer to reduce/re-reduce implicitly rather than depending on the rereduce parameter.
function(doc) { // map
  if (doc.tags) {
    for (var t in doc.tags) {
      emit(doc.tags[t], {id: doc._id, tag: doc.tags[t]});
    }
  }
}
Then reduce checks whether it is accumulating document ids from the identical tag, or whether it is just counting different tags.
function(keys, vals, rereduce) {
  var initial_tag = vals[0].tag;
  return vals.reduce(function(state, val) {
    if (initial_tag && val.tag === initial_tag) {
      // Accumulate ids which produced this tag.
      var ids = state.ids;
      if (!ids)
        ids = [state.id]; // Build the initial list from the state's id.
      return { tag: val.tag,
               ids: ids.concat([val.id]) };
    } else {
      var state_count = state.ids ? state.ids.length : state;
      var val_count = val.ids ? val.ids.length : val;
      return state_count + val_count;
    }
  });
}
(I didn't test this code, but you get the idea. As long as the tag value is the same, it doesn't matter whether it's a reduce or rereduce. Once different tags start reducing together, it detects that because the tag value changes, and at that point it just starts accumulating counts.)
I have used this trick before, although IMO it's rarely worth it.
Also, in your specific case, this is a dangerous reduce function. You are building a wide list to see all the docs that have a tag. CouchDB likes tall lists, not fat lists. If you want to see all the docs that have a tag, you can just map them:
for (var a = 0; a < doc.tags.length; a++) {
  emit(doc.tags[a], doc._id);
}
Now you can query /db/_design/app/_view/docs_by_tag?key="3d" and you should get
{"total_rows":287,"offset":30,"rows":[
{"id":"project1","key":"3d","value":"project1"}
{"id":"project3","key":"3d","value":"project3"}
{"id":"project8","key":"3d","value":"project8"}
{"id":"project10","key":"3d","value":"project10"}
]}
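And if what you actually want from the reduced view is just the number of projects per tag (the shape Cloudant was already giving you), a built-in reduce avoids a custom function entirely; a sketch using the _count built-in:
// map.js
function(doc) {
  if (doc.tags) {
    for (var a = 0; a < doc.tags.length; a++) {
      emit(doc.tags[a], doc._id);
    }
  }
}
// reduce.js: use the built-in reduce string "_count" instead of a function
Query the view with ?group=true to get one row per tag with its count, or with ?reduce=false&key="3d" to list the documents for a single tag.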
