I am storing URL slugs from blog titles in MongoDB using Mongoose. When users enter a duplicate blog title, I want to handle it the way WordPress does: append a number to the slug and increment it for every duplicate.
Example:
blog-title
blog-title-2
blog-title-3
Also, when the slug generated from a blog title is blog-title-2 and a blog-title-2 slug already exists, it should be smart enough to append -2 at the back instead of incrementing the existing number, so it becomes blog-title-2-2.
How do I implement that?
I managed to figure out how to implement it myself.
function setSlug(req, res, next) {
    // Remove special chars, trim spaces, replace all spaces with dashes, lowercase everything
    var slug = req.body.title.replace(/[^\w\s]/gi, '').trim().replace(/\s+/g, '-').toLowerCase();
    var counter = 2;

    // Check whether there is an existing blog with the same slug
    Blog.findOne({ slug: slug }, checkSlug);

    // Recursive callback that keeps checking until the slug is free
    function checkSlug(err, existingBlog) {
        if (existingBlog) { // there is a blog with the same slug
            if (counter == 2) // first round: append '-2'
                slug = slug.concat('-' + counter++);
            else // increment the counter on the slug (e.g. '-2' becomes '-3')
                slug = slug.replace(new RegExp(counter++ + '$', 'g'), counter);
            Blog.findOne({ slug: slug }, checkSlug); // check again with the new slug
        } else { // the slug is free, so set it
            req.body.slug = slug;
            next();
        }
    }
}
I played around with WordPress for a bit, publishing a lot of test blog posts with weird titles just to see how it handles the title conversion, and I implemented the first step of the conversion according to my discoveries. WordPress:
removes all special characters
trims leading and trailing spaces
converts remaining spaces to single dashes
lowercases everything
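The four steps above can be sketched as a small stand-alone helper (the function name is made up; the regexes match the ones used in setSlug above):

```javascript
// Minimal sketch of the four WordPress-style conversion steps above.
function titleToSlug(title) {
    return title
        .replace(/[^\w\s]/gi, '')  // remove all special characters
        .trim()                    // trim leading and trailing spaces
        .replace(/\s+/g, '-')      // convert remaining spaces to single dashes
        .toLowerCase();            // lowercase everything
}

console.log(titleToSlug("  Hello,  World! ")); // hello-world
```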
In your Mongoose model you can give the field the unique option:
var postSchema = new Schema({
    slug: {
        type: String,
        unique: true
    }
});
var Post = mongoose.model('post', postSchema);
Then when you save it you can validate and know if the slug is not unique, in which case you can modify the slug:
var post = new Post({ slug: "i-am-slug" });
var baseSlug = post.slug;
var slugIsUnique;
var counter = 1;
var error;
do {
    slugIsUnique = true;
    error = post.validateSync();
    if (/* error says the slug is not unique */) { // make sure to only check for errors on post.slug here
        slugIsUnique = false;
        counter++;
        post.slug = baseSlug + "-" + counter;
    }
} while (!slugIsUnique);
post.save((err, post) => {
    // check for error or do the usual thing
});
Edit:
This won't take blog-title-2 and output blog-title-2-2; it will output blog-title-2-1 unless that already exists.
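For reference, the behaviour the question asks for (append -2 to the literal base slug, so blog-title-2 becomes blog-title-2-2) can be sketched as a pure function, with a Set standing in for the database lookup:

```javascript
// Hypothetical helper: `taken` stands in for the slugs already stored in MongoDB.
function uniqueSlug(base, taken) {
    if (!taken.has(base)) return base;   // the base slug is free
    let n = 2;                           // the first duplicate gets '-2'
    while (taken.has(base + '-' + n)) n++;
    return base + '-' + n;
}

console.log(uniqueSlug('blog-title-2', new Set(['blog-title-2']))); // blog-title-2-2
```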
I found the following code worked for me. The general idea is to create an array of slugs you may potentially want to use, and then use MongoDB's $in query operator to determine whether any of those slugs already exist.
Note: The solution below uses NPM's slugify, which is optional. I also use crypto for generating the random suffixes, since it performs well; that is optional too.
private getUniqueSlug(title: string) {
    // Returns a promise to allow async code.
    return new Promise((resolve, reject) => {
        // Uses npm slugify. You can use whichever library you want or make your own.
        const slug: string = slugify(title, { lower: true, strict: true });
        // Determines whether the slug could be mistaken for an ObjectID
        const slugIsObjId: boolean = ObjectId.isValid(slug) && !!slug.match(/^[0-9a-fA-F]{24}$/);
        // Creates a list of candidate slugs (add/remove to your preference)
        const slugs: string[] = [];
        // Ensures the slug is not an ObjectID
        slugIsObjId ? slugs.push(slug + "(1)") : slugs.push(slug);
        slugs.push(slug + "(2)");
        slugs.push(slug + "(3)");
        slugs.push(slug + "(4)");
        slugs.push(slug + "(5)");
        // Optional: 3 random fallbacks (crypto generates a random 4-character string)
        for (let x = 0; x < 3; x++) {
            slugs.push(slug + "(" + crypto.randomBytes(2).toString("hex") + ")");
        }
        // Uses a single find for performance purposes:
        // $in matches every document whose slug is in the slugs array above
        const query: any = { slug: { $in: slugs } };
        Collection.find(query, { slug: true }).then((results) => {
            if (results) {
                results.forEach((result) => {
                    slugs.every((s, si) => {
                        // If a match is found, remove it from slugs since that slug is taken.
                        if (s === result.slug) { slugs.splice(si, 1); return false; }
                        return true;
                    });
                });
            }
            // Return the first remaining slug; slugs are ordered by priority.
            if (slugs.length > 0) {
                resolve(slugs[0]);
            } else {
                // Note: if no slug is left, this fails. An ObjectID could be used as a failsafe (nearly impossible to reach).
                reject("Unable to generate a unique slug.");
            }
        }, (err) => {
            reject("Find failed");
        });
    });
}
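The filtering step at the end can be checked in isolation; here `existing` stands in for the documents returned by Collection.find, and the names are illustrative:

```javascript
// Pick the first candidate slug that no existing document uses.
function firstFreeSlug(candidates, existing) {
    const taken = new Set(existing.map(doc => doc.slug));
    return candidates.find(c => !taken.has(c)) || null;
}

const candidates = ['my-post', 'my-post(2)', 'my-post(3)'];
console.log(firstFreeSlug(candidates, [{ slug: 'my-post' }])); // my-post(2)
```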
I'm new to Apollo Client and I'm trying to wrap my head around field policies to implement pagination.
Basically I have a category page where I run a query based on the slug I receive from the page URL; it returns a list of IDs, which I pass down as props to the product component. For example:
query getProductId($slug: String!) {
slug(where: {slug: $slug}){
products {
Id
}
}
}
from this query I get an array of objects containing the IDs of the products.
I can pass a "first: " and "after: {id: }" to the products field, and this way I can decide after which product ID I want to query. For example:
query getProductId($slug: String!) {
slug(where: {slug: $slug}){
products(first: 4, after: {id: 19}) {
Id
}
}
}
I know that in my ApolloClient instance I can define a field policy for the cache like this:
const apollo = new ApolloClient({
    //...
    cache: new InMemoryCache({
        typePolicies: {
            Query: {
                fields: {
                    products: offsetLimitPagination(["<* keyArgs>"]),
                },
            },
        },
    })
})
This is just one random helper function I took, but in my case I think a cursor-based strategy is better, since I could use the last ID in the list as the cursor, I guess(?)
From here I'm completely lost; the more I read the docs, the more confused I get. Below is the cursor-based example from the docs:
const apollo = new ApolloClient({
    //...
    cache: new InMemoryCache({
        typePolicies: {
            Query: {
                fields: {
                    products: {
                        keyArgs: ["first"],
                        merge(existing, incoming, { args: { cursor }, readField }) {
                            const merged = existing ? existing.slice(0) : [];
                            let offset = offsetFromCursor(merged, cursor, readField);
                            // If we couldn't find the cursor, default to appending to
                            // the end of the list, so we don't lose any data.
                            if (offset < 0) offset = merged.length;
                            // Now that we have a reliable offset, the rest of this logic
                            // is the same as in offsetLimitPagination.
                            for (let i = 0; i < incoming.length; ++i) {
                                merged[offset + i] = incoming[i];
                            }
                            return merged;
                        },
                        // // If you always want to return the whole list, you can omit
                        // // this read function.
                        // read(
                        //     existing,
                        //     { args: { cursor, limit = existing.length }, readField }
                        // ) {
                        //     if (existing) {
                        //         let offset = offsetFromCursor(existing, cursor, readField);
                        //         // If we couldn't find the cursor, default to reading the
                        //         // entire list.
                        //         if (offset < 0) offset = 0;
                        //         return existing.slice(offset, offset + limit);
                        //     }
                        // },
                    },
                },
            },
        },
    }),
});
function offsetFromCursor(items, cursor, readField) {
    // Search from the back of the list because the cursor we're
    // looking for is typically the ID of the last item.
    for (let i = items.length - 1; i >= 0; --i) {
        const item = items[i];
        // Using readField works for both non-normalized objects
        // (returning item.id) and normalized references (returning
        // the id field from the referenced entity object), so it's
        // a good idea to use readField when you're not sure what
        // kind of elements you're dealing with.
        if (readField("id", item) === cursor) {
            // Add one because the cursor identifies the item just
            // before the first item in the page we care about.
            return i + 1;
        }
    }
    // Report that the cursor could not be found.
    return -1;
}
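The two functions can be exercised outside Apollo on plain arrays, with readField stubbed to read plain objects (the item shapes here are made up for illustration):

```javascript
// Same logic as the field policy above, extracted so it can run stand-alone.
function offsetFromCursor(items, cursor, readField) {
    for (let i = items.length - 1; i >= 0; --i) {
        if (readField("id", items[i]) === cursor) return i + 1;
    }
    return -1; // cursor not found
}

function merge(existing, incoming, cursor, readField) {
    const merged = existing ? existing.slice(0) : [];
    let offset = offsetFromCursor(merged, cursor, readField);
    if (offset < 0) offset = merged.length; // append when the cursor is unknown
    for (let i = 0; i < incoming.length; ++i) merged[offset + i] = incoming[i];
    return merged;
}

// Stub: read a field straight off a plain object.
const readField = (field, obj) => obj[field];
const page1 = merge(undefined, [{ id: 1 }, { id: 2 }], undefined, readField);
const page2 = merge(page1, [{ id: 3 }, { id: 4 }], 2, readField);
console.log(page2.map(p => p.id)); // [ 1, 2, 3, 4 ]
```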
Let's suppose I use this as a field policy for the list of products: how do I go from here? I'm completely lost.
I have a partner record where I would like to change the form if the category field is set to a certain value. However, I can't do this with certain SuiteScript functions, because changing the form wipes out any changes that were made to the record. I'm trying to work around this with an afterSubmit function that uses record.submitFields to change the form, and then redirect.toRecord to reload the page with the change. However, it's not changing the form value. Is there a way to do this with record.submitFields? Am I doing something incorrectly?
function afterSubmit(scriptContext) {
    var currentRecord = scriptContext.newRecord;
    var category = currentRecord.getValue('category');
    if (category == '3') {
        try {
            record.submitFields({
                type: record.Type.PARTNER,
                id: currentRecord.id,
                values: {
                    'customform': '105'
                }
            });
            log.debug('success');
        } catch (e) {
            log.error({ title: 'error', details: e });
        }
    }
    redirect.toRecord({
        type: 'partner',
        id: currentRecord.id,
    });
}
Yes you can. Whenever you create a URL for a record, you can generally add a cf parameter that takes the form id. It's the same value you'd use if you were setting the field 'customform'. So just skip the submitFields part and do:
redirect.toRecord({
    type: 'partner',
    id: currentRecord.id,
    parameters: {
        cf: 105
    }
});
You can also set the custom form using the submitFields call but that only works for some types of records.
If you need to do this in the beforeLoad, here is a fragment in TypeScript. The trick to avoid an infinite loop is to check whether you already have the correct form:
export function beforeLoad(ctx) {
    let rec: record.Record = ctx.newRecord;
    let user = runtime.getCurrentUser();
    if (user.roleCenter == 'EMPLOYEE') {
        if (rec.getValue({ fieldId: 'assigned' }) != user.id) {
            throw new Error('You do not have access to this record');
        }
    } else {
        log.debug({
            title: 'Access for ' + user.entityid,
            details: user.roleCenter
        });
    }
    if (ctx.type == ctx.UserEventType.EDIT) {
        var approvalForm = runtime.getCurrentScript().getParameter({ name: 'custscript_kotn_approval_form' });
        if (3 == rec.getValue({ fieldId: 'custevent_kotn_approval_status' })) {
            if (approvalForm != rec.getValue({ fieldId: 'customform' }) && approvalForm != ctx.request.parameters.cf) {
                redirect.toRecord({
                    type: <string>rec.type,
                    id: '' + rec.id,
                    isEditMode: true,
                    parameters: {
                        cf: approvalForm
                    }
                });
                return;
            }
        }
    }
}
I initialize my DB in the usual way:
mongoose.connect(`mongodb://uname:pword@127.0.0.1:port/dbname?authSource=admin`, {useNewUrlParser: true, autoIndex: false});
And I have a Schema, something like:
var materialSchema = new Schema({
    bookID: { type: String, required: true },
    active: Boolean,
    name: { type: String, required: true },
    stockLength: { type: Number, required: true }
});

module.exports = mongoose.model('material', materialSchema);
When I create a new material and add it to the database, it is automatically assigned the usual _id - which is a behaviour I want to maintain. BUT, I'd also like for bookID to be a unique, auto-incrementing index. This is for physical shelf storage, and not for queries or anything like that.
I'd like for bookID to increment in the following way:
A-001
A-002
A-003
...
A-098
A-099
A-100
B-001
...
B-100
...
Z-001
...
Z-100
In case the pattern above isn't clear, the pattern starts at A-001 and ultimately ends at Z-100. Each letter goes from 001 through 100 before moving to the next letter. Each new collection entry is just the next ID in the pattern. It is unlikely that the end will ever be reached, but we'll cross that bridge when we get there.
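For what it's worth, the pattern can also be written as a pure function of a 0-based counter (a hypothetical helper, assuming A-001 is index 0 and Z-100 is index 2599):

```javascript
// Map a 0-based sequence number onto the A-001 ... Z-100 pattern.
function bookIdFromIndex(n) {
    if (n < 0 || n >= 26 * 100) throw new RangeError('past Z-100');
    const letter = String.fromCharCode(65 + Math.floor(n / 100)); // 65 is 'A'
    const number = String((n % 100) + 1).padStart(3, '0');        // 001 ... 100
    return letter + '-' + number;
}

console.log(bookIdFromIndex(0));   // A-001
console.log(bookIdFromIndex(99));  // A-100
console.log(bookIdFromIndex(100)); // B-001
```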
I've only ever used the default _id for indexing, and can't figure out how to make this pattern.
Thanks for any insight!
Edit #1
The best solution I've come up with so far is to have a separate .txt file with all of the IDs listed in order. As each new object is created, pop (... shift) the next ID off the top of the file. This might also have the added benefit of easily adding additional IDs at a later date. This will probably be the approach I take, but I'm still interested in the mongoose solution requested above.
Edit #2
So I think the solution I'm going to use is a little different. Basically, findOne sorted by bookID descending, then use the returned value to set the next ID.
Material.findOne()
    .sort({ bookID: -1 })
    .exec((err, mat) => {
        if (err) {
            // Send error
        } else if (!mat) {
            // First bookID
        } else {
            // Indexes exist...
            let nextId = getNextID(mat.bookID);
            // ...
        }
    });
It is still easy to modify getNextID() to add new/different IDs in the future (if/when "Z-100" is reached).
Thanks again!
Ok, so to expand a little bit on Edit #2, I've come up with the following solution.
Within the model (schema) file, we add a schema pre() middleware, that executes when .save() is called, before the save occurs:
// An arrow function will not work on this guy, if you want to use the "this" keyword
materialSchema.pre('save', function(next) {
    this.model('material').findOne() // Don't forget the .model(...) bit!
        .sort({ bookID: -1 }) // All I need is the highest (i.e. most recent) bookID
        .select('bookID')     // Ditto above (not really necessary)
        .exec((err, result) => {
            if (err) {
                return next(err); // Oopsies, an error!
            } else if (!result) {
                this.bookID = 'A-001'; // The case when the collection is empty
            } else {
                this.bookID = getNextID(result.bookID); // Otherwise, increment the ID
            }
            next(); // Don't forget this sucker! This is how you save
        });
});
And that's about it! It isn't an in-built solution direct from Mongoose, but it works a treat.
Just for completeness, the getNextID function looks like:
function getNextID(curID) {
    let letter = curID.split('-')[0];
    let number = parseInt(curID.split('-')[1]);
    if (number >= 100) { // Increase the letter and reset the number
        letter = String.fromCharCode(letter.charCodeAt(0) + 1);
        number = '001';
    } else { // Only increase the number
        number = ('' + (number + 1)).padStart(3, '0'); // Makes sure the numbers are always 3 digits long
    }
    return `${letter}-${number}`;
}
This'll do just dandy for now, until we get to Z-100. But I'll cross that bridge if/when it comes. No big deal at all.
And you don't need to do anything special to use it. Just save a new doc as normal, and it automatically fires:
new Material({
    // New material properties
}).save((err, mat) => {
    // Handle errors and returns ...
});
I am using NodeJS, PostgreSQL and the amazing pg-promise library. In my case, I want to execute three main queries:
Insert one tweet into the table 'tweets'.
If there are hashtags in the tweet, insert them into another table, 'hashtags'.
Then link the tweet and each hashtag in a third table, 'hashtagmap' (a many-to-many relational table).
Here is a sample of the request's body (JSON):
{
    "id": "12344444",
    "created_at": "1999-01-08 04:05:06 -8:00",
    "userid": "#postman",
    "tweet": "This is the first test from postman!",
    "coordinates": "",
    "favorite_count": "0",
    "retweet_count": "2",
    "hashtags": {
        "0": {
            "name": "test",
            "relevancetraffic": "f",
            "relevancedisaster": "f"
        },
        "1": {
            "name": "postman",
            "relevancetraffic": "f",
            "relevancedisaster": "f"
        },
        "2": {
            "name": "bestApp",
            "relevancetraffic": "f",
            "relevancedisaster": "f"
        }
    }
}
All the fields above should be inserted into the table "tweets", except hashtags, which in turn should be inserted into the table "hashtags".
Here is the code I am using based on Nested transactions from pg-promise docs inside a NodeJS module. I guess I need nested transactions because I need to know both tweet_id and hashtag_id in order to link them in the hashtagmap table.
// Columns
var tweetCols = ['id', 'created_at', 'userid', 'tweet', 'coordinates', 'favorite_count', 'retweet_count'];
var hashtagCols = ['name', 'relevancetraffic', 'relevancedisaster'];
//pgp Column Sets
var cs_tweets = new pgp.helpers.ColumnSet(tweetCols, {table: 'tweets'});
var cs_hashtags = new pgp.helpers.ColumnSet(hashtagCols, {table: 'hashtags'});

return {
    // Transactions
    add: body =>
        rep.tx(t => {
            return t.one(pgp.helpers.insert(body, cs_tweets) + " ON CONFLICT(id) DO UPDATE SET coordinates = " + body.coordinates + " RETURNING id")
                .then(tweet => {
                    var queries = [];
                    for (var i = 0; i < body.hashtags.length; i++) {
                        queries.push(
                            t.tx(t1 => {
                                return t1.one(pgp.helpers.insert(body.hashtags[i], cs_hashtags) + "ON CONFLICT(name) DO UPDATE SET fool ='f' RETURNING id")
                                    .then(hash => {
                                        t1.tx(t2 => {
                                            return t2.none("INSERT INTO hashtagmap(tweetid,hashtagid) VALUES(" + tweet.id + "," + hash.id + ") ON CONFLICT DO NOTHING");
                                        });
                                    });
                            }));
                    }
                    return t.batch(queries);
                });
        })
}
The problem is that with this code I am able to successfully insert the tweet, but nothing happens after that: I cannot insert the hashtags, nor link the hashtags to the tweet.
Sorry, I am new to coding, so I guess I didn't understand how to properly return from the transaction and how to perform this simple task. I hope you can help me.
Thank you in advance.
Jean
Improving on Jean Phelippe's own answer:
// Columns
var tweetCols = ['id', 'created_at', 'userid', 'tweet', 'coordinates', 'favorite_count', 'retweet_count'];
var hashtagCols = ['name', 'relevancetraffic', 'relevancedisaster'];
//pgp Column Sets
var cs_tweets = new pgp.helpers.ColumnSet(tweetCols, {table: 'tweets'});
var cs_hashtags = new pgp.helpers.ColumnSet(hashtagCols, {table: 'hashtags'});
return {
/* Tweets */
// Add a new tweet and update the corresponding hash tags
add: body =>
db.tx(t => {
return t.one(pgp.helpers.insert(body, cs_tweets) + ' ON CONFLICT(id) DO UPDATE SET coordinates = ' + body.coordinates + ' RETURNING id')
.then(tweet => {
var queries = Object.keys(body.hashtags).map((_, idx) => {
return t.one(pgp.helpers.insert(body.hashtags[i], cs_hashtags) + 'ON CONFLICT(name) DO UPDATE SET fool = $1 RETURNING id', 'f')
.then(hash => {
return t.none('INSERT INTO hashtagmap(tweetid, hashtagid) VALUES($1, $2) ON CONFLICT DO NOTHING', [+tweet.id, +hash.id]);
});
});
return t.batch(queries);
});
})
.then(data => {
// transaction was committed;
// data = [null, null,...] as per t.none('INSERT INTO hashtagmap...
})
.catch(error => {
// transaction rolled back
})
},
NOTES:
As per my notes earlier, you must chain all queries, or else you will end up with loose promises.
Stay away from nested transactions, unless you understand exactly how they work in PostgreSQL (read this, specifically the Limitations section).
Avoid manual query formatting; it is not safe. Always rely on the library's query formatting.
Unless you are passing the result of the transaction somewhere else, you should at least provide a .catch handler.
P.S. Syntax like +tweet.id coerces the value to a number (effectively the same as parseInt(tweet.id) for plain digit strings), just shorter, in case those are strings ;)
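A quick check of that coercion (note that unary + behaves like Number(), which only agrees with parseInt on plain digit strings):

```javascript
console.log(+'42' === 42);                 // true
console.log(+'42' === parseInt('42', 10)); // true
// The two differ on mixed strings:
console.log(parseInt('42px', 10)); // 42
console.log(+'42px');              // NaN
```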
For those who face a similar problem, I will post the answer.
Firstly, my mistakes:
In the for loop: body.hashtags.length doesn't exist, because I am dealing with an object (a very basic mistake). Changed to Object.keys(body.hashtags).length.
Why use so many transactions? Following the answer by vitaly-t in Interdependent Transactions with pg-promise, I removed the extra transactions. It's not yet clear to me how you can open one transaction and use the result of one query in another query within the same transaction.
Here is the final code:
// Columns
var tweetCols = ['id', 'created_at', 'userid', 'tweet', 'coordinates', 'favorite_count', 'retweet_count'];
var hashtagCols = ['name', 'relevancetraffic', 'relevancedisaster'];
//pgp Column Sets
var cs_tweets = new pgp.helpers.ColumnSet(tweetCols, {table: 'tweets'});
var cs_hashtags = new pgp.helpers.ColumnSet(hashtagCols, {table: 'hashtags'});

return {
    /* Tweets */
    // Add a new tweet and update the corresponding hashtags
    add: body =>
        rep.tx(t => {
            return t.one(pgp.helpers.insert(body, cs_tweets) + " ON CONFLICT(id) DO UPDATE SET coordinates = " + body.coordinates + " RETURNING id")
                .then(tweet => {
                    var queries = [];
                    for (var i = 0; i < Object.keys(body.hashtags).length; i++) {
                        queries.push(
                            t.one(pgp.helpers.insert(body.hashtags[i], cs_hashtags) + " ON CONFLICT(name) DO UPDATE SET fool = 'f' RETURNING id")
                                .then(hash => {
                                    return t.none("INSERT INTO hashtagmap(tweetid,hashtagid) VALUES(" + tweet.id + "," + hash.id + ") ON CONFLICT DO NOTHING");
                                })
                        );
                    }
                    return t.batch(queries);
                });
        }),
};
I have following models:
Question Model
var OptionSchema = new Schema({
    correct : {type: Boolean, default: false},
    value   : String
});
var QuestionSchema = new Schema({
    value       : String
    , choices   : [OptionSchema]
    , quiz      : {type: ObjectId, ref: 'quizzes'}
    , createdOn : {type: Date, default: Date.now}
    ...
});

var Question = mongoose.model('questions', QuestionSchema);
Quiz Model
var QuizSchema = new Schema({
    name        : String
    , questions : [{type: ObjectId, ref: 'questions'}]
    , company   : {type: ObjectId, ref: 'companies'}
    ...
});

var Quiz = mongoose.model('quizzes', QuizSchema);
Company Model
var CompanySchema = new Schema({
    name : String
    ...
});
I want to shuffle the choices of each question on every query, and I am doing it as follows:
shuffle = function(v) {
    //+ Jonas Raoni Soares Silva
    //# http://jsfromhell.com/array/shuffle [rev. #1]
    for (var j, x, i = v.length; i; j = parseInt(Math.random() * i), x = v[--i], v[i] = v[j], v[j] = x);
    return v;
};

app.get('/api/companies/:companyId/quizzes', function(req, res) {
    var Query = Quiz.find({ company: req.params.companyId });
    Query.populate('questions');
    Query.exec(function(err, docs) {
        docs.forEach(function(doc) {
            doc.questions.forEach(function(question) {
                question.choices = shuffle(question.choices);
            });
        });
        res.json(docs);
    });
});
My question is:
Could I randomize the choices array without looping through all the documents, as I am doing now?
shuffle = function(v) {
    //+ Jonas Raoni Soares Silva
    //# http://jsfromhell.com/array/shuffle [rev. #1]
    for (var j, x, i = v.length; i; j = parseInt(Math.random() * i), x = v[--i], v[i] = v[j], v[j] = x);
    return v;
};

app.get('/api/companies/:companyId/quizzes', function(req, res) {
    var Query = Quiz.find({ company: req.params.companyId });
    Query.populate('questions');
    Query.exec(function(err, docs) {
        // .find() returns an array, so convert each document separately
        var out = docs.map(function(doc) {
            var raw = doc.toObject();
            // shuffle choices
            raw.questions.map(el => shuffle(el.choices));
            // if you need to shuffle the questions too
            shuffle(raw.questions);
            // if you need to limit the output questions, especially when the output
            // needs to be a subset of a pool of questions
            raw.questions.splice(limit);
            return raw;
        });
        res.json(out); // output quizzes with shuffled questions and answers
    });
});
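The in-place shuffle used in these snippets is a Fisher-Yates-style swap loop: it reorders the array without adding or dropping elements, which can be checked by comparing sorted copies:

```javascript
// Same one-liner shuffle as in the snippets above, made stand-alone.
function shuffle(v) {
    for (var j, x, i = v.length; i; j = parseInt(Math.random() * i), x = v[--i], v[i] = v[j], v[j] = x);
    return v;
}

const original = [1, 2, 3, 4, 5];
const shuffled = shuffle(original.slice());
// shuffled is a permutation of original:
console.log(shuffled.length === original.length);                             // true
console.log(shuffled.slice().sort().join() === original.slice().sort().join()); // true
```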
The essence of the question comes down to "Can I randomly shuffle results and have MongoDB do the work for me?". Well, yes you can, but the important thing to remember here is that "populate" is no longer going to be your friend in helping you do so, and you will need to perform the work it does yourself.
The short version is that we are going to hand off your client-side "shuffle" to mapReduce, in order to process the shuffling of the "choices" on the server. Just for kicks, I'm adding in a technique to shuffle the "questions" as well:
var Query = Quiz.findOne({ company: "5382a58bb7ea27c9301aa9df" });
Query.populate('company', 'name -_id');
Query.exec(function(err, quiz) {
    var shuffle = function(v) {
        for (var j, x, i = v.length; i; j = parseInt(Math.random() * i), x = v[--i], v[i] = v[j], v[j] = x);
    };

    if (err)
        throw err;

    var raw = quiz.toObject();
    shuffle(raw.questions);

    Question.mapReduce(
        {
            map: function() {
                shuffle(this.choices);
                var found = -1;
                for (var n = 0; n < inputs.length; n++) {
                    if (this._id.toString() == inputs[n].toString()) {
                        found = n;
                        break;
                    }
                }
                emit(found, this);
            },
            reduce: function() {},
            scope: { inputs: raw.questions, shuffle: shuffle },
            query: { "_id": { "$in": raw.questions } }
        },
        function(err, results) {
            if (err)
                throw err;
            raw.questions = results.map(function(x) {
                return x.value;
            });
            console.log(JSON.stringify(raw, undefined, 4));
        }
    );
});
So the essential part of this is rather than allowing "populate" to pull all the related question information into your schema object, you are doing a manual replacement using mapReduce.
Note that the "schema document" must be converted to a plain object which is done by the .toObject() call in there in order to allow us to replace "questions" with something that would not match the schema type.
We give mapReduce a query to select the required questions from the model by simply passing in the "questions" array as an argument to match on _id. Really nothing directly different to what "populate" does for you behind the scenes, it's just that we are going to handle the "merge" manually.
The "shuffle" function is now executed on the server. Since it was declared as a var, we can easily pass it in via the "scope", so the "choices" array is shuffled before each document is emitted and eventually returned.
The other optional step, as I said, was that we are also "shuffling" the questions, which is merely done by calling "shuffle" on just the _id values of the "questions" array and then passing this in via the "scope". Note that this array is also passed to the query via $in, but that alone does not guarantee the return order.
The trick employed here is that mapReduce, at the "map" stage, must "emit" all keys in ascending order for the later stages. So by comparing the current _id value to its position as an index of the "inputs" array from scope, there is a positional order that can be emitted as the "key" value, respecting the order of the shuffle already done.
The "merging" is then quite simple, as we just replace the "questions" array with the values returned from mapReduce. There is a little help here from the .map() Array function to clean up the results from the way mapReduce returns things.
Aside from the fact that your "choices" are now actually shuffled on the server rather than through a loop, this should give you ideas of how to "custom populate" for other functions, such as "slicing" and "paging" the array of referenced "questions", if that is something else you might want to look at.