Node.js notification system

I'm building a notification system for my website, something like Facebook's or Stack Overflow's.
I have two problems.
How should I store notifications in the database? Can I store ALL notifications in the user's document, or should they go in a separate document (because I think MongoDB limits the size of a document)? Or store them more intelligently (using $inc, or a read/unread flag in the DB, with more sophisticated queries)?
How do I link a notification to the right page? For example, when I click a link in my Stack Overflow inbox, I'm redirected to the relevant page. But my system is paginated: say I have 100 friends, listed 30 per page. When I click the notification I can't redirect to the right place, because it's impossible to know which page the entry is on (users can be removed).
Thank you very much!
And if you have other ideas, tell me. Thanks.
EDIT:
(Sorry for my English, I'm French.)
For the first problem, I've realized I should wait until the time comes to choose my structure, because my notifications are a little complicated, so I'll go by feel as I progress.
For the second, I solved the problem. Let me explain (I'll use the friends example because it's easy to understand).
I stored my data like this:
{
    friends: [
        {_id: xxxxx, ts: xxxx},
        {_id: xxxxx, ts: xxxx}
    ]
}
Imagine I display all friends, 30 per page.
The problems are:
- When I want to display all friends, I can't sort using Mongo (a small problem).
- If I want to send a user to this list (30 per page), landing on a specific friend while keeping the sort by ts, I can't know the page. The only solution is to fetch the whole document, which is very bad for performance.
So I store them like this:
{
    friends: {
        xxxx: {ts: xxx},
        xxxx: {ts: xxx}
    }
}
Now I can sort the document, and use skip and limit, so if I want only a portion I don't need to fetch all the documents.
To find the page, I just count how many friends have a ts greater than (or less than) that of the friend I want. If, for example, 11 friends have a greater ts, and a count of all friends gives 50, then with 50 and 11 I can work out the page.
Is this solution good?
- I need a count,
- and a query to find how many are greater (or less),
and then I can get the page where the friend is listed, keeping the ts sort (see the sketch below).
You may wonder why I need a count: it's because the friends are not stored in the same document.
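A minimal sketch of that lookup in the mongo shell (the collection and field names are my assumptions, and it presumes one document per friendship, the layout the third edit below eventually adopts; note that only the greater-than count is strictly needed to compute the page, while the total just tells you how many pages exist):
// Hypothetical "friends" collection: one document per friendship,
// each shaped like {user_id: ..., friend_id: ..., ts: ...}.
var perPage = 30;
var friend = db.friends.findOne({user_id: me, friend_id: target});

// How many friends sort ahead of the target (newest ts first)?
var ahead = db.friends.count({user_id: me, ts: {$gt: friend.ts}});

// The target's position in the sorted list determines the page.
var page = Math.floor(ahead / perPage) + 1;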
2nd EDIT:
The problem with this solution is that I have to build the query object and the update object outside of the Mongo query itself (e.g. to express friends.xxxxxx: {$exists: true}).
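For example (a sketch in Node.js-driver style; friendId and the collection variable are placeholders), the dynamic field path has to be assembled as a string first:
// Build the key outside the query object, since the friend's _id
// is part of the field path itself.
var query = {};
query['friends.' + friendId] = {$exists: true};
collection.find(query).toArray(function (err, docs) {
    // ... use docs
});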
PS: What are the advantages of using a ts instead of a date in MongoDB?
I'm using a ts, but I think I will store a date instead.
3rd EDIT:
I will do it like Sammaye: store notifications in a separate document. Take a look at http://mongly.com/Multiple-Collections-Versus-Embedded-Documents/#1 and http://openmymind.net/2012/1/30/MongoDB-Embedded-Documents-vs-Multiple-Collections/

@Stennie has given a pretty complete answer. However, I recently did a similar thing in PHP for my website. The first thing to understand is whether you are building a notification system or a wall (the two are very different); it seems unclear to me, and I am not sure what you mean by:
How do I link a notification to the right page? For example, when I
click a link in my Stack Overflow inbox, I'm redirected to the
relevant page. But my system is paginated: say I have 100 friends,
listed 30 per page. When I click the notification I can't redirect to
the right place, because it's impossible to know which page the entry
is on (users can be removed).
That part is quite confusing when I read it. If you can expand on it, I am sure people can answer better.
For a notification system I found that one large collection of notification documents worked well. So I had a schema like:
{
    _id: ObjectId(),
    to_user: ObjectId(),   // user to be notified
    user_id: ObjectId(),   // originating user
    custom_text: "has posted a new comment on your wall post",
    read: false,
    ts: MongoDate()
}
And that is literally the document I use to produce notifications. Each time a user commits an action that generates a notification, it writes a new document to the DB, with to_user populated once for each user who needs to be notified. As for multiple users committing the same action, I convert the user_id field into a list of ObjectIds, so I can say:
Sam, Dan and Mike all commented on your wall post
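A sketch of how that merge might look as an upsert in the mongo shell (the collection name and the $addToSet approach are my assumptions, not necessarily the exact code used):
// Merge a new actor into the existing unread notification for this wall
// post, or create the notification if none exists yet.
db.notifications.update(
    {to_user: wallOwnerId, read: false,
     custom_text: "has posted a new comment on your wall post"},
    {$addToSet: {user_id: actorId},  // user_id holds the list of acting users
     $set: {ts: new Date()}},        // bump to the newest activity
    {upsert: true}
);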
I then query by ts, storing the last ts the user looked at in their own row, which allows me to run a range-based query for the newest notifications each time. In my personal experience this works quite well for sharding and querying.
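In sketch form, assuming the high-water mark is kept in a last_notification_ts field on the user's document (that field name is hypothetical):
// Fetch everything newer than the last ts the user saw, newest first.
var lastSeen = db.users.findOne({_id: userId}).last_notification_ts;
db.notifications.find({to_user: userId, ts: {$gt: lastSeen}}).sort({ts: -1});

// Afterwards, record the new high-water mark.
db.users.update({_id: userId}, {$set: {last_notification_ts: new Date()}});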
Hope it helps,

Whether to embed or link is a common question for data modelling in MongoDB. If your number of notifications is going to be unbounded, you are likely going to be better saving these in a separate collection.
The current 16MB document limit actually isn't as much of an issue as some other considerations:
- A performance issue you may encounter by embedding all notifications in a single document is that fast-growing documents may need to be relocated in the database more frequently (see Padding Factor).
- You may want to apply multiple updates to a document (such as setting a "read" flag on notifications) in a very short period of time, which means more contention for updates to the same document (see Atomic Operations).
To implement paging you can use limit() in combination with a range query or skip(). A range query (e.g. based on an indexed notificationDate) will make more effective use of indexes and perform better than skip() as your collection grows.
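To illustrate the difference (a sketch; notificationDate and to_user stand in for whatever indexed fields you use):
// skip()-based paging: the server still walks past every skipped document.
db.notifications.find({to_user: userId})
                .sort({notificationDate: -1})
                .skip(90).limit(30);  // "page 4", increasingly slow

// Range-based paging: resume from the last date seen on the previous page.
db.notifications.find({to_user: userId,
                       notificationDate: {$lt: lastDateOnPreviousPage}})
                .sort({notificationDate: -1})
                .limit(30);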

Related

SharePoint view limitation

I have a document library in SharePoint Online and I keep dumping records into it. As SharePoint has a 5,000-item view limitation, the moment a view reaches that limit I can still upload documents, but they don't show up anywhere.
Eventually I end up creating a new view with a filter, and the documents start showing up under the new view.
My question: is there a way to automatically create a view when the 5,000 limit is reached and route the newly uploaded documents to the new view?
Yes, you can do this via MS Flow/workflows and server-side apps/scripts, of course, but IMO it's not a good approach to the issue.
Have you indexed the columns? I just tested this on a document library with 20k documents and I'm able to filter. There are limitations you should look into (complex filtering); that's where compound indexes come in.
If you still have issues, I recommend you give the Highlighted Content web part a try. You can create custom search queries, and it looks similar to a document library if you set the settings correctly. The only meh thing about this approach is that there is a delay before search updates, from 15 minutes to 6 hours depending on how much data you have.

XPages typeAhead does not work if you have too many documents

TypeAhead works fine if you don't have too many documents; if I delete lots of them, typeAhead works again. I think there is a @DbColumn() limitation in the typeahead option.
How can I solve this problem? It looks like a 64K size problem, but any suggestion is welcome.
Thanks in advance
C.A.
Are you using @DbColumn() or @DbLookup() to populate your typeahead? They do have 64K limits. (I couldn't quite tell from your question, so I'm asking for clarification.)
If so, you might want to look at links like this: Can typeahead results be returned from a java function.
I recently did one with a massive number of documents (millions). I used that approach, but since it was taking a long time to return, I changed it to get the first matching entry in the view (based on the value from the AJAX typeahead), created a ViewNavigator from that entry, and used the setBufferMaxEntries property to restrict the size of the returned ViewNavigator. This makes the process quite fast.
Brian
UPDATE:
As requested: I started with results like those I linked above, then I added:
// Find the first view entry matching the typed value
ViewEntry startEntry = canQLView.getEntryByKey(searchValue, false);
allObjects.addElement(startEntry); // track for later recycling
if (startEntry != null) {
    ViewNavigator matchingEntries = canQLView.createViewNavFrom(startEntry);
    matchingEntries.setBufferMaxEntries(10); // fetch at most 10 entries
    ViewEntry entry = matchingEntries.getFirst();
    // ... walk the navigator to build the typeahead results
}
You can see I get a single entry rather than a ViewEntryCollection, start my ViewNavigator from that entry, and the setBufferMaxEntries property restricts how much is fetched - you can change it, but a low number is sensible since it's a typeahead.
Cheers,
Brian
Correct, there is a 64KB limit on @DbColumn(); it's not an XPages-specific limitation, and various blog posts confirm it. Ensure the typeahead only fires once the user has entered enough characters to restrict the result set. After all, the typeahead should return a small number of entries for the user to select from, otherwise it's not much use. Then use @DbLookup with "[PARTIALMATCH]", or getAllEntriesByKey. There are a number of blog posts available that give code samples.
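For instance, a minimal server-side JavaScript sketch of a type-ahead value provider (the view name "LookupView", the column number and the three-character minimum are assumptions for illustration):
// Only hit the view once the user has typed enough to narrow the results.
var search = context.getSubmittedValue();
if (search == null || search.length < 3) {
    return null;
}
// [PARTIALMATCH] matches keys that merely start with the typed text.
var matches = @DbLookup(@DbName(), "LookupView", search, 1, "[PARTIALMATCH]");
return @Trim(@Unique(matches));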

viewJsonService returning too many entries to dataGrid

I've set up an ExtLib REST service as "xe:viewJsonService" and connected it to a Domino view. Currently the view contains 64 entries. The documents behind those entries have read access restrictions.
The JSON returned by the service is consumed by a Dojo Data Grid (taken from the ExtLib libraries).
The page is accessed by a test user who has read access to only one of the 64 entries. This user however sees a data grid containing a single data element, followed by 63 empty rows.
Looking at the raw JSON data I can see that the service is indeed returning only a single entry, but it knows that there are 63 siblings:
[
    {
        "@entryid": "1-6C5763E4A122F1D3C1257EC700355386",
        "@unid": "6C5763E4A122F1D3C1257EC700355386",
        "@noteid": "3FD2E",
        "@position": "1",
        "@read": true,
        "@siblings": 63,
        "@form": "fInvoice",
        "colIconStatus": "imgInvExported.gif",
        "colIconLock": "blank.gif",
        "invInvoiceDate": "2015-09-21T09:44:27Z",
        "invJobInvNumbers": "111\/5152\/52567\/ 001",
        "invSupplierNameShort": "My Test Company GmbH",
        "invAmount": 121.5
    }
]
Technically speaking this is correct, as the service has access to all 64 entries. The problem is that the data grid is making space for 64 entries instead of only one.
The question is: how can I tell the data grid the correct number of rows to display? Or do I need to manipulate the REST service instead?
EDIT: Continuing my search for a possible solution, I meanwhile found a few other related questions: this one by Eric McCormick (including a very good approach by Stephan Wissel), and this one by Steve Zavocki. So my question would really be a duplicate (sorry for that).
Caveat: please read down to the bottom of this answer, as you might run into unexpected issues!
Finally, after some playing around, I stumbled upon an obscure property that seems to help, for whatever reason (I'll be making this a new question):
The property globalValues appears to be available for the service types xe:documentJsonService, xe:viewItemFileService, xe:viewJsonLegacyService, xe:viewJsonService and xe:viewXmlLegacyService. It has three fixed options called Entries (= 0x0001), Top Level (= 0x0002) and Timestamp (= 0x0004). Playing the good old trial-and-error game, I found that setting this property to 1 (= Entries) filters the resulting data:
By default the raw JSON returned by xe:viewItemFileService looks like this:
{
    "@timestamp": "2015-10-14T12:57:59Z",
    "@toplevelentries": 63,
    "items":
    [
        {
            ...
        }
    ]
}
Setting globalValues to "1" removes the @timestamp and @toplevelentries fields from the output:
{
    "items":
    [
        {
            ...
        }
    ]
}
And, more importantly, this also removes the empty rows from my data grid!
There's only one thing making me nervous: I can't find any explanation at all regarding that property, so I really don't have a clue whether there are any unwanted side effects.
Update: Thanks to Knut Herrmann I did some more testing on this (see the comments below this answer). In my test case there are over 13,000 documents in my view; as long as my test user can read only a small number of them, everything seems fine. Then I added 200 more documents to the read-enabled list. The result is a data grid that constantly has to recalculate its scroll bar: the further down I scroll, the smaller the scroll handle gets. As soon as I reach the bottom line, however, the grid goes berserk and decides to display only the first 13 (?!?) rows, and the scroll bar is removed altogether. Performance isn't as bad as I expected, though.
So I have to agree with Knut that this isn't such a good solution for the combination of large views with a large subset of accessible entries!
Lothar,
I have experienced this before as you pointed out. I believe the answer is to use the 'keys' property to filter out the invalid entries.
I am not sure about how your application is structured, but if the user can only see certain entries in the view, I would consider categorizing by user, and then use the keys to show them only the rows in which they have access.
You asked if you can change the dojo grid to exclude the entries. I think the answer there is no. Your options are to filter via the REST service or via the Notes view.
Here is a related blog post that I wrote on the issues I was having. http://notesspeak.blogspot.com/2013/07/creating-updatable-rest-service-for-use.html
EDIT 2: Additional things to try
1) Did you see the comment on my blog post? I haven't tried it myself; credit goes to blog commenter "Goo Goo":
"I use this code in the onStyleRow event of the grid to solve the blank rows issue", using viewJsonService:
var row = arguments[0];
var rowItem = djxDataGrid1.getItem(row.index);
// Number of entries actually returned by the REST service
var rowCount = Object.keys(restService1._index).length - 1; // -1 to omit the onUpdate entry
if (row.index >= rowCount) {
    row.customStyles += 'display:none;'; // hide the surplus blank rows
}
2) What I personally did to fix the issue is in this SO answer: How to configure an xe:viewFileItemService on an XPage to filter the data in a categorized view?
Given what you said about your view structure, I am not sure that this will apply to you.

CouchDB: a real-world example

Tonight in my daily tech googling I came across CouchDB, after seeing tons of presentations about how it performs ten to a hundred times better than any RDBMS, how it would save us from SQL, tables, primary keys and so much more. I decided to try it myself; the only problem is that I can't seem to figure out how it works.
For a start, I would like to code a web contact manager using CouchDB. The project would enable users to do basic stuff like:
create / edit / delete contacts
see a list of their contacts, ordered
search them on various criteria
So how do I start?
Here are some of my thoughts:
create a database per user, like July, Ann
in those DBs, add documents with type contact; the document would look like the one shown at the bottom (see "Contact document")
create / edit / delete is straightforward, just a PUT, POST or DELETE against the right database
searching would be handled by couchdb-lucene, as dnolen suggested
Now here comes the difficult part: I don't really understand the whole map/reduce concept and how I can use it to do the jobs I used to do with SQL. Also, with views, how do you handle paging and grouping?
I would like to build a screen with a paged set of links, something like this:
John, Doe
Johny, Hallyday
Jon, Skeet
A B C D E F **J** etc. ... <-- those are links to see people whose name starts with that letter
What view should I create to achieve that? If you can provide samples, that would be wonderful.
Contact document:
{
    type: 'contact',
    firstname: 'firstname',
    lastname: 'lastname',
    email: {home: 'foobar@foobar.net', work: 'foobar@foobar-working.net'},
    phone: {home: '+81 00 0000 0000'},
    address: []
    // ... some other fields maybe ...
}
The upcoming O'Reilly book is free to read online:
http://books.couchdb.org/relax/
Just install it and play around; you can make straight HTTP requests using curl on the command line, or use the built-in web interface called Futon.
Storing and retrieving data is really easy; the hardest part is thinking in terms of map/reduce views instead of SQL queries.
IBM has a great tutorial, making use of curl to read/write via the REST interface.
Your application is quite easy to do with CouchDB. You would have a database per user. Contacts are simply documents in a particular user's database. CRUD is just talking to the database using HTTP. You could create views that emit keys (last name, first name) to allow for sorting.
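For example, a map function for such a view might look like this (a sketch assuming the contact document from the question, saved in a design document under a view I'll call by_name):
// Emit [lastname, firstname] as the key so contacts sort by name;
// ignore documents that aren't contacts.
function (doc) {
  if (doc.type === 'contact') {
    emit([doc.lastname, doc.firstname], null);
  }
}
The alphabetical links in the question then fall out of key-range queries, e.g. startkey=["J"]&endkey=["J\ufff0"] with limit (and skip within a letter) for paging.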
For powerful search I would recommend couchdb-lucene.

SharePoint: Query list items added/updated after user's last visit

We need to fetch the items added or updated after the user's last visit. We need this information from 3 separate lists under the same web.
Pointers on how to accomplish this would be very helpful (and does SharePoint provide any API for this?).
Kind regards,
Filtering by modified date is straightforward enough, though the method will depend on the type of view; the tricky part is getting the last login time, and you're probably going to need a bit of custom code to save that.
Brute force would be to run a foreach over every version until you reach a version older than the user's last login date, do this on every list item, and then again on every list. You can see which fields changed by comparing versions. You can narrow down the set of items to process by querying only for those with a modified date since the user's last login, as in the sketch below.
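If client-side code is an option, here is a rough JSOM sketch of that narrowing query (the list title, and where you keep the last-visit timestamp, are assumptions; you would repeat this for each of the 3 lists):
// Query one list for items modified since the user's last visit.
var ctx = SP.ClientContext.get_current();
var list = ctx.get_web().get_lists().getByTitle('List1'); // assumed title
var query = new SP.CamlQuery();
query.set_viewXml(
    '<View><Query><Where><Geq>' +
    '<FieldRef Name="Modified"/>' +
    '<Value Type="DateTime" IncludeTimeValue="TRUE">' + lastVisitIso + '</Value>' +
    '</Geq></Where></Query></View>');
var items = list.getItems(query);
ctx.load(items);
ctx.executeQueryAsync(
    function () { /* enumerate items with items.getEnumerator() */ },
    function (sender, args) { /* handle args.get_message() */ });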
As for finding the user's last login, sorry, I can't suggest anything for that; I've not looked for it before.
