How does the single store for XPages work?

I have a number of XPages design elements that I use in many different databases. If I read the wiki correctly, the single store is an all-or-nothing situation.
So I want to create a unique design in each database but use the set of reusable XPages elements from a single store location. The wiki says:
Apart from the "dummy or blank XPage with the same name of the default XPage" in each instance application, does it matter if an 'instance' contains XPage design elements?
No. If SCXD is set on an application, all XPages design elements in that database are ignored and the application uses the design elements from the SCXD database.
If this is the case, then I have to create databases where probably 75% of the code is reusable, but I would have to repeat it (and maintain it) in dozens of separate databases. Pity!

XPages and related elements (Custom Controls, SSJS libraries, Java code) can be inherited from a specific template like other design elements. So I would set up a database called, perhaps, "Core Components" (.ntf or .nsf) with a template name of "CoreComponents". Then, on the individual elements in the target DB, you would set inheritance to come specifically from the "CoreComponents" template, while the elements that are unique to each database do not inherit from any template. You can then use File > Application > Refresh Design to update the elements with specific inheritance; the ones which are unique to that database will not get overwritten.
You do need to do a clean build after the refresh, so I recommend that you keep the Core Components database locally or on a different server than the others, so that the daily design task will not update the target databases and leave you with corrupted XSP elements.

IBM's preferred model for reusing XPage artifacts across multiple applications is to create OSGi plugins that leverage the XPages Extensibility API.
NotesIn9 episode 64 demonstrates how to make an existing Custom Control design element a library component, which can then be used in any app that has the library available, instead of having to copy the design element to each app separately. Any subsequent changes to that component are then applied immediately to any apps that use it when a new version of the library is deployed.
If you truly have "dozens" of apps that all share certain features, but the entire design should not be identical across all of them, then the OSGi model is definitely the way to go.
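To make that concrete, the entry point of such a plugin is a "library" class that the XPages runtime discovers. Below is a minimal sketch, assuming the XPages Extensibility API is on the build path; the class name, IDs, and file paths (CoreComponentsLibrary, com.example.*) are invented for illustration, and the plugin still needs the usual OSGi packaging and plugin.xml registration covered in the NotesIn9 episode.

    package com.example.corecomponents;

    import com.ibm.xsp.library.AbstractXspLibrary;

    // Hypothetical library class; registered in plugin.xml via the
    // com.ibm.commons.Extension extension point with type "com.ibm.xsp.Library".
    public class CoreComponentsLibrary extends AbstractXspLibrary {

        @Override
        public String getLibraryId() {
            return "com.example.corecomponents.library"; // assumed library ID
        }

        @Override
        public String getPluginId() {
            return "com.example.corecomponents"; // assumed OSGi bundle symbolic name
        }

        @Override
        public String[] getXspConfigFiles() {
            // *.xsp-config files describing the shared controls this library contributes.
            return new String[] { "com/example/corecomponents/config/controls.xsp-config" };
        }

        @Override
        public String[] getFacesConfigFiles() {
            return new String[] { "com/example/corecomponents/config/faces-config.xml" };
        }
    }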
But why not flip the entire model on its head? Traditionally, we've always put the code and the data in the same place (e.g. same NSF) because it was a pain to access -- and, especially, visually represent -- data in one NSF via code in another NSF. That's not true anymore. Why have dozens of apps just because the data lives in dozens of places? Any data source in XPages can be told where the data lives... you can link a central user interface to any number of "remote" data stores (either different NSFs on the same server, or even databases on other servers).
Red Pill, for instance, takes this to its logical extreme: they deploy one NSF, which acts as a portal to all your data, no matter where that data lives. The ACLs of the various NSFs (and Readers fields) still ensure that users don't pry into data they haven't been granted access to, and they have complex analytics algorithms for determining which data the users will actually care about. But if you have 500 NSFs in the domain, you're not maintaining 500 different code templates... it's literally just 1; but that one user interface is how users find, and interact with, all their data.
You certainly don't have to take this premise to that extreme, but perhaps you could identify, say, 5 apps where the UI and / or business logic is similar (or even identical), but the data just lives in multiple places. Create one central app for interacting with all of that data. Create a "homepage" that gives users a way to select which "app" they're trying to access (or, if they should only have access to one to begin with, compute which one that is), and then once they navigate in to the specific "app", just bind the data sources to the relevant NSF instead of assuming each view or document lives in the same NSF that the code does.
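As a rough sketch of that last step, a small session-scoped managed bean could compute which NSF a data source should point at; everything here (the bean name appConfig, the class, the NSF paths) is invented for the example:

    package com.example.app;

    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical bean, registered in faces-config.xml as "appConfig" (session scope).
    public class AppConfig implements Serializable {
        private static final long serialVersionUID = 1L;

        // Maps the logical "app" the user picked on the homepage to its data NSF.
        private final Map<String, String> dataStores = new HashMap<String, String>();
        private String currentApp;

        public AppConfig() {
            dataStores.put("crm", "data/crm.nsf");           // assumed paths
            dataStores.put("helpdesk", "data/helpdesk.nsf");
        }

        public void setCurrentApp(String appKey) { this.currentApp = appKey; }
        public String getCurrentApp() { return currentApp; }

        // Bind this to a data source, e.g.
        // <xp:dominoView var="docs" databaseName="#{appConfig.currentDbPath}" viewName="($All)"/>
        public String getCurrentDbPath() {
            return dataStores.get(currentApp);
        }
    }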
It's still a good idea to be aware of the Extensibility API, not only for the sake of code reusability, but also to understand just how much of the behavior of the platform truly is within our control now -- provided, of course, that we're willing to occasionally write some custom Java code. But if you shift away from the one-to-one mapping between code and data that we've habitually maintained in Domino for so long, I can practically guarantee that you'll prefer this approach... both for the ease of implementation and maintenance, and for the comparative simplicity it offers to end users.

You can combine the template technique and the all-code-in-one-database approach:
Divide the application design into two parts: a data part and a code part.
The data part contains all Notes views. If it's a classic Notes application, it also contains all the design elements for the Notes client, like forms, subforms, frames and so on.
The code part contains all XPages, Custom Controls, CSS, client/server JavaScript libraries, Themes, images, jars and so on.
Put your 75% common code into masterData.ntf and masterCode.ntf.
The application code databases appCodeX.ntf inherit all design elements of masterCode.ntf and contain the additional application-specific design elements.
The code from all application templates is combined in allCode.ntf: it inherits everything from masterCode.ntf plus the additional pieces of code from the application templates.
Based on that you create an allCode.nsf.
On the data side you use the classic template way.
From here you have two possibilities:
You use Single Copy XPage Design - connect every appData database with allCode.nsf
You connect your XPages in allCode.nsf with appData databases
I prefer the latter. You can define in allCode.nsf where all the application data databases are located, e.g. in property documents.
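As a small illustration of the property-document idea, a helper like the following could resolve the data NSF for a given application; the view name, item name, and class are all assumptions, not from the original answer:

    import lotus.domino.Database;
    import lotus.domino.Document;
    import lotus.domino.NotesException;
    import lotus.domino.View;

    public class DataLocator {
        // Looks up the property document for the given app key in allCode.nsf
        // and returns the file path of its data database, or null if not found.
        public static String resolveDataPath(Database allCode, String appKey)
                throws NotesException {
            View props = allCode.getView("AppProperties");        // assumed view
            Document doc = props.getDocumentByKey(appKey, true);  // exact-match lookup
            return (doc == null) ? null : doc.getItemValueString("DataDbPath"); // assumed item
        }
    }

An XPages data source in allCode.nsf could then feed the returned path into its databaseName property.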
With this approach you're still able to separate applications easily, e.g. in case you want to sell them: you already have a separate template for every single application.

Related

REST Services or Repeat Control with Java Object in a Data Table

Over the past couple of years or so I have revamped most of my Notes Applications for XPages and of late made extensive use of Java objects in Repeat Controls etc.
I am now implementing jQuery DataTables, where appropriate, in an attempt to reproduce the functionality of Notes views. My applications vary from a few document records to several thousand.
Most of the DataTables tutorials seem to imply or recommend the use of REST services for data tables. What is the reason for this, when I can simply drop my existing Java objects into repeat controls and then access the back-end documents via links etc.?
Sorry if this is not a coding question, but I am clearly missing something fundamental in my basic knowledge. Any advice would be appreciated.
The short version is that jQuery DataTables are built purely by client-side JavaScript, meaning a "normal" transport of data like a REST service (such as the xp:restService you describe) is pretty standard and ubiquitous. jQuery has no direct knowledge of any underlying Java objects and doesn't care what backs the service.
If you were using an xp:repeat control, you could bind to a backing List or other iterable collection from a backing Java class/bean. This makes far more sense if that's how you'll present the data. The logic shift is that any time you update your xp:repeat, you must send an AJAX (XHR) request that re-renders the markup around that xp:repeat tag, whereas a jQuery update from a REST service fetches only the data response. There is some overhead to using AJAX to refresh part of the page (it literally replaces part of the existing DOM with the newly fetched HTML and parses the content), but at smaller scales it's not a huge amount.
Using a REST service means that:
your front-end implementation will be more consistent with the majority of the rest of the web development industry
your back-end logic will be segregated, (ideally) making it easier to port, migrate, or document
There's nothing wrong with implementing an xp:repeat (or friends) with backing Java on XPages, especially if you're using primarily XPages controls.
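For completeness, here is a minimal sketch of the xp:repeat side; the bean name, scope, and hard-coded rows are invented for the example (a real bean would read its rows from Domino documents or a view):

    package com.example.app;

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical backing bean; register it in faces-config.xml (e.g. as "rowData",
    // view scope) and bind it with <xp:repeat value="#{rowData.rows}" var="row">.
    public class RowDataBean implements Serializable {
        private static final long serialVersionUID = 1L;

        private final List<String> rows = new ArrayList<String>();

        public RowDataBean() {
            rows.add("First row");   // placeholder data to keep the sketch self-contained
            rows.add("Second row");
        }

        public List<String> getRows() {
            return rows;
        }
    }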
There are many ways to implement a RESTful service in XPages and the reasoning behind why to go for RESTful APIs in the XPages runtime is something both myself and many others have blogged about.

Liferay Portlet: How to generate service.xml (service builder) from existing database

I am new to Liferay. Can anyone please suggest some way to generate the service.xml for an existing database? (There is a discussion on the Liferay website.) I hope people have developed some way to do this, or Liferay has developed some plugin for it.
I see no particular use in introducing Service Builder to large existing databases: you can connect Service Builder entities to "legacy datasources" or "legacy tables" (those make good search terms), but service.xml generation from an existing schema has not been done, AFAIK.
Some problems with this approach are:
Service Builder makes certain assumptions about operations in a database. It's designed to abstract over all the different databases that Liferay runs on, and thus might not use every database to its fullest extent.
If you have a large existing database, you probably have a lot of existing business logic to make sure correct data goes in and out of the database. You might even work with stored procedures etc.
While you can make Service Builder work with stored procedures, you'd have to introduce custom SQL to work around Service Builder's assumptions. The same goes for explicit foreign key relationships etc.
My recommendation is rather to put a proper interface on the existing business logic, e.g. a web service, JSON, REST, whatever is popular, and then use this interface in Liferay's portlets.
Another option might be to bring the existing persistence code into Liferay and just expose services without making use of the persistence features of Service Builder. For this you'd just define empty <entity> blocks (with names etc.). This will generate the appropriate DoSomethingLocalService, but omit the persistence implementation, and you can wire your existing code into these services.
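As a rough sketch of that wiring (all names are assumed, and the generated OrderLocalServiceBaseImpl only exists once Service Builder has run against an empty <entity name="Order" ...> block):

    // Lives in the usual *.service.impl package of a Service Builder portlet.
    public class OrderLocalServiceImpl extends OrderLocalServiceBaseImpl {

        // Delegates to pre-existing legacy code instead of Service Builder persistence.
        private final com.example.legacy.OrderRepository legacyRepo =
            new com.example.legacy.OrderRepository(); // assumed legacy class

        public java.util.List<com.example.legacy.Order> getOrdersByCustomer(long customerId) {
            return legacyRepo.findByCustomer(customerId);
        }
    }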
You can go through the link below to understand Service Builder in Liferay:
https://www.liferay.com/documentation/liferay-portal/6.0/development/-/ai/service-build-2
Also, the link below has a sample Service Builder portlet:
https://www.liferay.com/community/forums/-/message_boards/message/17609606
Hope it helps!
Not done yet, AFAIK. Since Liferay does not directly support all database features, like foreign keys, one-to-n mappings, etc., such reverse engineering is a challenge. But you can give it a try.
Service Builder is generally a nice feature for creating relatively small databases and simple business logic, with the advantage that your tables are auto-generated when you deploy your portlet and you get finders (search by X attribute) with no effort. If this is the case with your database, it will be much easier to create a new service.xml from scratch.
Other than that, I think that maintaining an extended database in Liferay's Service Builder will introduce more problems and slow development while you're implementing complex business logic, creating custom finders whenever you need to query on a join of tables, and so on. So it seems quite normal to me that a conversion of a database to Service Builder is not available.
In other words, if your database is too large to write out in service.xml, you shouldn't use Service Builder in the first place.

UIDocument or UIManagedDocument when the application's backing store is both file & core data database

I am a bit confused about which class I should inherit from. My application currently creates files in the "Documents" folder and also has Core Data based data models. These data models contain more information about the files.
Now I am thinking of migrating the app to the document architecture and thereby integrating with iCloud at some point.
I have started to think in the direction of using both, i.e. using UIDocument to manage the files and UIManagedDocument to manage the Core Data store.
Would appreciate if someone could guide me.
It is perfectly acceptable to use both at the same time, as you said, for different purposes.
But consider: if those files are real documents and not just data files of an internal implementation, I personally would not store any critical data about the documents separately from the documents themselves. From the user's perspective, documents are meant to be self-sustaining; the user may create, delete, or move them around freely without fearing any interdependency with other documents or objects. The user expects all necessary metadata to move with the document.
Then again, if there is some "housekeeping" metadata about your documents that you can always re-create in the database, that is just fine.

Help understanding saving data please. Core data vs plist

Is every app that allows users to input data built with Core Data?
I've built a "grocery list" type of table view app where you name the list and then in a detail view add items to the list. Simple.
What I don't get is this: based on an iPhone development book, the example saves the data to a plist using dictionaries.
I've learned that this works on the simulator but not on the device, because the data is saved to the application bundle, not the Documents directory (which was new to me!).
On the device the app works great, except that it won't HOLD the data.
Is Core Data or SQLite the only solution?
Is every app that allows users to input data built with Core Data?
Note that your question as posed is incorrect, as it assumes that CoreData is tied to SQLite and is an alternative to plists.
CoreData is a framework for object lifecycle and graph management. It provides implementation of common tasks like changes tracking and propagation, consistency enforcement, data validation and so on.
The CoreData framework is separate from the object persistence layer and can use different serialization implementations, including SQLite and XML (plists).
For more details, read Core Data Programming - Persistent Store Features.
The decision whether you should use CoreData should be based on whether you need any of the features it provides. If you need to serialize simple object graphs, without consistency requirements, you can use standard NSDictionary to serialize your data in a simple plist file in any of the application-writable folders. Otherwise, use CoreData, and choose the proper persistent store based on the type of data you will be storing.
From what I've seen around the internet, you can use Core Data (which gives you the options of SQLite, atomic, and XML stores), you can use NSKeyedArchiver and NSKeyedUnarchiver (http://www.vimeo.com/1454094), or you can store the data inside the local application folder (possibly using a serialization method). It looks like Core Data is the best solution, but a more complex one to implement. For a simple app like yours, I think serializing data and storing it in the local app directory would be perfect.
I am surprised that your book is showing an example where user data is written to the app bundle. Actually, I'm a little surprised that that is even possible.
You should be able to write your data to an NSDictionary (or NSMutableDictionary) and then write that to your app's Documents directory, using -writeToFile:atomically:
Reading data back in should also be straightforward, using -initWithContentsOfFile:.
For someone just getting started, I would recommend keeping it simple. Working with NSDictionary is very simple, though you have to manage things like the list of lists, how to name lists that are stored in the Documents directory, etc.
Ultimately, using Core Data would probably be a better approach. It offers more flexibility and more power - but, as ever, those advantages come at a cost.
Your question is very important to the community in the respect that you are asking a strategic question: which technology do I use, when?
Core Data is best for the day-to-day work of a list-based app. Core Data is built to mirror the storage of data, similar to how databases work. Relational structures, sorting, key indexing and other row-based features are best supported by Core Data.
Property lists (*.plist) are best suited to one-time updates of critical environmental settings. The user, for example, can optionally set .plist attributes through the iOS Settings app. So passwords, account settings, email addresses, and configuration options can be set here nicely. This kind of data is very different from frequently updated, transactional data.
XML persistence is closely related to .plist, in that a property list (or .plist) is an XML file in itself. You could therefore download a stream of XML data and then use it in your app with the same programming approach you would use to adjust a property list. Hence, receiving XML data from the web, or uploading such a list, maps nicely to XML persistence.
AWS also offers the AWS-Persistence library to support synchronizing your Core Data collections with their online databases. This could prove helpful by having a user populate data locally via Core Data, then lazily/opportunistically uploading the list. For your purposes (a grocery shopping list), this could provide immediacy to the user, while giving your server an interesting big-data opportunity (analyze user transactions, provide recommendations, sell ads, etc.).
Hope this gets future visitors tapping into the wealth of what iOS provides -- peace!

Share a LotusScript library between databases

Is it possible to create a LotusScript library in one database and then access it from another database, without simply copying the library into each database that needs to use it?
What I would like to achieve is a single location where I can update the library and not have to manually copy it over to each database that is using it. I can't use a design template as the databases that use this script library all use different design templates.
I guess another solution would be to create an agent to copy the library out to all databases whenever it is updated. So if anyone has done anything like that before, I would also like to hear about it.
Design inheritance in Lotus Notes isn't only at the database level - individual design elements (such as your script library) can be explicitly inherited from a different template. See Linking individual design elements to a template.
With inheritance set up like this, the designer task on the Domino server will update the design element automatically. For this to work, the templates must be replicated to the same server.
You might want to disable this inheritance when you release your template, to avoid nasty surprises in the production environment. I created a solution for this a while ago: Remove Lotus Notes design element inheritance programatically.
Anders has answered the question very well. As he says, Domino unfortunately cannot share code libraries between databases at runtime; all the code is self-contained, which in this scenario is a limitation.
Copying the agent into all the databases you want to use it in, and then employing design inheritance, is a quick and easy way to distribute the agent.
An alternative idea is to have a single database that serves as a repository of agents, so that if you need to re-use the same agent over and over, its design is always in one database; you will, however, need to design it so that it can perform operations on all the databases you need to update.
Effectively, each database is used as a data source only, with the relevant agent(s) operating from one location. It will require some more work: you define additional configuration documents that the agent(s) use to identify which databases the agent should run on (a minimal sketch of such an agent follows the list of advantages below).
The advantages of this approach are:
You don't need to contend with design inheritance. It can get messy on a large scale when you have complex script library/design structures; you may even have to buy third-party tools to help you manage it.
You can actually control which databases get updated via a series of centrally stored configuration documents with an "active/inactive" field that flags the database for update, rather than directly "touching" the agents, which requires you to get your hands dirty with enabling/disabling them. In some tightly controlled corporate environments, you would otherwise need to keep asking the Notes admin to do this for you.
You can code the agent so that it reports activity in your own custom log documents when it runs on each database, and store those logs centrally.
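Here is the minimal sketch of such a central agent promised above; the view name, item names, and update logic are placeholders, not from the original answer:

    import lotus.domino.AgentBase;
    import lotus.domino.Database;
    import lotus.domino.Document;
    import lotus.domino.Session;
    import lotus.domino.View;

    public class CentralUpdateAgent extends AgentBase {

        public void NotesMain() {
            try {
                Session session = getSession();
                Database repository = session.getCurrentDatabase();
                // Assumed view of configuration documents, one per target database.
                View config = repository.getView("ConfigByStatus");
                Document entry = config.getFirstDocument();
                while (entry != null) {
                    if ("active".equals(entry.getItemValueString("Status"))) {
                        Database target = session.getDatabase(
                            entry.getItemValueString("Server"),
                            entry.getItemValueString("FilePath"));
                        if (target != null && target.isOpen()) {
                            // ... run the shared logic against "target" and write a log document ...
                        }
                    }
                    Document next = config.getNextDocument(entry);
                    entry.recycle();
                    entry = next;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }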
Hope this provides you with some options...
You can share a LotusScript library between databases: export the script to a file with the suffix .lss and place it on the Domino server in the Domino program folder. Then you can write Use "script.lss" just as with normal LotusScript libraries. You can see that the Domino folder already contains some libraries, e.g. lsconst.lss.
