Share a LotusScript library between databases - shared-libraries

Is it possible to create a LotusScript library in one database and then access it from another database, without simply copying the library into each database that needs to use it?
What I would like to achieve is a single location where I can update the library, without having to manually copy it over to each database that uses it. I can't use a design template, as the databases that use this script library all use different design templates.
I guess another solution would be to create an agent that copies the library out to all databases whenever it is updated. So if anyone has done anything like that before, I would also like to hear about it.

Design inheritance in Lotus Notes isn't only at the database level - individual design elements (such as your script library) can be explicitly inherited from a different template. See Linking individual design elements to a template.
With inheritance set up like this, the Designer task on the Domino server will update the design element automatically. For this to work, the templates must be replicated to the same server.
You might want to disable this inheritance when you release your template, to avoid nasty surprises in the production environment. I created a solution for this a while ago: Remove Lotus Notes design element inheritance programmatically.
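Purely as a hedged illustration of that last idea (not the linked solution itself), a Java agent could strip per-element inheritance from all script libraries in a database. It assumes the "inherit from template" setting is stored in the $Class item of each design note, which is how it is commonly described - verify that item name in your own environment before relying on it.

```java
import lotus.domino.*;

// Hedged sketch: removes per-element template inheritance from all script
// libraries in the current database by deleting the $Class item, which is
// assumed to hold the "inherit from design template" name on each design note.
public class RemoveLibraryInheritance extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            Database db = session.getAgentContext().getCurrentDatabase();

            NoteCollection nc = db.createNoteCollection(false);
            nc.setSelectScriptLibraries(true);   // only script library design notes
            nc.buildCollection();

            String noteId = nc.getFirstNoteID();
            while (noteId != null && noteId.length() > 0) {
                Document designNote = db.getDocumentByID(noteId);
                if (designNote.hasItem("$Class")) {
                    designNote.removeItem("$Class"); // drop the inheritance link
                    designNote.save(true, false);
                }
                noteId = nc.getNextNoteID(noteId);
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}
```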

Anders has answered the question very well. As he says, Domino unfortunately cannot share code libraries between databases; all the code is self-contained, which in this scenario is a limitation.
Copying the agent into all the databases you want to use it in, and then employing design inheritance, is a quick and easy way to distribute it.
An alternative idea is to have a single database that serves as a repository of agents. If you need to re-use the same agent over and over, its design is always in one database, but you will need to design it so that it can perform operations on all the databases you need to update.
Effectively, each database is used as a data source only, with the relevant agent(s) operating from one location. It requires some more work: you define additional configuration documents that the agent(s) use to identify which databases to run against (a sketch of such an agent follows the list below).
The advantages of this approach are:
You don't need to contend with design inheritance. That can get messy on a large scale when you have complex script library/design structures, and you may have to buy third-party tools to help you manage it.
You can centrally control which databases get updated via a series of configuration documents with an "active/inactive" field that flags each database for update, rather than directly "touching" the agents, which requires you to get your hands dirty enabling and disabling them. In some tightly controlled corporate environments you would otherwise have to keep asking the Notes admin to do this for you.
You can code the agent so that it reports activity in your own custom log documents when it runs on each database, and store those logs centrally.
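As a rough illustration of this pattern, here is a minimal Java agent sketch that walks a set of configuration documents and opens each target database. The view name ("Configuration") and the item names (Active, TargetServer, TargetFilePath) are hypothetical placeholders, and the actual processing step is left as a comment.

```java
import lotus.domino.*;

// Hedged sketch of a central agent driven by configuration documents.
// View and item names (Configuration, Active, TargetServer, TargetFilePath)
// are placeholders; adapt them to your own design.
public class CentralMaintenanceAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            Database repository = session.getAgentContext().getCurrentDatabase();

            View config = repository.getView("Configuration");
            Document entry = config.getFirstDocument();

            while (entry != null) {
                if ("active".equalsIgnoreCase(entry.getItemValueString("Active"))) {
                    String server = entry.getItemValueString("TargetServer");
                    String filePath = entry.getItemValueString("TargetFilePath");

                    Database target = session.getDatabase(server, filePath, false);
                    if (target != null && target.isOpen()) {
                        // ... perform the actual maintenance work against 'target' here ...
                        System.out.println("Processed " + filePath + " on " + server);
                    } else {
                        // Could also write a custom log document back to the repository.
                        System.out.println("Could not open " + filePath + " on " + server);
                    }
                }
                Document next = config.getNextDocument(entry);
                entry.recycle();
                entry = next;
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}
```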
Hope this provides you with some options...

You can share a LotusScript library between databases. Export the script to a file with the suffix .lss and place it on the Domino server in the Domino program folder. Then you can write Use "script.lss", just as with normal LotusScript libraries. You can see that the Domino folder already contains some libraries, e.g. lsconst.lss.

Related

How does the Single Store for XPages work?

I have a number of XPages design elements that I use in many different databases. If I read the wiki correctly, the single store is an all-or-nothing situation.
So I want to create a unique design in a database but use the set of reusable XPages elements from a single store location. The wiki says:
Apart from the "dummy or blank XPage with the same name of the default XPage" in each instance application, does it matter if an 'instance' contains XPage design elements?
No. If SCXD is set on an application, all XPages design elements are ignored on the database and the application uses the design elements in the SCXD database.
If this is the case, then even though probably 75% of the code is reusable, I would have to repeat it (and maintain it) in dozens of separate databases. Pity!
XPages and related elements (Custom Controls, SSJS libraries, Java code) can be inherited from a specific template like other design elements. So I would set up a database called, perhaps, "Core Components" (.ntf or .nsf) with a template name of "CoreComponents". Then, on the individual elements in the target DB, you would set inheritance to come specifically from the "CoreComponents" template, while the elements that are unique to each database do not inherit from any template. You can then use File - Application - Refresh Design to update the elements with specific inheritance; the ones that are unique to that database will not get overwritten.
You do need to do a clean build after the refresh, so I recommend that you keep the Core Components database locally or on a different server than the others, so that the daily design task will not update them and leave you with corrupted xsp elements.
IBM's preferred model for reusing XPage artifacts across multiple applications is to create OSGi plugins that leverage the XPages Extensibility API.
NotesIn9 episode 64 demonstrates how to make an existing Custom Control design element a library component, which can then be used in any app that has the library available, instead of having to copy the design element to each app separately. Any subsequent changes to that component are then applied immediately to any apps that use it when a new version of the library is deployed.
If you truly have "dozens" of apps that all share certain features, but the entire design should not be identical across all of them, then the OSGi model is definitely the way to go.
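For orientation, the heart of such an OSGi plugin is a class extending AbstractXspLibrary from the XPages Extensibility API, typically registered in the plugin's plugin.xml under the com.ibm.commons.Extension extension point with type com.ibm.xsp.Library. The sketch below is a minimal, hedged outline; all of the com.example identifiers and file names are placeholders, and the exact set of overrides depends on what your library actually contributes.

```java
package com.example.xsp.corecomponents;

import com.ibm.xsp.library.AbstractXspLibrary;

/**
 * Hedged sketch of an XPages library class for an OSGi plugin.
 * All identifiers and file names below are illustrative placeholders.
 */
public class CoreComponentsLibrary extends AbstractXspLibrary {

    @Override
    public String getLibraryId() {
        // Unique id that applications select to enable the library.
        return "com.example.xsp.corecomponents.library";
    }

    @Override
    public String getPluginId() {
        // Should match the OSGi bundle symbolic name of the plugin.
        return "com.example.xsp.corecomponents";
    }

    @Override
    public String[] getDependencies() {
        // Other XPages libraries this one builds on.
        return new String[] { "com.ibm.xsp.core.library", "com.ibm.xsp.extsn.library" };
    }

    @Override
    public String[] getXspConfigFiles() {
        // xsp-config files describing the reusable controls the plugin ships.
        return new String[] { "META-INF/corecomponents.xsp-config" };
    }

    @Override
    public String[] getFacesConfigFiles() {
        // faces-config files registering renderers, managed beans, etc.
        return new String[] { "META-INF/corecomponents-faces-config.xml" };
    }
}
```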
But why not flip the entire model on its head? Traditionally, we've always put the code and the data in the same place (e.g. same NSF) because it was a pain to access -- and, especially, visually represent -- data in one NSF via code in another NSF. That's not true anymore. Why have dozens of apps just because the data lives in dozens of places? Any data source in XPages can be told where the data lives... you can link a central user interface to any number of "remote" data stores (either different NSFs on the same server, or even databases on other servers).
Red Pill, for instance, takes this to its logical extreme: they deploy one NSF, which acts as a portal to all your data, no matter where that data lives. The ACLs of the various NSFs (and Readers fields) still ensure that users don't pry into data they haven't been granted access to, and they have complex analytics algorithms for determining which data the users will actually care about. But if you have 500 NSFs in the domain, you're not maintaining 500 different code templates... it's literally just 1; but that one user interface is how users find, and interact with, all their data.
You certainly don't have to take this premise to that extreme, but perhaps you could identify, say, 5 apps where the UI and / or business logic is similar (or even identical), but the data just lives in multiple places. Create one central app for interacting with all of that data. Create a "homepage" that gives users a way to select which "app" they're trying to access (or, if they should only have access to one to begin with, compute which one that is), and then once they navigate in to the specific "app", just bind the data sources to the relevant NSF instead of assuming each view or document lives in the same NSF that the code does.
It's still a good idea to be aware of the Extensibility API, not only for the sake of code reusability, but also to understand just how much of the behavior of the platform truly is within our control now -- provided, of course, that we're willing to occasionally write some custom Java code. But if you shift away from the one-to-one mapping between code and data that we've habitually maintained in Domino for so long, I can practically guarantee that you'll prefer this approach... both for the ease of implementation and maintenance, and for the comparative simplicity it offers to end users.
You can combine the template technique and the all-code-in-one-database approach:
Divide the application design into two parts: a data part and a code part.
The data part contains all Notes views. If it's a classic Notes application, it also contains all the design elements for the Notes client, such as Forms, Subforms, Frames and so on.
The code part contains all XPages, Custom Controls, CSS, client/server JavaScript libraries, Themes, images, jars and so on.
Put your 75% common code into masterData.ntf and masterCode.ntf.
The application code databases appCodeX.ntf inherit all design elements from masterCode.ntf and contain the additional application-specific design elements.
The code from all application templates is then united in allCode.ntf: it inherits everything from masterCode.ntf plus the additional pieces of code from the application templates.
Based on that you create an allCode.nsf.
On the data side you use the classic template way.
From here you have two possibilities:
You use Single Copy XPage Design - connect every appData database with allCode.nsf
You connect your XPages in allCode.nsf with appData databases
I prefer the latter. You can define in allCode.nsf where all the application data databases are located, e.g. in property documents.
With this approach you're still able to separate applications easily, e.g. in case you want to sell them: you already have a separate template for every single application.

Liferay Portlet: How to generate service.xml (Service Builder) from an existing database

I am new to Liferay. Can anyone please suggest some way to generate the service.xml for an existing database (see Discussion on Liferay Website)? I hope people might have developed some way to do this, or that Liferay has a plugin for it.
I see no particular use in introducing Service Builder to large existing databases: you can connect Service Builder entities to "legacy datasources" or "legacy tables" (those make good search terms), but service.xml generation has not been done, AFAIK.
Some problems with this approach are:
Service Builder makes certain assumptions about operations in a database. It is designed to encapsulate all the different databases that Liferay runs on, and thus might not use every database to its fullest extent.
If you have a large existing database, you probably have a lot of existing business logic to make sure correct data goes in and out of the database. You might even work with stored procedures etc.
While you can make Service Builder work with stored procedures, you'd have to introduce custom SQL to work around Service Builder's assumptions. The same goes for explicit foreign-key relationships etc.
My recommendation is rather to put a proper interface on the existing business logic (e.g. web service, JSON, REST, whatever is popular) and then use this interface in Liferay's portlets.
Another option might be to bring the existing persistence code into Liferay and just expose services without making use of the persistence features of Service Builder. For this you'd just define empty <entity> blocks (with names etc.). This will generate the appropriate DoSomethingLocalService but omit the persistence implementation, and you can wire your existing code into these services.
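As a rough sketch of that second option: once Service Builder has generated the base classes from an empty entity block, the generated *LocalServiceImpl is where you delegate to the existing persistence code. The entity name (LegacyOrder), the package names and the LegacyOrderDao class below are hypothetical placeholders.

```java
package com.example.service.impl;

import com.example.legacy.LegacyOrderDao;                 // hypothetical existing DAO
import com.example.service.base.LegacyOrderLocalServiceBaseImpl;

/**
 * Hedged sketch: assuming service.xml declares something like
 * <entity name="LegacyOrder" local-service="true" remote-service="false" />,
 * Service Builder generates LegacyOrderLocalServiceBaseImpl, and this impl
 * class is where the existing business logic is wired in instead of
 * Service Builder's own persistence layer.
 */
public class LegacyOrderLocalServiceImpl extends LegacyOrderLocalServiceBaseImpl {

    private final LegacyOrderDao legacyOrderDao = new LegacyOrderDao();

    /** Custom method exposed through the generated local service. */
    public int getOpenOrderCount(long customerId) {
        // Delegate to the pre-existing, already-tested persistence code.
        return legacyOrderDao.countOpenOrders(customerId);
    }
}
```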
You can go through the link below to understand Service Builder in Liferay:
https://www.liferay.com/documentation/liferay-portal/6.0/development/-/ai/service-build-2
The link below also has a sample Service Builder portlet:
https://www.liferay.com/community/forums/-/message_boards/message/17609606
Hope it helps!
Not done yet, AFAIK. Since Liferay does not directly support all database properties, such as foreign keys, one-to-n mappings etc., reverse engineering a service.xml is a challenge. But you can give it a try.
Service Builder is generally a nice feature for creating relatively small databases and simple business logic, with the advantage that your tables are auto-generated when you deploy your portlet and that you get finders (search by X attribute) with no effort. If this is the case with your database, it will be much easier to create a new service.xml from scratch.
Other than that, I think that putting an extended database into Liferay's Service Builder will introduce more problems and slow development while you're implementing complex business logic, creating custom finders whenever you need to query on a join of tables, and so on. So it seems quite normal to me that a conversion of a database to Service Builder is not available.
In other words, if your database is too large to write in service.xml, you shouldn't be using Service Builder in the first place.

Entity Framework migrations on legacy database

We have several legacy SQL Server databases that we occasionally make schema changes to. We currently have a utility written in C++ that allows users to update their DBs with these schema changes. The utility currently generates dynamic SQL to create all DB objects. I am looking into redoing this and thought EF migrations might be a good way to go. I have read up a bit on the subject and I have a general idea of how it works, but I'm having a bit of a hard time figuring out how I would set it up to replace our current procedure (or whether it is even possible).
Currently, a client could be on any one of a number of previous versions. I'm assuming I would have to go back to the oldest possible version and create my model/initial migration from that, then generate incremental migrations for each version change in order to support updates from all versions. Is that a correct assumption? Also, our clients could currently be using SQL Server 2000, 2005, or 2008. Would this have any effect on how I would set things up (or on whether I even could)?
Further, the goal is to create a utility with a (C#, probably WPF) UI that the user can use to manipulate the migrations (up or down, preferably). I've seen a lot of examples of how to manipulate migrations from the command line within the Package Manager Console, but not a lot on how to create a utility with a friendly UI for upgrading/downgrading DBs in production. Also, I have not seen anything that shows how to create stored procedures in a migration (our DBs rely on some stored procedures). I'm assuming that, if nothing else, I can use the Sql() method to generate a SQL query to create a SP. Is that correct? Is there a better way?
I know my questions are a bit non-specific and I apologize for that. But I'm still in the beginning processes of learning this and I'd like to get an idea of whether or not this is a good way to go. Any guidance would be greatly appreciated.
Thanks,
Dennis
Firstly, on SQL Server support, Entity Framework doesn't really support SQL Server 2000. See this question:
EntityFramework SQL Server 2000?
On the question of supporting all the multiple versions, you have the right idea about needing to generate an initial migration for the oldest version first, then incrementally altering the model and generating migrations to support the later versions. This will be a pain, as the migrations are opinionated about how they represent the model in the database, and you will be doing a lot of messing about to end up with a model and a set of migrations that fully represent it. Specific concerns are indexes, column lengths, data types, stored procedures, triggers, functions and partitioning.
The Sql() method gets you around most issues; methods like CreateIndex and AlterColumn are also helpful in the migrations.
For automating this, the migrations are available as PowerShell cmdlets, which are themselves just .NET objects and so can be called programmatically.
As this question is a year old, I assume you will have made a decision on whether to do this. My opinion is that it is hard to see that it's worth the effort. If you were re-platforming the code base that uses this database to Entity Framework then it would make sense. Otherwise there are bound to be better tools out there for database version management. My first port of call would be Redgate.

Subsonic - let customers switch the database

I am new to subsonic and I'd like to know about the best practices regarding the following scenario:
SubSonic supports multiple database systems, e.g. SQL Server and MySQL. Our customers need to decide, while deploying our application to their servers, which database system should be used. Long story short: the providerName, normally specified within the application configuration, should be configurable after the application is finished.
How can this be done? Do I have to generate separate data libraries for each database system I want to support?
Thank you in advance
Marco
No, you do not need to generate separate libraries.
However, you cannot use direct SQL strings, as you understand; you always need to go through SubSonic's SQL generation code.
It is also good to run some tests on the different databases, because not all code has been 100% tested in every case.

Help understanding saving data please: Core Data vs plist

Is every app that allows users to input data built with Core Data?
I've built a "grocery list" type of table view app where you name the list and then, in a detail view, add items to the list. Simple.
What I don't get is this: based on an iPhone development book, the example saves the data to a plist using dictionaries.
I've learned that it works on the simulator but not the device, because the data is saved to the application bundle, not the Documents directory (which was new to me!).
On the device the app works great, except it won't HOLD the data.
Is Core Data or SQLite the only solution?
Is every app that allows users to input data built with Core Data?
Note that your question as posed is incorrect, as it assumes that CoreData is tied to SQLite and is an alternative to plists.
CoreData is a framework for object lifecycle and graph management. It provides implementation of common tasks like changes tracking and propagation, consistency enforcement, data validation and so on.
The CoreData framework is separate from the object persistence layer and can use different serialization implementations, including SQLite and XML (plists).
For more details, read Core Data Programming - Persistent Store Features.
The decision whether you should use CoreData should be based on whether you need any of the features it provides. If you need to serialize simple object graphs, without consistency requirements, you can use standard NSDictionary to serialize your data in a simple plist file in any of the application-writable folders. Otherwise, use CoreData, and choose the proper persistent store based on the type of data you will be storing.
From what I've seen around the internet, you can use Core Data (which gives you the options of SQLite, atomic, and XML), you can use NSKeyedArchivers and NSKeyedUnarchivers (http://www.vimeo.com/1454094), or you can store the data inside the local application folder (possibly using a serialization method). It looks like Core Data is the best solution, but a more complex one to implement. For a simple app, as yours is, I think serializing data and storing it in the local app directory would be perfect.
I am surprised that your book is showing an example where user data is written to the app bundle. Actually, I'm a little surprised that that is even possible.
You should be able to write your data to an NSDictionary (or NSMutableDictionary) and then write that to your app's Documents directory, using -writeToFile:atomically:
Reading data back in should also be straightforward, using -initWithContentsOfFile:.
For someone just getting started, I would recommend keeping it simple. Working with NSDictionary is very simple, though you have to manage things like the list of lists and how to name lists that are stored in the Documents directory, etc.
Ultimately, using Core Data would probably be a better approach. It offers more flexibility and more power - but, as ever, those advantages come at a cost.
Your question is very important to the community in the respect that you are asking a strategic question: which technology do I use, when?
Core Data is best for the day-to-day work of a list-based app. Core Data is built to mirror the storage of data, similar to how databases work. Relational structures, sorting, key indexing and other row-based attributes are best supported by Core Data.
Property lists (*.plist) are best suited to one-time updates of critical environmental settings. The user, for example, can optionally set .plist attributes through the iOS Settings app. So passwords, account settings, email addresses, and configuration options can be set here nicely. This kind of data is very different from frequently updated, transactional data.
XML persistence is closely related to .plist, in that the property list (or .plist) is an XML file in itself. Hence, you could download a stream of XML data and then use it in your app with the same programming rubric you would use to adjust a property list. So receiving XML data from the web, or uploading such a list, maps nicely to XML persistence.
AWS has also proposed the AWS-Persistence library, to support synchronizing your Core Data collections with their online databases. This could be helpful: a user populates data locally via Core Data, and the list is then lazily/opportunistically uploaded. For your purposes (a grocery shopping list), this could provide immediacy to the user, while giving your server an interesting big-data opportunity (analyze user transactions, provide recommendations, sell ads, etc.).
Hope this gets future visitors tapping into the wealth of what iOS provides -- peace!
